Dataset columns: id (string, 3-9 chars) · source (1 class) · version (1 class) · text (string, 1.54k-298k chars) · added (date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25) · created (date, 1-01-01 00:00:00 to 2024-07-31 00:00:00) · metadata (dict)
52818269
pes2o/s2orc
v3-fos-license
368 Better Protection from Eczema Among Turkish Migrants' Children Carries Over from Preschool Age to Adolescence Christoph Grüber, MD, PhD, Paolo Matricardi, MD, PhD, Christopher Theissen, MD, and Ulrich Wahn, Prof. Dr. Pediatric Pneumology/Immunology, Charité-Universitätsmedizin Berlin, Berlin, Germany. Background: In a cross-sectional study of preschool children born in Germany we found significantly lower rates of atopy, eczema and asthma among Turkish migrants' children than among their domestic peers (Clin Exp Allergy. 2002;32:526-531). About 10 years later we re-examined children from this study population in order to investigate whether better protection from atopy among Turkish migrants' children persists into adolescence. Methods: The setting of the original survey was screening for school eligibility in an inner-city district of Berlin, Germany. The participants were preschool children with double German or double Turkish citizenship. The main outcome measures were IgE to common aeroallergens (CAP system Phadia, Phadiatop ≥ 0.35 kU/L) and 1-year prevalence of allergic disease symptoms (ISAAC questionnaire in German and Turkish). All available adolescents from the first survey were included in the follow-up survey. Results: 147 German and 154 Turkish adolescents were included.
Rates of allergic sensitization tended to be lower among Turkish migrants' children than among domestic children at preschool age (7.0% vs 13.8%) and in adolescence (33.1% vs 41.7%). Likewise, lower rates of eczema among Turkish migrants' children at preschool age (7.8% vs 18.4%; P = 0.010) carried over to adolescence (8.7% vs 22.4%, P = 0.008). Rates of asthma also tended to be lower at preschool age (2.6% vs 6.1%) and in adolescence (14.0% vs 16.3%). By contrast, hay fever was not lower among the Turkish migrants' children at any time point (preschool age, 3.9% vs 3.4%; adolescence, 19.1% vs 16.1%). Conclusions: This prospective study demonstrates that rates of allergic sensitization and of allergic diseases emerging earlier in life (eczema, asthma) tend to be persistently lower among Turkish migrants' children than among their German peers growing up in a very similar inner-city macro-environment. Further study is under way to examine potential allergy-protective factors in this cohort. Comparison of Skin and Conjunctival Reactivity to Aeroallergens Elizabeth Maria Mercer Mourão, MD, MSc, and Nelson Rosario, MD, PhD. Allergy and Immunology, Federal University of Paraná, Curitiba, Brazil. Background: Diagnosis of allergic conjunctivitis (AC) is based on symptoms and a positive skin prick test (SPT) to common aeroallergens. Allergens identified by SPT may not be clinically relevant to the eye. This study aims to compare the skin and conjunctival allergic responses to dust mites and grass pollen. Conclusions: Reactivity to aeroallergens in provocation tests requires a higher allergen dose for CPT than for SPT. A positive SPT with standardized allergenic extracts is predictive of clinical relevance in the diagnosis of allergic conjunctivitis. Correlation Between Skin Prick Test and MAST Results in Patients with Chronic Rhinitis Young Ha Kim, MD, and Jin Hee Cho, MD, PhD. Department of Otolaryngology-HNS, Yeouido St. Mary's Hospital, Seoul, South Korea. Background: Among the methods used to confirm the allergic causes of chronic rhinitis, the most common and most reliable is the skin prick test, followed by MAST, which is reported to be comparable to the skin prick test, with acceptable sensitivity and specificity. This study was designed to confirm whether MAST is a reliable test for diagnosing allergic rhinitis. Methods: A retrospective chart review was conducted of chronic rhinitis patients who visited Yeouido St. Mary's Hospital between January 2010 and June 2011. Subjects for whom both skin prick test and MAST results were available were selected. Results: One hundred ninety-three subjects (111 males and 82 females) were included; the mean age was 30.08 years (range 6-77). MAST was performed for 42 inhalant allergens and the skin prick test was performed for 56 allergens including histamine and control. 132 subjects had one or more positive allergens on the skin prick test, and 104 on MAST. Sensitivity was 63.16%, specificity was 65.57% and efficiency was 63.92%. The average number of positive allergens per subject on the skin prick test was 2.42 overall and 3.53 among positive subjects; on MAST, it was 2.1 overall and 4.0 among positive subjects. Positive rates for common allergens on the skin prick test were as follows: Dermatophagoides farinae 79.69% (106 subjects), Dermatophagoides pteronyssinus 68.42% (91 subjects), oak pollen 12.78% (17 subjects).
Positive rates for common allergens on MAST were as follows: Dermatophagoides farinae 69.52% (73 subjects), Dermatophagoides pteronyssinus 59.05% (62 subjects), house dust 50.48% (53 subjects). Skin prick test results were graded from negative to 6+ according to the relative size of the allergen wheal compared with the histamine wheal, and MAST results were graded from negative to class 6 according to the concentration of the solution. When correlation was defined as a difference of less than 2 between the skin prick test grade and the MAST class, the correlation rate was 65.80% for Df and 59.07% for Dp.
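The sensitivity, specificity and efficiency reported above follow from a standard 2×2 cross-tabulation of MAST against the skin prick test used as the reference. The short Python sketch below shows the arithmetic; the individual cell counts are hypothetical values chosen only to roughly match the reported totals (193 subjects, 132 SPT-positive, 104 MAST-positive), since the abstract does not give the full table.

```python
# Hypothetical 2x2 table for MAST vs. skin prick test (SPT as reference).
# Cell counts are assumptions, not data from the abstract.
true_pos = 83    # SPT-positive and MAST-positive
false_neg = 49   # SPT-positive, MAST-negative
false_pos = 21   # SPT-negative, MAST-positive
true_neg = 40    # SPT-negative and MAST-negative

total = true_pos + false_neg + false_pos + true_neg   # 193 subjects

sensitivity = true_pos / (true_pos + false_neg)       # MAST positivity among SPT-positives
specificity = true_neg / (true_neg + false_pos)       # MAST negativity among SPT-negatives
efficiency = (true_pos + true_neg) / total            # overall agreement ("efficiency")

print(f"sensitivity = {sensitivity:.2%}")   # ~63%, cf. the reported 63.16%
print(f"specificity = {specificity:.2%}")   # 65.57%, matching the reported value
print(f"efficiency  = {efficiency:.2%}")    # ~64%, cf. the reported 63.92%
```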
2018-05-08T17:46:24.207Z
2012-02-01T00:00:00.000
{ "year": 2012, "sha1": "f7c65e7fe2dc957b4cde731dfa128c7a290fe16e", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/01.wox.0000412131.64392.a2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f7c65e7fe2dc957b4cde731dfa128c7a290fe16e", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
52814089
pes2o/s2orc
v3-fos-license
(k,p)-Planarity: A Relaxation of Hybrid Planarity We present a new model for hybrid planarity that relaxes existing hybrid representations. A graph $G = (V,E)$ is $(k,p)$-planar if $V$ can be partitioned into clusters of size at most $k$ such that $G$ admits a drawing where: (i) each cluster is associated with a closed, bounded planar region, called a cluster region; (ii) cluster regions are pairwise disjoint; (iii) each vertex $v \in V$ is identified with at most $p$ distinct points, called \emph{ports}, on the boundary of its cluster region; (iv) each inter-cluster edge $(u,v) \in E$ is identified with a Jordan arc connecting a port of $u$ to a port of $v$; (v) inter-cluster edges do not cross or intersect cluster regions except at their endpoints. We first tightly bound the number of edges in a $(k,p)$-planar graph with $p<k$. We then prove that $(4,1)$-planarity testing and $(2,2)$-planarity testing are NP-complete problems. Finally, we prove that neither the class of $(2,2)$-planar graphs nor the class of $1$-planar graphs contains the other, indicating that the $(k,p)$-planar graphs are a large and novel class. Introduction Visualization of non-planar graphs is one of the most studied graph-drawing problems in recent years. In this context, an emerging topic is hybrid representations (see, e.g., [1,2,5,6,9]). A hybrid representation simplifies the visual analysis of a non-planar graph by adopting different visualization paradigms for different portions of the graph. The graph is divided into (typically dense) subgraphs called clusters which are restricted to limited regions of the plane. Edges between vertices in the same cluster are called intra-cluster edges, and edges between vertices in different clusters are called inter-cluster edges. Inter-cluster edges are represented according to the classical node-link graph drawing paradigm, while the clusters and their intra-cluster edges are represented by adopting alternative paradigms. A hybrid representation thus reduces the number of inter-cluster edges and the visual complexity of much of the drawing at the cost of creating cluster regions of high visual complexity. As a result, a hybrid representation provides an easy-to-read overview of the graph structure and it admits a "drill-down" approach when a more detailed analysis of some of its clusters is needed. Different representation paradigms for clusters give rise to different types of hybrid representations. For example, Angelini et al. [1] introduce intersection-link representations, where clusters are represented as intersection graphs of sets of rectangles, while Henry et al. [9] introduce NodeTrix representations, where dense subgraphs are represented as adjacency matrices (see Fig. 1). Batagelj et al. employ hybrid representations in the (X, Y)-clustering model [2], where Y and X define the desired topological properties of the clusters and of the graph connecting the clusters, respectively. For instance, in a (planar, k-clique)-clustering of a graph each cluster is a k-clique and the graph obtained by contracting each cluster into a single node (called the graph of clusters) is planar. Given a graph G and a hybrid representation paradigm P, the hybrid planarity problem asks whether G can be represented according to P with no inter-cluster edge crossings. Variants of the problem may or may not assume that the clustering is given as part of the input.
In this paper, we present a general hybrid representation paradigm that relaxes the described hybrid paradigms. Given a graph G = (V, E), a (k, p) representation Γ of G is a hybrid representation in which: (i) each cluster of G contains at most k vertices and is identified with a closed, bounded planar region; (ii) cluster regions are pairwise disjoint; (iii) each vertex v ∈ V is represented by at most p distinct points, called ports, on the boundary of its cluster region; (iv) each inter-cluster edge (u, v) ∈ E is represented by a Jordan arc connecting a port of u to a port of v. A (k, p) representation is (k, p)-planar if edge curves do not cross and do not intersect cluster regions except at their endpoints. We say that a graph G is (k, p)-planar if it can be clustered so that it admits a (k, p)-planar representation.
The definition of a (k, p) representation leaves the representation of clusters and intra-cluster edges intentionally unspecified. It is thus a relaxation of hybrid representation paradigms where the number of ports used by the inter-cluster edges depends on the geometry of the cluster regions. For example, in a NodeTrix representation, the square boundary of each matrix allows four ports for every vertex except for the vertex in the first row/column of the matrix and the vertex in the last row/column of the matrix, which both have only three ports. Hence, a NodeTrix representation can be regarded as a constrained (k, 4) representation (four ports for every vertex except for two, the vertices appear in the order imposed by the matrix); see Fig. 1(a). Similarly, a (k, 2) representation relaxes an intersection-link representation with clusters represented as isothetic unit squares with their upper-left corners along a common line with slope 1; see Fig. 1(b). We also remark that the use of different ports to represent a vertex can be regarded as an example of vertex splitting [7,8]; however, while in the papers that use vertex splitting to remove crossings the multiple copies of each vertex can be placed anywhere in the drawing, in our model they are forced to lie within the boundary of the same cluster region.
The results of this paper are the following:
- In Section 2, we give an upper bound on the edge density of a (k, p)-planar graph and prove that this bound is tight for p < k.
- In Section 3, we observe that the class of (4, 1)-planar graphs coincides with the class of IC-planar graphs, from which the NP-completeness of testing (4, 1)-planarity follows. We then prove that testing (2, 2)-planarity is NP-complete. These results imply that computing the minimum k such that a graph is (k, p)-planar is NP-hard for both p = 1 and p = 2. Recall that a graph is 1-planar if it admits a drawing where every edge is crossed at most once, and that an IC-planar graph is a 1-planar graph that admits a drawing where no two pairs of crossing edges share a vertex.
- The NP-completeness of the (2, 2)-planarity testing problem naturally suggests further investigation of the combinatorial properties of (2, 2)-planar graphs; we prove that neither the class of (2, 2)-planar graphs nor the class of 1-planar graphs contains the other.
Some proofs are deferred to the appendix; the corresponding statements are marked with [*].
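To make the combinatorial side of this definition concrete, the following minimal Python sketch represents a candidate clustering together with a port assignment and checks the two purely combinatorial constraints (cluster size at most k, at most p distinct ports per vertex). The function and field names are illustrative only; the geometric conditions (disjoint cluster regions, non-crossing Jordan arcs) are deliberately not modeled.

```python
from collections import defaultdict

def check_kp_clustering(clusters, port_assignment, k, p):
    """clusters: list of vertex lists.
    port_assignment: maps an inter-cluster edge (u, v) -> (port_u, port_v),
    where each port is a (vertex, port_index) pair.
    Checks only the combinatorial part of the (k, p) definition."""
    # (i) each cluster has at most k vertices
    if any(len(cluster) > k for cluster in clusters):
        return False
    # (iii) each vertex is identified with at most p distinct ports
    ports_per_vertex = defaultdict(set)
    for (u_port, v_port) in port_assignment.values():
        for vertex, port_index in (u_port, v_port):
            ports_per_vertex[vertex].add(port_index)
    return all(len(ports) <= p for ports in ports_per_vertex.values())

# Example: two clusters of size 2 and one inter-cluster edge using one port per endpoint.
clusters = [["a", "b"], ["c", "d"]]
ports = {("a", "c"): (("a", 0), ("c", 0))}
print(check_kp_clustering(clusters, ports, k=2, p=2))  # True
```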
Edge Density of (k, p)-Planar Graphs In this section we give a tight bound on the number of edges of a (k, p)-planar graph when p < k. First, given a (k, p)-planar representation Γ, we define a skeleton of Γ to be a planar drawing Γ_S obtained by the following transformation. We first replace each port in Γ with a vertex. Each cluster region R_i of Γ is now an empty convex space surrounded by up to kp vertices. We connect these vertices in a cycle and triangulate the interior. For our purposes any triangulation is equivalent. The resulting representation is Γ_S (see Figure 2).
Theorem 1. A (k, p)-planar graph with n vertices has at most n(p + (k − 1)/2 + 3/k) − 6 edges, and the bound is tight for p < k.
Proof. Let Γ be a (k, p)-planar representation of G and let N be the number of clusters of G. As each cluster contains at most k vertices, G has at most N · k(k − 1)/2 intra-cluster edges. Let R_i be a cluster region in Γ with p_i ports in total. Let Γ_S be a skeleton of Γ, and let n_S and m_S denote the number of vertices and the number of edges of Γ_S, respectively. When Γ_S is created, R_i is replaced with p_i vertices and 2p_i − 3 edges if p_i > 1, or 0 edges if p_i = 1. Letting m_inter be the number of inter-cluster edges in G and s be the number of clusters in G containing a single vertex, we have m_S = m_inter + 2n_S − 3N + s. In other words, the total number of edges in Γ_S is equal to the number of inter-cluster edges in G plus the number of edges added for each cluster. Note that m_S ≤ 3n_S − 6, as Γ_S is a planar drawing. As Σ_{i=1}^{N} p_i = n_S, rearranging gives m_inter + 2n_S − 3N + s ≤ 3n_S − 6 and thus m_inter ≤ n_S + 3N − 6 − s (Equation 2). As m is equal to the sum of the number of inter-cluster and intra-cluster edges in G, we have m ≤ n_S + 3N − 6 − s + N · k(k − 1)/2. If all clusters contain k vertices, then N = n/k and n_S ≤ np, so Theorem 1 holds. Appendix A completes the proof that m ≤ n(p + (k − 1)/2 + 3/k) − 6 in the case where some clusters contain fewer than k vertices.
In order to show that the bound is tight for p < k, we describe a (k, p)-planar representation Γ_{k,p} with N = n/k clusters and (kp + 3)N − 6 inter-cluster edges. Γ_{k,p} is possible for any pair of positive integers p and k such that p < k and for any N > 2. Γ_{k,p} has N clusters each with k vertices and thus kp ports. Let R_1 and R_2 be two cluster regions. We say that R_1 and R_2 are kp-connected if they are connected by kp + 1 edges as shown in Fig. 3(a). (Note that, since the number of inter-cluster edges between two k-clusters is at most k², we can create kp + 1 edges between R_1 and R_2 only if p < k.) More precisely, R_1, which we refer to as the small end of the kp-connection, is connected by means of p + 1 consecutive ports; the first p ports have k incident edges each, and the last port has an additional edge. R_2, which we refer to as the large end of the kp-connection, is connected by means of p(k − 1) + 1 consecutive ports, each connected to one or two edges. Notice that, since we use p(k − 1) + 1 ports for the large end, p + 1 for the small end and two ports can be shared by the two ends, each cluster region can be the small end of one kp-connection and the large end of another kp-connection. Thus, we can create a cycle with N clusters as shown in Fig. 3(b). In the resulting representation there are two faces of degree N: one is the outer face and the other one is inside the cycle. By triangulating these two faces with N − 3 edges for each face, we obtain the (k, p)-representation Γ_{k,p}. The number of inter-cluster edges of Γ_{k,p} is thus (kp + 1)N + 2N − 6 = (kp + 3)N − 6.
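As a quick numeric sanity check of the bound and of the construction Γ_{k,p} just described, the Python sketch below counts the edges of the construction ((kp + 3)N − 6 inter-cluster edges plus N · k(k − 1)/2 intra-cluster edges) and compares the total with n(p + (k − 1)/2 + 3/k) − 6 for a few small values of k, p and N with p < k. The closed form used here is the one reconstructed in the derivation above, not a formula quoted verbatim from the paper.

```python
from fractions import Fraction

def construction_edges(k, p, N):
    """Edges of the tight construction: (kp + 3)N - 6 inter-cluster edges
    plus N * k(k-1)/2 intra-cluster edges (each cluster is a k-clique)."""
    inter = (k * p + 3) * N - 6
    intra = N * k * (k - 1) // 2
    return inter + intra

def density_bound(k, p, n):
    """m <= n(p + (k-1)/2 + 3/k) - 6, the bound as reconstructed in the text."""
    return n * (Fraction(p) + Fraction(k - 1, 2) + Fraction(3, k)) - 6

for k, p, N in [(2, 1, 5), (3, 2, 4), (4, 1, 6), (5, 3, 4)]:
    assert p < k and N > 2
    n = k * N
    m = construction_edges(k, p, N)
    bound = density_bound(k, p, n)
    print(f"k={k}, p={p}, n={n}: construction has {m} edges, bound = {bound}")
    assert m == bound  # the construction meets the bound exactly

# For (k, p) = (2, 2) the same expression evaluates to 4n - 6, the value the paper
# later compares with the 4n - 8 edge bound of 1-planar graphs.
print(density_bound(2, 2, 10))  # 34 == 4*10 - 6
```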
Recognition of (k, p)-Planar Graphs This section considers the problem of testing (k, p)-planarity for the cases in which p = 1 and p = 2.
Theorem 2. [*] (k, 1)-planarity testing can be performed in linear time for k ≤ 3, and it is NP-complete for k = 4.
Proof. The first part of Theorem 2 follows from the fact that the class of (k, 1)-planar graphs coincides with the class of planar graphs for k = 1, 2, 3. The second part follows from the fact that the (4, 1)-planar graphs coincide with the IC-planar graphs [13]. Testing IC-planarity is known to be NP-complete [4]. Appendix B proves both equivalences.
Corollary 1. The problem of computing the minimum value of k such that a graph is (k, 1)-planar is NP-hard.
We now focus on the (2, 2)-planarity testing problem, hereafter referred to as (2, 2)-Planarity.
Theorem 3. [*] (2, 2)-Planarity is NP-complete.
We show that (2, 2)-Planarity is NP-complete by a reduction from the NP-complete problem Planar Monotone 3-SAT [3]. We say that an instance of 3-SAT is monotone if every clause consists solely of positive literals (a positive clause) or solely of negative literals (a negative clause). A rectilinear representation of a 3-SAT instance is a planar drawing where each variable and clause is represented by a rectangle, all the variable rectangles are drawn along a horizontal line, and vertical segments connect clauses with their constituent variables. A rectilinear representation is monotone if it corresponds to a monotone instance of planar 3-SAT where positive clauses are drawn above the variables and negative clauses are drawn below the variables, as shown in Fig. 4(a). Given a monotone rectilinear representation Φ corresponding to a boolean formula F, the problem Planar Monotone 3-SAT asks if F has a satisfying assignment.
We denote by K₈⁻ the graph created by removing two adjacent edges from the complete graph K₈. In our reduction we make use of the following transformation. Let v be a vertex of G; we replace v with a copy of K₈⁻ by identifying v with the vertex of K₈⁻ with degree 5. After performing this operation we say that v is a K-vertex. The following lemma, whose proof is in Appendix C, states a useful property of the K-vertices.
Lemma 1. [*] Let v be a K-vertex of a graph G and let G′ be the K₈⁻ subgraph associated with v. In any (2, 2)-planar representation of G, each vertex of G′ is clustered with another vertex in G′.
For each variable v_i of F (with i = 1, …, n) create in G a K-vertex v_i and connect such K-vertices in a cycle, in the order implied by Φ (refer to Fig. 4(b)). Split each edge (v_i, v_{i+1}) of the cycle with a K-vertex c_{i,i+1}. Split the edge (v_1, v_n) with the vertices c_{0,1} and c_{n,n+1}. Finally, duplicate the edge (c_{0,1}, c_{n,n+1}) and split the duplicated edges with the K-vertices plus and minus. We refer to this subgraph as the variable cycle. Given a variable v_i, let p_i be the number of positive clauses and q_i be the number of negative clauses of F in which v_i appears. For 1 ≤ i ≤ n, connect c_{i−1,i} to c_{i,i+1} with a path of ordinary vertices of length equal to max(p_i, q_i). We refer to these paths as false literal boundaries.
For each clause C_j = (l_{j,1} ∨ l_{j,2} ∨ l_{j,3}) in F, create a corresponding clause gadget in G. Create ordinary vertices l_{j,1}, l_{j,2}, l_{j,3} and open_j, create a K-vertex closed_j, and add an edge between any pair of vertices, as in Fig. 5(a). Observe that in any (2, 2)-planar representation of a clause gadget, two of the four vertices l_{j,1}, l_{j,2}, l_{j,3} and open_j must be arranged in one cluster of size 2. This is due to the fact that, by Lemma 1, closed_j must be clustered within its K₈⁻ subgraph. If l_{j,1}, l_{j,2}, l_{j,3} and open_j were all clustered separately, the graph of clusters of G would contain a K₅ minor. Also, any 2-clustering of a clause gadget in which a literal vertex is clustered with open_j is (2, 2)-planar, as shown in Fig. 5(b).
Now, connect the clause gadgets with a tree structure corresponding to the positions of clause rectangles in Φ. Let C_j be a clause rectangle in Φ with l_1, l_2, and l_3 corresponding to the vertical segments descending from C_j from left to right. If C_j is nested between vertical segments corresponding to literals m_1 and m_2 of another clause rectangle C_k, split the edges (l_{j,1}, l_{j,3}) and (m_{k,1}, m_{k,2}) with K-vertices and connect the new K-vertices with an edge. If C_j is nested under no other clause rectangle, split (l_{j,1}, l_{j,3}) with a K-vertex and connect the new vertex to plus if C_j corresponds to a positive clause and to minus otherwise. This procedure leads to a configuration consisting of two trees of clause gadgets connected as in Fig. 5(c). This concludes the construction of G. Appendix D proves that G is (2, 2)-planar if and only if Φ has a satisfying assignment.
The NP-completeness of (2, 2)-Planarity suggests further investigation into the combinatorial properties of (2, 2)-planar graphs. In this section, we study the relationship between (2, 2)-planarity and 1-planarity. This is partly motivated by general interest in 1-planar graphs (see, e.g., [10]) and partly by the following observation. Since a 1-planar graph admits a drawing where each edge is crossed by at most one other edge, it seems reasonable to remove each crossing of the drawing by clustering two of the vertices involved in the crossing, as shown in Fig. 6. An n-vertex 1-planar graph has at most 4n − 8 edges [12]. By Theorem 1, a (2, 2)-planar graph with n vertices has at most 4n − 6 edges, so it is not immediately clear that there are 1-planar graphs that are not (2, 2)-planar. As we are going to show, however, there is an infinite family of 1-planar graphs that are not (2, p)-planar for any value of p ≥ 1. On the positive side, we demonstrate a large family of 1-planar graphs that are (2, 2)-planar.
Theorem 4. For every h > 2, there exists a 1-planar graph with n = 5 · 2^h − 8 vertices and m = 18 · 2^h − 36 edges that is not (2, p)-planar, for any p ≥ 1.
Proof. We define a recursive family of 1-plane graphs as follows. Graph H_1 consists of a single kite K, which is a 1-plane graph isomorphic to K₄ drawn so that all the vertices are on the boundary of the outer face. Graph H_i, for i = 2, 3, …, has 2^i kites in addition to H_{i−1}; these kites form a cycle in the outer face of H_{i−1}, and each kite contains a vertex of the boundary of the outer face of H_{i−1} (note that H_{i−1} has 2^i vertices on the boundary of the outer face). See Fig. 7(a) for an example. The kites of H_i \ H_{i−1} are called the external kites of H_i. The embedding of H_i described in the definition will be called the canonical embedding of H_i. We also consider another possible embedding, called the reversed embedding. Let B be the boundary of the outer face in the canonical embedding of H_i; in the reversed embedding of H_i the cycle B is the boundary of an inner face. We show that H_h is not (2, p)-planar for any p ≥ 1.
Suppose that H_h has a (2, p)-planar representation Γ for some p ≥ 1 and let G_C be the graph of clusters of H_h. Since Γ is planar, G_C must be planar. G_C can be obtained from H_h by contracting each pair of vertices that is assigned to a cluster region (and removing multiple edges). Contracting a pair of vertices u and v, the number of vertices reduces by one and the number of edges reduces by the number of paths of length at most 2 connecting u and v (for each path we remove one edge). In H_h, there are at most 4 such paths between any pair of vertices. Hence, if we contract q pairs of vertices, the number of vertices in G_C is n′ = n − q, while the number of edges is m′ ≥ m − 4q. If G_C is planar, m′ ≤ 3n′ − 6 and thus it must be m − 4q ≤ 3(n − q) − 6, which gives q ≥ m − 3n + 6 = 3 · 2^h − 6, i.e. we must contract at least 3 · 2^h − 6 pairs of vertices. Since there are 5 · 2^h − 8 vertices, we can contract at most (5 · 2^h − 8)/2 pairs. Thus, it must be 3 · 2^h − 6 ≤ 5 · 2^{h−1} − 4, i.e., 2^{h−1} ≤ 2, which can be satisfied only for h ≤ 2. Note that our argument is independent of the 1-planar embedding of H_h. This implies that the result holds for 1-planar graphs, not just for 1-plane graphs.
Theorem 4 motivates further investigation of the relationship between 1-planar and (2, 2)-planar graphs. Note that there are infinitely many (2, 2)-planar graphs that are not 1-planar. For example, observe that every graph obtained by connecting with an edge a planar graph and K₇ has such a property, because K₇ is not 1-planar (it has more than 4n − 8 = 20 edges) but it is (2, 2)-planar, as depicted in Fig. 8. In what follows, we describe a non-trivial family of 1-planar graphs that are also (2, 2)-planar. Let G be a 1-plane graph, and let e_u = (u_1, u_2) and e_v = (v_1, v_2) be a pair of crossing edges of G. Any pair u_i, v_j, with 1 ≤ i, j ≤ 2, is a representative pair of the edge crossing defined by e_u, e_v. An independent set of distinct representatives (ISDR for short) of G is a set of representative pairs such that there is exactly one representative pair per crossing and no two representative pairs in the set have a common vertex. Fig. 9(b) shows an ISDR for the graph of Fig. 9(a). We want to show that if a 1-plane graph G has an ISDR then it is (2, 2)-planar. The crossing edges graph of G, called ce-graph for short and denoted as CE(G), is the subgraph of G induced by the crossing pairs of G. G is pseudoforestal if CE(G) is a pseudoforest (i.e. it has at most one cycle in each connected component). For example, the 1-planar graph of Fig. 9(a) is pseudoforestal, as shown in Fig. 9(c). The pseudoforestal 1-planar graphs include non-trivial subfamilies of 1-planar graphs, such as the IC-planar graphs (whose ce-graph has maximum degree one), or the 1-planar graphs such that each vertex is shared by at most two crossing pairs (whose ce-graph has maximum degree two).
Theorem 5. A pseudoforestal 1-plane graph is (2, 2)-planar.
Proof. We start by proving that a 1-plane graph G contains an ISDR if and only if G is pseudoforestal. It is known that a graph G can be oriented such that the maximum in-degree is at most k if and only if its pseudoarboricity is at most k (i.e. the edges of G can be partitioned into k pseudoforests) [11]. Thus, G is pseudoforestal if and only if CE(G) can be oriented so that the maximum in-degree is one. We now show that this is a necessary and sufficient condition for the existence of an ISDR S in G.
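The orientation criterion used in this proof is easy to check computationally: a graph has an orientation with maximum in-degree one exactly when every connected component has at most as many edges as vertices, i.e., at most one cycle per component. The Python sketch below tests this pseudoforest condition for a ce-graph given as an edge list; it illustrates the stated condition and is not code from the paper.

```python
from collections import defaultdict

def is_pseudoforest(edges):
    """Return True if the graph given by an edge list is a pseudoforest,
    i.e., every connected component has at most one cycle
    (equivalently, #edges <= #vertices in every component)."""
    parent = {}

    def find(x):
        # union-find with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Merge endpoints of every edge into components.
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    # Count edges and vertices per final component.
    comp_edges = defaultdict(int)
    comp_vertices = defaultdict(set)
    for u, v in edges:
        root = find(u)
        comp_edges[root] += 1
        comp_vertices[root].update((u, v))
    return all(comp_edges[r] <= len(comp_vertices[r]) for r in comp_edges)

# A 4-cycle (one cycle in its component) is a pseudoforest; adding a chord is not.
print(is_pseudoforest([(1, 2), (2, 3), (3, 4), (4, 1)]))          # True
print(is_pseudoforest([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]))  # False
```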
Assume that an ISDR exists. Let e_u = (u_1, u_2) and e_v = (v_1, v_2) be two crossing edges and let u_i, v_j (1 ≤ i, j ≤ 2) be the representative pair of e_u and e_v. Direct e_u towards u_i and e_v towards v_j. Doing this for each pair of crossing edges defines an orientation for all edges of CE(G). In this orientation each vertex of CE(G) has in-degree at most 1, since no two pairs in S share a vertex. Now suppose that CE(G) has an orientation such that each vertex has in-degree at most 1. For each pair of directed crossing edges (u_1, u_2), (v_1, v_2) in CE(G), we add the pair u_2, v_2 to S. Since each vertex v in CE(G) has in-degree at most 1, v is a vertex of at most one pair in S. Thus, the pairs selected for different crossing pairs are distinct and no two of them share a vertex.
We now describe how to use an ISDR S of G to construct a (2, 2)-planar representation of G where each pair in S is represented as a 2-cluster that has 2 copies for each of its vertices. Let Γ be a 1-planar drawing of G that respects the 1-planar embedding of G. Consider any two crossing edges e_u = (u_1, u_2) and e_v = (v_1, v_2) and denote by c the point where they cross in Γ. Without loss of generality, assume that u_1, v_1 is the representative pair of e_u and e_v (see Fig. 10 for an illustration). Subdivide the edge e_u with a copy v_1′ of v_1 placed between u_1 and c along e_u; analogously, subdivide the edge e_v with a copy u_1′ of u_1. Add a curve λ_1 connecting u_1 to v_1 and a curve λ_2 connecting u_1′ to v_1′. By walking very close to the two edges e_u and e_v, these two curves can be drawn without crossing any existing edge and so that the closed curve λ formed by λ_1 and λ_2 together with the portion of e_u from u_1 to v_1′ and the portion of e_v from v_1 to u_1′ does not contain any vertex of Γ. Curve λ defines the cluster region for the cluster containing u_1 and v_1. Replace the edge e_u with a curve λ_u connecting u_2 to u_1′ and the edge e_v with a curve λ_v connecting v_2 to v_1′. Again, by walking very close to the two edges e_u and e_v, λ_u and λ_v can be drawn without crossing existing edges and without crossing each other. The replacements of e_u with λ_u and of e_v with λ_v remove the crossing between e_u and e_v. Repeating the described procedure for every pair of crossing edges, all crossings are removed. Since for each pair of crossing edges there is a distinct representative pair and no two pairs share a vertex, the result is a (2, 2)-planar representation of G.
Open Problems The results in this paper suggest the following open problems: (i) tightly bound the edge density of (k, p)-planar graphs for p ≥ k; (ii) study the complexity of (k, p)-planarity testing for larger values of k and p; (iii) further study the relationship between 1-planar graphs and (2, p)-planar graphs.
Appendix A Supplement for Proof of Theorem 1 In this section, we complete the proof of Theorem 1 in the case where some clusters contain fewer than k vertices. Let G be a (k, p)-planar graph, Γ a (k, p)-planar representation of G, and N the number of clusters of G. In Section 2 we showed that m ≤ n(p + (k − 1)/2 + 3/k) − 6 if all clusters contain exactly k vertices. Denote by V_1, …, V_N the clusters of G and let k_i be the size of cluster V_i. We first add non-crossing inter-cluster edges so that the faces of Γ external to the cluster regions are triangles. Let Γ_0 be the resulting (k, p)-planar representation. Notice that Γ_0 can have multiple edges. We then construct a sequence
Γ_0, Γ_1, …, Γ_N of (k, p)-planar representations so that Γ_N has all clusters of size k and each Γ_i is obtained from Γ_{i−1} by taking into account the cluster V_i. We denote by n_i and m_i the number of vertices and edges of Γ_i, respectively. If k_i = k, cluster V_i is not modified and we set Γ_i = Γ_{i−1}. If k_i = 1, we remove the single vertex v in V_i and we triangulate the face that is created by this removal (see Fig. 11(a) and Fig. 11(b)). Also in this case multiple edges can be introduced. If 1 < k_i < k, we augment V_i with h_i = k − k_i dummy vertices and add p · h_i ports in between two consecutive ports associated with two different vertices of V_i (see Fig. 11(c) and Fig. 11(d)). We then add edges internally to V_i and p · h_i edges externally to V_i to triangulate the face enlarged by the insertions (again multiple edges can be created). We now prove the following claim that, together with the fact that m_N ≤ n_N(p + (k − 1)/2 + 3/k) − 6, yields the theorem. Clearly nothing has to be proven for the case k_i = k. Consider first the case k_i = 1. In order to prove that Claim 1 holds in this case, we show that 3 − p − 3/k − k/2 + 1/2 ≤ 0, which can be rewritten as an expression that is greater than or equal to 0 for any integer value of k. Consider now the case 1 < k_i < k; notice that this case is possible only for k ≥ 3. We obtain an analogous inequality, which holds for every k ≥ 3.
B Supplement for Proof of Theorem 2 In this section, we complete the proof of Theorem 2 by showing that the class of (k, 1)-planar graphs coincides with the class of planar graphs for k = 1, 2, 3 and that the class of (4, 1)-planar graphs coincides with the class of IC-planar graphs. If G is planar, G is trivially (k, 1)-planar for all positive integers k. Let G be a (k, 1)-planar graph for some k ≤ 3, and let Γ be a (k, 1)-planar representation of G. Replace each cluster of G of size h with an h-clique. Since h ≤ 3 the obtained drawing is planar. Recall that an IC-planar graph admits a 1-planar embedding in which no two pairs of crossing edges share a vertex. Let G be an IC-planar graph, and let Γ be an IC-planar embedding of G. Γ can be transformed into a (4, 1)-planar representation of G by replacing the vertices incident to each pair of crossing edges with a cluster. Let G be a (4, 1)-planar graph and let Γ be a (4, 1)-planar representation of G. Each cluster of G is a subgraph of a 4-clique and therefore each cluster region in Γ can be replaced with a drawing that contains at most one pair of crossing edges. As Γ contains no crossing inter-cluster edges, the resulting embedding is IC-planar.
C Proof of Lemma 1 Suppose there exists a (2, 2)-planar representation of G that leaves v unclustered or clustered with a vertex outside of G′. If the remaining vertices of the G′ subgraph are grouped into at least five clusters, G does not admit a (2, 2)-planar representation because its graph of clusters includes a K₅ subgraph. Alternatively, suppose the remaining vertices of G′ are grouped into four clusters, in which case G′ consists of three 2-clusters and two vertices which may or may not be clustered with additional vertices outside of G′. For the purpose of our analysis, we may ignore any vertices outside of G′, as their presence cannot affect the possibility of a (2, 2)-planar representation of G′. Each 2-cluster can contain at most 1 intra-cluster edge, so any (2, 2)-planar representation of G′ has at least 23 inter-cluster edges. However, by Equation 2, we have that m_inter ≤ n_S + 3N − 6 − s in any (k, p)-planar representation Γ of a graph G = (V, E), where s is the number of clusters consisting of a single vertex and n_S is the total number of vertices in the skeleton of Γ. When applied to G′, Equation 2 implies that 23 ≤ 14 + 15 − 6 − 2 = 21, a contradiction. Thus any (2, 2)-planar representation of G clusters the vertices of G′ into four 2-clusters as shown in Fig. 12(a).
D Supplement for Proof of Theorem 3 In this section, we complete the proof of Theorem 3 by proving that our constructed graph G is (2, 2)-planar if and only if the corresponding instance Φ of Planar Monotone 3-SAT is a Yes instance. Let Φ be a Yes instance of Planar Monotone 3-SAT, and let A be an assignment function satisfying Φ. We show that the graph G corresponding to Φ is (2, 2)-planar by constructing a (2, 2)-planar representation of G using Φ as a template.
Replace each variable rectangle in Φ with the corresponding vertex of G and draw the variable cycle. We refer to the region defined by the variable cycle and the plus (minus) vertex as the positive side (negative side). For each variable v_i, draw its false literal boundary on the negative side if A(v_i) = True and on the positive side if A(v_i) = False. Fig. 12(b) illustrates a drawing of the variable cycle and false literal boundaries of G_0 according to the assignment of v_2 and v_3 to True and v_1 and v_4 to False. Let l_{j,i} be the literal vertex corresponding to clause C_j and variable v_i. Place l_{j,i} at the point of intersection between the rectangle associated with C_j and the vertical segment connecting the rectangles C_j and v_i.
Connect the three literal vertices of C_j to form a face, and insert closed_j and open_j on the interior, creating one necessary crossing. Insert the tree structure edges, which by construction can be added without creating crossings. Connect literal vertices to variable vertices, which creates a crossing on a false literal boundary precisely when the value assigned to a variable by A does not match the literal. Fig. 13(a) illustrates such a drawing of G_0.
Resolve each crossing at a false literal boundary by clustering the literal vertex with a vertex on the boundary. The specification that each false literal boundary has at least max(p_i, q_i) vertices ensures that this operation can be performed. Because A satisfies F, each clause gadget has at least one literal vertex that can be connected to its variable vertex without crossing a false literal boundary. Cluster this vertex with open_j to resolve each clause gadget crossing. The result of this process is a (2, 2)-planar representation of G as illustrated in Fig. 13(b).
Let G be a Yes instance of (2, 2)-Planarity corresponding to an instance Φ of Planar Monotone 3-SAT. We show that Φ is a Yes instance of Planar Monotone 3-SAT. Let Γ be a (2, 2)-planar representation of G. First, note that any vertices v_1 and v_2 connected by an edge in G must be drawn on the same side of the variable cycle in any (2, 2)-planar representation of G. This follows from Lemma 1, as neither v_1 nor v_2 can be clustered with any K-vertex in the variable cycle. Thus the positive (negative) clause gadgets must all be drawn on the same side of the variable cycle as they are connected by the tree structure to plus (minus) and the variable vertices. We refer to the sides of the cycle with the positive and negative clause gadgets as the positive and negative sides of the cycle. As a consequence of Lemma 1, each false literal boundary is drawn either on the positive or on the negative side of the cycle as well. Define an assignment function A by setting A(v_i) to True (False) if the false literal boundary for v_i is drawn on the positive (negative) side of the vertex cycle in Γ. We claim that at least one literal vertex of each positive (negative) clause gadget is connected in Γ to a variable vertex with A(v_i) set to True (False).
Without loss of generality, consider the case of a positive clause gadget C_j with literals l_{j,1}, l_{j,2}, and l_{j,3} connected to variables v_1, v_2, and v_3. Assume for contradiction that every literal vertex of C_j is connected in Γ to a variable v with A(v) = False, which means that the false literal boundaries of v_1, v_2, and v_3 are drawn on the positive side of the variable cycle. We show that any placement of the K-vertex closed_j creates an edge crossing in Γ, contradicting our assumption.
Suppose first that closed_j is placed outside the false literal boundaries. Then each of v_1, v_2, and v_3 must be clustered with a boundary vertex and the clause gadget does not admit a (2, 2)-planar representation (see Fig. 14(a)). Now suppose that closed_j is drawn inside the false literal boundary of one constituent variable, v_2 for example. In this case, the path (closed_j, l_{j,1}, v_1) intersects two false literal boundaries. Because closed_j and v_2 are K-vertices, only l_{j,2} can be clustered with a false literal boundary vertex and thus this placement creates at least one necessary crossing (see Fig. 14(b)). Likewise, suppose that closed_j is drawn inside the false literal boundaries of two constituent variables, for example, v_1 and v_2. In this case, the path (closed_j, l_{j,3}, v_3) crosses three false literal boundaries and creates a necessary crossing (see Fig. 14(c)). Finally, suppose that closed_j is drawn inside all three false literal boundaries (see Fig. 14(d)). In this case, the path (closed_j, l_{j,1}, v_1) crosses two false literal boundaries and creates a necessary crossing. Thus, regardless of the position of the vertex closed_j in Γ, at least one of the literal vertices of C_j must match the assignment of its associated variable vertex. This concludes the proof of our claim, i.e., that at least one literal vertex l_i of each clause gadget C_j in Γ is connected to a variable v_i with A(v_i) = l_i. Thus A is a satisfying assignment for F, and Φ is a Yes instance of Planar Monotone 3-SAT.
Fig. 3. (a) A kp-connection of two cluster regions R_1 and R_2 (k = 5, p = 3). (b) A cycle of N = 5 clusters; the bold edges highlight the two faces of degree N.
Fig. 4. (a) A planar monotone representation of Φ_0. (b) The variable cycle of G_0 and false literal boundaries.
Fig. 9. (a) A 1-planar graph G. (b) An ISDR of G. For each pair of crossing edges the representative pair is indicated with a dashed line connecting the pair. Vertices shared by different crossing pairs are replicated in each pair. (c) The ce-graph CE(G) of G.
Fig. 10. (a) Two crossing edges e_u and e_v; (b) construction of the cluster region and replacement of e_u and e_v; (c) the resulting drawing.
Fig. 11. (a) A cluster V_i of size k_i = 1, corresponding to a vertex v; (b) removal of v and triangulation; (c) a cluster V_i of size k_i = 2; (d) augmentation of V_i with p · h_i ports and triangulation of the face enlarged by the insertions.
Fig. 12. (a) A (2, 2)-planar representation of a K-vertex v and its associated K₈⁻ subgraph. (b) A drawing of the variable cycle of G_0 with false literal boundaries oriented according to variable assignment.
Fig. 14. Possible placements of the clause vertex closed_j relative to its three corresponding clause boundaries.
2018-07-12T07:47:01.916Z
2018-06-29T00:00:00.000
{ "year": 2018, "sha1": "cf3e3cb48b2e30ac910de802054b076a4ef0aaee", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1806.11413", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1f08b03697011e36ac8dbe3fa0f53bcbf089379f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
13935796
pes2o/s2orc
v3-fos-license
TAT-Mediated Transduction of MafA Protein In Utero Results in Enhanced Pancreatic Insulin Expression and Changes in Islet Morphology Alongside Pdx1 and Beta2/NeuroD, the transcription factor MafA has been shown to be instrumental in the maintenance of the beta cell phenotype. Indeed, a combination of MafA, Pdx1 and Ngn3 (an upstream regulator of Beta2/NeuroD) was recently reported to lead to the effective reprogramming of acinar cells into insulin-producing beta cells. These experiments set the stage for the development of new strategies to address the impairment of glycemic control in diabetic patients. However, the clinical applicability of reprogramming in this context is deemed to be poor due to the need to use viral vehicles for the delivery of the above factors. Here we describe a recombinant transducible version of the MafA protein (TAT-MafA) that penetrates across cell membranes with an efficiency of 100% and binds to the insulin promoter in vitro. When injected in utero into living mouse embryos, TAT-MafA significantly up-regulates target genes and induces enhanced insulin production as well as cytoarchitectural changes consistent with faster islet maturation. As the latest addition to our armamentarium of transducible proteins (which already includes Pdx1 and Ngn3), the purification and characterization of a functional TAT-MafA protein opens the door to prospective therapeutic uses that circumvent the use of viral delivery. To our knowledge, this is also the first report on the use of protein transduction in utero. Introduction Maf proteins belong to a large class of transcription factors originally described as viral oncogenes [1]. They are characterized by the presence of a basic leucine zipper (b-Zip) domain and the ability to bind to DNA MARE (Maf Recognition Elements) either as homodimers or heterodimers with other b-Zip proteins. These transcription factors have been associated with the regulation of multiple differentiation processes, including hematopoiesis, skin and lens development and hind-brain segmentation [2,3]. The best characterized Maf factors expressed in the pancreas are MafA and MafB [4]. Their role in pancreatic development has been difficult to ascertain, especially because their knockout has no overt effects on the specification of the major lineages of the organ [5,6]. However, MafA−/− mice display glucose intolerance and develop age-dependent diabetes [5], and MafB knockouts exhibit some defects in endocrine cell maturation [6]. Since all Maf factors compete for the same MARE sites, their temporal and spatial pattern of expression is likely to affect developmental outcomes. Although MafB has also been recently shown to be essential for the appropriate regulation of Pdx1, Nkx6.1 and GLUT-2 in the final stages of islet β cell maturation [7], recent evidence suggests that a switch from MafB to MafA might be critical for the embryonic maturation and prolonged survival/function of β cells [4]. MafA has been found to selectively bind to the C1 (human)/RIPE3b (rat) element of the insulin gene promoter of β cells [8]. This sequence is of fundamental importance in the regulation of glucose-dependent insulin secretion [9]. While MafA is not a strong transactivator of the insulin promoter by itself, a synergistic action with Pdx1 and NeuroD/Beta2 has been demonstrated [10]. These two other factors are not exclusive to β cells, but this particular combination (Pdx1, NeuroD/Beta2 and MafA) is.
Therefore, it has been hypothesized that the β cell-restricted expression of insulin is dictated by the concerted action of these three factors [11]. Perhaps not surprisingly, their ectopic expression in hepatocytes (which are ontogenically and physiologically related to β cells [12]) resulted in the activation of insulin expression [13]. More recently, a similar combination of genes (Pdx1, MafA and Ngn3, an upstream regulator of NeuroD/Beta2 [14]) also resulted in the in vivo reprogramming of pancreatic exocrine cells into β cells [15]. In addition to the regulation of insulin secretion, MafA may also be involved in other β cell processes by directly regulating genes such as prohormone convertase 1/3 (PC1/3), the glucagon-like peptide 1 receptor (GLP1-R), the glucose transporter GLUT-2, glucokinase, pyruvate carboxylase and the subunits Kir6.2 and SUR1 of potassium channels, as well as the transcription factors Nkx6.1, NeuroD/Beta2 and Pdx1 [16]. From a translational point of view, the emerging role of this gene as a potential therapeutic target for diabetes has not gone unnoticed. MafA is now acknowledged as a key element in most reprogramming strategies for islet β cell neogenesis [15,17,18]. However, as the use of viral vehicles to deliver reprogramming factors is considered unsafe in the context of clinical therapies [19,20], the quest for non-viral alternatives is a timely one. One such alternative is protein transduction, a technology by which membrane-permeable domains (protein transduction domains, or PTDs) enable recombinant proteins to efficiently penetrate into cells. One of the best studied PTDs is the 11-aa peptide derived from the basic domain of the TAT/HIV transactivator protein [21]. TAT is bound by charged heparan sulfate chains of cell membrane proteoglycans, taken up by macropinocytosis and then released to the cytoplasm [22,23]. Due to its ease of engineering and effectiveness, TAT and other synthetic cationic relatives [24] have been used extensively to deliver full-length functional proteins both in vitro and in vivo [21]. Our team has been among the pioneers of protein transduction for pancreatic islet applications [25,26,27,28,29,30,31,32,33]. We have also previously described the generation of TAT-Ngn3, a transducible version of the pro-endocrine transcription factor that choreographs pancreatic endocrine cell specification. The use of this protein on pancreatic progenitor cells in vitro resulted in the targeted stimulation of α and β cell differentiation according to the developmental stage of the cells [34]. Here we report the purification and use of recombinant TAT-MafA in a variety of biological systems, including live developing mouse embryos into which the protein was injected intracardially by ultrasound guidance techniques developed by our team (Nieto and Pastori, unpublished results). The functionality of the protein in the experimental group was evidenced by a significant increase in the expression of MafA target genes (chiefly insulin), as well as by changes in islet cytoarchitecture at birth. These findings are discussed in the context of the potential use of TAT-MafA not only as an important tool for developmental studies, but also as a valuable addition to our reprogramming armamentarium for therapeutic purposes.
TAT-MafA effectively transduces living cells in vitro In order to test whether TAT confers MafA the ability to penetrate across cellular membranes, mouse pancreatic insulinoma cells MIN-6 and β-TC3 were incubated with Alexa Fluor 568-labeled protein (final concentration in culture medium: 1 mM). Hoechst was used as a nuclear counterstain in living cells. Figure 1a shows a time course experiment (2-24 h) using the latter cell line. In agreement with our previous experience with other TAT-fused proteins, virtually 100% of the cells were transduced as early as 2 h after addition of the labeled protein to the medium. Most of the TAT-MafA staining remains in cytoplasmic macropinocytotic vesicles at 24 h [23], but nuclear co-localization of released TAT-MafA was also demonstrated by confocal microscopy (figure 1b). This was particularly evident at later time points. MIN-6 cells displayed similar uptake kinetics, although in this cell line overt transduction could not be seen until 4 h of incubation (data not shown). The stability of the protein over longer incubation periods was not studied. The above observations indicate that TAT-MafA can be efficiently transduced in vitro. Recombinant TAT-MafA binds to the insulin promoter TAT-MafA protein was purified as described in the Methods section. Optimization of the process results in high-yield (>1 mg), high-purity (80-90%) protein. TAT-MafA can be detected as a ~50 kD band by automated gel electrophoresis using an Experion bioanalyzer (Bio-Rad, Hercules, CA) (fig. 2a). The identity of the band was confirmed by Western blot using a rabbit polyclonal anti-MafA antibody (Abcam ab17976) (fig. 2b). In order to determine whether TAT-MafA binds to its appropriate DNA target within the insulin promoter, an electrophoretic mobility shift assay (EMSA) was done as indicated above. The sequence ATGGTCCGGAAATTGCAGCCTCAGCCCCCAGCCATC (−139 to −104 of the insulin promoter) was synthesized at Sigma-Aldrich (St. Louis, MO) for use in hybridization studies. This sequence encompasses the C1 box of the insulin promoter, a highly conserved region to which MafA is known to bind in a selective manner [35]. As mutational analysis has shown that the formation of MafA dimers capable of DNA binding is phosphorylation-dependent [36,37], we used the ERK2 kinase (New England Biolabs, Ipswich, MA) to activate TAT-MafA before the reaction. As shown in fig. 2c, the use of phosphorylated TAT-MafA results in a significant mobility shift. With the exception of unphosphorylated TAT-MafA (which in our hands also exhibited basal DNA-binding activity), no other controls induced any band displacement. TAT-MafA has biological activity in vitro A prediction of effective binding to the insulin promoter in vitro would be an increase in insulin expression in relevant cell substrates. A set of experiments in which mouse insulinoma cells (β-TC3) were cultured for 24 h with the protein showed that purified TAT-MafA increased insulin expression by about 15-fold (data not shown). However, recent transgenic experiments indicate that ectopic expression of MafA too early during pancreatic development arrests the differentiation and proliferation of progenitor cells, possibly by inducing the cyclin kinase inhibitors p27 and p57 [38]. A conclusion of these studies was that MafA may be used to enhance maturation, rather than specification, of β cells from their progenitors. We confirmed this observation by treating explanted e14.5 pancreatic buds with TAT-MafA for 48 h.
Real-time qRT-PCR analysis showed a significant down-regulation of the key β cell markers insulin 1, insulin 2 and GLUT-2 compared to vehicle-treated controls (fig. 2d). This down-regulation did not entail any loss in overall viability. In conclusion, purified TAT-MafA is able to recognize its biological DNA target and induce the expression of the insulin promoter, but has a detrimental effect on pancreatic progenitor cells at an early stage of development (e14.5). These results indicate that TAT-MafA is active and functions in a manner consistent with the predicted biological activity of the native protein. Targeting of the developing pancreas in utero by injection of transducible proteins We decided to test the hypothesis that TAT-MafA has an effect on the maturation of late-stage pancreatic progenitors. However, unlike e14.5 pancreatic buds, late-stage fetal pancreata are large structures whose culture poses significant challenges in terms of appropriate oxygen and nutrient diffusion. Also, despite the known ability of TAT-fused proteins to go across relatively thick structures (such as isolated islets), in our experience their penetration is rather limited beyond 300-400 microns in vitro. Because of the above considerations, the observation that we could not detect any significant difference at the gene expression level between TAT-MafA-supplemented and control e17.5 cultured pancreata was not surprising (Figure S1). However, the development of a novel intrauterine injection technique afforded us the possibility to study the role of TAT-MafA in a much more relevant in vivo model. In essence, using an ultrasound-guided approach, protein can be injected directly into the heart of embryos, from where it is distributed to all tissues, including the pancreas. This system offers a critical advantage over the in vitro setting, as the effects of the treatment can be observed without disturbing the native environment of the developing organ. Such an advantage is especially significant for the study of late-stage pancreatic progenitors because, as mentioned above, the large size of the pancreas at this time makes it difficult to establish viable organotypic cultures. This technology could also potentially be used in the context of human pre-natal therapy for a variety of conditions requiring in utero interventions (see Discussion). We used a labeled TAT-fused version of β-galactosidase (TAT-βgal) to establish the technique. Thus, Alexa Fluor 568-labeled TAT-βgal was injected directly into the heart of e17.5 embryos as described in Methods. Viability remains high and most embryos go to term normally (data not shown). In these preliminary experiments, the embryos were retrieved 4 h after the injection and their pancreata examined by multi-photon confocal microscopy. Figure S2 shows that the pervasion of TAT-βgal throughout the majority of the organ is quite evident. Next, we repeated these experiments with our protein of interest (TAT-MafA), also labeled in the same fashion. As shown in figure 3, TAT-MafA accumulates effectively both in the liver and the pancreas, but very little in other peripheral organs such as the brain, the kidney or the heart. We therefore demonstrate that intracardial injection allows for the relatively selective mobilization of the protein to the hepatopancreatic region.
Injection of TAT-MafA to developing embryos in vivo induces changes in pancreatic development In order to test the above hypothesis, we injected e17.5 embryos with purified TAT-MafA and allowed them to go to term. Controls were injected with the same vehicle used for the protein. All the embryos retrieved from each mother (4 females/group, 7-8 embryos/female) were pooled and considered an individual n for statistical analyses. This sample size (n = 4) was determined to have adequate power to detect statistical differences with an alpha error level of 5% and beta error level of 50%. Upon birth, pups were sacrificed and their pancreata collected for further analysis. No significant differences could be observed either in the size or the gross anatomy of the embryos from each group. The pancreata were similar at the macroscopic level. Tissue samples were taken for qRT-PCR, insulin extraction and immunofluorescence (IF). As shown in figure 4a, the level of expression of all the pancreatic markers analyzed by quantitative real-time RT-PCR was elevated in the experimental group vs. the control. This increase was statistically significant for the endocrine cell markers glucagon (P = 0.0254), insulin 1 (P = 0.0043) and insulin 2 (P = 0.0363). The pancreatic content of insulin in the TAT-MafA-treated embryos doubled that found in controls, as measured by ELISA (n = 4; P = 0.0009) (figure 4b). IF analysis confirmed some of these findings. Islets from TAT-MafA-treated embryos are rounded and compact, with bright insulin and glucagon staining (figure 5b and d). Control pancreata, in contrast, exhibit smaller, more ragged and less organized islets, with weaker insulin and glucagon signal (figure 5a and c). No differences in proliferation were noted (data not shown). Immunostaining for prohormone convertase 1 (PC1/ 3, encoded by the Pcsk1 gene), an enzyme critically involved in the biosynthesis of insulin, appears to be similar in both groups despite a trend towards higher gene expression levels in the experimental group ( figure 5c and d). However, staining for the glucose transporter 2 (Glut-2) could be observed in TAT-MafA-treated pancreata co-localizing with hormone-positive cells (figure 5f) but it was virtually undetectable in the control group (figure 5e). This is in agreement with the trends observed in the gene expression analyses. Discussion We present here a novel combination of two techniques (protein transduction and in utero intracardial delivery) with a great potential both for the design of basic developmental research and human therapy. We have effectively purified and demonstrated the biological function of TAT-MafA, a transducible version of a protein that in recent years has garnered significant attention in the field of pancreatic development and reprogramming. The flexibility afforded by TAT-MafA for the design of basic developmental studies is difficult to match by conventional transgenic strategies. Even with the caveat that there is a potential for off-target effects (although accumulation is largely hepatopancreatic, the protein will be delivered to virtually every tissue), the route herein described opens new avenues of research that may complement, and even occasionally replace, experimental designs based on the generation of transgenic animals. Our findings on the effect of MafA on the e17.5 developing pancreas, for instance, would have otherwise required the cumbersome generation of transgenic mice with an inducible promoter. 
Our approach, in contrast, entailed only the ultrasound-guided intracardial injection of TAT-MafA at the desired developmental stage. Of note, as we have been able to inject embryos as small as e10.5 without significant loss of viability (Nieto et al, Nieto and Pastori, unpublished results), the applicability range of this technique spans organogenesis almost in its entirety. Our observations clearly set the stage for more in-depth studies about the biological role of TAT-MafA in b cell specification. An obvious development of the present work, for instance, would be to examine functional parameters of islets from TAT-MafA-treated animals. However, since our primary objective was to demonstrate that TAT-MafA would work as predicted in the context of a novel delivery approach, such experiments were deemed to be beyond the scope of this study. Having said this, our findings are not only consistent with what is known about the function of the native gene, but also unveil some potential new functions that warrant additional study. Particularly intriguing is the observation that MafA may be associated with the process of islet coalescence and cytoarchitecture reshaping. It is known that neonatal islets typically exhibit a disorganized morphology, with less defined contours than those of adult origin [39]. While our controls adhere to that pattern, TAT-MafA injected embryos have larger, rounder and more adult-like islets. The molecular mechanism behind this phenomenon remains unclear, but it has been proposed that the acquisition of glucose-responsive insulin secretion requires an extensive remodeling of the islet cytoarchitecture affecting crucial regulatory events such as paracrine and cell-to-cell interactions [39]. At the molecular level, it has been shown that the maturation of b cells is accompanied by up-regulation of the expression of tight and adherens junction-associated proteins in islet cells [40], with quantifiable changes in the pattern of connexins, gap junction membrane density and coupling changes [41]. In this context, a simple explanation for our results would be that MafA would contribute to a faster islet remodeling just by accelerating b cell maturation, which is in fact its best-known function [4,38]. Our own observation that functional b cell genes such as PC1/3, Glut-2 or GK are up-regulated upon TAT-MafA treatment would support this conclusion. In addition to its well-documented role in the maturation of b cells, MafA has achieved -somewhat unexpectedly-notoriety as a key component of a triad of transcription factors that also include the better-known master regulators Pdx1 and Ngn3. The joint ectopic expression of these three factors in pancreatic acinar tissue results in their permanent reprogramming to insulin-producing b cells that could be used for the treatment of type 1 diabetes [15]. Although the clinical implications of such biotechnological feat were dampened by the need to use adenoviral vehicles, our findings timely complement the already reported generation of transducible Pdx1 [42] and Ngn3 [34] over the past few years. This opens the door to the controlled in vitro reprogramming of pancreatic exocrine tissue by means of protein transduction, a technique that completely circumvents the safety concerns posed by the use of viruses. The amount of tissue that is now routinely discarded after each clinical islet isolation (80-90% of which is arguably exocrine) would represent an invaluable potential source of insulin-producing cells. 
The sheer numbers of cells that could be cultured from such discards would make expansion almost unnecessary. Our work also presents another significant innovation for the conduct of developmental biology studies, namely the in vivo intracardial injection of an agent suspected of having an effect on fetal ontogeny. Although there are previous reports describing in utero and fetal intracardial injection [43,44], this is the first report on the use of bioactive transducible proteins. From a therapeutic perspective, the refinement of intra-cardial delivery techniques may also present distinct advantages over umbilical cord injection (typically hindered by poor placental transfer [45]) for a variety of potentially life-threatening prenatal conditions such as fetal cardiac arrhythmia [45], congenital adrenal hyperplasia [46] or fetal hemolytic disease [47,48]. Protein subcloning and purification The gene sequence of human MafA was optimized for expression in E. coli by Genscript (Piscataway, NJ) and then cloned into the multicloning site of the pTAT-2.1 (pET28) expression vector (kindly given by S. Dowdy, UCSD) using SacI and HindIII sites. BL21 E. coli bacteria were transformed with the plasmid, which also contains a 6xHis tag for affinity purification. Bacteria were cultured in LB medium with kanamycin (50 µg/ml) at 37°C to OD600 = 0.8. Protein expression was induced with 1 mM IPTG (isopropyl β-D-thiogalactopyranoside) for 4 hours. After induction, the cells were harvested by centrifugation at 5,000 g for 15 min and washed with 1× PBS. The pellet was frozen at −80°C and subsequently resuspended in urea buffer (6 M urea, 20 mM Hepes, 500 mM NaCl, 5 mM imidazole, pH 7.14) plus protease inhibitor (1 mM), lysozyme and benzonase. After incubation on ice (40 min), the pellet was sonicated (21 s pulses with 1 min between each pulse) and then centrifuged for 25 min at 12,000 g. Solubilized protein was purified by affinity chromatography on a 1 ml His-Trap column under denaturing conditions and eluted in non-denaturing buffer. Chromatography was performed in three consecutive steps using a GE Healthcare ÄKTA purifier system (Waukesha, WI). First step: starting buffer (6 M urea, 20 mM Hepes, 500 mM NaCl, 5 mM imidazole, pH 7.14), end buffer (6 M urea, 20 mM Hepes, 500 mM NaCl, 100 mM imidazole, pH 7.14). Second step: starting buffer (6 M urea, 20 mM Hepes, 500 mM NaCl, 100 mM imidazole, pH 7.14), end buffer (20 mM Hepes, 500 mM NaCl, 100 mM imidazole, pH 7.14). Third step: starting buffer (20 mM Hepes, 500 mM NaCl, 100 mM imidazole, pH 7.14), elution buffer (20 mM Hepes, 250 mM NaCl, 750 mM imidazole, pH 7.14). Electrophoretic mobility shift assay (EMSA) To study the DNA-binding specificity of TAT-MafA, we used the Pierce LightShift Chemiluminescent EMSA Kit (Cat. 20148). DNA oligos were custom ordered from Sigma-Aldrich (St. Louis, MO). Hybridization of complementary DNA chains was carried out according to the thermocycler method in Technical Tip #45 from Thermo Scientific. To determine the proper concentration of labeled DNA, the 1 pM stock of biotin-labeled DNA was diluted to various concentrations, 20 µl of which were run in a Lonza PAGEr Gold Precast Gel (Cat. 58525) at 100 V until the dye had migrated 2/3 of the way down the gel. The gel was then transferred to a nylon membrane and biotin-labeled DNA was detected by chemiluminescence. A dilution of 200 fmol/ml was deemed optimal for imaging. Unlabeled DNA was used as a competitor at a concentration of 10 pmol/ml for the entire procedure.
TAT-MafA phosphorylation was carried out by treatment with ERK2 kinase [49] (New England Biolabs, Ipswich, MA) according to the manufacturer's instructions. Protein was treated with kinase or PBS (control). In addition, kinase alone, where PBS was used instead of TAT-MafA protein, was also included as a control. These samples were subsequently used for preparation of the three main experimental EMSA binding reactions (PBS-treated TAT-MafA; ERK2 kinase-treated TAT-MafA; ERK2 kinase alone). 82.15 µg of TAT-MafA protein were used. Incubation with the kinase was done at 37°C for 1 h. During the incubation, the binding reactions were prepared from the Pierce EMSA Kit along with the three kit control reactions. Phosphorylation reaction products were then subjected to an additional 20-minute incubation for protein-DNA binding. 5 µl of loading buffer were added and samples were run until the dye front had migrated approximately two-thirds of the total length of the gel. Following transfer to a nylon membrane, the samples were crosslinked and analyzed by chemiluminescence. In vitro culture of embryonic pancreatic buds and intrauterine embryo injection All methods herein described involving the use of animals have been approved by the University of Miami IACUC. C57BL/6 mice (20-25 g body weight) were used for all animal experiments. Mating of the animals was set up later in the day, and the following day the females were checked for vaginal plugs. Noon of that day was considered to be gestational time point e0.5. Pancreatic buds at e14.5 were explanted and cultured as described in [34]. For intrauterine embryo injection, pregnancy was confirmed by ultrasound imaging (Vevo 770, VisualSonics, Toronto, ON, Canada) at e17.5. At this time, pregnant females were anesthetized by inhalation of 2% isoflurane and placed on a preheated platform in the supine position with ECG electrodes taped to their legs to monitor the heart rate. Body temperature was maintained with the heating pad between 36°C and 38°C. Hair was removed from the abdomen using a chemical hair remover (Nair, Carter-Horner, Mississauga, ON, Canada). To provide a coupling medium for the transducer, a prewarmed ultrasound gel (Allegiance; Cardinal Health, McGaw Park, Illinois) was spread over the abdominal wall. For TAT-β-galactosidase injection experiments, protein (2 mg/ml) was labeled with an Alexa Fluor 568 labeling kit (Invitrogen, Carlsbad, CA). Each visualized e17.5 embryo was injected intracardially using a 500 µL syringe (Hamilton Company, catalogue No. 81242, Reno, NV). The needle was guided using an ultrasound bio-microscope with B-mode imaging (Vevo 770, VisualSonics, Toronto, ON, Canada). The total volume of protein for each embryo was 10 µl. The mothers were sacrificed 4 h later and the pancreata of the embryos microdissected and examined using a multi-photon confocal microscope (Leica) to study the distribution of labeled TAT-β-galactosidase. For TAT-MafA experiments, seven to eight embryos per animal were injected with TAT-MafA (2.1 µg/ml) or protein vehicle (control). TAT alone was not used in controls because we and others have previously established that the TAT peptide is inert from a biological point of view [26,50]. Pancreata were isolated for analysis immediately after birth (about e19.5-20.5). Quantitative Real-Time PCR Total RNA was purified using Qiagen kits (QIAShredder, RNeasy and DNase-free). The First-Strand system (Roche, Basel, Switzerland) was used to generate cDNA (random oligomers).
Relative gene expression was calculated using TaqMan assays in either 7500 Fast, StepOne Plus or 7900 Real-Time PCR cyclers (Applied Biosystems, Life Technologies, Carlsbad, CA). The latter was used to run custom-made TaqMan Low Density Array (TLDA) cards, which allow the simultaneous qRT-PCR analysis of up to 384 samples. The ΔCt method for relative quantification was deemed optimal for our application after discussion with Applied Biosystems researchers. All assays are designed to span exon-exon junctions, thus eliminating the possibility of genomic DNA contamination. qRT-PCR results are the average of several independent experiments. In addition, in each experiment each marker was analyzed in triplicate. Gene expression was normalized against 18S rRNA. This endogenous control has been validated in our system and proven extremely stable and more accurate than other standards. Total insulin extraction Pancreata were lysed by ultrafreezing in the presence of 180 µl of T-PER (Thermo Scientific) and 20 µl of anti-protease (Roche), followed by physical maceration. Insulin was quantified by the LincoPlex endocrine panel kit (Linco-Invitrogen) using a Bioplex platform (Bio-Rad Laboratories, Inc.). Statistical analyses Results are expressed as mean ± standard deviation (SD). The statistical significance of differences was assessed by the two-tailed Student's t test. In all comparisons, a value of P < 0.05 was considered statistically significant. Sample size was determined by power analysis (5% alpha error level, 50% beta error level).
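The ΔCt normalization and the two-tailed t test described above can be made concrete with a short sketch. This is a minimal illustration rather than the authors' actual analysis pipeline: the Ct values, the marker, and the group labels below are hypothetical placeholders, and only the structure of the calculation (normalization to 18S rRNA, relative expression as 2^(-ΔCt), two-tailed Student's t test at P < 0.05) follows the text.

```python
# Hedged sketch of the relative-quantification and statistics workflow described
# above: the delta-Ct method normalized to 18S rRNA, followed by a two-tailed
# Student's t test. All Ct values are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

# Hypothetical Ct values for one marker in n = 4 pooled litters per group;
# each entry would itself be the mean of technical triplicates.
ct_target_control = np.array([24.8, 25.1, 24.6, 25.0])
ct_target_treated = np.array([23.2, 23.5, 23.0, 23.4])
ct_18s_control = np.array([11.9, 12.1, 11.8, 12.0])
ct_18s_treated = np.array([12.0, 11.9, 12.1, 12.0])

# Delta-Ct: target Ct minus endogenous-control Ct; relative expression = 2^(-dCt).
rel_control = 2.0 ** -(ct_target_control - ct_18s_control)
rel_treated = 2.0 ** -(ct_target_treated - ct_18s_treated)

print(f"control: {rel_control.mean():.4g} ± {rel_control.std(ddof=1):.2g}")
print(f"treated: {rel_treated.mean():.4g} ± {rel_treated.std(ddof=1):.2g}")
print(f"fold change (treated/control): {rel_treated.mean() / rel_control.mean():.2f}")

# Two-tailed Student's t test with the P < 0.05 threshold used in the text.
t_stat, p_value = stats.ttest_ind(rel_treated, rel_control)
print(f"t = {t_stat:.2f}, P = {p_value:.4g}")
```

The same stack could, in principle, also be used to examine the stated power analysis (for example, statsmodels' TTestIndPower can report which effect size is detectable with n = 4 per group at a 5% alpha level and 50% power), although the exact assumptions behind the authors' calculation are not given in the text.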
Intraosseous Cavernous Hemangioma of the Middle Turbinate: A Case Report Background: We report a case of an intraosseous cavernous hemangioma originating from the middle turbinate that expanded into the anterior skull base, without traversing the cribriform plate. Methods: The mass was found incidentally after a computed tomography head was ordered for unrelated reasons. On questioning, the patient denied any nasal symptoms. Magnetic resonance imaging showed an enhancing mass and the radiological imaging supported a broad differential. Results: The lesion was removed by endoscopic image-guided surgery, and the pathology was that of a benign intraosseous cavernous hemangioma. There was no residual hemangioma on postoperative imaging and the nasal mucosa healed well. This is the first report of an intraosseous cavernous hemangioma of the middle turbinate showing superior expansion to the anterior skull base. Conclusion: This case demonstrates the extent to which cavernous hemangiomas may expand into surrounding tissues. While these lesions are uncommon, they can be considered as part of a broad differential diagnosis of sinonasal tumors. Introduction Vasoformative tumors are a broad spectrum of neoplasms that can be benign or malignant. Hemangiomas are benign localized tumors originating from vascular endothelium. 1 These are further subdivided by the size of the proliferating vascular spaces from smallest to largest: capillary, cavernous, and cellular. The pathologic appearance of cavernous hemangiomas has been described as large, dilated, blood filled spaces with a flattened endothelial lining. 2 Pathologic diagnosis of these lesions is important in order to rule out malignant vasoformative tumors with similar features such as angiosarcoma or spindle cell hemangioma. Sinonasal hemangiomas are uncommon. They are estimated to comprise 12.5% of all head and neck hemangiomas and about one-third of these are cavernous hemangiomas. 3 Here, we present the first case of an intraosseous hemangioma of the middle turbinate with extensive expansion superiorly to the anterior skull base. Case The patient provided informed written consent to use their personal information. Ethics approval was obtained from the Nova Scotia Health Authority Research Ethics Board. A 53-year-old female was referred to the Otolaryngology-Head and Neck Surgery clinic for a large hyperdense nasal mass found incidentally on computed tomography scan following blunt head trauma due to a fall. The mass filled the left nasal cavity, extending superiorly to the left cribriform plate. The patient denied nasal obstruction, discoloration of nasal discharge, or epistaxis. They also denied headache, facial pressure, or visual changes. She is known to have significant loss of vision since childhood in her right eye following a traumatic injury. Anterior rhinoscopy revealed a mass obstructing the left nasal passage. Nasal endoscopy revealed a smooth mass obscuring or encompassing the middle turbinate. Her physical examination was otherwise unremarkable. To rule out invasion of the skull base, the patient was scheduled for a magnetic resonance imaging (MRI) with gadolinium enhancement. The mass was isointense to skeletal muscle on T1-weighted imaging and hyperintense on T2-weighted imaging. The mass showed intense enhancement with gadolinium ( Figure 1). The superior aspect of the mass contacted the cribriform plate on the left, but there was no evidence of dural involvement. 
The differential diagnosis on the MRI report included inverting papilloma and carcinoma; our concern was that this could also be an esthesioneuroblastoma or a hemangiopericytoma. Because of concern that the mass may be highly vascular (by its marked enhancement on MRI) as well as its close contact to the cribriform plate, the patient was consented to an endoscopic image-guided excision with possible repair of cerebrospinal fluid (CSF) leak if the dura were breached. Intraoperatively, a markedly enlarged left middle turbinate was visualized endoscopically when the patient was placed under general anesthesia (Supplementary Figure 1). The mass was extending superiorly flush with the cribriform plate but not invading or deforming it. The mass was excised by sequential superior, posterior, medial, and lateral incisions. The superior attachment was cauterized then incised leaving a few millimeters below the skull base. No CSF leak was encountered intraoperatively. The mass was freed after ensuring hemostasis by cauterizing its vascular posterior pedicle near the sphenopalatine foramen. A left ethmoidectomy was necessary in order to reach around the markedly expanded turbinate. There was minimal blood loss. The remainder of the expanded turbinate was then crushed and removed piecemeal as it was not possible to deliver it intact through the left nostril. Uncinectomy was done to expose the maxillary sinus ostium. Histopathologic examination showed multiple fragments of tissue with an extensive proliferation of thin-walled vascular structures and underlying thin fibrous septa, surrounding trabecular bone, in keeping with intraosseous cavernous hemangioma ( Figure 2). There was neither endothelial cell atypia nor mitotic activity to suggest a malignant process. Some overlying intact nasal mucosa was identified. The patient recovered from surgery with no complications, only nasal obstruction that resolved. She was relieved to learn that the lesion was benign and that it was completely excised. Upon follow-up, the middle turbinate stump completely healed with no evidence of scabbing or scarring. Discussion Previous cases have been reported of cavernous hemangiomas originating from the inferior turbinate, 4-7 middle turbinate, [8][9][10][11][12] maxillary sinus, 13 and nasal septum. 14,15 Small case series reveal that the nasal septum and inferior turbinate may be the most common sites of origin. 3,16 These sinonasal lesions are unified by presentation with nonspecific symptoms, namely epistaxis or nasal obstruction and possibly by association with trauma. 8,17 The cavernous hemangiomas previously reported as originating from the middle turbinate were localized producing septal deviation and remodeling of the maxillary antrum. Our case is the first showing extensive superior expansion, reaching the anterior skull base. Esthesioneuroblastoma and hemangiopericytoma were considered in the differential of this mass. The proximity of the tumor to the cribriform plate supported the presumptive diagnosis of esthesioneuroblastoma. These tumors arise from the olfactory epithelium and typically present with nasal obstruction. 18 Additionally, the significant vascularity of this tumor in this case was consistent with a diagnosis of hemangiopericytoma which are typically slow-growing and exhibit few symptoms. 19 We recognize that preoperative palpation of the mass in clinic may have been beneficial in showing its bony consistency. 
This was avoided, however, for fear of causing significant bleeding considering the enhancement of the mass on MRI. 20 Conclusion This case represents a unique presentation of a cavernous hemangioma of the middle turbinate that reached the anterior skull base. While sinonasal cavernous hemangiomas are uncommon, they can be considered part of a broad differential for sinonasal masses. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research, authorship, and/or publication of this article. Supplemental Material Supplemental material for this article is available online.
Bone Health in Patients with Multiple Sclerosis Multiple sclerosis (MS) is a gait disorder characterized by acute episodes of neurological defects leading to progressive disability. Patients with MS have multiple risk factors for osteoporotic fractures, such as progressive immobilization, long-term glucocorticoids (GCs) treatment or vitamin D deficiency. The duration of motor disability appears to be a major contributor to the reduction of bone strength. The long term immobilization causes a marked imbalance between bone formation and resorption with depressed bone formation and a marked disruption of mechanosensory network of tightly connected osteocytes due to increase of osteocyte apoptosis. Patients with higher level of disability have also higher risk of falls that combined with a bone loss increases the frequency of bone fractures. There are currently no recommendations how to best prevent and treat osteoporosis in patients with MS. However, devastating effect of immobilization on the skeleton in patients with MS underscores the importance of adequate mechanical stimuli for maintaining the bone structure and its mechanical competence. The physical as well as pharmacological interventions which can counteract the bone remodeling imbalance, particularly osteocyte apoptosis, will be promising for prevention and treatment of osteoporosis in patients with MS. Introduction Osteoporosis is a condition of impaired bone strength which leads to increased risk of fracture [1]. The enhanced bone fragility reflects the integration of the amount of bone (bone mass) and bone quality. Bone quality depends on its macroand microarchitecture and on the intrinsic properties of the materials that comprise it (e.g., matrix mineralization, microdamage accumulation, or collagen quality) [2]. Bone is continually adapting to changes in its mechanical and hormonal environment via the process of bone remodeling. Bone remodeling maintains bone structure and its mechanical competence by removing damaged bone and replacing it with new bone and thus restores bone's material composition, micro-, and macroarchitecture. This process depends on the normal production, work and lifespan of osteoclasts, osteoblasts, and osteocytes. Thus, diseases and drugs that have an impact on bone cells and bone remodeling will influence bone's structure and its resistance to fracture [3]. Multiple sclerosis (MS) is a chronic progressive disease affecting the myelin sheath covering of nerve fibers in the brain and spinal cord, leading to functional impairments such as visual impairment, abnormal walking mechanics, poor balance, muscle weakness, fatigue, and progressive immobilization [4]. The resultant functional impairments lead to frequently falls [5]. The disease affects mainly young adults (20 to 40 years) and its incidence is more frequent in women (approximately 2 : 1) [6]. Its prevalence ranges from 2 to 150 patients at 100 000 [7]. Impaired mobility or lack of weight-bearing physical activity reduced mechanical stress on bone, which causes a marked imbalance in bone remodeling with a disruption of osteocytes network [8]. Management of MS requires long-term disease-modifying therapy, such as glucocorticoids (GCs) with a further negative effect on bone remodeling and bone strength. Secondary osteoporosis may develop and low-trauma fractures occurring in patients with MS more frequently than in healthy controls [9][10][11][12][13][14][15]. 
Fractures and their sequelae can have important personal as well as (economic) implications for society. Therefore, the attention on the issue of bone health among patients with MS is warranted. This paper examines the underlying pathogenic mechanisms of osteoporosis in patients with MS as well as its management. Understanding the causes associated with decreased bone strength in patients with MS will help in the optimal therapeutic intervention. Prevalence of Osteoporosis in Patients with Multiple Sclerosis The analysis of a registry of 9029 patients with MS in the USA found that 27.2% responders reported low bone mass, and more than 15% of responders reported a history of fracture [9]. Most studies in patients with MS evaluated BMD in comparison with the control group of healthy subjects and showed significantly lower BMD in patients with MS than in controls [11][12][13][14][15][16]. Several of these studies were shown that vertebral BMD is affected to a lesser degree than femoral BMD [11,12,16]. The low BMD in MS patients involve both sexes [15]. Interestingly, one study in men with MS reported low BMD (osteoporosis) in 37.5% (15 out of 40) and 21% (8 out of 38 patients) had vertebral, rib, or extremities fractures [15]. Patients with progressive forms of MS showed a more severe loss of BMD than those with relapsing-remitting MS [12]. Fracture incidence in patients with MS evaluated only few studies. Cosman group found fracture rates of 22% in patients with MS compared with 2% in controls [11]. It remains unexplained whether all patients with MS are more susceptible to osteoporosis and fractures; for example, there is evidence that patients with a low expanded disability status scale (EDSS) score did not show any significant difference in BMD with comparison with healthy control subjects [17,18]. Therefore, further elucidation is needed to qualify which risk factors are most responsible for a bone loss in patients with MS. Pathogenic Mechanisms Bone remodeling is under way throughout life and maintains bone strength by removing damaged bone and replacing it with new bone and thus restores bone's micro-and macroarchitecture. This process depends on the normal production, work, and lifespan of osteoclasts, osteoblasts, and osteocytes [19]. Chronic diseases, such as MS, may significantly disturb the process of bone modeling and remodeling with resulting bone loss, deterioration of bone's quality and increased frequency of fractures [20,21]. Secondary osteoporosis and low-trauma fractures occur in patients with MS more frequently than in healthy controls [9,11]. The underlying pathogenic mechanisms of the osteoporosis in patients with MS are probably based on the progressive immobilization, long-term GCs treatment, vitamin D deficiency, skeletal muscle atrophy and possibly on the presence of various cytokines involved in the pathogenesis of MS [10]. In addition, chronic use of other drugs, such as antidepressants may contribute to the development of osteoporosis and fractures [22]. The functional impairments also leads to an increased risk of falling that, combined with bone loss and impaired quality of bone mass, can increase the frequency of bone fracture in individuals with MS [11]. Disability. Mechanical loading is an important factor controlling bone mass. 
Increased bone loss in immobilized subjects is well-recognized complication in patients after spinal cord injury with tetraplegia [23,24], in bedridden patients, or in astronauts [25], whereas localized bone loss is well documented in patients with regional disuse, for example, after fracture itself. Immobilization causes an overall progressive bone loss at a similar rate to osteoporosis caused by estrogen deficiency, but at the same amount of induced bone loss, disuse led to more deteriorated bone structure and mechanical properties than estrogen deficiency [26]. The available studies showed that cortical thinning and substantial decline of trabecular bone density account for increased bone fragility [27][28][29]. The duration and degree of motor disability appears to be a major contributor to the pathogenesis of secondary osteoporosis in patients with MS. The degree of disability measured by the Kurtzke EDSS score significantly correlated with BMD in patients with MS [11]. Specifically, site-specific effects of motor disability were documented in MS patients, and EDSS correlated mainly with BMD in the hip but not in the lumbar spine [14]. In wheelchair-bound patients, an atrophy of hip muscles affects proximal femur, while BMD of lumbar spine is not decreased because of its adequate mechanical stimulation by the trunk and back muscles in the upright position. Similarly, patients with spinal cord injury lose BMD mainly at femoral sites [30]. Also, hemiplegic patients showed a significant loss of BMD in both trabecular and cortical bone at the forearm and at the neck and great trochanter on the paretic hip [31]. Higher total body bone mineral content was documented in ambulatory patients (EDSS score ≤ 6.5) compared with nonambulatory patients (EDSS score ≥ 7.0) [12,13]. Also, higher prevalence of osteoporosis was found in nonambulatory patients [32]. In male patients, a positive correlation has been observed between BMD and both EDSS score (correlation with femoral and also vertebral BMD) and BMI (correlation with femoral BMD only). There was shown that also EDSS score and BMI two years prior to the study could be used as future indicators of low BMD [15]. A reduced mechanical stress on bone causes a marked imbalance in bone remodeling with a transient increase in bone resorption (which occurs initially) and a decrease in bone formation (which is sustained for a longer duration) [25,33,34] (Table 1). The mechanism causing this decrease in bone formation probably lies in the reduction of mechanical stress during immobilization which results in a marked disruption of osteocytes network due to increase of osteocyte apoptosis. Osteocytes represent 95% of all bone cells and form a mechanosensory system which is based on a threedimensional network of tightly interconnected osteocytes entombed in mineralized bone matrix [35]. Disruption of this system affects probably several aspects of bone homeostatic system, such as mechanosensitivity, mechanotransduction, and basic multicellular units responsible for bone remodeling [36]. The immobilization-induced osteocyte apoptosis is followed by osteoclastogenesis and increased bone resorption [37]. While molecular mechanisms of disuse osteoporosis are not well understood, recent evidence found that mechanical unloading caused upregulation of Sost gene in osteocytes and increased levels of sclerostin (product of Sost gene) [38]. 
Sclerostin is responsible for the inhibition of Wnt/beta-catenin signaling in vivo and for the suppressed viability of osteoblasts and osteocytes. Interestingly, sclerostin-deficient mice (Sost −/−) were resistant to mechanical unloading-induced bone loss [38]. (Table 1 summarizes the skeletal parameters affected by immobilization: cortical thickness, cortical density, and trabecular density; the anatomic location and function of the bone in the skeleton account for the magnitude of the skeletal response to immobilization.) Importantly, the administration of a sclerostin neutralizing antibody in an experimental model of immobilization resulted in a dramatic increase in bone formation and a decrease in bone resorption that led to increased trabecular and cortical bone mass [39]. Osteocytes are also necessary for targeted bone remodeling to avoid microdamage accumulation, which could lead to whole-bone failure. Recently, Waldorff et al. showed that osteocyte apoptosis may be insufficient for repair of microdamage without the stimulation provided through physiologic loading [40]. MS affects a wide range of neurological functions, and most patients with MS have abnormal muscle strength and impaired balance and gait control, which leads to frequent falls [5,41] that, combined with bone loss, increase the frequency of bone fractures. Imbalance is also often the initial symptom of MS. The pathogenesis is not completely understood yet. It was demonstrated that changes in postural control in most patients with MS are probably the result of slowed afferent proprioceptive conduction in the spinal cord [5]. Disuse, inflammatory changes, as well as GCs treatment or vitamin D deficiency, may also contribute to weakness and loss of muscle strength and thus to frequent falling. Glucocorticoids (GCs). GCs are frequently used to control MS relapses. Oral GCs treatment in patients with MS may increase the risk of osteoporosis. Epidemiological studies showed that fracture risk increases rapidly after starting oral GCs treatment and is related to the dose and duration of GCs exposure [42]. Doses as low as 2.5-5 mg of prednisolone equivalents per day can be associated with a 2.5-fold increase in vertebral fractures, and the risk is greater with higher doses used for prolonged periods [43]. Bone loss due to GCs treatment is steep during the first 12 months and more gradual but continuous in subsequent years. However, the fracture risk returns towards baseline levels after discontinuation of oral GCs treatment [44]. The mechanism of osteoporosis in patients on GC treatment is complex [45] (Table 2). However, the contribution of other risk factors, such as vitamin D insufficiency and physical disability, confounds the assessment of GCs effects on bone in patients with MS. Repeated pulses of high-dose methylprednisolone in MS patients did not result in a subsequent decrease in BMD [18]; however, the risk of osteoporotic fractures remains slightly increased in patients undergoing cyclic GCs treatment at high doses [47]. High-dose, short-term intravenous GC regimens cause an immediate and persistent decrease in bone formation and a rapid and transient increase in bone resorption [48]. In fact, GCs may increase proresorptive IL-6 signaling as well as increase the expression of receptor activator of NF-κB ligand (RANKL) and decrease the expression of its soluble decoy receptor, osteoprotegerin (OPG), in stromal and osteoblastic cells [49]. Moreover, GCs may directly decrease apoptosis of mature osteoclasts [50].
However, discontinuation of such regimens is followed by a high bone turnover phase [48]. In physically active patients with MS treated with low-dose steroids, the bone turnover markers were not different from controls [51]. Addressing the question of whether duration of low-dose GCs use in combination with other immunomodulators in patients with MS increases risk of osteoporosis requires further prospective study by taking into account other risk factors, particularly the level of disability. The Effect of Other Immunomodulatory Drugs. Although no harm effect of low-dose methotrexate was observed in patients with MS, several case reports have described associations between pathological nonvertebral fractures and low-dose methotrexate (MTX) in rheumatoid arthritis (RA) patients [52]. In addition, methotrexate osteopathy, characterized by pain, osteoporosis, and microfractures, has been very rarely observed in patients with low-dose MTX treatment [53]. Other immune-modifying drugs, such as interferon-beta or azathioprine, which are used in conjunction with GCs have not been shown to promote bone loss experimentally or clinically. On the contrary, interferonbeta may have favorable effect on bone metabolism in patients with MS [54], probably due to the inhibitory effect of interferon-beta on osteoclasts development [55]. Experimentally, also treatment with the S1P(1) agonist FTY720, a new and promising drug for the treatment of MS, relieved ovariectomy-induced osteoporosis in mice by reducing the number of mature osteoclasts attached to the bone surface [56]. However, further investigation with regard to their effects on bone health is needed. Vitamin D Insufficiency. The role of vitamin D in bone homeostasis is well understood, and the use of vitamin D to prevent and treat osteoporosis was recently reviewed [57]. There is also evidence from both observational studies and clinical trials that hypovitaminosis D are predisposing conditions for various common chronic diseases. In addition to skeletal disorders, vitamin D deficiency is associated with increase the risk of malignancies, particularly of colon, breast, and prostate gland cancer, of chronic inflammatory and autoimmune diseases (e.g., insulin-dependent diabetes mellitus, inflammatory bowel disease, or multiple sclerosis), as well as of metabolic disorders (metabolic syndrome and hypertension) [58]. Vitamin D intake, decreasing latitude, increased sun exposure, and high serum vitamin D levels have all been shown to be associated with a decreased risk of MS [59]. Patients with MS have more often vitamin D deficiency due to its low intake as well as limited sunlight exposure [12]. Mean 25-hydroxyvitamin D 3 (25OHD) levels in patients with MS are more often lower (below the level of 20 ng/mL) than in age-matched controls [11,14]. There was no significant correlation between 25(OH)D and BMD in patients with MS [11,14]. Thus, while patients with MS are susceptible to low 25OHD levels, the evidence implicating linking levels to reduced BMD and osteoporosis in patients with MS is unclear. Only a few studies have investigated this link [11,12,14]. A low vitamin D state, from inadequate diet intake and decreased exposure to sunlight, contributes to malabsorption of calcium and vitamin D insufficiency in MS patients. Secondary hyperparathyroidism may develop, which can contribute to bone remodeling imbalance and bone loss in patients with MS. 
Moreover, patients with MS treated with GCs will be at greater risk for an imbalance between bone formation and bone resorption and, therefore, more susceptible to development of osteoporosis due to vitamin D insufficiency/deficiency. GCs treatment is associated with reduced calcium absorption from the gastrointestinal tract by opposing vitamin D action. Furthermore, renal tubular calcium reabsorption is also inhibited by GCs. In addition, GCs may affect PTH secretory dynamics, with a decrease in the tonic release of PTH and an increase in pulsatile bursts of the hormone [46]. The Chronic Inflammatory Process of Multiple Sclerosis. MS is an inflammatory disease of the central nervous system (CNS) with a prominent role of immune cells and cytokines in degradation of the myelin sheaths [60]. Recent evidence has indicated that a number of additional cell types, such as T cells, play a key role in bone loss [61]. In inflammatory or autoimmune disease states, activated T cells produce receptor activator of nuclear factor κB ligand (RANKL) and proinflammatory cytokines, such as TNF-α, IL-1, or IL-11, all of which can induce RANKL expression in osteoblasts and bone marrow stromal cells. The systemic or local activation of T cells may, therefore, trigger bone loss via the expression of RANKL [61]. Osteoprotegerin (OPG), a member of the tumor necrosis factor (TNF) receptor family, and its ligand RANKL were identified as key cytokines that regulate osteoclastogenesis [61]. Significantly higher levels of RANKL and OPG were found in patients with MS with a low mean EDSS as compared to age-matched controls [62]. Among other cytokines, osteopontin (OPN) has been studied in the shared pathogenesis of MS and osteoporosis. OPN is a member of the SIBLING (small integrin-binding ligand, N-linked glycoprotein) family of noncollagenous matricellular proteins [63]. OPN was identified as the most abundantly expressed cytokine in MS lesions, and OPN levels were found to be increased in the cerebrospinal fluid of MS patients [64,65] and in the plasma of patients with relapsing-remitting MS [66]. However, other studies found that circulating OPN levels are low in patients with MS [67]. It seems likely that future studies will uncover the role of OPN and additional molecules mediating bone loss in inflammatory diseases, such as MS. Use of Antiepileptic and Antidepressant Drugs. Antiepileptic drug treatment can lead to osteoporosis [68,69]. Meta-analyses have revealed that barbiturate, antidepressant, antipsychotic, and benzodiazepine treatment increases patients' risk of osteoporosis [70]. More recently, current use of antidepressant drugs with a high affinity for the 5-hydroxytryptamine reuptake transporter (5-HTT) was associated with a higher risk of osteoporotic fractures compared to use of antidepressants with a medium or low affinity [71]. BMD was lower among those reporting current selective serotonin reuptake inhibitor (SSRI) use but not among users of other antidepressants [72,73]. In vivo studies have found that 5-HT can alter bone architecture and reduce bone mass and density [74]. The 5-HTT has been located in osteoclasts, osteoblasts, and osteocytes, and the inhibition of 5-HTT using an SSRI (fluoxetine hydrochloride) had antianabolic skeletal effects in rats [74]. Further research is needed to confirm this finding in light of widespread SSRI use and potentially important clinical implications.
Diagnosis and Management of Osteoporosis in Patients with MS Despite the fact that patients with MS can develop osteoporosis and fractures more often than their age-matched healthy controls, many patients with MS are not evaluated for their bone status, and there are no clinical guidelines for prevention and treatment of osteoporosis in patients with MS. Patients with MS are also at a higher risk of falls that can increase the frequency of bone fracture combined with bone loss and impaired bone's quality. Clinical evaluation in all patients with MS should include the assessment of the clinical risk factors for osteoporosis and fractures, such as the hereditary disposition of osteoporosis, previous low trauma fractures, and smoking or alcohol habits. The specific risk factors of the osteoporosis in patients with MS are the level of disability (specifically motor disability) and possibly a long-term GCs treatment, vitamin D deficiency, skeletal muscle atrophy, and increased risk of falling. The examination of the motor function using the EDSS score could provide a useful indicator for further evaluation. Cutoff EDSS 6 represents reasonable end of motor performance of the patient; 6.5 means only several meters with bilateral support, and 7 is only the ability of transfer to wheelchair from the bed. The EDSS scores of 6 or greater has been found to correlate well with decreased BMD [12,15], and BMD should be routinely measured in these patients. On the other hand, patients with a good physical activity and low EDSS score (<5) may have normal BMD [14] as well as markers of bone turnover [51]. BMD measurement should be also performed in all patients who are receiving 5 mg of prednisone equivalents daily for more than 3 months. BMD testing using dual-energy X-ray absorptiometry (DXA) should be conducted at the lumbar spine and hip. This measure provides an assessment of fracture risk prior to the occurrence of a fragility fracture as well as monitors the course of the disease and response to therapy. No consensus exists as to how frequently patients at risk osteoporosis should have followup scans. However, BMD should be remeasured after 1 or 2 years to ascertain that it is stable or to identify the patient with ongoing bone loss, especially in patients treated with long-term GCs treatment. In the presence of clinical risk factors, fracture risk may be increased independently from BMD. Therefore, combination of BMD with clinical risk factors is recommended to identify a risk patient and to target pharmacologic therapy. In postmenopausal women and men (between 40 and 90), the assessment of individualized 10-year absolute fracture risk (FRAX, fracture prediction algorithm) is recommended [75]. The identification of previous low-trauma fractures, especially vertebral fractures is important for the decisionmaking process as a previous vertebral fracture is a particularly strong risk factor. Importantly, vertebral fractures may occur in 30-50% of patients receiving chronic GCs therapy [76] and up to 50% of vertebral fractures are asymptomatic and, therefore, do not come to the attention of physicians. Spinal X-rays should be performed in those with localized back pain or a loss of more than 3 cm in height in order to detect prevalent vertebral fractures. Alternatively, the vertebral fracture assessment tool of the bone densitometer, which is associated with low radiation, may be useful screening test for vertebral fractures assessment. 
Laboratory tests are indicated to exclude other secondary causes of osteoporosis, such as vitamin D deficiency, renal insufficiency, malabsorption, and hypogonadism. Useful biochemical tests include routine standard tests to exclude renal or hepatic impairment, blood count, serum calcium, 24-hour urinary calcium, 25-hydroxyvitamin D 3 (to exclude vitamin D deficiency), and gonadal hormones (to exclude hypogonadism). Nonpharmacological Considerations. Prevention is more effective than treatment of established osteoporosis. For all patients, nonpharmacological therapies should be considered for prevention of skeletal fragility, including adequate weight-bearing exercise, nutrition (protein, calcium, vitamin D), and lifestyle modifications. As reviewed above, disability is the most often cause of bone loss in patients with MS, and mechanical loading and exercise interventions can prevent osteocyte apoptosis and bone loss [77,78]. Exercises have beneficial effects on strength, physical endurance, mobilityrelated activities (transfer, balance, and walking), and on mood, without any evidence of detrimental effects [79]; however, there was no evidence that any particular exercise programs were more effective in improving or maintaining function. Whole-body vibration is a new approach to improve neuromuscular functions and bone strength, but there is limited evidence that whole body vibration provides any additional improvements [80]. Further experimental studies are necessary to identify optimal physical activities for the prevention of osteocyte apoptosis and bone loss. Recurrent falls may be an important risk factor for fracture in disabled patients with MS. In patients with MS, falls are related to the level of disability [81], and possibly other factors may contribute to muscle weakness and imbalance, such as vitamin D deficiency or GCs treatment. Calcium and Vitamin D. Calcium and vitamin D supplementation has been routinely provided in most clinical trials of bone protective therapy for both primary and secondary osteoporosis, for example, in glucocorticoidinduced osteoporosis (GIO). The effect of calcium and vitamin D supplementation is maximized in patients whom baseline intake is low. As patients with MS are at a higher risk of calcium and vitamin D deficiency should have their calcium and vitamin D status checked and intake must be individualized. Those with a personal or family history of nephrolithiasis must be screened with 24-h urinary calcium. In immobilized patients, an increase in serum calcium is provoked by bed rest alone and additional calcium intake would not be helpful and might be harmful and provoke an increased risk of kidney stone formation. However, calcium and vitamin D should be used as an adjunct treatment, because a low calcium intake may exacerbate calcium loss during low mechanical loading [82]. In general, the amount of vitamin D supplementation should aim at achieving serum 25OHD levels above 50 nmol/l in >95% of adults without causing vitamin D toxicity. A daily dose of 800-1000 IU of vitamin D3 should be able to obtain this minimal 25OHD target. Due to vitamin D resistance in patients receiving GCs, those patients may require amounts of 1000-2000 IU of vitamin D3 daily [83]. Measurement of serum 25OH vitamin D is recommended, especially in GCs-treated patients. 
Although some evidence suggests that daily supplemental intake of 2000-4000 IU colecalciferol is required to obtain at least 75 nmol/l 25OHD, which may be optimum for many health outcomes [84], prospective trials showing that higher 25OHD levels (>75-80 nmol/l) are conveying additional benefits without new risk are needed. Pharmacological Interventions. The ultimate goal of all pharmacological interventions is prevention of fractures. Although a number of drugs have been evaluated for the prevention and treatment of postmenopausal osteoporosis and GIO, the evidence of their efficacy in patients with MS, especially in premenopausal women and younger men is less strong. As osteoporosis in MS patients have multiple pathogenesis, medical interventions used in women with postmenopausal osteoporosis may not be similarly efficient. Patients requiring long-term GCs treatment and those being immobilized may require pharmacological therapy to prevent excessive bone loss and fractures. Options for treatment include antiresorptive drugs, such as estrogen, or aminobisphosphonates, or anabolic agents such as teriparatide. Aminobisphosphonates (BPs). Although the use of BPs may be appropriate, the etiology of osteoporosis in patients with MS is fundamentally different from the osteoporosis commonly found in the postmenopausal women for whom these drugs were originally developed. As immobilization in patients with MS can cause substantial bone loss and increase in the risk of fractures [20], BPs may be option for treatment for those patients. Although BPs have not been systematically evaluated in the therapy of these conditions, some studies support the potential benefit of BPs in the management of bone loss associated with immobilization [24,85,86]. In immobilized patients, BPs is known to reduce immobilization-induced hypercalcaemia by inhibiting bone resorption of calcium. An immobilization-related elevated serum calcium level may inhibit parathyroid hormone (PTH) secretion, and hence renal 1, 25(OH) 2 D 3 production, in disabled long-standing MS patients. If oral therapy of BPs cannot be tolerated or excluded due to gastroesophageal disease, intravenous route of administration of ibandronate or zolendronate may be applied. However, acute phase reaction with fever, particularly after the first application of BP, may occur. BPs (alendronate, risendronate, or zolendronate) were also approved for the treatment of GIO. These drugs were shown to improve BMD, whereas the data on fractures were scanty in GIO, particularly in premenopausal or younger men. The mechanism by which BPs reduce the adverse skeletal effects of GCs have not been elucidated. The disadvantage of long-term BPs treatment is that it may lead to a reduction in bone turnover to a level inadequate to support normal bone remodeling. Although experimental data showed that BPs also prevents osteocyte apoptosis, there is also experimental evidence of increased accumulation of microdamage with long-term BPs therapy [87]. Also, as BPs accumulate in the skeleton (with a long-term residual time), they cross the placenta, accumulate in fetal skeleton, and cause toxic effects in pregnant rats. Therefore, BPs should be used with caution in women who may become pregnant. Anabolic Drugs. Drugs, such as BPs, that suppress bone resorption have been proposed as interventions for prevention of GIO as well as disuse osteoporosis. 
The disadvantage of this approach is that it may lead to a reduction in bone turnover to a level inadequate to support normal bone remodeling. An alternative approach is to maintain a normal level of bone formation using a bone anabolic agent such as PTH. The human recombinant N-terminal parathyroid hormone (PTH 1-34 or teriparatide) is a potent osteoanabolic agent, which decreases osteoblast and osteocyte apoptosis and increases bone formation and bone strength. Because of the GC-induced decrease in the number of osteoblasts and in the rate of bone remodeling, anabolic and antiapoptotic treatment with teriparatide may directly counteract the key pathogenetic mechanisms of GC excess on bone, and thus it may be a more effective treatment than BPs [88]. The same rationale applies to immobilization-induced osteoporosis, as progressive immobilization as well as long-term GCs exposure results in osteocyte apoptosis and reduced bone formation [89]. Future Options. As sclerostin augments osteocyte apoptosis, the antibody-mediated blockade of sclerostin represents a promising new therapeutic approach for the anabolic treatment of immobilization-induced osteoporosis and probably also for GC-induced osteoporosis. Indeed, more recently, experimental data showed that administration of a sclerostin neutralizing antibody in a rat model of right hindlimb immobilization resulted in a dramatic increase in bone formation and a decrease in bone resorption that led to increased trabecular and cortical bone mass [39]. Summary We have described a spectrum of pathogenetic factors which may contribute to the development of osteoporosis and low-trauma fractures in patients with MS. Whilst there is evidence to support an important role for many of the risk factors, the most significant etiology of bone loss in patients with MS seems to be the level of motor disability and reduced bone loading within individual patients. Other risk factors, such as long-term GCs treatment, hypovitaminosis D, or inflammation, may also play an important part in a subset of patients with MS; however, further examination in prospective studies is required. With regard to diagnostic as well as therapeutic interventions, there are currently no specific recommendations for patients with MS; however, identification and treatment of the underlying cause should be the goal of therapeutic management. Optimally, patients at higher risk of osteoporosis should be identified early and treated promptly to prevent bone loss and fractures. Because long-term disability and long-term GCs treatment are probably the two most significant etiologic risk factors for the development of osteoporosis in the majority of patients with MS, interventions that can counteract osteocyte apoptosis as well as the loss of muscle mass and muscle weakness will be promising.
2014-10-01T00:00:00.000Z
2011-03-30T00:00:00.000
{ "year": 2011, "sha1": "b8d6cce6834c97404eeca03504fd2e666fba5bfa", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/jos/2011/596294.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "321dd3c4a56f0d70618908f815e1c9fac4a27c79", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
234990719
pes2o/s2orc
v3-fos-license
3D Visualizations of Multiple Coronaviruses on Whole Genomes COVID-19, triggered by SARS-CoV-2, has become a common problem faced by people all over the world. With the development of bioinformatics and the breakthrough progress of gene technologies, using genomic datasets for SARS-CoV-2 research has become a challenging topic. In this paper, a 3D visualization method is proposed to serve as the A9 module of the metagenomic analysis system MAS. Seven coronaviruses were illustrated and briefly analyzed. By comparing the visualization results, various SARS-CoV-2 genomes were represented as 2D and 3D maps under different conditions. Through specific projections, the characteristics of the coronavirus can be observed intuitively from the projection results, providing an effective viewpoint for studying viral genomes.

Introduction In December 2019, a group of people infected with a novel coronavirus was discovered. The full genome sequence of the virus was obtained on January 29, 2020, with a total length of 29847 bp. On February 11, the World Health Organization named the novel coronavirus disease "COVID-19", and the International Committee on Taxonomy of Viruses named the virus "SARS-CoV-2". As of April 28, the number of diagnoses worldwide had exceeded 3,026,981. SARS-CoV-2 [1] now seriously threatens global public health and has attracted widespread attention from people around the world. Research on SARS-CoV-2 at home and abroad is also increasing. The outer layer of the coronavirus has an envelope, and its shape is spherical or elliptical, with polymorphism. The genome is a linear single-stranded RNA; coronaviruses are a large class of viruses that are ubiquitous in nature [2][3][4][5][6]. Studying the similarities and differences between these seven coronaviruses from the perspective of genome sequence visualization [11][12][13][14][15] plays a vital role in preventing and controlling new coronaviruses and preventing the spread of disease. Data visualization technology displays abstract data as intuitive graphs or images, thereby facilitating research and analysis [16,17]. There are many visualization methods for genomic sequences: most genomic sequence visualization models are implemented with DNA walking techniques. For example, the Gates-Nandy model [18] has an information degradation problem. To solve the problems of degradation and data loss, researchers proposed a CGR model [19], a three-dimensional visualization model [20], and a worm model based on the Gates-Nandy model [21]. In 2003, Randic [22] proposed a spectral visualization model, which differs from the Gates-Nandy model: it consists of four equidistant parallel lines onto which the four bases (adenine A, thymine T, guanine G, and cytosine C) are mapped, and it also solves the problem of information degradation. However, the above visualization models for genomic sequences are not suitable for processing long DNA sequence data, and the analysis methods for the visualization results are not universal. In 2014, Feng Haiqing et al. proposed a grayscale-image-based DNA sequence visualization model [23]. This model converts one-dimensional DNA sequence information into a two-dimensional 256-level grayscale image by encoding the four bases, which greatly compresses the length of the DNA sequence visualization and has high spatial compactness. However, the visualization of this model contains noise, which is not conducive to researchers observing more useful information.
According to the properties of the genome sequence, a new visualization method for genome sequences is developed, based on the research ideas of data visualization technology and on a variant logic system [24][25][26][27][28]. By adjusting the parameters, it can adapt to genomic sequences with large amounts of data. By processing the seven coronavirus genome sequences, new graphical results can be obtained, and the resulting graphs clearly converge. From the results, we can find correlations in the data and clarify the changing rules and phenomena, so that some characteristics of coronaviruses can be observed from the variant perspective.

Materials and Methods Structurally speaking, variant logic is composed of four primitives. The four primitives represent four different states; there are symmetrical and complementary relationships between pairs of states, and all states combined form the entire space. Segmentation is a major feature of the variant approach: dividing the data into equal segments allows large amounts of data to be processed quickly, and the state value of each segment can be obtained by calculation. The visual model based on the variant system illustrates the state set of each segment.

The visualization model processing flow is shown in Figure 1. The processing flow is as follows: input the viral genome sequence, and segment the genome sequence according to the segmentation value m to obtain the segmentation result. Calculate the distribution of {A, G, C, T} in each segment to obtain the primitive states {X_A, X_G, X_C, X_T}. After the primitive states are obtained, pairs of primitive states can be combined to obtain the superposition states X_(A+T), X_(A+C), X_(A+G), X_(G+C), X_(G+T), and X_(C+T). Choosing values from the primitive states and the superposition states for projection yields the graphical results.

Introduction of the main modules in the model: (1) Segmentation result: The segmentation result is affected by the segmentation value. The segmentation value is usually represented by m, which is the number of bases in each segment after the entire sequence is divided equally. The genome sequence is divided into n/m segments by changing the size of the segmentation value m, and each segment has m bases. Controlling the size of m adjusts the resolution and effective area of the result. (2) Primitive state: the proportion of each of the four bases in a segment is a primitive state, representing the proportion of the four bases in the genome sequence. The four primitive states in the variant-value system have substitution and complementarity relationships, which fit the principle of complementary base pairing in the genome. The four symbols X_A, X_G, X_C, and X_T are used to represent the states of the four bases A, G, C, and T in each segment. (3) Superposition state: Primitive states can be combined with each other, either in pairs or in combinations of three or four primitive states; such a combined state is called a superposition state, and each superposition state has a different meaning. There are six superposition states formed by combining two primitive states: X_(A+T), X_(A+C), X_(A+G), X_(G+C), X_(G+T), and X_(C+T). (4) Graphical results: This experiment uses three-dimensional charts and two-dimensional projection charts to analyze the data.
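To make the segmentation and state computation concrete, the short Python sketch below (not the authors' implementation; the toy sequence, segment size, and function names are illustrative only) splits a sequence into segments of m bases, computes the primitive states as base proportions per segment, and forms pairwise superposition states such as X_(A+T):

from typing import Dict, List

def primitive_states(sequence: str, m: int) -> List[Dict[str, float]]:
    # For each m-base segment, return the proportion of A, G, C and T (the primitive states).
    states = []
    for start in range(0, len(sequence) - m + 1, m):
        segment = sequence[start:start + m].upper()
        states.append({base: segment.count(base) / m for base in "AGCT"})
    return states

def superposition_state(state: Dict[str, float], bases: str) -> float:
    # Combine primitive states; e.g. bases="AT" gives the superposition state X_(A+T).
    return sum(state[b] for b in bases)

seq = "ATGCGTACGTTAGCCTAGGCTAACGTAGCTAG"  # toy sequence, not a real genome
for s in primitive_states(seq, m=8):
    print(s, round(superposition_state(s, "AT"), 2), round(superposition_state(s, "AG"), 2))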
Three-dimensional graph: It can display the features of the primitive states and superposition states. The three-dimensional map has a larger space capacity and can display more features. Two projections are selected from the four primitive states and the superposition states as the x-axis and y-axis, respectively, and their values are accumulated at the corresponding positions to generate the z-axis; in this way a three-dimensional diagram is generated. Here X_(A+T) and X_(A+G) are selected to generate the three-dimensional map. Two-dimensional projection map: The three-dimensional image clearly shows the overall structure of the viral genome sequence, and the specific details can be projected onto a two-dimensional plane for observation. Projecting the 3D map onto the other planes of the coordinate system generates the 2D projections of the 3D map. Projecting the genomic sequences into three-dimensional space allows their overall features to be observed from various angles, and the details contained in the overall features can also be displayed in the resulting two-dimensional projection diagrams.

Data introduction To better analyze the spatial distribution characteristics and periodic characteristics of the seven coronavirus genome sequences in the variant visualization model, they are expressed as two-dimensional and three-dimensional maps, respectively. To ensure the reliability of the data, the whole genome sequences of the seven viruses were downloaded from NCBI; the corresponding virus names, sequences, and sizes are listed in Table 1. From the data information, it can be seen that the lengths of the whole genome sequences of the seven coronaviruses do not differ much and are of the same order of magnitude. The genome sequences of living organisms are typically millions of bases long; although the coronavirus genome sequences are not long, at approximately 30,000 bases, projecting the variant results into an n × n matrix allows useful information to be observed simply and intuitively.

Projection selection In the variant processing procedure, each step can be adjusted, which makes the method adaptable to various data. However, the impact of each variable on the results makes a unified comparison necessary. Therefore, appropriate values are determined by the control variable method, and the results obtained with fixed parameters are then compared. The two main parameters that affect the result are the value of m and the choice of state combination. The three-dimensional space is affected by the value of m in all aspects; the most direct impacts are on the size of the resulting graph and the speed of data processing. We use the mean, variance, and standard deviation as indicators to determine the effect of the m value on the three-dimensional space. Table 2 lists the mean, variance, and standard deviation of the seven coronavirus sequences at m = 26. The mean value reflects the amount of information contained per unit space in the three-dimensional map, and the variance and standard deviation represent the degree of dispersion of the data points in the data set. It can be seen from the table that the indicators of the seven coronaviruses do not differ much; therefore, a good observation effect is obtained at m = 26. By comparing the display areas of the results, the combination of X_(A+T) and X_(A+G) is selected to obtain the three-dimensional graphical results. Therefore, for the three-dimensional graphical results, the combination of X_(A+T) and X_(A+G) with m = 26 is selected.
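A rough Python sketch of the accumulation step described above is given here: two superposition states per segment (stand-ins for X_(A+T) and X_(A+G)) are binned onto an n × n grid, the per-cell counts form the z-axis of the 3D map, and summing along an axis gives the 2D projections; the grid size and the binning rule are assumptions made for illustration, not the authors' settings.

import numpy as np

def accumulation_matrix(x_vals, y_vals, n=64):
    # Count how many segments fall in each (x, y) cell of an n x n grid.
    z = np.zeros((n, n), dtype=int)
    for x, y in zip(x_vals, y_vals):
        i = min(int(x * n), n - 1)   # x and y are proportions in [0, 1]
        j = min(int(y * n), n - 1)
        z[i, j] += 1
    return z

rng = np.random.default_rng(0)
x_at = rng.uniform(0.3, 0.8, size=500)   # stand-in values for X_(A+T)
x_ag = rng.uniform(0.3, 0.8, size=500)   # stand-in values for X_(A+G)
z = accumulation_matrix(x_at, x_ag)
xz_view = z.sum(axis=1)                  # marginal seen in the xz-plane projection
yz_view = z.sum(axis=0)                  # marginal seen in the yz-plane projection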
Results Under the fixed parameters m = 26 and the X_(A+T), X_(A+G) combination, three-dimensional maps of the seven coronaviruses are obtained (Figure 2). Rotating and projecting the 3D images yields their projections on the other planes, and the specific values of the three-dimensional results can be seen by observing the planar projection images. Figure 3 shows the result of projecting the three-dimensional images onto the xy plane, Figure 4 shows the result of projecting them onto the xz plane, and Figure 5 shows the result of projecting them onto the yz plane.

Result analysis As seen from the three-dimensional graphical results, the graphical results of the seven coronaviruses are clustered in the middle and distributed around it, and all are distributed in the upper left corner of the space. In particular, the peak value of e is obviously lower than that of the other six coronaviruses, and a, c, and f show a clear single peak. From the projection of the three-dimensional results onto the xy coordinate plane, the single-peak positions of a, c, and f can be observed more accurately, and the distributions of the double peaks of the other coronaviruses also differ. The projections onto the xz and yz coordinate planes supplement the deficiencies of the other planes; it can be seen precisely that the result value of e is 21 and that the result value of c is 42, twice that of e.

Conclusion A variant visualization model was established for the seven coronaviruses, and the data were converted into easy-to-understand graphs. The two-dimensional and three-dimensional graphs are suited to different stages of the variant calculation. The m value is calculated to adjust for the best visual effect, three-dimensional projection is used to observe the distribution of the superposition states, and the three-dimensional image is projected into two dimensions to observe the superposition states precisely at various angles. The results show that the visualization method based on the variant system can clearly show the connections and differences between the seven viruses, providing a new non-biological method for the study of novel coronaviruses.

Figure 1. The visualization model processing flow. Table 1. Whole genome sequence information of seven viruses. Table 2. Mean, variance, and standard deviation of seven coronaviruses.
2020-10-28T18:33:52.446Z
2020-09-16T00:00:00.000
{ "year": 2020, "sha1": "599f7cfb1ddd9f1fc5e6bd84895a4baae475354c", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-76302/latest.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "790cb62ed5be35806aa267de4bd4bfc537e03961", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
270031846
pes2o/s2orc
v3-fos-license
Experimental Investigation and Theoretical Analysis of Flame Spread Dynamics over Discrete Thermally Thin Fuels with Various Inclination Angles and Gap Sizes: Flame spread over discrete fuels is a typical phenomenon in fire scenes. Experimental and theoretical research on flame spread over discrete thermally thin fuels separated by air gaps with different inclination angles was conducted in the present study. Experiments with six inclination angles ranging from 0° to 85° and various fuel coverage rates from 0.421 to 1 were designed. The flame spread behavior, the characteristic flame size, and the flame spread rate were analyzed. The results show that the flow pattern, stability, and flame size exhibit different characteristics with different inclination angles and gap sizes. As the inclination angle increases, particularly with smaller gaps, turbulent and oscillating flames are observed, while larger gap sizes promote flame stability. The mechanism of flame propagation across the gap depends on the interplay between the flame jump effect and heat transfer, which evolves with gap size. Average flame height, average flame width, and flame spread rate initially increase and then decline with the increase in fuel coverage, peaking at fuel coverage rates between 0.93 and 0.571 for different inclination angles. A theoretical model is proposed to predict the flame spread rate and the variation in the flame spread rate with inclination angle and fuel coverage. Furthermore, the map determined by inclination angle and fuel coverage is partitioned into distinct regions, comprising the accelerated flame spread region, the flame spread weakening region, and the failed flame spread region. These findings provide valuable insights into flame spread dynamics over discrete thermally thin fuels under diverse conditions.
Introduction Fire dynamics and its prevention strategies have long been a key topic of in-depth research and ongoing interest for fire protection personnel [1]. For example, flame spread over discrete solid fuel is a common phenomenon in various fire scenarios, including warehouse shelf fires, wildland fires, and high-rise building balcony fires, garnering increasing attention within the scientific community. Discrete fuel configurations consist of multiple fuel segments separated by inert materials or air gaps, often representing a more realistic fire load in practical fire scenarios compared with continuous solid fuel combustion. For instance, in the context of a warehouse fire, flames can swiftly traverse between stored commodities through vertical or horizontal gaps in the shelving system, giving rise to a three-dimensional fire of significant scale. In the scene of wildland fires, flame propagation over discrete biomass fuels consistently involves the transition of flames from one element to the succeeding element [2]. These gaps effectively serve as barriers between combustible materials, reducing the likelihood of flame spread [3]. However, recent research has indicated that under certain conditions, such as increased separation distances between fuel elements [4,5] or decreased fuel coverage rates [6], flames can propagate more rapidly over discrete fuels compared with continuous ones. In these fire scenarios, the inclination angle of the fuel has a significant impact on flame spread. Wildland fires are often characterized by sloped terrain, while racks and the exterior facades of balconies are typically oriented vertically upward. Consequently, discrete solid combustibles at different inclination angles may pose a heightened fire safety risk compared with their continuous counterparts. Systematically conducting in-depth investigations into flame spread behavior over discrete combustible materials with various inclinations is of paramount importance for advancing fire safety design and regulations. The phenomenon of flame propagation across solid surfaces has been the subject of extensive research over the years, resulting in an extensive body of literature on the topic. Fernandez et al. [6] delved into the mechanisms of laminar flame spread over flat PMMA surfaces in different orientations, proposing a theoretical model that emphasizes solid heat conduction and the thermal runaway of gas-phase ignition reactions. R. J. Santoro [7] explored the mechanism of flame spread, offering fundamental insights into the governing principles of the process. By combining experimental studies with theoretical analyses, R. S.
Magee [8] revealed the underlying mechanisms behind flame propagation on solid surfaces. Additionally, other authors [9][10][11] have conducted a detailed examination of dominant heat transfer mechanisms in flame spread through experimental and theoretical analyses. These studies have specifically proposed theoretical models addressing continuous flame spread over thermally thick fuels. For the discrete fuel configuration proposed in this study, the flame spread characteristics in discrete scenarios may diverge from those observed in continuous flame spread. Hence, investigating whether classical flame spread models proposed by previous researchers align with discrete flame spread is a subject worthy of exploration. Investigating flame spread on discrete fuels requires the consideration of factors such as the geometric shape of the fuel. This is essential to enhancing the applicability of research findings to practical fire protection engineering. A substantial body of literature places particular emphasis on the phenomenon of flame spread over matrices of spaced matchsticks (without heads) [4,[12][13][14][15][16][17][18][19]. Notably, M. Vogel and F. A. Williams [12] developed a comprehensive theoretical thermal model, integrating their experimental data, which revealed the pivotal role of convective heat transfer in driving flame propagation over these matchstick arrays. Further insights into upward flame propagation were gained by Gollner et al. [4], who investigated horizontally oriented match rods affixed to a vertical steel wall. Their findings demonstrated that both the flame spread rate and the rate of sample mass loss increased in correlation with greater spacing between the match rods. Meanwhile, Finney et al. [13] conducted a series of laboratory experiments involving artificial fuel beds where they controlled the structural characteristics of gaps, their depth, and the slope of the surface. The results uncovered that fire spread was constrained by the gap distance, with fuel particles across the gap igniting only upon direct contact with the flame. Numerous researchers have delved into the phenomenon of discrete solid flame spread, examining various variables, such as applied ambient wind speed, wood size, fuel element height, fuel bed tilt angle, and fuel moisture content [14][15][16][17][18][19]. These investigations consistently indicate that flame spread is more likely to occur in deeper fuel beds [14,15], at higher wind speeds [14,16,17], and on steeper fuel bed surfaces [18,19]. In addition to examining matrix fuel arrangements, several studies have explored flame propagation over discrete solid materials in the form of flat plates. Y. Watanabe et al. [2] presented findings on the flame spread rate with respect to fuel load over paper samples featuring randomly distributed pores, revealing a non-monotonic trend. Gollner and Miller [4] conducted experiments to investigate flame spread over vertical PMMA blocks separated by insulation. Their study revealed a maximum flame spread rate occurring at f = 0.67, possibly attributable to a delayed thickening of the boundary layer or increased air entrainment. Park et al.
[20] conducted numerical investigations into flame spread phenomena for discrete thermally thin solids. Their model illustrated a non-linear impact of air gaps on the burning rate. Cui and Liao [21] performed experimental studies on upward flame spread over discrete combustibles separated by air gaps. Their results demonstrated a non-monotonic relationship between the flame spread rate and burning rate with respect to gap size. Luo et al. [22] explored the effect of gap size between discrete fuels on opposed flame spread. Their findings reveal that heat transfer from the flame to the discrete fuel decreased as the gap size increased. Furthermore, Z. Wang et al. [23] conducted experimental research on various characteristics of upward flame spread, including flame rate, shape, height, temperature field, and heat transfer behavior over discrete XPS materials. Their study aimed to uncover the mechanisms behind the influence of fuel coverage on flame spread behavior. It is worth noting that prior experiments on discrete solid materials have predominantly focused on vertically upward flame spread. However, there has been relatively limited research on the flame spread behavior of discrete fuels in inclined configurations. Notable contributions in this direction include studies by Gollner [15], Xie [18], and An [24], each of which investigated discrete flame spread experiments at various inclination angles. Gollner et al. [15] explored inclined flame spread on PMMA samples, a departure from typical studies that focused on vertical flame propagation. Surprisingly, they discovered that the flame spread rate in the vertical direction was moderately faster at the bottom of PMMA than in conventional vertical upward flame spread scenarios. An et al. [24] delved into upward flame spread over discrete XPS materials separated by air gaps, examining a range of inclination angles (α = 60°, 75°, 90°, 105°, 120°, and 135°). Their findings indicate that flame spread rate and melt zone length both decreased as the inclination angle increased. In addition, Xie [18] and Beer [25] developed critical flame spread models, although previous studies on the impact of inclination angles have primarily centered around thermally thick materials. Typically, solid fuels were affixed to insulated back plates to mitigate the effects of backside heat transfer. However, further investigation is required to comprehend flame spread characteristics over discrete thermally thin materials in inclined configurations. The literature review reveals that previous studies have predominantly centered on discrete fuels in both vertical [18,22,23,26] and horizontal [23,27,28] configurations. However, research on the influence of space distance on discrete flame spread over inclined solid fuels, particularly thermally thin materials, remains relatively limited. In this paper, a comprehensive series of experiments were conducted to investigate flame spread over discrete thermally thin cellulosic materials, considering the combined effects of air gap size and inclination angle. The primary objective of this study is to empirically analyze crucial parameters, including flame spread rate, typical flame phenomena, flame height, flame width, and the influence mechanism of fuel coverage on flame spread behavior.
Materials and Methods The experimental platform, depicted in Figure 1, has been specifically designed to investigate the discrete flame spread characteristics over thermally thin paper. This apparatus comprises a rotatable holder and two video recorders for data collection. The rotatable holder, constructed from stainless steel, allows for the setting of different tilt angles. The sample is securely held between two stainless steel plates, with bolts available for adjustments to eliminate any air gap between these plates. In practical fire scenarios, concurrent flame propagation poses a considerable danger due to the rapid spread of flames and warrants thorough investigation. A low inclination of the fuel can result in flame separation from the fuel surface, whereas an elevated inclination angle can induce flame adherence to the fuel surface, thereby altering the heat transfer dynamics of the flame [8]. We selected experimental parameters encompassing a spectrum from horizontal to nearly vertical orientations, meticulously specifying inclination angles of 0°, 5°, 25°, 45°, 65°, and 85°. Moreover, one of the plates features an engraved scale to facilitate the calibration of characteristic flame size during the flame spread process. Two video recorders, recording at a rate of 25 frames per second, were employed to capture the flame spread process. Subsequently, data on flame shape, characteristic flame lengths, and flame spread rate were extracted from the processed video images. The characteristic flame lengths include flame height (h_f) and flame width (w_f), as shown in Figure 2. Flame height is defined as the vertical distance from the bottom to the tip of the flame. Flame width is defined as the length of the flame that is closely attached to the surface of the sample. For the present study, thermally thin paper samples were employed: seven cellulose paper samples, each measuring 40 mm × 40 mm × 0.274 mm (length × width × thickness), were securely fixed within the sample holder. The value of the Biot number (Bi = τh/λ, where τ, λ, and h represent the sample thickness, the thermal conductivity of the sample, and the heat transfer coefficient of air) for the material used in this study was calculated as 0.088, significantly less than 0.1, indicating that the material with a thickness of 0.274 mm was considered thermally thin in this study. The effective exposed sample width was 30 mm due to the secure fixation of the sample ends by stainless steel plates. To create separation between the adjacent cellulose paper samples, various air gap distances, defined as gap length (g) and illustrated in Figure 1, were set. Prior to conducting the experiments, the samples underwent a 10 h drying process in a drying oven set to an ambient temperature of 100 °C to eliminate any moisture-related effects. To characterize the interplay between fuel length and gap size, a dimensionless parameter known as fuel coverage (f) was used. Fuel coverage, denoted as f, can be calculated by using the formula f = l/(g + l), where l represents the fuel length, as shown in Figure 1 [23]. Our objective is to explore the impact of spacing on the progression of flame spread from continuous to critical conditions. To achieve this, we chose distinct fuel coverage rates, ranging from 1 to the critical fuel coverage rate, across different angles. Table 1 provides an overview of the experimental configurations employed in this study. In this study, the sample ignition process was executed by using a linear ignition source composed of a nichrome coil heater. A constant flow source, operating at a fixed current of 5.9 A, was used to supply consistent electrical power for the ignition process. Once the sample edge was successfully ignited, the ignition source was promptly removed. To ensure the capture of
high-quality images depicting flame spread and to minimize potential disturbances, all experiments were conducted within a controlled, darkened environment. Additionally, each experiment was meticulously repeated 3-4 times to ensure the repeatability and reliability of the experimental data. During the data processing of the present study, the data in the real-time images are derived from one set of experimental results from the repeated experiments. The values in the scatter diagrams represent the mean values obtained from 3-4 independent repetitions, and the error bars represent the standard deviation of these repetitions.

Flame Spread Behavior Characterization A series of typical flame images during flame spread with different fuel coverage rates and inclination angles are presented in Figures 3 and 4, which show that the flames exhibited obvious differences. Unlike with continuous fuel, the flames of discrete fuel need to cross gaps to ignite the virgin fuel zone, and the following observations on flame behavior can be made. The flames showed a smooth flame envelope with no obvious fluctuation, indicating a laminar flow state, for flame spread over continuous (f = 1) and discrete fuels (f < 1) at low angles (0° and 5°), as shown in Figure 3. As shown in Figure 4, with the increase in angle, the flames showed obviously irregular forms and turbulent flow characteristics, accompanied by intense and frequent flame oscillations, as demonstrated by the real-time flame height in Figure 5a. As shown in Figure 5a, especially at high angles, such as 65° and 85°, the flame turbulence characteristics were more pronounced, and the flame vortex was clearly observable because increased buoyancy resulted in a longer flame and preheating zone, which supported increased entrainment of surrounding air and thus promoted fierce burning [16]. For the effect of gap distance, it is noteworthy that the flame spread behavior at high fuel coverage rates resembled that of continuous samples. As the gap distance increased significantly, the intensity of burning decreased, resulting in a stable flame with reduced oscillation. For example, the variation in the real-time flame height over different air gaps at an inclination angle of 25° is presented in Figure 5b. With the decrease in fuel coverage (i.e., the increase in gap distance), the periodicity of the flame height became increasingly evident, and the amplitude of the flame height was greater. As shown in Figures 3 and 4, it can also be observed that the flame size increased as the inclination angle increased, with the smallest flame size being found under horizontal conditions for flame spread over continuous samples. When the fuel angle was small, the flame extended away from the fuel sheet, and as the angle increased, the flame tended to spread closer to the wall. When the angle was 85°, the flame was highly tilted towards the fuel sheet, and the longest flame length was observed. For the discrete fuel, at a fixed angle, the flame size increased and then decreased with the increase in gap length (i.e., the decrease in f). At 0° and 5° in Figure 3, the flame size of the discrete sample was obviously larger than that of the continuous sample. For increased inclination angles (for example, 25°), the flame size first increased until f = 0.93 (i.e., g < 3 mm) and then decreased.
Moreover, flame jumping and flame splitting phenomena were observable as the flame traversed air gaps of varying sizes. In cases where the gap distance was small, the flame body adequately enveloped the paper above, initiating the ignition of the subsequent sample. Consequently, incomplete fuel conversion occurred before the first burnt-out fuel front jumped to the next unburned sample, a phenomenon known as flame jump [29]. The premature ignition of adjacent fuel, leading to flame jumping, facilitated flame propagation, aligning with Park's experimental observations [20]. With the increase in gap distance, the flame on the preceding sample stabilized, enabling it to ignite the subsequent piece and propagate the flame among discrete solid fuels. During flame propagation across the air gap, flame splitting occurred due to the absence of a continuous flame caused by the lack of fuel gas in the gap. The sample in the preheating zone underwent decomposition by the flame front and the ceiling flame beneath it, facilitating the ignition of the next sample. Simultaneously, the flame continued to spread in the preceding sample, leading to flame splitting. Following flame splitting, the flame on the preceding sample stabilized and ignited the second piece, promoting flame propagation among discrete solid fuels. This phenomenon corroborates the findings of prior researchers [22].

Characteristic Flame Lengths Characteristic flame lengths are important to feature and quantify flame spread behaviors. Characteristic flame lengths, including flame height and flame width, are defined in Section 2. When measuring the flame lengths, the first and last pieces of the sample were excluded to eliminate the effects of ignition and propagation completion. The flame height with a frequency of 50% in each period was taken as the average flame height, and the variation in average flame height with different fuel coverage rates is presented in Figure 6. The result indicates that the mechanism of flame spread changes with the increase in fuel coverage and fuel inclination angle. For the effect of air gaps, the average flame height first increased and then decreased with the increase in fuel coverage at a fixed angle. The fuel coverage rates corresponding to the maximum average flame height decreased as the fuel inclination angle increased. When the fuel coverage decreased (f_c < f < 1, where f_c represents the critical fuel coverage), the flame presented obvious turbulent characteristics, accompanied by significant shaking, which indicates that the relatively small air gap can promote the entrainment of air and the effective mixing of fuel, so that accelerated combustion and a larger flame size can be observed. On the other hand, with the further decrease in fuel coverage, the larger air gap hindered flame propagation from one sample to the next, making the ignition of the subsequent sample increasingly difficult; thus, a smaller flame size was obtained. Additionally, it was observed that when the flame passed through the gap, the flame was relatively stable. In addition, the air gap only slightly affected the average flame height at low angles (i.e., 0° and 5°). During the propagation process, the flame on the upper surface of the sample remained approximately vertical, and the flame shape changed very little under vertical thermal buoyancy. For the effect of the fuel inclination angle, following the increase in fuel inclination angle, the average flame height increased significantly. As the inclination angle increased, the degree of combustion and
flame spread of the sample became increasingly severe, and the volume of the flame body became larger and closer to the inclined surface of the sample, which enhanced the heat transfer effect of the flame on the unburned sample. The average flame width, defined as w_f, can be deduced by calculating the average value, as shown in Figure 7. The result indicates that the average flame width is nonlinearly related to fuel coverage and that the variation follows a similar pattern to that of the flame spread rate and average flame height. Figure 8 illustrates the variation in the ratio of average flame height (h_f) to average flame width (w_f) relative to fuel coverage (f) across various tilt angles. At an angle of 85°, the average flame height is comparable to the average flame width, resulting in a ratio (h_f/w_f) of average flame height to average flame width close to 1, which is different from the other inclination angles. As depicted in Figure 4, the flame is in close contact with the fuel surface at 85°. In the range of 0° to 65°, with constant fuel coverage, an increase in tilt angle leads to a reduction in the ratio of average flame height to average flame width, indicating the flame's closer proximity to the fuel surface. In the range of 0° to 5°, the spacing between fuel elements minimally affects the flame, with ratios consistently greater than 1.3, as depicted in the typical flame images in Figure 3, indicating that the flames are distanced from the fuel surface. Between 25° and 65°, when comparing wide and narrow spacing, wider spacing exhibits larger ratios, signifying that the flames are situated farther from the fuel surface.

Flame Spread Rate To obtain the flame spread rate, the real-time pyrolysis position was recorded. ImageJ was employed to extract and analyze the RGB images captured from side views, which were transformed into grayscale images. The statistical averaging of binary values at each pixel resulted in contour values representing the probability of flame presence. The pyrolysis front is easy to observe and measure, as shown in Figure 9, which shows the variation in the pyrolysis front with time under different fuel coverage rates. Splitting occurs when the flame crosses an air gap; therefore, the pyrolysis front is disconnected. In discrete fuel arrays, the flame spreads through the air gaps by jumping: the pyrolysis front reaches the edge of one fuel unit, and the flame steadily raises the temperature of the adjacent unit, which then begins to pyrolyze and ignite. That is the reason for the observed split in flame spread. From Figure 9, when the gap size is small, the flame spread rate is significantly increased as the flame strides across the gap, which can be described as fire jumping; as the gap size increases, this increase in flame spread rate is weakened. The reason is related to the longer ignition time or the weakening of the heating effect on adjacent fuel sheets as the gap size increases. Despite the occurrence of splits, the real-time pyrolysis front still presented a fine linear relationship with time; thus, the flame spread rate could be determined by applying a linear fitting. As shown in Figure 9, the R-squared values of the linearly fitted curves for all experimental conditions are greater than 0.94, which indicates that the pyrolysis front position increased linearly with time. The average flame spread rate can be acquired as the slope of the linear fitting.
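To illustrate this step, the short Python sketch below fits a straight line to tracked pyrolysis-front positions and reports the slope as the spread rate together with the R-squared of the fit; the time and position values are invented for illustration and are not taken from the experiments.

import numpy as np

t = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)       # time, s (made-up data)
front = np.array([0, 11, 23, 33, 46, 57, 68], dtype=float)  # pyrolysis front position, mm

slope, intercept = np.polyfit(t, front, 1)                  # linear fit of front position vs time
pred = slope * t + intercept
r_squared = 1 - np.sum((front - pred) ** 2) / np.sum((front - front.mean()) ** 2)
print(f"flame spread rate = {slope:.2f} mm/s, R^2 = {r_squared:.3f}")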
The average flame spread rates with different fuel coverage rates are presented in Figure 10. The results show that the flame spread rate remained basically unchanged at low angles (0° and 5°), while at the other angles the flame spread rate first increased and then decreased with the increase in fuel coverage. The maximum flame spread rates occurred in the fuel coverage range of 0.571 to 0.93 for the different inclination angles. Figure 11 illustrates the flame spread rate with different fuel inclination angles. The result shows that the flame spread rate increases with the increase in angle. From a macro perspective, as the inclination angle of the sample increases, the flame body deviates significantly from the vertical state and moves towards the material, resulting in an increase in the preheating zone and an enhancement of the heat flux transferred from the flame to the unburned zone [30]. When the flame spreads along the direction of buoyancy brought about by inclination, the fuel in the preheating zone is heated by the flame above and below the sample simultaneously and produces pyrolysis gas. When the gap size is relatively small, the mixing of pyrolysis gas and air is enhanced across the gap, and the premixed gas near the next fuel unit is ignited by the flame above or below the fuel unit even though the previous unit has not yet burned out, resulting in an increased flame spread rate. This accelerating effect on flame spread is called flame jumping. When the gap size is further increased, the acceleration effect of flame jumping is weakened in its competition with the cooling effect of the large gap size, resulting in a decreased flame spread rate when fuel coverage falls below a certain threshold.

To further quantify the rate of flame spread, it is necessary to establish a theoretical model. According to classical flame spread theory, the flame spread rate (v_p) over a thermally thin material is related to the preheating length (δ_f) and the heat flux (q̇_f'') in the preheating zone by Equation (1) [31], where ρ, c_p, τ, x, T_ig, and T_∞ are the density, the specific heat, the thickness, the downstream distance from the flame front, the ignition temperature, and the ambient temperature of the sample. For upward flame spread, the heat transfer from the flame to the virgin zone ahead of the flame front satisfies Equation (2) [32], in which the quantities are the dimensionless heat flux q̇*(ξ), the dynamic viscosity of air, the heat of combustion of the sample, the Prandtl number Pr, and the modified Grashof number Gr_x*. Gr_x* can be written as Equation (3) [32,33], where g, β, T_f, θ, and ν_∞ are the gravitational acceleration, the thermal expansion coefficient of air, the flame temperature, the inclination angle of the sample, and the kinematic viscosity of air. The dimensionless heat flux q̇*(ξ) depends on the flame attachment length and x and can be expressed as Equation (4), as verified by Ju et al. [32]. The thermophysical parameters of air and solid fuel, the ignition temperature, and the flame temperature are constants. By bringing Gr_x* and q̇*(ξ) into Equations (1) and (2), with the resulting x^(9/4) term evaluated at the average flame width w_f, Equation (1) can be expressed as Equation (5). As f = l/(g + l), Equation (5) can be further expressed as Equation (6). Equation (6) gives the flame spread model, and the flame spread rate can be calculated by consulting the relevant parameters of paper and air, which can be found in the related literature [34] and are listed in Tables 2 and 3.
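As a rough numerical illustration of the thermally thin energy balance that Equation (1) rests on, the Python sketch below evaluates v_p = q̇_f'' δ_f / (ρ c_p τ (T_ig − T_∞)) for placeholder values; it does not reproduce the authors' full Equations (2)-(6) (the Grashof-number correlation and the fuel coverage dependence), and every numerical input except the sample thickness is an assumed value rather than an entry of Tables 2 and 3.

def thin_fuel_spread_rate(q_flux, delta_f, rho, c_p, tau, T_ig, T_inf):
    # Flame spread rate (m/s) from the thermally thin preheating energy balance.
    return q_flux * delta_f / (rho * c_p * tau * (T_ig - T_inf))

v_p = thin_fuel_spread_rate(
    q_flux=20e3,    # W/m^2, assumed flame heat flux to the preheating zone
    delta_f=0.01,   # m, assumed preheating length
    rho=700.0,      # kg/m^3, paper density (placeholder)
    c_p=1300.0,     # J/(kg K), specific heat (placeholder)
    tau=0.274e-3,   # m, sample thickness (from the paper)
    T_ig=600.0,     # K, ignition temperature (placeholder)
    T_inf=300.0,    # K, ambient temperature
)
print(f"v_p = {v_p * 1000:.1f} mm/s")   # prints roughly 2.7 mm/s for these inputs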
A comparison of the theoretical calculations based on the flame spread model with the experimental data is plotted in Figure 12. The equation derived in the present study captures the evolutionary trend of the flame spread rate under the effects of inclination angle and gap size. Table 4 lists the values of the pyrolysis length (x_p). In this study, the pyrolysis length is defined as the vertical distance from the bottom of the flame to the flame tip. It should be noted that the flame width in Equation (6) is replaced with the pyrolysis length because the experimental flame spread rate is obtained from the pyrolysis front. This is because of the adherence of the flame to the wall during the spread process. As the angle increases, the flame adheres more prominently to the wall, resulting in the flame front significantly outpacing the pyrolysis front. Consequently, the flame width is observed to be substantially greater than the pyrolysis width. When the flame width is used to calculate the rate of fire spread, it yields a significantly higher value compared with the experimentally measured rate, which is derived from the pyrolysis front. According to Figure 12, the predicted rate of flame spread exhibits satisfactory alignment with the experimental data. For cases with relatively small angles, the experimental and predicted values are relatively close. However, at large inclination angles, i.e., 85°, the predicted value deviates significantly from the experimental value. Perhaps due to the enhanced turbulence characteristics of the air at high angles, the measurement of the pyrolysis front through video observation is not accurate, because the flame obscures the traces of pyrolysis, especially the pyrolysis front. Additionally, the thermophysical parameters of air are defined as constants, which further leads to inaccuracies in the results. Nevertheless, the model of Equation (6) still provides a good explanation of the relationship among the flame spread rate, the inclination angle of the fuel, and the fuel coverage. Moreover, based on our experimental findings, we observed that at certain angles, the flame spread rate initially increased and then decreased with the increase in fuel spacing until flame extinction occurred. We established a representation of the maximum flame spread rate and critical flame spread rate by using points positioned at the horizontal and vertical coordinates, indicating the fuel tilt angle and fuel coverage rate, respectively. Using the flame spread rate as a basis, the chart delineates three distinct regions characterized by the continuous flame spread line, the maximum flame spread line, and the critical flame spread line, as depicted in Figure 13. Region 1 represents the accelerated flame spread region, where the flame spread rate is increased by enhanced air entrainment and heat transfer. Region 2 is the flame spread weakening region, where the cooling effect of the large gap size plays the dominant role. Region 3 can be referred to as the failed flame spread region, where the flame cannot successfully spread due to insufficient heat transfer to ignite the next fuel unit.
Conclusions In this study, we designed 34 sets of fuel arrays, spanning six inclination angles (0°-85°) and incorporating various fuel coverage rates (0.421-1). We conducted a thorough analysis of typical flame phenomena and essential flame characteristic dimensions, including flame height, flame width, and flame spread rate. The main findings are summarized as follows: 1. At lower angles, the flame front exhibited a smooth envelope with minimal fluctuations. However, at higher inclination angles, turbulent flame structures became increasingly apparent. Nevertheless, the burning intensity diminished, resulting in more stable flames, especially when the gap distance was sufficiently large, even at high inclination angles. In instances of smaller gap distances, flames leaped across the air gap. As the gap distance increased, flames propagated across the gap by relying on continuous heating from the preceding fuel flame, often accompanied by flame splitting. 2. Both flame height and flame width exhibited an initial increase followed by a decrease with the increase in fuel coverage, reaching their peak values at specific points. Higher fuel coverage levels facilitated the entrainment of air and effective fuel mixing in the relatively small air gap, accelerating combustion and resulting in a larger flame size. Conversely, at low fuel coverage, larger air gaps hindered flame propagation due to the increasingly challenging ignition of the next sample, resulting in a smaller flame size. 3. The flame spread rate demonstrated an initial increase followed by a decrease with the increase in fuel coverage, reaching a maximum value at fuel coverage rates between 0.93 and 0.571 for various inclination angles. We proposed a theoretical model to predict flame spread which effectively elucidates and predicts the interplay among flame spread rate, inclination angle, and fuel coverage. Furthermore, we delineated distinct regions within the map formed by inclination angle and fuel coverage, including the accelerated flame spread region, the flame spread weakening region, and the failed flame spread region.

The phenomenon of flame propagation on inclined, discrete, thin fuels becomes increasingly complex in the presence of wind, as the involvement of wind significantly alters flow field characteristics, flame morphology, and heat transfer mechanisms [35]. The primary influence of forced airflow on combustion behavior is attributed to the horizontal momentum generated by the flow, counteracting the vertical buoyancy produced by the fire [36]. Research by Lai et al. [17] indicates that low-speed airflow can enhance combustion longevity and char yield, while high wind speeds can diminish combustion intensity. The involvement of different wind speeds adds further interest to our study, and our next step involves investigating flame propagation on inclined, discrete, thin fuels under varying wind speeds.

Figure 1. The diagrammatic sketch of the experimental apparatus and the sample layout. Figure 2. The definition of the characteristic flame size. Figure 4. Typical flame images with different fuel coverage at inclination angles from 25° to 85°.
Figure 5. The real-time flame height with time: (a) the real-time flame height over continuous samples (f = 1) with different inclinations; (b) the real-time flame height with different fuel coverages (f) at 25°. Figure 6. The dependence of the average flame height on different fuel coverage rates. Figure 8. The relationship between the ratio (h_f/w_f) of average flame height to average flame width and the fuel coverage rate (f). Figure 11. The average spread rate versus fuel angular orientations with different fuel coverage rates: (a) all the fuel coverage rates; (b) partial enlargement. Figure 12. Comparison between the calculated flame spread rates and the measured values for different fuel coverage and inclination angle values according to Equation (6). Figure 13. Distribution of flame spread regimes for different inclination angle and fuel coverage values. Table 2. The relevant parameters of air. Table 3. The relevant parameters of paper.
2024-05-26T15:33:42.163Z
2024-05-23T00:00:00.000
{ "year": 2024, "sha1": "32ff2ca433de74431a4497c566efd84a055081ef", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2571-6255/7/6/177/pdf?version=1716465496", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "52944b96828baa210457db988cd869029edd44d3", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
227751874
pes2o/s2orc
v3-fos-license
Identification of Long Noncoding RNA Biomarkers for Hepatocellular Carcinoma Using Single-Sample Networks Objective Many studies have found that long noncoding RNAs (lncRNAs) are differentially expressed in hepatocellular carcinoma (HCC) and closely associated with the occurrence and prognosis of HCC. Since patients with HCC are usually diagnosed in late stages, more effective biomarkers for early diagnosis and prognostic prediction are urgently needed. Methods The RNA-seq data of liver hepatocellular carcinoma (LIHC) were downloaded from The Cancer Genome Atlas (TCGA). Differentially expressed lncRNAs and mRNAs were obtained using the edgeR package. The single-sample networks of the 371 tumor samples were constructed to identify the candidate lncRNA biomarkers. Univariate Cox regression analysis was performed to further select the potential lncRNA biomarkers. By multivariate Cox regression analysis, a 3-lncRNA-based risk score model was established on the training set. Then, the survival prediction ability of the 3-lncRNA-based risk score model was evaluated on the testing set and the entire set. Function enrichment analyses were performed using Metascape. Results Three lncRNAs (RP11-150O12.3, RP11-187E13.1, and RP13-143G15.4) were identified as potential lncRNA biomarkers for LIHC. The 3-lncRNA-based risk model had good survival prediction ability for patients with LIHC. Multivariate Cox regression analysis proved that the 3-lncRNA-based risk score was an independent predictor for the survival prediction of patients with LIHC. Function enrichment analysis indicated that the three lncRNAs may be associated with LIHC via their involvement in many known cancer-associated biological functions. Conclusion This study could provide novel insights to identify lncRNA biomarkers for LIHC at a molecular network level.

Introduction Hepatocellular carcinoma, one of the most common cancers worldwide, is the third leading cause of cancer-related mortality worldwide, and its annual incidence continues to increase rapidly [1][2][3]. The risk factors for HCC include infection with hepatitis B virus (HBV) or hepatitis C virus (HCV), aflatoxin B1 intake, alcohol consumption, nonalcoholic fatty liver disease, and some hereditary diseases [4][5][6]. Since patients with HCC are usually diagnosed at late stages, when medication is no longer effective, understanding the molecular mechanisms of HCC and identifying biomarkers for early diagnosis and treatment are essential [7,8]. Long noncoding RNAs (lncRNAs) are non-protein-coding transcripts longer than 200 nucleotides. According to the well-known central dogma of molecular biology, genetic information is stored in protein-coding genes [9,10], for which reason noncoding RNAs (ncRNAs) were long considered "junk genes" or "transcriptional noise" [11]. However, with the development of both experimental technology and computational methods, an increasing number of lncRNAs have been discovered in the human transcriptome. Over the past decades, several studies have shown that lncRNAs are involved in almost the whole life cycle of cells through different mechanisms, and they play diverse and important roles in many fundamental and critical biological processes, including transcriptional regulation, epigenetic regulation, organ or tissue development, cell differentiation and apoptosis, cell cycle control, metabolic processes, and chromosome dynamics [12][13][14][15].
Recently, several lncRNAs have been demonstrated to be associated with tumor development and patient survival in different kinds of cancers, including HCC [16][17][18]. Moreover, many studies have highlighted the molecular mechanisms and biological characteristics of lncRNAs in HCC occurrence and progression, and the results revealed that some lncRNAs can also serve as valuable prognostic predictors for HCC patients [19][20][21]. Despite precision medicine, which uses molecularly targeted therapy against malignant tumors and speeds up progress toward the discovery of novel molecular targets with diagnostic and prognostic value [22], the management of patients with HCC remains problematic [23,24]. Therefore, there is an urgent requirement to identify many more lncRNA biomarkers for HCC. One key to achieving personalized medicine is to elucidate the molecular mechanisms of individual specific diseases, which generally result from the dysfunction of an individual-specific molecular network rather than the malfunction of single molecules. With rapid advances in high-throughput technologies, applying molecular networks to analyze human complex diseases is attracting increasingly wide attention [25,26]. A molecular network, e.g., a gene regulatory network or a coexpression network, can generally be estimated from the correlation coefficients of molecule pairs computed from the expression or sequence data of multiple samples [27,28]. In recent years, based on biological and clinical data, a number of network-based methods have been proposed not only to identify disease modules and pathways but also to elucidate the molecular mechanisms of disease development at the network level [29][30][31]. Many studies have shown that network-based biomarkers are superior to traditional single-molecule biomarkers for accurately characterizing disease states due to the additional information they carry on interactions and networks [28,30,32,33]. In particular, a single-sample network is considered to be reliable for accurately characterizing the specific disease state of an individual. It can be directly used to identify biomarkers and further elucidate the molecular mechanisms of a disease for individual patients [29]. In this study, we aimed to identify lncRNA biomarkers for patients with LIHC based on the RNA-seq data from TCGA. By constructing the single-sample networks for the 371 tumor patients, we obtained three lncRNA biomarkers associated with the overall survival (OS) of LIHC patients. Then, a 3-lncRNA-based risk score model was established on the training set, which could effectively predict the OS of LIHC patients. The independence and predictive ability of the 3-lncRNA-based risk score for survival prediction were validated on the testing set and the entire set.

Differential Expression Analysis. EdgeR is a Bioconductor package for differential expression analysis of replicate count data, which is widely used in the differential expression analysis of high-throughput sequencing data. In our study, we extracted the expression profiles of mRNAs and lncRNAs from the RNA-seq count data, which were normalized using the edgeR package (version 3.22.5). Those mRNAs and lncRNAs with a zero expression value in more than 10% of samples were discarded. The differentially expressed lncRNAs and mRNAs were calculated by edgeR at a threshold of FDR < 0.05 and |log2(fold change)| > 1. 2.3. Construction of the Single-Sample Networks. In our study, the differentially expressed mRNAs and lncRNAs which had the same gene names were removed.
As a result, 3329 differentially expressed mRNAs and 956 differentially expressed lncRNAs for each sample were left for further investigation. A single-sample network was constructed based on statistical perturbation analysis of a single test sample against a group of given reference samples, which can accurately characterize the disease state of an individual or a sample. The more detailed description about the single-sample network can be found in Ref. [34]. In our study, we took 50 normal samples as the reference samples, while 371 tumor samples were the test samples. Firstly, based on the gene expression data of the reference samples, a reference correlation network can be constructed by computing the Pearson correlation coefficient (PCC) between lncRNA-lncRNA and lncRNA-mRNA pairs, which was denoted as N r . Then, adding a test sample s to the reference samples, another perturbed correlation network was obtained in the same way, which was denoted as N p . By comparing the difference of the two correlation networks, we can get a single-sample network N ssn for this test sample sðN ssn = jN r − N p jÞ. Finally, 371 single-sample networks were obtained in our study. For convenience, we transformed each single-sample network to an adjacency matrix Δ D, and the element ΔD i,j represents the ΔPCC of the edge for a pair of molecules in the single-sample network. As the theoretical foundation for this method, if there were obvious differences between the reference samples and the single sample s in terms of the gene expression pattern, adding the tumor sample s to the reference samples would cause significant changes of the PCC on some edges in the perturbed network. We assumed that if a lncRNA might be a key biomarker, the sum of the Δ PCC of the edges linked by the lncRNA would be higher than others. Then, a vector SD was used to represent the sum of ΔPCC of the edges linked by each lncRNA in a singlesample network, which was denoted as where SD i is the sum of ΔPCC of all the edges linked by the ith lncRNA in a single-sample network and can be calculated by Consequently, all the 371 single-sample networks can be represented by a matrix M 956×371 : , where i denotes the ith lncRNA and j denotes the jth tumor sample. Different from the method in Ref. [34], we designed a ranking system to identify the candidate lncRNA biomarker according to the matrix M. Firstly, each column in matrix M was sorted by the value size of SD ij , and the items in the column returned to the corresponding lncRNA in matrix ML. Then, we calculated the frequency of each lncRNA in the top K rows of matrix ML and the top 5% lncRNAs were retained. Finally, in order to improve the effectiveness of our method, we took the intersection of the candidate lncRNAs under different K (K = 5,10,20,30) as the final candidate lncRNA biomarkers by SSN. The flowchart of the ranking system is shown in Figure 1. Survival Analysis. The differentially expressed lncRNAs, of which the expression level was zero that exceeded 10% of all subjects, were removed from the prognostic analysis. In the meantime, clinicopathological features, including survival information, were also achieved from TCGA. Samples without sufficient clinical data were omitted. Finally, the prognostic analysis included a total of 307 tumor samples with the expression data from 956 lncRNAs. 
The univariate and multivariate Cox regression analyses were conducted to evaluate the association between the variables and the OS of LIHC patients on the training set, the testing set, and the entire set, and statistical significance was assessed using p < 0:05. The Kaplan-Meier plots were employed to observe the prognosis effect. The ROC curve analysis was used to evaluate the prognosis performance for 5-year survival rate. All analyses were performed on the R3.4.3 framework. 2.5. Construction of the Risk Score Model. The 307 patients were randomly divided into two datasets. 154 patients were used as the training set to build a risk score model based on the candidate lncRNA biomarkers, while the other 153 patients were used as the testing set to evaluate the predictive ability of the risk score model. Firstly, the multivariate Cox regression analysis was used to evaluate the association between the expression of the lncRNA biomarkers and the patient's OS in the training set. Then, a risk score model was built by linear combination of the expression levels of the lncRNA biomarkers weighted by their Cox regression coefficients. The calculation formula of the risk score model was as follows: where i represents the total number of the lncRNA biomarkers, Exp indicates the expression profiles of lncRNA, and Coe is the estimated regression coefficient of the ith lncRNA derived from the multivariate Cox analysis. Based on this formula, each patient with LIHC had an RS and the median RS was treated as a cut-off point to divide the patients into the low-risk group and the high-risk group. 2.6. Pathway and Functional Enrichment Analysis. Metascape (http://metascape.org/gp/index.html) is a free gene annotation and analysis resource that helps biologists make sense of one or multiple gene lists [35]. To understand the function of the identified lncRNA biomarkers in our study, the Gene Ontology (Go) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis was performed by Metascape. A value of p < 0:01 was set as the cut-off for significance. Differentially Expressed lncRNAs and mRNAs in LIHC. We downloaded the expression data of 60483 RNA, from which 14822 lncRNAs and 19814 mRNAs were obtained by the gene type data reported by the genome GRCh38.p13. By calculating with edgeR, a total of 956 lncRNAs and 3329 mRNAs were considered to be differentially expressed between tumor samples and normal samples. The volcano plot of the differentially expressed mRNAs and lncRNAs is shown in Figure 2. The Candidate lncRNAs Identified by Single-Sample Networks. In our study, the top 48 lncRNAs (5% of 956 lncRNAs) that had the highest frequency of occurrence in all the 371 tumor samples remained as the candidate lncRNA biomarkers by the ranking system. Furthermore, by taking the intersection of the candidate lncRNAs identified under different KðK = 5, 10, 20, 30Þ, 27 lncRNAs were obtained as the final candidate lncRNA biomarkers by the ranking system according to 371 single-sample networks. The detailed differential expression information of these lncRNAs is listed in Table S1. Figure 3. The detailed description of the three lncRNAs is shown in Table 1. 3.5. Evaluation of the Risk Score Model in the Testing Set and the Entire Set. To assess the robustness of the 3-lncRNAbased risk score model in OS prediction for LIHC patients, we further examined it in the testing set and the entire set. 
The same 3-lncRNA-based risk score model and cut-off point derived from the training set were used to divide the testing set and the entire set into the high-risk group and the low-risk group (n = 77 and 77, n = 154 and 153, respectively). In the testing set, the Kaplan-Meier curve showed that the survival of LIHC patients in the low-risk group exhibited a longer OS as compared to those in the high-risk group ( Figure 5(b), p = 0:0037). The median survival of patients in the high-risk group and the low-risk group was 2.29 years and 6.69 years, respectively. In the entire set, a similar result was shown that patients in the high-risk group exhibited a shorter OS as compared to those in the low-risk group ( Figure 5(c), p = 0:0011). The median survival of patients in the high-risk group and the low-risk group was 3.39 years and 5.52 years, respectively. The AUC of 5-year ROC curve in the testing set and the entire set was 0.704 and 0.664, implying a good prognostic capacity of the 3-lncRNA-based risk score (Figures 5(e) and 5(f)). , which revealed that the 3-lncRNA-based risk score was an independent predictor of OS for LIHC patients. More detailed results are provided in Table 2. Pathway and Functional Enrichment Analysis. We were also interested in the molecular mechanisms of the three lncRNAs. Unfortunately, little publication was found on the functional mechanism of these lncRNAs. Finally, we performed Pearson correlation analyses between the three lncRNAs and protein-coding genes based on their expression levels in TCGA LIHC cohort. The proteincoding genes that correlated with at least 1 of the three lncRNAs (Pearson coefficient > 0:45, p < 0:01) were considered to be significantly correlated coexpressed genes of the three lncRNAs, which are listed in Table 3. Then, pathway and process enrichment analyses were carried out with the following ontology sources in Metascape: KEGG pathways and GO biological processes. All genes in the genome were used as the enrichment background. Terms with p < 0:01, minimum count = 3, and enrichment factor > 1:5 were collected and grouped into clusters based on membership similarities. We found that the coexpressed genes of three lncRNAs were mainly associated with terms related to endothelial cell migration, ncRNA metabolic process, cell cycle arrest, etc. (Figure 6). The retrieved data is also provided in Table S2. Discussion HCC is the most common type of liver cancer, accounting for about 80% of the tumours in this organ. Although there are advances in the diagnosis and treatment of HCC, the OS time of patients with HCC remains poor. Recently, molecular biomarkers have been identified throughout the various clinical stages and pathological tissue types, and multiple studies have revealed that differential expression of lncRNAs played critical roles in cancer development, indicating their potential as novel biomarkers for cancer diagnosis and prognosis [36,37]. However, until now, only a few lncRNAs had been experimentally verified for HCC. Thus, it is necessary to identify more potential lncRNA biomarkers for HCC. In the present study, we analyzed the expression profiles and clinical information of LIHC patients from TCGA database. The differential expressed lncRNAs and mRNAs were screened to construct the single-sample networks for 371 LIHC patients. According to the 371 single-sample networks, 27 candidate lncRNAs were selected by the ranking system designed for identifying the lncRNA biomarkers from the single-sample networks. 
Furthermore, by using the univariate Cox regression, three lncRNAs (RP11-150O12.3, RP11-187E13.1, and RP13-143G15.4) were selected as the lncRNA biomarkers associated with patients' OS. Next, multivariate Cox regression was performed in the training set to establish a 3-lncRNA-based risk score model based on the expressions and relative contributions in the multivariate Cox regression model. The 3-lncRNA-based risk score had a good prognosis prediction ability for OS of patients with LIHC, which was tested in the testing set and the entire set. Moreover, we performed multivariate Cox regression analyses using the 3-lncRNA-based risk score and other clinical data in the training set, the testing set, and the entire set. The result showed that the 3-lncRNA-based risk score was an independent predictor for the OS of LIHC patients. The results of function enrichment analysis showed that these KEGG pathways and functional categories of the three lncRNA biomarkers were all closely associated with tumorigenesis. For instance, the transcription factor p53 is a tumor suppressor, activating downstream targets to trigger cell cycle arrest and apoptosis [38], and several lncRNAs participate in the p53 regulatory network and serve as p53 regulators or effectors [39]. Moreover, a recent global transcriptome study identified that sixteen p53 target lncRNAs forming a pathway web constitute tumor suppressor signature with high diagnostic power [40]. Importantly, Jelena et al. revealed that p53 might play multifunctional roles in different stages in HCC [41]. Moreover, the metastatic characteristics of liver cancer are the key factors affecting the survival and prognosis of tumor patients, and the process of cell migration is involved in tumor metastasis [42]. The COP9 signalosome is a highly conserved protein complex implicated in diverse biological functions that involve ubiquitin-mediated proteolysis [43]. Also, research demonstrates that COP9 signalosome is an important regulator of cell cycle and cell survival mediating the proliferation of HCC cells and highlight that COP9 signalosome might be a promising strategy for anti-HCC therapy [44]. The unfolded protein response is a signal transduction cascade, which acts as a quality control mechanism for protein synthesis and consequently can act to protect cells from an adverse external microenvironment. The study declared that the unfolded protein response is activated in the majority of HCC and may play a role in chemoresistance to the most widely used chemotherapy agent, doxorubicin, and have effects on newer antiestrogen and multikinase inhibitor therapies [45]. The present results indicated that the three lncRNAs might have an important role in LIHC via their involvement in these known cancerassociated biological functions. Besides, previous studies have demonstrated that RP11-150O12.3 was significantly and independently associated with survival of HCC patients [46]. 
Moreover, RP11-150O12.3 also has been recognized as an independent C7orf61, LSM8, AP1S1, ELOB, TSR3, PSMG3, COPS9, COPS6, ZNF593, LOC101927420, CLDND2, ST7-AS1, C6orf52, LOC106660606, B9D1, FIS1, PRSS3, S100A13, S100A16, NSUN5, LMNA, LAGE3, ACD, SNHG19, HSPB1, TMEM54, LINC00896, TMSB10, SFN, BRI3, EIF3B, HRAS, MRPL28, MIR6132, NT5C, PTCD1, RASSF1-AS1, MINCR, LOC105371849, METTL26, DPM3, BCL7C, NUDT1, AP1M2, MCM7, BOLA2B, JOSD2, LOC400684, PUSL1, ARF4-AS1, UQCC3, POLR2J, NAA10, LOC101928659, PGP, S100P, POP7, LAMTOR4 predictor of gastric cancer prognosis [47], which also related to survival time of patients with colorectal adenocarcinoma [48]. RP13-143G15.3 is abnormally expressed in liver cancer and may act as an important role of tumor suppressor in human HCC [49]. The results of these studies provided some grounding to validate our findings. In our study, the single-sample networks were constructed for identifying the lncRNA biomarkers associated with LIHC patients. Actually, a single-sample network in this study is not a real molecular network for each sample but a perturbation network for a single sample against the reference network. It mainly reflects the variation between normal and disease samples in terms of interactions or a network. Similarly, a differential expression of a gene is not the real gene expression level for each sample but the variation of the gene expression between normal and disease samples. The advantage of the single-sample network is that it cannot only characterize the personalized features for all the single samples but also can be directly applied to the data analysis of single samples at the molecular network level, which is superior to traditional single molecular biomarker. There are several limitations to the present study. First, the primary purpose of the present study was to identify lncRNA biomarker for LIHC at the molecular network level. The lncRNA biomarkers selected in our study should be further investigated incorporating more specific clinical characteristics to fully understand the associations involved. Second, the three lncRNA biomarkers were only validated in the datasets from TCGA database, which required additional confirmation in other large cohorts of LIHC patients in the future. Third, the lncRNA biomarkers identified in our study have no validation in fresh samples and experiment studies. Conclusion In this study, we constructed the single-sample networks for LIHC patients and identified three lncRNA biomarkers associated with the OS of LIHC patients. A 3-lncRNA-based risk score model with the ability to effectively predict the survival of LIHC patients was established, which was validated to be an independent predictor for the OS of LIHC patients. Functional enrichment analysis revealed that three lncRNAs may be associated with LIHC via their involvement in many known cancer-associated biological functions. This study provides a new perspective for identifying the lncRNA biomarkers at the molecular network level.
2020-11-19T09:15:29.537Z
2020-11-14T00:00:00.000
{ "year": 2020, "sha1": "fbdf9dffc4b4a8bf36deea5e273c58fbdc8d4634", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/bmri/2020/8579651.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "46fea3f0ea289050a63e961529da5d9b8a0a58f6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
262774952
pes2o/s2orc
v3-fos-license
International Journal of Pharmaceutics and Drug Analysis Effectiveness of counseling in patients of breast and cervical cancer receiving chemotherapy and radiotherapy - prospective observational study Cancer is one of the main problems in the India and world also. From the world health organization report (2022), 10 million deaths in the year of 2022 of the entire world. This reflects one in six deaths. Side effects are very common in the cancer treatment process i.e. chemotherapy, radiotherapy and chemo radiation therapy. Proper knowledge is required in these aspects. Sometimes these side effects lead to life threatening due to lack of knowledge on side effects management while patients are in the home. These patients also need counseling, consultation and about side effects management of their treatment. This counselling leads to not only the patient’s quality of life but also improve the patients’ psychological conditions during their treatment process. In our prospective observational study, the main objectives are checking the effectiveness of before counselling and after counselling with respect to patient’s side effect management and psychological effectiveness. Finally evaluate the outcome with the counselling in both breast and cervical cancer patients. We provide counseling for cancer patients in King George Hospital, Visakhapatnam, India. In our study we used EORTC QLQ – C30 (version 3) scale for comparison of with or without counseling in the areas of side effect management, effectiveness of psychological and nutritional in breast and cervical cancer patients. Finally in our study we noticed that mostly rural area and some urban people were lack of knowledge on side effect management of cancer treatment. By providing better counseling to these patients especially in rural area people for their treatment and create awareness of cancer in group counseling for care givers then automatically better side effect management was possible in cancer patients. Pharmacist counseling is one of the important services that has been associated with improved outcome. Exposure to radiation (X-rays, UV rays)  Chemical exposure  Lack of physical activity Some risk factors are in our hand to control the chance of occurrence, they are avoiding tobacco, alcohol consumption, and unhealthy food. Some risk factors are not in our hand they are age (getting older), cancer causing genes.  Sign is an alert notice for any disease. Following are the main signs in cancer  Unexpected weight loss or gain.  Changes related to skin (excessive hair growth, reddened skin, itching).  Bladder function changes.  Unusual bleeding or discharge.  Fatigue.  Swelling or lumps appears in any part of the body. Symptoms Most of the cancer symptoms are related to the effected organ in our body. Side effects Sore throat, Heart burn, Nausea, vomiting and diarrhea, Difficulty in swallowing, Skin problems, Taste changes, Loss of appetite, Low blood cell count. C) Chemotherapy: Chemotherapy is a procedure with injection of highly cytotoxic drugs into the patient's body for killing the rapidly dividing cells. Chemotherapy given in two ways either by alone or combine with radiotherapy/surgery. The main purpose while using combination for reducing the size of the tumor then it will be helpful for surgery. Some times other main organs cells kidney, heart, lungs, nervous system, bladder cells are also damaged. 
Drugs used for cervical cancer: Single line Therapy Aim In this prospective observational study, the main is the evaluation of outcome of effectiveness of counselling in the side effects management and psychological improvement while patient taking chemotherapy, radiotherapy and chemo radiation therapy. Methodology The study entitled "Effectiveness of counselling in patients of breast and cervical cancer receiving chemotherapy and radiotherapy" was conducted in the Oncology department of Exclusion criteria Patients who will deny giving consent i.e. unwilling patients. Patients who will attend to the Oncology ward but not diagnosed with breast and cervical cancers Patients who will undergo surgery. Pregnant women with this type of cancers. Patient who will be referred to the other ward. Study Procedure This was a Prospective, questionnaire-based observational study. The study will be conducted on patients attending at Oncology unit, King George Hospital (KGH), Visakhapatnam. 1. Firstly, identify the cervical and breast cancer patients, and then study protocol will be explained after that patient consents will be taken individually. 2. A questionnaire which covers details of patient demographic information will be asked in the language spoken by the patient, mostly Telugu as it is the native language. 3. The data pertaining to the patient including the Patient's name, age, sex, educational level, marital status, place of living, cancer stage, patient past medical history, chemotherapy cycle number, Chemotherapy drug combination, previous history of illness, chemotherapy drug strength/dose will be collected in a well-designed patient data collection and documentation form. 4. Prescription of the patients will be documented in data collection form. 5. Depending up on the stage of cancer and the prescribed drugs we will identify the side effects occurring in the Patient after chemotherapy and radiotherapy. 6. We will council the Patients based up on their side effects with the help of designed questionnaire format. 7. Counseling will be nutritional, psychological, drug related counseling in a comfortable manner to the patient. 8. Leaflet will be provided to each individual; it includes the information regarding the side effects after chemotherapy and radiotherapy. 9. The patients will be followed for period of 5 months. 10. After completion of the counseling results will be analyzed with proper statistical tools. Plan of Work Patients with Cervical and Breast cancer approached to the At the end of the study, results will be analyzed and interpreted using statistical method Statistical methods For measuring of outcomes chi-square test was used. Chi-square test compress the observed values to the expected values. This test is also helpful for determining the difference between observed and expected values are statistically significant or not. Smoker, Alcoholism Chi-square test was used for assess the improvement of quality of health during their treatment and after the treatment in both the breast and cervical cancer patients. Results Our significant value is 0.5 Percentage chance of occurrence In our study we noticed that the age group of 41 to 55 had more chance of occurring cervical and breast cancer Psychological effectiveness of breast and cervical cancer patients after counseling In our study we found that especially for breast cancer patients they get depressed for loss of breast during their treatment. 
By providing mental and psychological support and also counselling for patients along with care givers. Finally, we noticed that end of 3 to 4 chemo cycles the outcome grade was raised from one good patient (before counselling) to 19 (after counselling). Similarly in cervical cancer patients the psychological outcome was improved with good counselling. Discussion We found that rural area peoples having more chances to developing cancer compare to urban population. low economic people are also highly effects with cancer. In to that patient, we can reduce the side effects, to give psychological support and advices will also useful for maintain better health care during their treatment. Summary The research thesis entitled, "Effectiveness of counselling in patients of breast and cervical cancer after receiving chemotherapy and radiotherapy -prospective Before counselling observational study" reported that patient counselling was helpful to improve the quality of health in both breast and cervical cancer patients. In breast cancer patients both chemotherapy and radiotherapy given to the patient simultaneously. In cervical cancer chemotherapy was given to the patients. In both cases during of treatment process we observed reduction of anxiety, depression, tension and worry after counsel the patient. We advices the better nutrition and liquid intake supplements that was useful to makes the patient healthy during the chemo and radiotherapy treatment. Funding No funding. Author Contribution All authors contributed equally.
2022-07-12T16:42:59.496Z
2022-07-07T00:00:00.000
{ "year": 2022, "sha1": "9d26a830c6db97502c248b71abd5a6d0f4a7eddb", "oa_license": "CCBYNC", "oa_url": "https://ijpda.com/index.php/journal/article/download/514/518", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "11782d86d064e57d9e9b7fde59a2c7e6375de711", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
17258959
pes2o/s2orc
v3-fos-license
The Prevalence of Vaginal Microorganisms in Pregnant Women with Preterm Labor and Preterm Birth Background To investigate the risk factors for vaginal infections and antimicrobial susceptibilities of vaginal microorganisms among women who experienced preterm birth (PTB), we compared the prevalence of vaginal microorganisms between women who experienced preterm labor (PTL) without preterm delivery and spontaneous PTB. Methods Vaginal swab specimens from 126 pregnant women who experienced PTL were tested for group B streptococcus (GBS), Mycoplasma hominis, Mycoplasma genitalium, Ureaplasma urealyticum, Chlamydia trachomatis, Trichomonas vaginalis, Neisseria gonorrhoeae, Treponema pallidum, herpes simplex virus (HSV) I and II, and bacterial vaginosis. A control group of 91 pregnant women was tested for GBS. Antimicrobial susceptibility tests were performed for GBS, M. hominis, and U. urealyticum. Results The overall detection rates for each microorganism were: U. urealyticum, 62.7%; M. hominis, 12.7%; GBS, 7.9%; C. trachomatis, 2.4%; and HSV type II, 0.8%. The colonization rate of GBS in control group was 17.6%. The prevalence of GBS, M. hominis, and U. urealyticum in PTL without preterm delivery and spontaneous PTB were 3.8% and 8.7% (relative risk [RR], 2.26), 3.8% and 17.3% (RR, 4.52), and 53.8% and 60.9% (RR, 1.13), respectively, showing no significant difference between the 2 groups. The detection rate of M. hominis by PCR was higher than that by culture method (11.1% vs. 4.0%, P=0.010). The detection rates of U. urealyticum by PCR and culture method were 16.7% and 57.1%, respectively. Conclusions There was no significant difference in the prevalence of GBS, M. hominis, and U. urealyticum between the spontaneous PTB and PTL without preterm delivery groups. INTRODUCTION The complications of preterm birth (PTB) cause approximately 70% of neonatal deaths and nearly half of all long-term neurological morbidity [1,2]. PTB could be categorized by its clinical presentation: spontaneous preterm labor (PTL) leading to spontaneous PTB (S-PTB), preterm premature rupture of the membranes (PPROM), and medically induced PTB (M-PTB) due to maternal or fetal complications [3]. However, not all contributory causes of S-PTB have been identified, and the healthcare system is unable to target and manage relevant risk factors appropriately [4]. Intrauterine infection has been proposed as one of the most important risk factors for complications in pregnancy, such as premature rupture of membrane, PTL, PTB, and perinatal infections. Microorganisms may gain access to the amniotic cavity http://dx.doi.org/10.3343/alm.2012.32. 3.194 www.annlabmed.org and the fetus through the following pathways: 1) ascending from the vagina and the cervix; 2) hematogenous dissemination through the placenta (transplacental infection); 3) retrograde seeding from the peritoneal cavity through the fallopian tubes; and 4) accidental introduction at the time of invasive procedures such as amniocentesis, percutaneous fetal blood sampling, chorionic villous sampling, or shunting [5]. Most studies on detection of infection in patients who underwent PTL and preterm delivery have focused on microbial invasion of the amniotic cavity, which is normally sterile. Therefore, the isolation of any microorganism from the amniotic fluid constitutes evidence of microbial invasion. The most common pathway of intrauterine infection is the ascending route [6]. 
Although colonization of the maternal genital tract by specific organisms has been inconsistently associated with S-PTB and/ or PPROM, some infections have been consistently associated with preterm delivery [7]. It has been hypothesized that screening for and treatment of common vaginal infections would reduce the rate of PTB among affected women [8]. The microorganisms associated with PTL without preterm delivery (PTL-WO), S-PTB, and PPROM are as follows: Treponema pallidum, Neisseria gonorrhoeae, group B streptococcus (GBS), Ureaplasma urealyticum, Mycoplasma hominis, Chlamydia trachomatis, Trichomonas vaginalis, Gardnerella vaginalis, Bacteroides spp., and peptostreptococci [7,9]. However, it is difficult to assess the extent of the causal relationship between infection and S-PTB because the maternal genital colonization rates by microorganisms differ according to race, gestational age, geographical variation, and investigators [2,7,9]. Moreover, it is not known how and when bacteria invade the uterus and whether additional (as yet undocumented) infections by viruses, protozoa, or bacteria other than those mentioned above are involved in PTB [9]. The purpose of this study was to determine the prevalence and antimicrobial susceptibilities of vaginal microorganisms isolated from Korean women who experienced PTL-WO and S-PTB. Finally, we sought to identify the microorganisms that were potential risk factors for PTB. Study population The study population consisted of 126 pregnant women who experienced PTL and a control group of 91 pregnant women, with no history of PTL or preterm delivery and who underwent routine prenatal care, admitted to the Wonju Christian Hospital be-tween August 2008 and July 2009. The patients were divided into the following 3 groups according to the type of complications during pregnancy: 1) PTL-WO (N = 26), 2) S-PTB (N = 69), and 3) M-PTB (N = 31). M-PTB was defined as PTB with risk factors such as spontaneous rupture of membranes, anomalies of conception, malformations of the fetus, overdistended uterus, hydramnios, multiple gestations, fetal death, cervical incompetency, uterine anomalies, retained intrauterine device, serious maternal disease, pre-eclampsia, gestational diabetes mellitus, gestational hypertension, and active systemic infection. This study protocol was approved by the Institutional Review Board of Yonsei University Wonju College of Medicine. 1) Culture and antimicrobial susceptibility test for GBS Swab specimens from the vagina, anorectum, and urethral orifice of pregnant women were placed in selective Todd-Hewitt broth (S-THB) and new Granada plate and tube medium made in the laboratory [10] and in new Granada medium purchased from bioMérieux (Marcy-l'Etoile, France) for [18][19][20][21][22][23][24] hr in 5% CO2 atmosphere at 35°C. The colonies exhibiting orange coloration on new Granada tube or plate medium and/or exhibiting growth turbidity in S-THB were subcultured on 5% sheep blood agar and were identified using Streptex group B Streptococcus reagent (Wellcome Diagnostics, Dartford, England). Susceptibility to clindamycin, erythromycin, chloramphenicol, levofloxacin, penicillin, ceftriaxone, and vancomycin was tested using the Mi-croScan ® MICroSTREP plus panel (Siemens Healthcare Diagnostics, Sacramento, CA, USA). The panel was inoculated using the Renok hydrator/inoculator, which delivered 115 µL of Mueller-Hinton broth with 3% lysed horse blood to each well. 
After inoculation with 0.5 McFarland standard bacterial suspension, the panels were incubated at 35°C in ambient air for 20 to 24 directly into R1 tubes (transport medium) and subsequently delivered to the clinical microbiology laboratory for the identification of both M. hominis and U. urealyticum and for the determination of antimicrobial susceptibility. Swabs in the R1 transport medium were processed according to the manufacturer's instructions. They were mixed rapidly by vortexing, and then 3 mL of R1 was used to rehydrate the lyophilized growth medium R2 (provided in the Mycoplasma IST-2 kit). A Mycoplasma IST strip, consisting of 22 wells, was then inoculated with the rehydrated R2 growth medium (55 μL per well, overlaid with 2 drops of mineral oil). From the R2 positive tube, 0.1 mL was also inoculated onto A7 Mycoplasma agar plates (BioMérieux) and incubated at 37˚C in an atmosphere of 5% CO2 to evaluate characteristic colony morphology. All media and the inoculated strip were incubated at 37˚C in a CO2 incubator and monitored for color changes. The results were interpreted after 24 and 48 hr of incubation [11]. Wells 1-5 provided information on the presence or absence of M. hominis and U. urealyticum, with an estimate of the density of each organism (above, below, or equal to 10 4 colony-forming units [CFUs]). The antimicrobial susceptibility test was performed when the colony count for each organism was ≥ 10 4 CFU/mL. 3) Multiplex PCR for genital microorganisms Samples for multiplex PCR were obtained from the posterior vaginal fornix by swabbing with a cytobrush. The swab was rinsed in a transport tube containing PCR buffer (phosphatebuffered saline solution, pH 7.4, without calcium chloride and magnesium chloride; GibcoBRL, Grand Island, NY, USA). T. vaginalis, M. hominis, Mycoplasma genitalium, U. urealyticum, C. trachomatis, N. gonorrhoeae, T. pallidum, herpes simplex virus (HSV) I, and HSV II were simultaneously detected using the Seeplex ® STD6B ACE Detection kit (Seegene, Seoul, Korea). The swabbed sample was briefly vortexed and then centrifuged at 3,000 rpm for 20 min. After centrifugation, the mixture of sample pellet, proteinase K (Boehringer Mannheim, Mannheim, Germany), and buffer solution (10 mM Tric HCl, pH 8.3, 50 mM KCl, 0.1 mg/mL gelatin, 0.45% NP40, and 0.45% Tween 20) was incubated at 56˚C for more than 3 hr. The pellet was resuspended with phenol-chloroform, sodium acetate, and ethanol and incubated on ice for 30 min. Then, the DNA was extracted by centrifugation at room temperature for 5 min at 14,000 rpm, and the final pellet was frozen at -20˚C. The extracted DNA was washed with 70% ethanol and resuspended in sterile distilled water before using it for multiplex PCR. Multiplex PCR was performed using the method described by Kim et al. [12]. 4) Wet mount, Gram staining, and routine culture of vaginal specimens Vaginal samples obtained using sterile cotton-tipped swabs were sent to the laboratory in transport medium. In the laboratory, the samples were aliquoted into 2 separate tubes: one for direct wetmount microscopic examination and the other for Gram staining and subsequent inoculation onto blood and MacConkey agar plates. Examination of wet preparations was performed to rule out the presence of T. vaginalis and to detect the presence of yeasts. Gram staining was used to detect the presence of yeasts, inflammatory cells, and possible pathogenic microorganisms and to diagnose bacterial vaginosis (BV) by observing the "clue cells". 
Statistical analysis Statistical analysis was performed using the chi-square or Fisher's exact test with PASW Statistics 18 (SPSS Inc., Chicago, IL, USA). P values < 0.05 were considered statistically significant. Relative risk ratio analysis was performed to evaluate if M. hominis, U. urealyticum, and GBS were independent risk factors for the development of S-PTB from PTL-WO. The prevalence of genital microorganisms in different types of pregnancy complications The colonization rates of GBS in the PTL-WO, S-PTB, M-PTB, and control groups were 3.8%, 8.7%, 9.7%, and 17.6%, respectively. Although the colonization rate of GBS in the S-PTB group was more than 2 times that in the PTL-WO group, there was no statistically significant difference between the 2 groups (relative risk ratio [RR], 2.26; 95% confidence interval [CI], 0.29-17.89; P > 0.05). The colonization rate of GBS in the control group was much higher than that in the patient groups, although there was no statistically significant difference among the groups. The detection rates of M. hominis in the PTL-WO, S-PTB, and M-PTB groups were 3.8%, 17.3%, and 9.7%, respectively. The prevalence of M. hominis in the S-PTB group was higher than that in the PTL-WO group; however, there was no statistically significant difference between the 2 groups (RR, 4 DISCUSSION The causal relationship between abnormalities in vaginal flora during pregnancy and PTB has gained a lot of attention [13][14][15][16]. Although this phenomenon is relatively common, the pathogenic mechanism that leads to S-PTB is still poorly understood. It is particularly difficult to define abnormal genital tract flora in pregnant women. The pathogenic role of specific microorganisms in the vagina as risk factors for S-PTB varies according to the investigators, because microorganisms commonly found in the lower genital tract are those that are most frequently isolated from patients with intrauterine infections. The colonization rate of microorganisms in the vagina could be affected by not only technical factors, such as detection method and sampling site, but also internal and external factors of each individual [17]. Possible racial/ethnic differences in the composition of the normal vaginal microflora also exist. Usui et al. [16] reported that absence of vaginal lactobacilli was a better predictor of S-PTB at < 33 weeks of gestation than the presence of M. hominis, but its sensitivity and positive predictive value were no more than 28% and 25%, respectively. Breugelmans et al. [15] reported that there was no significant correlation between the presence of abnormal vaginal flora and S-PTB, and the risk of S-PTB increased when Ureaplasma species was associated with an abnormal vaginal flora. Although BV has been considered a risk factor for S-PTB, investigators have been unable to prove a consistent association between BV and S-PTB [18]. In this study, clue cells were detected in only 1.4% of the S-PTB group, which suggests that BV is not associated with S-PTB. Possible explanations for our result are that BV is diagnosed mostly in the first trimester and the prevalence decreases in the second and third trimesters [18] and that the relationship of BV with S-PTB may vary according to the diagnostic criteria for the detection of BV. U. urealyticum and M. hominis are among the organisms most frequently isolated from both placental membranes and amniotic fluid in patients with PPROM and those with PTL-WO [19][20][21][22]. 
Both organisms can initiate the synthesis of prostaglandins, resulting in S-PTB [9]. Nevertheless, there is much debate regarding whether genital mycoplasmas are pathogenic or simply frequent colonizers of the genital tract [14,15,20,21]. The traditional view has been that genital mycoplasmas are part of the vaginal microflora in many women, and their presence in the lower genital tract, unlike their presence in the upper genital tract, has not been associated with an increased risk of S-PTB [22,23]. Studies on the exact colonization rates of genital mycoplasmas should be performed to elucidate whether genital mycoplasmas are pathogenic. In our study population, overall detection rates of M. hominis and U. urealyticum in the vagina were 12.7% and 62.7%, respectively. The detection rate of M. homi-nis by PCR was higher than that by the culture method, while the detection rate of U. urealyticum by the culture method was higher than that by PCR. Possible explanations include lower sensitivity of the specific primer used in this study for the detection of U. urealyticum and/or false positive detection of Ureaplasma parvum by the culture method. A new species U. parvum (formerly biovar 1 of U. urealyticum serovars 1, 3, 6, and 14) was identified as U. urealyticum by using the Mycoplasma IST 2 test [24]. Lee et al. [14] reported that the prevalence of U. urealyticum was 28.6% (279/977) and that of M. hominis was 3.7% (36/977) in a vaginal fluid culture for genital mycoplasmas in pregnant women and that it was not associated with an increased risk for S-PTB. In contrast, Kafetzis et al. [25] reported that vaginal colonization with Ureaplasma species is associated with S-PTB because the rate of vertical transmission of Ureaplasma in mothers who experienced PTB was higher than that in mothers who experienced full-term delivery (33% vs. 17%). However, vaginal Ureaplasma colonization occurred among 36.5% of the mothers who experienced S-PTB and among 38% of the mothers who experienced full-term delivery. In this study, the RR of M. hominis and U. urealyticum in S-PTB to that in PTL-WO was 4.52 and 1.13, respectively, but these results were not statistically significant. Ureaplasma spp. can persist for weeks in the upper genital tract before spontaneous PTL or membrane www.annlabmed.org rupture initiates S-PTB. Among the pregnant women who show Ureaplasma spp. and/or M. hominis colonization, only a small group will experience an ascending infection. The reason why this microorganism sometimes causes an ascending infection is not yet understood. There might be other associated factors that increase the capacity of the microorganism to invade the uterine cavity. These factors might include the host immune system, the virulence of the microorganisms, or local factors favoring the invasion of genital mycoplasmas. GBS is detected in the vagina and rectum of 10-30% of pregnant women as normal flora, but it has emerged as a major pathogen for a variety of bacterial infections among pregnant women, non-pregnant adults, and elderly patients [26]. Garland et al. [27] reported that GBS is considered a risk factor for S-PTB because of its association with asymptomatic bacteriuria. However, when it colonizes the lower genital tract alone, it is not thought to promote preterm delivery. In this study, the RR of GBS in S-PTB was 2.26 (P < 0.05), which suggests that GBS was not an independent risk factor for the development of S-PTB from PTL-WO. N. gonorrhoeae and C. 
trachomatis are rarely implicated in uterine or fetal infection before the rupture of the membranes; however, both may have an adverse effect on pregnancy, thus increasing the risk for PTL [9]. N. gonorrhoeae has been shown to be present in approximately 1% of pregnant women and if left untreated, leads to a 2-5% RR of S-PTB [28]. In our study, the prevalence of C. trachomatis and N. gonorrhoeae by PCR was 2.4% and 0%, respectively; HSV I, M. genitalium, T. vaginalis, and T. pallidum were not detected using PCR. Antibiotic administration to patients with PPROM is associated with prolongation of pregnancy and reduction in the rate of clinical chorioamnionitis and neonatal sepsis. The benefit has not been demonstrated in patients with PTL-WO and intact membranes. Major efforts are required to determine why some women develop an ascending intrauterine infection and others do not and which interventions reduce the deleterious effect of systemic fetal inflammation. According to a large randomized study, erythromycin treatment of pregnant women with U. urealyticum vaginal colonization does not confer any advantage in terms of reduction in the rate of S-PTB [29]. In this study, the resistance rate of erythromycin in U. urealyticum was 8.9%, while that of ciprofloxacin was 96.4%. Our results demonstrated that genital mycoplasmas and GBS in the lower genital tract were not risk factors for the development of S-PTB from PTL-WO, although detection rates of those organisms in S-PTB were higher than those in PTL-WO. Differ-ences in host response to the presence of genital mycoplasmas and GBS, rather than the mere presence of the bacteria themselves, may contribute to an increased risk of S-PTB. The limitations of this study were that only a small number of PTL patients were enrolled and that the control group that did not experience PTL or PTB was only tested for GBS colonization. Additional large-scale studies on the colonization rates of GBS, M. hominis, and U. urealyticum in normal preterm women without labor or delivery are required to elucidate the pathogenic roles of these organisms in PTL-WO and S-PTB.
2014-10-01T00:00:00.000Z
2012-04-18T00:00:00.000
{ "year": 2012, "sha1": "72ce5c4775b2096fbc9a236bae5406af6fe21c11", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc3339299?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "72ce5c4775b2096fbc9a236bae5406af6fe21c11", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
213002196
pes2o/s2orc
v3-fos-license
The Causes, Prevention, and Management of Gastric Leakage after Laparoscopic Sleeve Gastrectomy: A Review Article Obesity has been considered a chronic relapsing disease. The increasing number of obese individuals has resulted in an increase in the demand for bariatric surgeries annually. Post-laparoscopic sleeve gastrectomy complications are challenging for both patients and surgeons. Thus, this paper reviews the most common and significant risk factors for leakage occurrence after laparoscopic sleeve gastrectomy and presents new tools, techniques, management options, and recommendations, gathered from newly published articles, for post-laparoscopic sleeve gastrectomy leakage. Causes of post-laparoscopic sleeve gastrectomy leakage include technical factors such as bougie size, transection point, reinforcement materials, and patient co-morbidities as well as ischemic reasons. Ischemic leakage, which is most commonly seen in laparoscopic sleeve gastrectomy, occurs usually after the fourth day, although some leakage may appear earlier within 1-3 days due to technical issues. Use of varied bougie sizes results in similar excess weight loss % at the one-year follow-up. Buttressing materials also reduce post-laparoscopic sleeve gastrectomy bleeding, but not the leakage rate. Endoscopic stents play a significant role in gastric leakage treatment in post-laparoscopic sleeve gastrectomy patients. Intra-operative or even early postoperative diagnostic tools can help in detecting early leaks, but minor leaks as well as those due to ischemic causes may be missed. In conclusion, laparoscopic sleeve gastrectomy is still one of the most effective bariatric surgeries, exhibiting approximately 70% excess weight loss. Although complications of post-laparoscopic sleeve gastrectomy are severe and may be life-threatening, most patients can be treated conservatively. However, those with persistent fistula require surgical intervention. INTRODUCTION Obesity has been considered as a chronic relapsing disease [1]. The increasing number of obese individuals has resulted in an increase in the demand for bariatric surgeries annually. Laparoscopic sleeve gastrectomy (LSG) is performed to re-design and re-size the stomach volume to induce satiety and reduce appetite by fundal ghrelin-producing cell resection [2,3]. The considerable mid-term effectiveness of LSG on excess weight loss (EWL) and obesityrelated co-morbidities can explain its worldwide success. [4]. However, gastric leakage after LSG has been reported, with a mean incidence of 2.1% (1.1%-5.3%) [5]. Thus, this paper reviewed the most common and significant risk factors for leakage occurrence after LSG and presented the new tools, techniques, management options, and recommendations, gathered from newly published articles, for post-LSG leakage. The United Kingdom (UK) Surgical Infection Study Group has proposed a definition of anastomotic leakage as "the leak of luminal contents from a surgical join among two hollow viscera" [6]. THE INCIDENCE OF GASTRIC LEAKAGE AFTER LSG In recent years, LSG has become one of the most common and popular surgeries for treating obesity in the United States and Europe. As the patient risk increases, the risk of complication may also increase. Minor surgical complications have a general incidence of 11%, whereas major surgical complications, such as leakage, bleeding, and gastric stenosis, constitute approximately 5% of cases. 
The incidence of gastric leakage after LSG is still low, ranging from 1.1% to 5.3%, with a mean incidence of 2.1% [5]. Although this complication is uncommon, it is still considered as the second most common cause of death in bariatric surgery cases, with an overall mortality rate of 0.4% [7]. CLASSIFICATION OF LEAKAGE Sarkhosh et al. [8] characterized early, transitional, and late leaks as those occurring at 1-4, 5-9, and at least 10 days post-surgery, respectively. Based on clinical pertinence and data from many publications, they characterized type showing both radiological and clinical evidence [8]. CAUSES OF LEAKAGE Gastric leaks can be due to either mechanical or ischemic reasons. Mechanical aspects Silecchia and Iossa [9] reported that staplers with inappropriate firing and direct traumatic tissue injury are considered as "mechanical and tissular" causes, and the leakage normally occurs at 2 days post-surgery (early leak). Tissue creeping, stress relaxation, and shearing are modulated by the time factor, as reported by many experts. Moreover, they found that the optimal technical stapling time is 15 seconds before firing could be performed, thereby, allowing tissue compression and creeping and minimizing the excessive tensile stress [9]. Ischemic aspects Silecchia and Iossa [9] defined ischemic leaks as those occurring at 5 or 6 days postoperatively, and it was discovered that most leaks after LSG occur at the Angle of avoided, to reduce the incidence of leakage occurrence [11]. DIAGNOSIS 1. Signs and symptoms The clinical presentation of gastric leakage could vary in each patient. It may be totally asymptomatic (only detected through a radiological examination) or symptomatic accompanied with peritonitis, septic shock, multiorgan failure, and death in some cases. Burgos et al. reported 7 cases of leakage in 214 patients (3.3%), with 5 patients presenting with abdominal pain, fever, tachycardia, tachypnea, and increased laboratory markers of inflammatory processes. Sustained tachycardia is the initial clinical sign observed in the majority of cases with early leakage [12]. Casella et al. [13] reported a 3% incidence of leakage in 200 patients. In general, the symptoms of gastric leakage include abdominal pain, vomiting, and fever. Only one patient in their study was asymptomatic. Methylene blue test The methylene blue test is an intraoperative method used to identify leaks that require immediate repair. Most authors have reported the use of this test, although the drain is sometimes removed before the fistula has been diagnosed. This diagnostic method can be useful only when the result is positive, since a negative finding does not exclude the possibility of a fistula [13,14]. Gastrointestinal transit test This fluoroscopic test is a radiological method that detects the failures in LSG, including suture line leaks (Fig. 1), abnormal and delayed gastric emptying, presence of stenosis or total obstruction, or large gastric residue. an intraoperative methylene blue test [17][18][19]. A suitable cartridge height is required, as advised by expert bariatric surgeons, to minimize the risk of leakage. The procedure initially begins with 2 stapler firing with a black cartridge, then the stapler height may be shortened according to the stomach wall thickness with a green or a gold cartridge until the angle of His is reached (Fig. 3-5) [20]. 
In the literature review about the role of omental wrapping and omentopexy in preventing leakage in LSG, the omental wrap showed some roles in reducing leakage in gastrojejunal anastomosis Roux-en-Y [21], although there is no supportive data in laparoscopic sleeve gastrectomy leak prevention, apart from the fixation of the tubal stomach to stabilize the stomach axial twist and volvulus narrowing [22]. However, some studies showed that the omentopexy has no significant advantages regarding food intolerance and gastrointestinal symptoms compared to conventional LSG, and may even require administration of more ondansetron due to nausea and vomiting [23]. MANAGEMENT OF POST-LSG LEAKAGE The management of post-LSG leakage is very challenging for medical professionals, and the success depends on multiple factors. It may be possible to stabilize the patients' condition and control the fistula, but managing the leakage may be difficult, especially leaks located at the GEJ. Moreover, the management of leakage depends on the patients' condition. Hemodynamically unstable patients who cannot be managed with conservative treatment will require surgical intervention. In case of early leaks (<3 days) with secured and localized leakage location and with the presence of inflammatory process, primary repair is still an option, although there is a possibility of recurrence ( Fig. 6) [12]. In a patient with signs of late leakage, presence of severe inflammatory process, and abscess collection in the area, washout and drain placement is the best management option (Fig. 6). Finally, in a stable patient with long-standing fistula post-LSG, conservative treatment is still a good option. However, drain placement is performed for the presence of abscess collection, and endoscopic stents are used in some cases with administration of total parenteral nutrition, high doses of proton pump inhibitors, and antibiotics as early as possible. In recent years, most studies recommend the use of flexible-coated stents as a second management procedure [24]. However, weekly fluoroscopic tests are required for this type of management to confirm that no stent migration has occurred and to check the stent's position.
2020-01-02T21:17:39.470Z
2019-12-30T00:00:00.000
{ "year": 2019, "sha1": "c4f06ef0ec37dcc20155553fc485da957349f68c", "oa_license": null, "oa_url": "https://doi.org/10.17476/jmbs.2019.8.2.28", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "1d3947160b4014e348d160a08e592ba1dc8ca511", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253117009
pes2o/s2orc
v3-fos-license
Can Transformer Attention Spread Give Insights Into Uncertainty of Detected and Tracked Objects? Transformers have recently been utilized to perform object detection and tracking in the context of autonomous driving. One unique characteristic of these models is that attention weights are computed in each forward pass, giving insights into the model's interior, in particular, which part of the input data it deemed interesting for the given task. Such an attention matrix with the input grid is available for each detected (or tracked) object in every transformer decoder layer. In this work, we investigate the distribution of these attention weights: How do they change through the decoder layers and through the lifetime of a track? Can they be used to infer additional information about an object, such as a detection uncertainty? Especially in unstructured environments, or environments that were not common during training, a reliable measure of detection uncertainty is crucial to decide whether the system can still be trusted or not. I. INTRODUCTION Object detection and tracking are essential tasks in a perception pipeline for autonomous and automated driving. Only with knowledge about surrounding objects, downstream tasks, such as prediction and planning, are possible. In such a system, where the cascading effects of perception errors can be detrimental, it is very important to be able to quantify the reliability of the detection and tracking output. In object detection, uncertainty can stem from two sources [1]: Epistemic uncertainty is caused by uncertainty of the model, e.g. when an observation is made that was not present in the training dataset. Unstructured and dynamic environments can also cause such an uncertainty, as their versatility can hardly be captured in a training dataset. Second, aleatoric uncertainty stems from sensor noise, and also encompasses uncertainty caused by low visibility and increased distance from the sensor [1]. While state-of-the-art object detection methods have been based on deep learning for many years, both with image input [2] as well as on point clouds [3], [4], it is a recent phenomenon that deep learning based models are also used for joint tracking and detection [5], [6], [7]. Such trackers aim to utilize the detector's latent space to infer additional information about a tracked object, rather than relying on lowdimensional bounding boxes as input. However, they have the drawback that they are unable to output an uncertainty, as a conventional method would, e.g. tracking based on a Kalman filter [8]. While deep learning based detectors and trackers usually output a confidence score or class probability 1 score per estimated object, these generally can not be used as a reliable uncertainty measure, but additional measures are necessary to capture uncertainty [9]. One approach towards joint object detection and tracking is the usage of transformer models [10], which were able to achieve state-of-the-art results in some domains [11], [6]. Transformers are based on attention, i.e. the interaction between input tokens, which is why these models allow for a unique insight into their reasoning: One can visualize the attention matrices that are computed in each model forward pass and investigate which part of the input data was used to generate a certain output. In previous work, we developed a transformer based model for detection and tracking [7] in the context of autonomous driving that operates on (lidar) point clouds. 
An example of visualized attention weights from the tracking model is pictured in Figure 1. In empirical observations, a more focused attention tends to lead to a more accurate detection. Therefore, in this paper we investigate whether the attention weight distribution can give insights into a detection uncertainty. An uncertainty indicator would be very valuable towards the ability to use transformer based methods in safety-critical use cases, such as autonomous driving. The main contributions of this work are: • We propose a new metric, namely attention spread, to quantify the distribution of attention weights. • We analyze whether attention spread is an indicator for uncertainty by comparing it to the observed detection accuracy in terms of Intersection-over-Union (IoU) with ground truth bounding boxes. • We analyze the spatial and temporal dependencies of the observed attention weight matrices, both in the context of object detection as well as tracking. II. BACKGROUND A. Attention in Transformers The transformer model was first introduced in [10] in the context of natural language processing, but has since been applied in many fields, such as computer vision [12], [13]. The original model consists of an encoder and a decoder. In the encoder, a sequence of tokens is input, which can be encoded words, pixels, or grid cells, depending on the usage. These input tokens x_i (feature vectors), i = 1, …, N, can interact with one another through self-attention. For this, each of them is transformed into three unique vectors via learnt mappings: a query q_i, a key k_i and a value v_i. Attention weights are then computed by comparing each of the queries with each of the keys in terms of their dot product, obtaining scalar weights w_ij = q_i · k_j, which result in an attention weight matrix of size N × N. The output of the self-attention layer is obtained for each token i as a weighted sum of the values, Σ_{j=1..N} w_ij v_j, i = 1, …, N; computations are commonly performed in matrix form to increase efficiency, and a softmax is applied to the weights so that they are non-negative and sum to 1 for each query. In the transformer decoder, on the other hand, it is common to input two sequences: the encoder's output of length N, as well as a sequence of query vectors of length M. During self-attention in the decoder, the queries attend to one another, while cross-attention layers allow an interaction between the two sequences: in the attention module, the keys and values are now computed from one sequence, while the queries stem from the other. The attention weight matrices of size N × M during cross-attention in the transformer decoder are the focus of this paper. B. Transformers for Object Detection and Tracking The detection transformer (DETR) [12] was one of the first models to utilize a transformer for object detection. It has been adapted and extended in multiple ways, for example for joint tracking and detection [6]. Our model from previous work [7], which is the subject of the analysis in this paper, is based on the aforementioned approaches, but with a focus on applicability to large point clouds, such as those commonly measured with automotive lidar sensors. The model is introduced in the following. An overview of the joint detection and tracking model is pictured in Figure 2, with the detector on the left (a). This detector can either be used as a standalone model, i.e. without tracking, or it serves as a track initializer on the first frame of a sequence.
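Before the model is described in detail, the cross-attention mechanics from Section II-A can be pinned down in a few lines: M object or track queries attend to N encoded grid cells, producing exactly the kind of query-to-grid weight matrix analyzed later in this paper. The following is a minimal NumPy sketch, not the authors' implementation; the 1/√d scaling (common in practice but not spelled out above), the random projection matrices, and all names and shapes are illustrative assumptions.

```python
import numpy as np

def cross_attention_weights(queries, grid_feats, W_q, W_k, W_v):
    """Single-head cross-attention between M queries and N grid-cell features.

    queries:    (M, d) object/track query vectors
    grid_feats: (N, d) encoded input tokens from the backbone
    W_q, W_k, W_v: (d, d) learnt projection matrices (random placeholders here)
    Returns the (M, N) attention weight matrix and the (M, d) attended output.
    """
    d = queries.shape[1]
    Q = queries @ W_q                      # (M, d)
    K = grid_feats @ W_k                   # (N, d)
    V = grid_feats @ W_v                   # (N, d)

    scores = Q @ K.T / np.sqrt(d)          # (M, N) scaled dot products
    # Softmax over the N input tokens: each query's weights are
    # non-negative and sum to 1.
    scores -= scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)

    return weights, weights @ V

# Toy usage: N = 16 grid cells, M = 3 queries, feature dimension d = 8.
rng = np.random.default_rng(0)
N, M, d = 16, 3, 8
feats = rng.normal(size=(N, d))
qs = rng.normal(size=(M, d))
Wq, Wk, Wv = [rng.normal(size=(d, d)) for _ in range(3)]
A, out = cross_attention_weights(qs, feats, Wq, Wk, Wv)
assert A.shape == (M, N) and np.allclose(A.sum(axis=1), 1.0)
```

In the model discussed below, a weight matrix of this form is produced in every decoder layer and can be reshaped onto the birds-eye-view grid for inspection.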
The input point cloud is processed through a backbone, PointPillars [3]. This could be replaced by any pretrained backbone that encodes the input into a grid (or sequence) of feature vectors. A positional encoding is added before the encoded input is passed to the transformer decoder as an unordered sequence. Note that the transformer encoder is left out in this model, allowing for a comparably smaller GPU memory requirement. Therefore, only the backbone is available to provide context-encoding functionality. The second sequence that is passed to the decoder is denoted anchor-based object queries. M queries are generated, which is more than the number of objects that are expected to appear in one frame. Each of them serves as a slot for a possible object and is able to access the input data through cross-attention, as well as interact with the other object queries during self-attention. Following [14], [15], the query encoding is computed from prior locations, which are sampled from the input point cloud using farthest point sampling. This is meant to achieve an even spread of queries over the birds-eye-view grid, while not placing queries in areas where no lidar data is available (e.g. behind a large building). The object queries are transformed through L decoder layers. To the resulting feature vectors, a regression and classification head is applied to obtain bounding box parameters of detected objects, whereas some are assigned to the 'no-object' class, since there are generally more queries than objects in the scene. The focus of this paper is the cross-attention matrices in the decoder, of size N × M. Since we operate on a square grid that is output by the backbone, we reshape the matrices to √N × √N × M, so that the first two dimensions correspond to indices on the birds-eye-view grid. For each object that is output by the model, one can thus obtain L attention matrices of shape √N × √N (one per decoder layer), giving insight into which input data the respective query accessed to detect this object. In Figure 2 (b), the object tracker [7] is pictured. In addition to the object queries, track queries are passed to the decoder, which serve as slots for continued tracks. These stem from the model's output at the previous timestep and contain information about an object in feature space. They are transformed through an ego-motion compensation module (EMC) to correct the ego-motion between frames. For these track queries, the aforementioned attention weight matrices can be obtained as well. (Figure 2. Overview of the joint detection and tracking model [7]; the cross-attention matrices within the transformer decoder analyzed in this paper are highlighted in purple. a: the object detection model, used either as a standalone detector or as track initializer in the first frame of a sequence; the input point cloud is processed through a backbone, a positional encoding is added, and M locations sampled from the point cloud generate the object queries, each of which aggregates information about one object through cross-attention with the input feature map. b: the tracking model, in which feature vectors of tracked objects from the previous frame are passed to the decoder as track queries in addition to the object queries, after ego-motion compensation (EMC) in feature space.) III. PROPOSED METHOD A. Modelling the attention distribution The cross-attention weights between one object or track query and the input birds-eye-view grid in a given decoder layer are reshaped to a matrix of shape √N × √N, as introduced above. It contains attention weights w_pq, 1 ≤ p, q ≤ √N, where p and q correspond to indices on the birds-eye-view grid. The largest K weights are selected, i.e. (p, q) ∈ S_K. By definition of the birds-eye-view grid, each horizontal index q is associated with a location in x-direction, x_q, and each vertical index p with one in y-direction, y_p. We define the mean of the top-K attention as μ = (1/W) Σ_{(p,q)∈S_K} w_pq (x_q, y_p)^T, where W = Σ_{(p,q)∈S_K} w_pq (Eq. 1), and we quantify the attention covariance as Σ_A = (1/W) Σ_{(p,q)∈S_K} w_pq ((x_q, y_p)^T − μ)((x_q, y_p)^T − μ)^T (Eq. 2). This covariance matrix can be reduced to a single scalar value, namely the proposed attention spread (AS).
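The attention-spread computation just defined can be written out concretely. The sketch below follows the weighted mean and covariance of Eq. (1) and Eq. (2); because the exact scalar reduction of the covariance matrix is not reproduced above, the square root of its determinant (proportional to the area of the covariance ellipse) is used purely as an illustrative assumption, and any other scalar summary such as the trace could be substituted. All array shapes and names are likewise assumptions for the example.

```python
import numpy as np

def attention_spread(attn_map, xs, ys, top_k=100):
    """Attention spread for a single query's cross-attention map in one decoder layer.

    attn_map: (G, G) attention weights of one query over the BEV grid, G = sqrt(N)
    xs:       (G,) x-coordinate associated with each grid column index q
    ys:       (G,) y-coordinate associated with each grid row index p
    top_k:    number of largest weights kept (the set S_K)
    """
    G = attn_map.shape[0]
    flat = attn_map.ravel()
    top_idx = np.argpartition(flat, -top_k)[-top_k:]     # indices of S_K (unordered)
    p_idx, q_idx = np.unravel_index(top_idx, (G, G))      # rows -> y, columns -> x

    w = flat[top_idx]
    W = w.sum()
    pts = np.stack([xs[q_idx], ys[p_idx]], axis=1)        # (K, 2) grid locations

    mu = (w[:, None] * pts).sum(axis=0) / W               # weighted mean (Eq. 1)
    centered = pts - mu
    cov = (w[:, None, None]
           * centered[:, :, None] * centered[:, None, :]).sum(axis=0) / W  # Eq. 2

    # Scalar reduction -- assumption: sqrt of the determinant (ellipse area up to a factor).
    spread = float(np.sqrt(np.linalg.det(cov)))
    return mu, cov, spread

# Toy usage on a 128 x 128 grid with a roughly Gaussian attention blob.
G = 128
xs = ys = np.arange(G, dtype=float)
xx, yy = np.meshgrid(xs, ys)                              # xx varies along columns
attn = np.exp(-((xx - 40) ** 2 + (yy - 90) ** 2) / (2 * 5.0 ** 2))
attn /= attn.sum()
mu, cov, spread = attention_spread(attn, xs, ys, top_k=100)
print(mu, spread)   # mean near (40, 90); small spread for a focused blob
```

A small spread corresponds to attention concentrated around one location on the grid; the experiments below relate this quantity to detection accuracy and object distance.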
It is difficult to quantify 'ground truth' uncertainty of a deep learning model. In this work, we quantify detection accuracy in terms of intersection-over-union (IoU) between estimated boxes and ground truth. IoU has been shown to correlate with epistemic uncertainty [1]. Aleatoric uncertainty, on the other hand, arises with low visibility and object distance [16], among other causes. B. Attention shift and focus In the detector, an object query passes through L decoder layers. Before the first layer, it is assigned an anchor location, from which its encoding is computed and which serves as a prior location for object detection. One would expect that the attention between the query and the input data in the first layer is quite broad, since the query might still move away from the prior location to find a nearby object. As the object query continues to focus on one object through the following layers, the resulting attention is expected to become more focused around that object. We propose to test these assumptions by collecting the aforementioned attention distribution parameters for each decoder layer. Another related question of interest is how the attention focus changes during the lifetime of a track. IV. RESULTS For this analysis, we set M = 100, √N = 128, L = 6, K = 100, and use the pretrained detection and tracking model as introduced in previous work [7]. The experiments are performed on the nuScenes val dataset and only the class 'car' is considered. The model is either used in detection mode, i.e. without passing temporal information to the following frame, or in tracking mode, by passing track queries to the model in addition to the object queries. Only bounding boxes that the model itself classified as relatively confident, with a detection score above 0.8, are considered, since it is common in this model that many queries are idle. A. Detection and tracking accuracy In Figure 3, attention spread is plotted against intersection-over-union (IoU). For this, the IoU between the estimated bounding boxes and their closest ground truth object was computed. A large IoU corresponds to a more accurately detected object. IoU values of 0 (no overlap) were removed. The IoU values were grouped into 10 bins of width 0.1, and the corresponding attention spread was obtained from the last decoder layer. The median attention spread per bin was plotted, as well as the 25th and 75th percentiles as error bars. The plots are very similar for object detection (top) and joint detection and tracking (bottom). The observed median attention spread decreases with growing IoU. This means that low attention spread can be an indicator for high IoU and vice versa.
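The per-bin summary behind Figure 3 can be reproduced with a short script. This is a sketch of the described procedure only, run on synthetic numbers: the record format, the variable names, and the toy relationship between IoU and spread are assumptions, not the authors' evaluation code.

```python
import numpy as np

def spread_vs_iou(ious, spreads, n_bins=10):
    """Median attention spread (with 25th/75th percentiles) per IoU bin of width 1/n_bins.

    ious:    IoU of each confident detection with its closest ground-truth box
    spreads: attention spread of the same detections (last decoder layer)
    Returns a list of (bin centre, median, 25th percentile, 75th percentile).
    """
    ious = np.asarray(ious, dtype=float)
    spreads = np.asarray(spreads, dtype=float)
    keep = ious > 0.0                       # IoU of 0 (no overlap) is discarded
    ious, spreads = ious[keep], spreads[keep]

    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # digitize with right=True maps values in (edges[i-1], edges[i]] to bin i
    bin_idx = np.clip(np.digitize(ious, edges, right=True), 1, n_bins) - 1

    stats = []
    for b in range(n_bins):
        s = spreads[bin_idx == b]
        centre = (edges[b] + edges[b + 1]) / 2
        if s.size == 0:
            stats.append((centre, np.nan, np.nan, np.nan))
        else:
            stats.append((centre, np.median(s),
                          np.percentile(s, 25), np.percentile(s, 75)))
    return stats

# Toy usage with synthetic data in which spread tends to shrink as IoU grows.
rng = np.random.default_rng(1)
iou = rng.uniform(0.05, 1.0, size=500)
spread = 30.0 * (1.0 - iou) + rng.gamma(2.0, 2.0, size=500)
for centre, med, p25, p75 in spread_vs_iou(iou, spread):
    print(f"IoU ~ {centre:.2f}: median spread {med:.1f} (25th-75th: {p25:.1f}-{p75:.1f})")
```

An analogous grouping, with distance bins or the decoder-layer index in place of IoU bins, yields the summaries used in the following subsections.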
Therefore, attention spread may also indicate epistemic uncertainty [1]; however, the ranges between the 25th and 75th percentiles are quite large. B. Object distance to the ego vehicle To see whether attention spread changes with the distance of estimated bounding boxes to the ego vehicle, the birds-eye view is divided into 20×20 bins. For each bin, the estimated bounding boxes located there, as well as their respective attention spread in the last layer of the detector, are aggregated. Their mean per bin over all frames in the nuScenes val dataset is pictured in Figure 4. It is observable that mean attention spread increases with distance to the ego vehicle. In this regard, it behaves similarly to aleatoric uncertainty [16]. C. Attention through multiple decoder layers To investigate how attention spread changes through the decoder layers, its values observed during object detection are collected per layer for each observed object. Median attention spread per layer is pictured in Figure 5, as well as its 25th and 75th percentiles. The mean attention spread in the last layer is smaller than in the first, which is in line with the expectations. However, the value is quite small in the second layer and rises again in the following layers, which poses the question whether there exists a specific reason for this early focusing and later broadening of the cross-attention. D. Attention during a track's lifetime Observing many tracked objects, we find that the attention spread of a tracked object changes over time. One exemplary visualization of the initialization phase of a track is pictured in Figure 6. When the lime green track is initialized at a distance of ca. 50 meters, it barely overlaps with the field of view. Its attention spread is quite large (displayed in terms of an ellipse that is derived from the covariance matrix). As it moves closer over time, its attention spread decreases. In Figure 7 (left), the median attention spread as a function of elapsed time since track initialization is pictured. For this, the first 3.5 seconds of all tracks, as output by the model on the nuScenes val dataset, were considered if the total length of the respective track was at least 7 seconds. In Figure 7 (right), the median attention spread as a function of the remaining time until track finalization is pictured. In both figures, error bars denote the 25th and 75th percentiles. We observe that both in the track initialization phase as well as in the finalization phase a change in median attention spread is visible. Multiple factors may influence this behavior: Firstly, tracks are often initialized and finalized at a large distance to the ego vehicle, when an object first enters and last exits the field of view, respectively. In that regard, the observations are similar to those in Figure 4. Secondly, the model may increase in confidence after a certain initialization time, since it can utilize past observations, encoded in the track queries, for its reasoning. V. CONCLUSION In this paper, we investigated the distribution of the attention weight matrices in the transformer decoder in the context of object detection and tracking and proposed a new metric to quantify it. We found that median attention spread decreases with larger IoU, while it increases with larger distance to the ego vehicle. This indicates that attention spread may be able to offer insights into both aleatoric and epistemic uncertainty, whereas it is not possible to differentiate between the two from attention spread alone.
Besides this, attention spread observed during the lifetime of tracks changes over time, especially in the initialization and finalization phase. Our findings open up questions for further research: Is attention spread the best way to describe the attention weight distribution? Can a measure with less variance be found? Can the model design be improved or the model behavior be better understood based on the knowledge about attention spread per layer? We conclude that the attention matrices available in transformer models have the potential to give insights into detection and tracking uncertainty and that further research is promising.
2022-10-27T01:16:27.038Z
2022-10-26T00:00:00.000
{ "year": 2022, "sha1": "67d3d317d9a853feb463bb52b9849a1bc22db685", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "67d3d317d9a853feb463bb52b9849a1bc22db685", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
52845128
pes2o/s2orc
v3-fos-license
Destruxin B Suppresses Drug-Resistant Colon Tumorigenesis and Stemness Is Associated with the Upregulation of miR-214 and Downregulation of mTOR/β-Catenin Pathway Background: Drug resistance represents a major challenge for treating patients with colon cancer. Accumulating evidence suggests that Insulin-like growth factor (IGF)-associated signaling promotes colon tumorigenesis and cancer stemness. Therefore, the identification of agents, which can disrupt cancer stemness signaling, may provide improved therapeutic efficacy. Methods: Mimicking the tumor microenvironment, we treated colon cancer cells with exogenous IGF1. The increased stemness of IGF1-cultured cells was determined by ALDH1 activity, side-population, tumor sphere formation assays. Destruxin B (DB) was evaluated for its anti-tumorigenic and stemness properties using cellular viability, colony-formation tests. The mimic and inhibitor of miR-214 were used to treat colon cancer cells to show its functional association to DB treatment. In vivo mouse models were used to evaluate DB’s ability to suppress colon tumor-initiating ability and growth inhibitory function. Results: IGF1-cultured colon cancer cells showed a significant increase in 5-FU resistance and enhanced stemness properties, including an increased percentage of ALDH1+, side-population cells, tumor sphere generation in vitro, and increased tumor initiation in vivo. In support, using public databases showed that increased IGF1 expression was significantly associated with a poorer prognosis in patients with colon cancer. DB, a hexadepsipeptide mycotoxin, was able to suppress colon tumorigenic phenotypes, including colony and sphere formation. The sequential treatment of DB, followed by 5-FU, synergistically inhibited the viability of colon cancer cells. In vivo studies showed that DB suppressed the tumorigenesis by 5-FU resistant colon cells, and in a greater degree when combined with 5-FU. Mechanistically, DB treatment was associated with decreased the mammalian target of rapamycin (mTOR) and β-catenin expression and an increased miR-214 level. Conclusion: We provided evidence of DB as a potential therapeutic agent for overcoming 5-FU resistance induced by IGF1, and suppressing cancer stem-like properties in association with miR-214 regulation. Further investigation is warranted for its translation to clinical application. Cell Culture The human colon cancer cell lines HCT116 and DLD-1 used in this study were purchased from the American Type Culture Collection (ATCC) (Manassas, VA, USA) and maintained and passaged according to the protocols provided. For CSC-enrichment experiments, IGF1 (200 ηg/mL, Millipore, Taipei, Taiwan) was added to the culture medium for 48 h (after the first 24 h, the culture medium was discarded and fresh medium containing IGF1 was replenished and cells cultured for an additional 24 h). IGF1-treated cells were then harvested for further analyses. Stemness Assays To determine the enrichment of CSCs, the ALDEFLUOR™ assay (Stem Cell Technologies, Vancouver, BC, Canada) which measures the cellular aldehyde dehydrogenase (ALDH) activity was performed according to the manufacturer's protocol. Accuri™ (BD BioSciences, Taibei, Taiwan) was used to determine and analyze ALDH1 activity of colon cancer cells used in this study. We also used a side-population assay as another methodology for examining the cancer stemness. 
In brief, the percentage of side-population cells (SP) were identified in both IGF1-treated HCT116 and DLD-1 cells, as well as the relative changes in SP percentage after DB treatments (1.0 and 2.0 µM) using a flow cytomerter, the FACSAria™ III sorter (BD Biosciences, Taipei, Taiwan). Verapamil (100 µM final concentration) was used to specifically inhibit ABC pumps 15 min prior to the addition of Hoechst. Verapamil was used as a control to confirm SP identification. In Vitro Cell Viability and Drug Test Sulforhodamine B (SRB, Sigma-Aldrich, Taibei, Taiwan) assay was used for determining the cell viability of colon cancer cells under different conditions. Colon cancer cells were harvested and seeded into 96-well plates (5000 cells/well) for the assay. The cells were treated with different regimens: in the presence of 5-FU (ranging from 0 to 400 µM) and/or DB (ranging from 0 to 2 µM) for 48 h. In the case of the drug combination test, DB was added first for 24 h followed by the addition of 5-FU for an additional 24 h. Drug-treated cells were collected and fixed with 10% TCA. The fixed cells were then stained with 0.4% (w/v) SRB, which was then dissolved in 1% (v/v) acetic acid and solubilized in 20 mM Tris. The optical density (OD) of the samples was measured by a microplate reader (Molecular Devices, Sunnyvale, CA, USA) at 562 nm. Western Blot Analysis HCT116 and DLD-1 human colon cancer cells (parental and/or spheres) post different treatments were analyzed by SDS-PAGE and western blots using standard protocols. Protein samples (20-40 µg each well) were dissolved in sample buffer, denatured, and separated using 10% SDS-PAGE gels. The proteins were transferred onto nitrocellulose membranes and blocked in TBST (50 mM Tris-HCl pH 7.5, 150 mM NaCl, 0.2% Tween-20, 5% skim milk). Membranes were then incubated with respective primary and secondary antibodies. The protein-antibody interactions were determined by an enhanced chemiluminescence kit (ECL-Plus, Amersham Pharmacia Biotech, Piscataway, NJ, USA) and captured using the BioSpectrum ® Imaging System (Upland, CA, USA). Colony-Forming and Tumor Sphere-Forming Assays The colony-forming assay was carried out in the following conditions. Briefly, colon cancer cell lines, HCT116 and DLD-1 cells were seeded in six-well plates with (500 cells, 2.8 µM ovatodiolide, equivalent of IC10 values) and without ovatodiolide. The plates were then stained using 0.005% crystal violet, and the colonies were counted. The cells were allowed to grow for another week. The cells were then harvested, fixed, and counted. The migratory ability of the cells was examined using the Transwell migration assay (ThermoFisher, Taipei, Taiwan). To evaluate the self-renewal ability of cancer cells, we used a sphere-forming assay. Colon cancer cells were cultured under serum-deprived conditions and using Ultra-Low Attachment Plates (Corning Inc., Taipei, Taiwan). The culture conditions were modified slightly from Lo et al. [18]. Colon cancer cells (density: 10 4 cells/mL) were cultured in a medium composed of 20 ng/mL epidermal growth factor (EGF), 10 ng/mL basic fibroblast growth factor (BFGF), 5 µg/mL insulin, and 0.4% Bovine Serum Albumin (BSA). After approximately 5-7 days of culture (depending on the cell type), tumor spheres were formed, and the numbers were counted under a phase-contrast microscope (40× magnification). The self-renewal ability was represented by the average number of spheres generated. 
The average sphere number formed was obtained from three different views. Animal Study The in vivo experiments were performed by following the regulations of the Animal Care and User Committee at Taipei Medical University (Protocol #LAC-2017-0161). 8-week old NOD/SCID mice were purchased from BioLASCO (Taipei, Taiwan). Mice were housed in a specific pathogen-free (SPF) environment, and a week of acclimation was allowed prior to experiments. Two models were used in this study. The first model was to test the tumor-initiating ability of IGF-treated colon cancer cells in vivo. DLD-1 cells (50,000 cells per injection) cultured with and without exogenous IGF1 (100 ng/mL, 48 h) were subcutaneously injected. Tumor-initiating ability was measured and determined by the relative intensity of the bioluminescence (IVIS 200, Caliper, Taipei, Taiwan). In the second model, different drug regimens were tested. NOD/SCID mice were subcutaneously injected with 1 × 10 6 DLD-1 colon cancer cells and randomly divided into 4 groups consisting of the vehicle control, DB (5 mg/kg, i.p injection, 5 times a week); 5-FU (25 mg/kg, i.p injection, 2 times a week), and the combination regimen: a decreased 5-FU concentration (10 mg/kg, i.p injection, 2 times a week) while maintaining DB dosage. Once the tumors became palpable, the starting tumor volume was recorded, and the treatment commenced. The tumor volume was recorded once a week with a standard formula: tumor size = (L × W 2 )/2, where L is the length and W is the width of the tumor. The body weight of the mice was monitored weekly. After the experiment, the animals were humanely sacrificed using cervical dislocation, and tumor samples were harvested for further analysis. All animal experiments were performed in accordance with the institutional guidelines for the care and use of laboratory animals approved by the Animal Care and User Committee at Taipei Medical University (Protocol #LAC-2017-0161) and the National Institute of Health guide for the care and use of laboratory animals. Statistical Analysis All experiments were performed at least in triplicates. Two-tailed t tests were used for analyses by GraphPad Prism software where a p-value < 0.05 was considered statistically significant. Exogenous IGF1 Enriched Cancer Stem-Like Colon Cancer Cells and Induced 5-FU Resistance The insulin/IGF/mTOR system has been shown to play a key role in CRC development due to its complex involvement in the cancer's cellular metabolism, proliferation, and differentiation [19,20]. Here, we showed that the exogenous IGF increased the cancer stem cell properties. Increased ALDH1 activity has been used to identify normal stem cells and/or cancer stem cells [21,22]. Here, we used flow cytometry to demonstrate increased cellular ALDH1 activity in IGF1-treated HCT116 and DLD-1 cells as compared to their naïve counterparts ( Figure 1A). Notably, both HCT116 and DLD-1 cells pre-treated with IGF1 showed significantly increased ALDH1 activity. For instance, IGF1-treated HCT116 showed an approximately 6% increase in ALDH1 activity ( Figure 1A). These IGF1-induced CRC cells were subsequently isolated and cultured under serum-deprived conditions. We found that these IGF1-treated CRC cells exhibited an enhanced ability to generate CSC-like spheres, as compared to their IGF1-naïve counterparts ( Figure 1B). IGF1-treated CRC cells also demonstrated an increased ability to resist 5-FU treatment, as reflected by the higher IC 50 values, than their naïve counterparts ( Figure 1C). 
For instance, the IC50 values of 5-FU in naïve HCT116 increased from 11.2 to 41.9 µM, while they increased from 15.9 to 60.0 µM in DLD-1 cells (Figure 1C). More importantly, we demonstrated that IGF1 treatment promoted tumor initiation in vivo. DLD-1 tumor spheres cultured with and without IGF1 were injected into NOD/SCID mice for evaluation. We found that IGF1-cultured tumor spheres initiated tumorigenesis at a significantly higher rate than their counterparts without IGF1 treatment (60% versus 20%, respectively) (Figure 1D). IGF1 and β-Catenin Expression Is Associated with Drug Resistance and Poor Prognosis in Colon Cancer Patients To add support to our in vitro and in vivo results, we analyzed public databases and demonstrated that the IGF1 mRNA level was significantly higher in patients with colon cancer [23] (Figure 2A). In addition, increased IGF1 mRNA was detected in methotrexate-resistant colon cancer cells [24] (Figure 2B). Analysis of a cohort of colon cancer patients (GSE17536 series) [25] using PrognoScan software showed that higher IGF1 expression was significantly associated with shorter survival (Figure 2C). Using another database (GSE14333) [26], we found that increased expression of both IGF1 and CTNNB1 (β-catenin) was associated with significantly shorter disease-free survival (Figure 2D). (Figure 2. Increased IGF1 and β-catenin expression is associated with a higher incidence of colon cancer and poorer prognosis. (A) SurvExpress analysis associating an increased IGF1 mRNA level with a higher risk of developing colon cancer. (B) GEO (Gene Expression Omnibus) analysis showing a significantly higher level of IGF1 mRNA in methotrexate-resistant colon cancer cells than in their more sensitive counterparts. (C) Kaplan-Meier survival curve from a small cohort (N = 177) associating increased IGF1 expression with a lower survival ratio in patients with colon cancer. (D) Disease-free survival (DFS) analysis of the GSE14333 cohort indicating a shorter DFS in patients with colon cancer expressing higher levels of both IGF1 and β-catenin (CTNNB1). ** p ≤ 0.01 considered statistically significant.)
DB Treatment Suppressed Drug-Resistant Colon Tumorigenesis and Stemness Our previous studies have demonstrated that DB treatment significantly suppressed tumorigenesis in different cancer types [13-15,27,28]. In this study, we examined the potential inhibitory effect of DB using 5-FU-resistant colon cancer cells induced by IGF1. We found that DB treatment significantly suppressed the viability of both IGF1-treated HCT116 and DLD-1 cells (Figure 3A). For instance, DB treatment achieved its half-maximal inhibitory effect on cell viability in HCT116 and DLD-1 cells at 3.04 and 4.99 µM, respectively. In addition, DB (0.5 µM) markedly inhibited the formation of colonies and tumor spheres in both IGF1-treated colon cancer cell lines (Figure 3B,C). The percentage of side-population cells in both IGF1-treated HCT116 and DLD-1 cells was also dose-dependently reduced (Figure 3D). For example, IGF1-cultured HCT-116 cells were originally found to contain approximately 5.8% SP cells, but with DB treatment (at 2 µM, 24 h) the percentage of SP cells was significantly reduced to approximately 0.17% (upper panels, Figure 3D). DB and 5-FU Synergistically Suppress the Viability of Colon Cancer Cells Next, we examined the plausibility of combining DB and the clinical chemotherapeutic agent 5-FU for treating colon cancer cells. We tested different combinations of DB (ranging from 0-5 µM) and 5-FU (from 0-15 µM) to determine the combination index (CI). Using CompuSyn software, we plotted isobolograms derived from the different concentrations of DB versus 5-FU. Several specific combinations of DB and 5-FU synergistically inhibited the cell viability of both HCT116 and DLD-1 cells (Figure 4A,B, respectively). In support, our Western blot analysis indicated that DB treatment led to decreased expression of IGF downstream markers, including STAT3, mTOR, β-catenin, NF-kB, and c-Myc, while there was an increase in Bax expression (Figure 4C). Numbers in bold and marked by an * indicate the most effective combination.
In Vivo Demonstration of DB Treatment Enhanced 5-FU Efficacy Subsequently, we aimed to validate our in vitro data using the xenograft mouse colon-cancer model. IGF1-treated DLD-1 cancer cells were subcutaneously transplanted into NOD/SCID mice, and the tumor-bearing mice were then divided into four groups: sham control, DB (5 mg/kg, 5 times/week), 5-FU (25 mg/kg, 2 times/week), and the combination of DB and 5-FU. Mice which received DB or 5-FU alone appeared to exhibit a similar tumor burden, whereas the tumor burden appeared to be lowest in the combination group (Figure 5A). We also monitored the body weight of the test subjects and did not find any significant difference among them (Figure 5B), suggesting no apparent systemic toxicity in any treatment regimen. A western blot analysis was performed on the harvested tumor samples. We found that DB-treated samples exhibited markedly reduced expression of mTOR, c-Myc, and β-catenin, while there was an increase in Bcl2 (Figure 5C). A similar observation was made in the 5-FU group, but not as pronounced as in the DB-treated group. Treatment with the combination of DB and 5-FU showed the lowest expression of the aforementioned oncogenic markers. An Increased MicroRNA-214 Level Was Associated with DB Treatment Finally, we examined a small panel of microRNAs (miRs) in an attempt to explore the potential underlying molecular mechanism associated with DB treatment. Among the different miRs examined, the miR-214 level significantly increased post-DB treatment in both DLD-1 and HCT-116 cell lines (Figure 6A). Phenotypically, increasing the level of miR-214 was associated with a decreased ability of HCT-116 cells to form tumor spheres (Figure 6B), while the miR-214 inhibitor reversed this miR-214-associated suppression of sphere formation (Figure 6B).
We identified a potential binding site of miR-214 in the 3′UTR of CTNNB1 (β-catenin; Figure 6C). More importantly, DLD-1 cells treated with miR-214 mimic molecules showed a concomitant decreased expression of several oncogenic and stemness markers, including β-catenin, mTOR, EZH2, cyclin D1, and c-Myc, while the reverse was true when cells were treated with the miR-214 inhibitor (Figure 6D). (Figure 6. The potential 3′UTR site of CTNNB1 for miR-214 binding was detected using TargetScan software. Western blot analysis showed that miR-214 mimic treatment led to decreased expression of mTOR, β-catenin, EZH2, cyclin D1, and c-Myc, whereas increased expression of these markers was found in HCT-116 cells treated with the miR-214 inhibitor. * p ≤ 0.05; ** p ≤ 0.01.) Discussion The generation of cancer stem-like cells (CSCs) has been shown to occur both experimentally and clinically [29]. It is believed that treatment failure and disease progression are closely associated with the presence of CSCs. However, how CSCs are generated still remains unclear and the development of an antagonist proves difficult.
Accumulating evidence has pointed out that deregulation of the insulin growth factor (IGF)-signaling cascade not only plays an instrumental role in cellular growth and metabolism, but also in colon tumorigenesis [30,31]. Based on this unmet medical need, we first examined the role of IGF signaling in its relationship with CSC generation. We established 5-FU-resistant colon cancer cell lines by culturing these cells with exogenous IGF1. The exogenous IGF1 promoted CSC phenotypic changes in colon cancer cells. More importantly, we demonstrated that IGF1-cultured DLD-1 cells exhibited an enhanced tumor-initiating ability in vivo as compared to their naïve counterparts. Our analyses using public databases also indicated that increased IGF1 mRNA was identified in patients with colon cancer, as well as a shorter survival time as compared to patients with a lower IGF1 mRNA level; HT29 colon cancer cells resistant to methotrexate expressed a significantly higher level of IGF1 mRNA as compared to their sensitive counterparts. A recent study corroborates our view that IGF signaling promotes the epithelial-to-mesenchymal transition (EMT) and stemness in colon cancer [32]. Fluorouracil (5-FU) is a standard chemotherapeutic drug for treating different types of cancer, including colon cancer. Unfortunately, tumor cells often develop resistance against it, resulting in relatively low efficacy (approximately 40%). Our observation that IGF1 promoted the phenotypic manifestation of colon CSCs, namely resistance against 5-FU, presents an important consideration when developing novel therapeutics. Statistically, in the area of cancer research, over the period of 1940 to 2014, of the 175 small molecules approved, 131 (75%) were either natural products or directly derived therefrom [33]. Thus, natural resources still remain a valuable pool for anti-cancer drug discovery. Destruxin B (DB) is a cyclic depsipeptide produced by various species of fungi, and has been previously shown to inhibit proliferation and induce apoptosis in different cancer cells [12-14], making it a drug candidate for further development. Here, we showed that DB treatment suppressed not only colon tumorigenesis, but also CSC phenotypes. DB-mediated CSC-inhibitory effects were associated with decreased expression of the IGF downstream oncogenic markers mTOR/STAT3, as well as the stemness marker β-catenin, both of which have been reported to contribute to treatment failure and the recurrence of colon cancer [23,34,35]. More importantly, we provided two lines of in vivo support for treating colon cancer with DB, either alone or in combination with 5-FU. DB alone could significantly delay DLD-1 tumorigenesis in the xenograft mouse model; when combined with 5-FU, the anti-cancer effect was even more pronounced, corroborating the in vitro data where the combination of 5-FU and DB (at a lower dosage) could synergistically suppress the viability of colon cancer cells. Clinically, a reduced dosage of 5-FU could prevent the development of side-effects and improve patient compliance. In another experiment, we showed that a single DB pre-treatment (below the IC50) could significantly reduce the tumor-initiating ability of IGF1-cultured DLD-1 colon CSC-like cells in vivo. Mechanistically, DB treatment was associated with decreased expression of a key CSC marker, β-catenin, which has been shown to be aberrantly increased in malignant colon cancer cells and associated with the generation of drug resistance [36,37].
DB-mediated suppression of β-catenin was also accompanied by an increased miR-214 level, which has a potential binding site in β-catenin's 3′UTR. These findings implied that DB pre-treatment led to reduced stemness of the CSCs, thereby reducing their tumor-initiating ability via suppression of the CSC-associated marker β-catenin. On this point, DB may be used to prevent the development of colon cancer in high-risk individuals or to prolong progression-free time for patients with colon cancer. Conclusions We provided evidence for the functional roles of DB as not only a therapeutic candidate drug for colon cancer, but also a CSC inhibitor. DB-mediated functions were attributed to its ability to suppress several major oncogenic pathways, namely mTOR/Akt, c-Myc, NF-kB, and the stemness marker, β-catenin. Further investigation is warranted in order for DB to be repurposed for treating drug-resistant colon cancer. Conflicts of Interest: The authors declare that they have no potential financial competing interests that may in any way gain or lose financially from the publication of this manuscript at present or in the future. Additionally, no non-financial competing interests are involved in the manuscript. Abbreviations: DB, Destruxin B; CSCs, cancer stem-like cells; 5-FU, fluorouracil; SP, side population; IGF1, insulin-like growth factor 1; mTOR, the mammalian target of rapamycin.
2018-10-05T01:42:49.177Z
2018-09-25T00:00:00.000
{ "year": 2018, "sha1": "16f15d9bccae8ad2d5dfac43853eb8a70da5caa9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/10/10/353/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "16f15d9bccae8ad2d5dfac43853eb8a70da5caa9", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
250928683
pes2o/s2orc
v3-fos-license
Resveratrol reduces inflammatory response and detrimental effects in chronic cerebral hypoperfusion by down-regulating stimulator of interferon genes/TANK-binding kinase 1/interferon regulatory factor 3 signaling Inflammatory responses induced by chronic cerebral hypoperfusion (CCH) play a critical role in the progression of vascular dementia. Stimulator of interferon genes (STING) signaling function as a key mediator of inflammation and immunological responses in the central nervous system (CNS), and resveratrol (RES) exerts potent anti-inflammatory effects. However, the role of STING signaling and the relationship between RES and STING signaling in persistent hypoperfusion-induced cerebral inflammation remain unclear. In this study, Sprague–Dawley rats were subjected to either Sham or bilateral common carotid artery occlusion (2VO) surgery and received RES or vehicle daily by intraperitoneal injection for 4 or 8 weeks. Morris’s water maze was used for the analysis of cognitive function. The neuroinflammatory responses in white matter and hippocampus of the rat brain were assessed by Western blot, Immunofluorescence staining, and qRT-PCR analyses. Myelin integrity, neutrophil infiltration, and microglia proliferation were assessed by Immunohistochemistry and histologic analysis. We demonstrated that after CCH, neurons, microglia, and astrocyte under endoplasmic reticulum (ER) stress upregulated the expression of STING, TANK-binding kinase 1 (TBK1), and the transcription factor interferon regulatory factor 3 (IRF3), as well as translocation of IRF3 into the nucleus. These were accompanied by infiltration of neutrophils, activation of microglia, and overproduction of proinflammatory mediators. Improvements in cognitive deficits were related to reduced hippocampal neuronal cell death and increased myelin integrity in RES-treated rats. The neuroprotective effects of RES were associated with suppression of the expression of tumor necrosis factor-alpha (TNF-α), intercellular adhesion molecule 1 (ICAM-1), VCAM-1, interferon-β (IFN-β), and IL-1β, likely through mitigation of the STING/TBK1/IRF3 pathway. These inhibitory effects exerted by RES also inhibited the levels of myeloperoxidase, reduced excess expression of reactive astrocytes, and activated microglia. In conclusion, the STING/TBK1/IRF3 axis may be critical for proinflammatory responses in cerebral tissue with persistent hypoperfusion, and RES exerts its anti-inflammatory effects by suppressing STING/TBK1/IRF3 signaling. Inflammatory responses induced by chronic cerebral hypoperfusion (CCH) play a critical role in the progression of vascular dementia. Stimulator of interferon genes (STING) signaling function as a key mediator of inflammation and immunological responses in the central nervous system (CNS), and resveratrol (RES) exerts potent anti-inflammatory effects. However, the role of STING signaling and the relationship between RES and STING signaling in persistent hypoperfusion-induced cerebral inflammation remain unclear. In this study, Sprague-Dawley rats were subjected to either Sham or bilateral common carotid artery occlusion (2VO) surgery and received RES or vehicle daily by intraperitoneal injection for 4 or 8 weeks. Morris's water maze was used for the analysis of cognitive function. The neuroinflammatory responses in white matter and hippocampus of the rat brain were assessed by Western blot, Immunofluorescence staining, and qRT-PCR analyses. 
Myelin integrity, neutrophil infiltration, and microglia proliferation were assessed by Immunohistochemistry and histologic analysis. We demonstrated that after CCH, neurons, microglia, and astrocyte under endoplasmic reticulum (ER) stress upregulated the expression of STING, TANK-binding kinase 1 (TBK1), and the transcription factor interferon regulatory factor 3 (IRF3), as well as translocation of IRF3 into the nucleus. These were accompanied by infiltration of neutrophils, activation of microglia, and overproduction of proinflammatory mediators. Improvements in cognitive deficits were related to reduced hippocampal neuronal cell death and increased myelin integrity Introduction Vascular dementia (VaD), which presents with cognitive deficits and executive dysfunction, is the second most common cause of dementia. VaD causes cognitive impairment, induces disconnect from the outworld, burdens a patient's family, and remains a major challenge to worldwide public health (O'Brien and Thomas, 2015). The sustained inflammation that occurs during cerebral hypoperfusion is critical pathophysiology of VaD (Lecordier et al., 2021). In chronic cerebral hypoperfusion (CCH), neuroinflammation, which is characterized by the activation of microglia and astrocytes, contributes to neuronal loss and white matter lesions, and these effects lead to learning and memory dysfunction (Hase et al., 2018). Sustained STING stimulation may play a key role in inflammatory diseases. The STING-regulated inflammatory response contributes to pressure overload-induced cardiac hypertrophy . Obesity promotes mtDNA release into the cytosol, where it triggers activation of the STING/TBK1/IRF3 pathway and chronic inflammatory responses in adipose tissue (Bai et al., 2017). In a mice model of sepsis-induced cardiomyopathy, the activation of STING/IRF3 leads to inflammatory reactions and further increases the expression of the NLRP3 inflammasome, while STING knockdown suppresses myocardial and serum inflammatory cytokines and alleviates cardiac function (Li et al., 2019). Thus, a growing body of evidence suggests that STING is a critical signaling molecule in inflammation (Decout et al., 2021). Increasing studies have provided evidence of the regulatory role of STING/TBK1/IRF3 signaling in various neurological diseases (Paul et al., 2021). In a rat model of ataxiatelangiectasia, unrepaired damage to DNA leads to significant levels of cytosolic DNA in Atm-deficient neurons and microglia, activates the STING pathway, increases inflammatory microenvironment, and results in neuronal dysfunction and death (Quek et al., 2017). Both genomic and mitochondrial DNA can trigger STING signaling and drive neurodegeneration in the substantia nigra pars compacta, which leads to Parkinson's disease progression (Sliter et al., 2018). In an experimental model of stroke, cerebral ischemia promotes the release of selfderived dsDNA into the cytosol, which activates STING via cyclic GMP-AMP synthase (cGAS) and triggers inflammatory responses (Li Q. et al., 2020). The expression of STING mRNA is upregulated early and persistently until 60 days after TBI and is associated with chronic neurological deficits, lesions, and hippocampal neurodegeneration (Barrett et al., 2020). Although STING exhibits important anti-inflammatory effects in various CNS disease models, the role of STING in VaD has not been evaluated. Here, we sought to determine whether STING, TBK1, and IRF3 are involved in CCH injury. 
Pharmacological alleviation of the inflammatory response is one of the most promising avenues for VaD therapy. Resveratrol (RES) exhibits pleiotropic actions, including the induction of neuroprotection during cognitive decline through anti-inflammatory and antioxidative activity, activation of autophagy, and inhibition of neuronal apoptosis (Griñán-Ferré et al., 2021). The suppression of inflammatory factor TBK1, which mediates the transcriptional activation of NF-κB, AP-1, and IRF3, contributes to the broad-spectrum inhibitory activity of RES (Kim et al., 2011). Additionally, RES can improve neuroimmune dysregulation by inhibiting the kinase activity of TBK1 and the activation of IRF3 in vitro (Youn et al., 2005). However, whether RES can protect a brain with persistent hypoperfusion by regulating TBK1/IRF3 remains unknown. Given the well-described role of the STING/IRF3 pathway in inflammatory and neurological diseases, we used a rat model of 2VO to assess the hypotheses that STING/TBK1/IRF3 signaling exerts deleterious effects on cerebral hypoperfusion and that RES might suppress the inflammation induced by CCH through dampening the STING/TBK1/IRF3-mediated pathway. Experimental animals Adult male Sprague-Dawley rats (aged 8-10 weeks and weighing 240-260 g at the beginning of the study) were purchased from Vital River Laboratory Animal Technology Co. Ltd, Beijing, China. Animals were housed in a humiditycontrolled room on a 12-h light/dark cycle at 22 • C ± 3 • C, with free access to food and water. Animal surgery Bilateral common carotid artery occlusion surgery was performed as previously described (Farkas et al., 2007). Briefly, rats were anesthetized intraperitoneally with 2% sodium pentobarbital (3 mg/kg). The right and left common carotid artery was doubly ligated with 4-0 silk sutures and cut between the ligations. Rats in the sham group were operated on without ligation. Resveratrol (R8350, Solarbio, Beijing, China) was dissolved in 0.05% DMSO prepared with 0.9% NaCl to a final concentration of 20 mg/ml. RES-treated rats received daily intraperitoneal injections of RES solution (dosage: 20 mg/kg/day) for 4 or 8 weeks after 2VO. RES doses were selected based on previous studies (Anastácio et al., 2014;Gocmez et al., 2019). The timeframe of this study is presented in Figure 1. Morris water maze task Learning and spatial memory deficiencies caused by cerebral chronic hypoperfusion in rats were evaluated with the Morris Water Maze (Shanghai Jiliang Software Technology Company, Shanghai, China). Navigation trials were conducted for 5 days at the same time each day. Each rat was placed into a waterfilled tank with a platform from four different quadrants. Escape latency was the time required for a rat to find and stay on the platform for more than 5 s. If a rat failed to find the platform within 120 s, it was guided to the platform and allowed to stay there for 20 s, the latency would be recorded as 120 s. For spatial probe trials, on the 6th day, the platform was removed, and A complete flow chart of this study. Frontiers in Aging Neuroscience 03 frontiersin.org the rats were placed in water at the quadrant farthest from the original platform location. The swimming route in 120 s was recorded. Spatial learning and memory functions were analyzed from the escape latency, the number of platform crossings, and the amount of time spent in the target quadrant. Hematoxylin-eosin staining Brain tissues were fixed in 4% paraformaldehyde for 24 h. 
Coronal sections of the hippocampus CA1 area were cut at 5µm thick. The slides were deparafinized and then stained with hematoxylin and eosin. The sections were examined by a light microscope (Nikon, NI-SS, 933313). Immunohistochemical staining Immunohistochemical staining of coronal brain slices was performed. Briefly, deeply anesthetized rats were perfused intracardially with cold PBS and followed by 4% paraformaldehyde (PFA). Brain tissues were fixed in 4% PFA in PBS (0.01 M, pH 7.4) over 24 h at 4 • C and dehydrated in gradient alcohol, and embedded in paraffin. Coronal sections were cut at 5-µm thickness using a Leica R RM1850 rotary microtome (Leica Microsystem, IL, Hesja, Germany). Antigen retrieval was performed by heating in citric acid buffer (pH 6) in the microwave, followed by three washes with PBS (pH 7.4). The sections were then incubated with 3% H 2 O 2 to eliminate the endogenous peroxidase activity for 25 min, washed three times with PBS, blocked with 3% BSA for 30 min, and incubated overnight at 4 • C with anti-Iba-1 (1:100), anti-MBP (1:100) and anti-MPO (1:100) primary antibodies. The next day, the sections were rinsed with PBS and incubated with horseradish peroxidase (HRP)-labeled anti-rabbit (GB23303,1:200, Servicebio) and anti-mouse (GB23301,1:200, Servicebio) secondary antibodies at 37 • C for 40 min. Slices were developed with diaminobenzidine and counterstained with hematoxylin. The samples were then dehydrated in a graded series of alcohol, cleared in xylene, and examined under a light microscope (Nikon, NI-SS, 933313, Tokyo, Japan). Cell number and positive area were quantified by ImageJ software (NIH, United States). Luxol fast flue staining Luxol fast blue (LFB) staining was performed as previously described . Rats were deeply anesthetized, and transcardially perfused with PBS followed by 4% PFA. Brain sections were fixed in 4% PFA for 24 h. Coronal sections containing corpus callosum were embedded in paraffin and cut at 5-µm thick that were deparafinized in xylene, absolute ethanol, and 75% ethanol. Then, the slices were incubated in LFB solution (G1030, Servicebio) for 4 h at 60 • C. The slices were placed in a 0.05% lithium carbonate solution followed by 70% ethanol. To acquire images, light microscopy (Nikon, NI-SS, 933313) was used. The positive area was quantified by ImageJ software. Myeloperoxidase activity and 2 3 -cGAMP level assay Samples were rinsed, weighed, and then homogenized using freezing Dounce tissue grinder (JXFSTPRP-CL, XinJing, Shanghai, China). The MPO assay kit (A044-1-1, Jiancheng Bio, Nanjing, China) was used for MPO activity measurement according to the manufacturer's instructions. The levels of 2 3 -cGAMP, which is naturally synthesized by activated cGAS, were quantified by ELISA kit (501700, Cayman Chemical, United States) according to the manufacturer's instructions. Absorbance at 450 nm was measured using a microplate reader (Multiskan FC, Thermo Fisher) and normalized to overall protein content. MPO activity of white matter was expressed as U/g tissue. The 2 3 -cGAMP levels in the white matter and hippocampus homogenates were expressed as pg/mg protein. Statistical analysis Quantitative data were expressed as mean ± SD. Statistical analysis was performed by SPSS version 19 software (IBM, United States). 
Escape latency and frequency in the platform quadrant from the Morris Water Maze test were analyzed by repeated-measures ANOVA, with Tukey post hoc test (if equal variances were assumed) or Tamhane's T2 test (if equal variances were not assumed). One-way ANOVA followed by SNK and LSD tests were used for comparison between groups. The difference was considered significant when P < 0.05. Graphs were drawn by GraphPad Prism 6 software (GraphPad Software, La Jolla, CA, United States). Resveratrol improves cognitive function recovery after chronic cerebral hypoperfusion The effect of RES on neurological impairments was evaluated through the Morris water maze test of rats at 4 and 8 weeks after 2VO. The escape latency of the control rat significantly decreased from days 1 to 5, reflecting normal learning ability. On day 6, the control rat spent significantly longer time in the escape platform quadrant, indicating normal retrieval. As displayed in Figure 2, a statistical analysis showed that CCH induced a significant increase in the escape latency of the rat compared with that found for the control animals (2VO 4w vs. Sham 4w group, 2VO 8w vs. Sham 8w group, P < 0.05; Figure 2C), and this increase indicates an impairment in the spatial learning ability of the rat. Moreover, the escape latency of the 2VO 8w rat was longer than that of the 2VO rat sacrificed at 4 weeks (2VO 8w vs. 2VO 4w group, P < 0.05; Figure 2C), indicating a progressive impairment in learning over time after CCH. Treatment with RES at 20 mg/kg/day significantly reversed the CCH-induced increase in escape latency (2VO + RES 8w vs. 2VO 8w, 2VO + RES 4w vs. 2VO 4w group, P < 0.05; Figure 2C). In the probe test, the CCH animals spent a significantly shorter time in the target quadrant and had fewer platform crossings than the control rat, which also indicates an impairment in retrieval (P < 0.05; Figures 2D,E). Rat in the 2VO 8w group spent a decreased amount of time in the quadrant containing the escape platform than those belonging to the 2VO 4w group. RES treatment increased the time spent in the target quadrant and had a higher frequency of platform crossings at both time points (2VO + RES 8w vs. 2VO 8w group, 2VO + RES 4w vs. 2VO 4w group, all P < 0.05; Figures 2D,E). In summary, RES restored spatial memory and learning impairments of rats with CCH. Resveratrol decreases the pathological changes in the hippocampal CA1 region induced by chronic cerebral hypoperfusion The protective effect of RES was also assessed based on neuronal death in the hippocampus. The results showed that RES treatment partly reduced pyknosis of the cytoplasm and neuronal shrinkage and loss, as determined by HE staining (Figure 2B). Resveratrol reduces white matter damage in corpus callosum after chronic cerebral hypoperfusion Because white matter damage contributes to cognitive impairment following CCH, we examined the damage to the myelin sheath in the corpus callosum by measuring the loss of myelin via Luxol fast blue staining and by assessing the loss of myelin basic protein (MBP). Compare with corresponding Sham groups, the rat in 2VO groups showed reduced myelin density (Figures 3A,C) and percentage of MBP positive area in the corpus callosum ( Figures 3B,D), while RES treatment partially restored myelin density and percentage of area positive for MBP at 4 and 8 weeks. 
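The behavioural read-outs and group comparisons described in the Methods (the 120-s latency cut-off, the probe-trial metrics, and one-way ANOVA followed by post hoc testing) can be prototyped with open-source tools. The sketch below is purely illustrative: the capping rule follows the Methods, while the data layout, group labels, and the use of SciPy/statsmodels in place of SPSS are assumptions, not part of the original analysis.

```python
# A minimal sketch of the behavioural scoring and between-group comparison
# described above.  Group labels and the simulated values are assumptions.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

MAX_LATENCY_S = 120.0   # trials are stopped and scored as 120 s (see Methods)

def cap_escape_latency(raw_latency_s):
    """Apply the 120-s cut-off used when a rat never finds the platform."""
    return np.minimum(np.asarray(raw_latency_s, dtype=float), MAX_LATENCY_S)

def probe_metrics(track_quadrants, frame_dt_s, platform_crossings):
    """Probe-trial read-outs: time in the target quadrant and platform crossings.

    track_quadrants: per-frame quadrant label ('target', 'adjacent', ...)
    frame_dt_s:      sampling interval of the tracking software (s)
    """
    time_in_target = np.sum(np.asarray(track_quadrants) == "target") * frame_dt_s
    return {"time_in_target_s": float(time_in_target),
            "platform_crossings": int(platform_crossings)}

def compare_groups(df, value_col="latency_s", group_col="group"):
    """One-way ANOVA across groups followed by Tukey's post hoc test
    (equal variances assumed), mirroring the between-group comparison."""
    groups = [g[value_col].values for _, g in df.groupby(group_col)]
    f_stat, p_val = stats.f_oneway(*groups)
    tukey = pairwise_tukeyhsd(df[value_col], df[group_col], alpha=0.05)
    return f_stat, p_val, tukey

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = pd.DataFrame({
        "group": np.repeat(["Sham", "2VO", "2VO+RES"], 10),
        "latency_s": cap_escape_latency(
            np.concatenate([rng.normal(35, 8, 10),
                            rng.normal(90, 20, 10),
                            rng.normal(60, 15, 10)])),
    })
    f_stat, p_val, tukey = compare_groups(demo)
    print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")
    print(tukey)
```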
Our preliminary experiments have shown that, compared with the Sham group treated with vehicle for 4 weeks, there was no significant difference in escape latency, amount of time spent in the target quadrant, frequency of platform crossings, neuronal death, and myelin density when the vehicle was administered for 8 weeks in Sham rat after CCH (P > 0.05). Accordingly, we displayed one Sham group in subsequent qRT-PCR and Western-blotting studies. The 2VO model rat exhibited decreased MBP protein and mRNA levels relative to the Sham group (2VO 4w and 2VO 8w vs. Sham, all P < 0.05; Figures 3E,F). Additionally, demyelination was more severe in the 2VO 8w group compared to the 2VO 4w group (2VO 8w vs. 2VO 4w group, P < 0.05; Figures 3C-F), reflecting the progressive disruption of myelin over time following CCH. As expected, RES significantly elevated the expression of MBP at 4 and 8 weeks (2VO + RES 8w vs. 2VO 8w, 2VO + RES 4w vs. 2VO 4w group, all P < 0.05; Figures 3D-F). The results presented thus far suggest that RES protects against white matter damage. Resveratrol reduces the mRNA levels of inflammatory mediators To assess whether the protective effect of RES against CCH injury is related to its anti-inflammatory properties, we next examined classical neuroinflammatory responses following CCH. The gene expression of pro-inflammatory cytokines, tumor necrosis factor-alpha (TNF-α), the chemokine CXCL10, and adhesion molecules intercellular adhesion molecule 1 (ICAM-1) and vascular adhesion molecule 1 (VCAM-1), and anti-inflammatory cytokine IL-10 were assessed in the white matter of rat at 4-and 8-weeks post-surgery. The gene expression of ICAM-1 and VCAM-1 was also assessed in the hippocampus of rats. Chronic cerebral hypoperfusion induced a robust increase in the mRNA expression of TNF-α, CXCL10, ICAM-1, and VCAM-1 in white matter (2VO 4w and 2VO 8w vs. Sham group, P < 0.05, respectively; Figure 4A). In addition, the mRNA levels of TNF-α, and CXCL10 in the 2VO 8w group were higher than those found in the 2VO 4w group (2VO 8w vs. 2VO 4w group, P < 0.05, Figure 4A). The analysis demonstrated a significant reduction of TNF-α, CXCL10, ICAM-1, and VCAM-1 mRNA expression in RES-treated rats (2VO + RES 8w vs. 2VO 8w, 2VO + RES 4w vs. 2VO 4w group, P < 0.05, Figure 4A). Similar patterns were observed in mRNA expression of ICAM-1 and VCAM-1 in the hippocampus (Figure 4B). RES treatment after 4-and 8 weeks resulted in increased expression of IL-10 compared with the levels found in the 2VO group (2VO + RES 8w vs. 2VO 8w, 2VO + RES 4w vs. 2VO 4w group, P < 0.05; Figure 4A). stimulator of interferon genes, TANK-binding kinase 1, and interferon regulatory factor 3 expressions are involved in chronic cerebral hypoperfusion and inhibited by H-151 or resveratrol in bilateral common carotid artery occlusion cerebral tissue To examine whether STING, TBK1, and IRF3 are needed for the neuroinflammatory responses induced by CCH and whether the inhibitory effect of RES on cerebral inflammatory responses is related to the expression of STING, TBK1, and IRF3, we examined the temporal patterns of STING, TBK1 and IRF3 expression in the hippocampus and white matter of 2VO rat. Double immunofluorescent staining showed that STINGand phospho-IRF3-positive cells were abundantly colocalized with NEUN-positive cells in the hippocampus (Figures 5C,F). STING and phospho-IRF3 were colocalized with Iba-1 positive (Figures 5B,E). 
STING, phospho-TBK1, and phospho-IRF3 were also colocalized with GFAP-positive cells in white matter (Figures 5A,D,G). This suggests that neurons, microglia, and astrocytes are involved in the STINGmediated response after CCH. cells in white matter Furthermore, low numbers of GFAP-positive cells were detected in the white matter of the Sham group. In contrast, the 2VO group exhibited a significantly ascending number of positive GFAP marked astrocytes (2VO 4w group vs. Sham, P < 0.05, Figure 5H). RES treatment suppressed 2VO-induced reactive astrogliosis in white matter at 4 weeks after 2VO. Compared with the Sham group, the number of STING positive and Iba-1 positive microglia ascends significantly in the white matter of the 2VO group, and the number of STING positive microglia was decreased in the white matter of the REStreated group at 4 weeks after 2VO (2VO 4w group vs. Sham, 2VO + RES 4w group vs. 2VO 4w group, P < 0.05, Figures 5B,L). The same trend was found in a number of p-IRF3 positive microglia (2VO 4w group vs. Sham, 2VO + RES 4w group vs. 2VO 4w group, P < 0.05, Figures 5E,M). Compared to the sham group, the 2VO group showed an increased number of STING and GFAP, p-TBK1 and GFAP, p-IRF3 and GFAP double-positive cells (2VO 4w group vs. Sham, P < 0.05, Figures 5I-K). The number of these cells was reduced in the 2VO group treated with RES. (2VO + RES 4w vs. 2VO 4w group, P < 0.05, Figures 5I-K). Our data show that CCH increased hippocampal expression of STING protein in 2VO rat compared to Sham rat (2VO 4w and 2VO 8w vs. Sham group, P < 0.05, respectively; Figure 6A); however, STING protein expression was dramatically decreased after treatment with RES following 2VO (2VO + RES 8w vs. 2VO 8w, 2VO + RES 4w vs. 2VO 4w group, all P < 0.05; Figure 6A). Similar to STING, phospho-TBK1 and phospho-IRF3 protein expressions in the hippocampus were also increased after 2VO compared to the Sham group (2VO 4W and 2VO 8W vs. Sham group, all P < 0.05, respectively; Figure 6A). CCH-mediated increase of phospho-TBK1 and phospho-IRF3 expression in the hippocampus was considerably decreased by RES treatment following CCH. Moreover, the vehicle-treated CCH rat sacrificed at 8 weeks exhibited increased STING, p-TBK1, and p-IRF3 protein expression levels than rats of the 2VO 4w group (2VO 8w vs. 2VO 4w group, P < 0.05, Figure 6A). Similar protein expression patterns were observed in white matter at 4-and 8 weeks post-injury ( Figure 6A). Similarly, qRT-PCR assays identified increased mRNA expression of STING, TBK1, and IRF3 in 2VO rat treated with vehicle as compared to the Sham rat, and the expression of these genes was decreased in 2VO rat treated with RES at 4 and 8 weeks after 2VO (P < 0.05, Figure 6C). To further clarify the specificity of the STING pathway in the rat model of CCH, we administered rats with the STING inhibitor H-151. The protein expression of the STING axis was increased in white matter and hippocampus of rats subjected to CCH for 4 weeks, while H-151 potently inhibited STING expression, as evidenced by reduction of TBK1 and IRF3 phosphorylation without affecting respective controls (Supplementary Figure 1). In addition, ELISA analyses found that the concentration of 2 3 -cGAMP was markedly elevated at 4 weeks after 2VO than that in the Sham group, demonstrating that cGAS was activated following CCH (Supplementary Figure 2). Together, these results indicate that the cGAS/STING pathway was induced by CCH. 
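The qRT-PCR comparisons above are expressed relative to housekeeping controls (GAPDH or β-actin in the figure legends), but the exact normalization is not spelled out in the text. The helper below assumes the widely used 2^-ΔΔCt method and made-up Ct values, so it should be read as an illustration rather than the authors' pipeline.

```python
# Hypothetical helper for the relative mRNA comparisons reported above,
# assuming the common 2^-ΔΔCt method; the Ct values are invented.
import numpy as np

def delta_delta_ct(target_ct, reference_ct, calib_target_ct, calib_reference_ct):
    """Relative expression of a target gene versus a calibrator sample.

    target_ct / reference_ct:             Ct values in the sample of interest
    calib_target_ct / calib_reference_ct: Ct values in the calibrator (e.g. Sham)
    Returns fold change = 2^-((Ct_t - Ct_ref) - (Ct_t,cal - Ct_ref,cal)).
    """
    d_ct_sample = np.asarray(target_ct) - np.asarray(reference_ct)
    d_ct_calib = np.asarray(calib_target_ct) - np.asarray(calib_reference_ct)
    return 2.0 ** -(d_ct_sample - d_ct_calib)

# Example: STING Ct values in a 2VO sample vs. the mean of the Sham group
fold = delta_delta_ct(target_ct=24.1, reference_ct=17.8,
                      calib_target_ct=26.0, calib_reference_ct=17.9)
print(f"relative STING mRNA (2VO vs. Sham): {fold:.2f}-fold")
```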
Location and translocation of p-interferon regulatory factor 3 in neurons, astrocytes, and microglia after chronic cerebral hypoperfusion We examined the expression of p-IRF3 in neurons, astrocytes, and microglia 4 weeks after 2VO through immunofluorescence analyses. As shown in Figures 5E-G, in the Sham group, p-IRF3 was detectable in a few neurons and almost no astrocytes or microglia, and most cells labeled by p-IRF3 showed staining only in the cytoplasm. In the 2VO group, the expression of p-IRF3 was found in neurons, microglia, and astrocytes, and p-IRF3 was localized in both the cytoplasm and the nucleus. Compared with the results obtained for the model group, the number of cells positive for p-IRF3 in the RES group was reduced at 4 weeks after 2VO (2VO + RES 4w vs. 2VO 4w group, P < 0.05, Figures 5K,M). Moreover, some of the cells labeled with p-IRF3 showed staining only in the cytoplasm (Figures 5E-G). As indicated in Figure 6B, we also detected the protein expression of IRF3 in the nucleus and found that CCH could significantly induce the nuclear translocation of IRF3 without affecting the total protein expression of IRF3, whereas RES treatment counteracted the nuclear translocation of p-IRF3 (2VO + RES 8w vs. 2VO 8w, 2VO + RES 4w vs. 2VO 4w group, P < 0.05, Figure 6B). Stimulator of interferon genes/TANK-binding kinase 1/interferon regulatory factor 3 pathway mediated type-I interferon is suppressed in cerebral tissue of resveratrol-treated bilateral common carotid artery occlusion groups We then sought to determine whether the STING/TBK1/IRF3 signaling mediates the expression of proinflammatory cytokines in response to CCH and whether RES inhibits these proinflammatory cytokines. Previous studies have identified IFN-β as one of the main genes responding to STING-dependent IRF3 activation (Mathur et al., 2017). As shown in Figure 4A, the mRNA expression of IFN-β and IL-1β in white matter were markedly induced by CCH at 4 and 8 weeks (2VO 4W and 2VO 8W vs. Sham group, P < 0.05, respectively; Figure 4A). However, these results were markedly restrained in the RES-treated CCH group than in the model group (2VO + RES 8w vs. 2VO 8w, 2VO + RES 4w vs. 2VO 4w group, all P < 0.05; Figure 4A). Moreover, results revealed significant parallel alterations in the mRNA expression levels of IRF3 and IFN-β genes. These results indicate that the upregulation of these proinflammatory cytokines may be induced by STING/TBK1/IRF3 signaling in CCH tissue. Decreases of the IFN-β and IL-1β mRNA in RES-treated rats were likely due to inhibition of the STING/TBK1/IRF3 signaling. Activation of stimulator of interferon genes/TANK-binding kinase 1/interferon regulatory factor 3 signaling may regulate the microglial morphology Microglia are the primary immune cells enriched in brain tissue. These cells respond to insults by changing their morphology and cytokine production. Specifically, when activated, microglial cells change their morphology to exhibit larger nuclei and shorter processes. To further investigate the effect of RES on microglia following CCH, we performed Immunohistochemical staining for the microglial marker Iba-1 in white matter after CCH and showed the morphology of Iba1 positive glia in the white matter of rats at 4 and 8 weeks after sham or 2VO operation ( Figure 7A). As expected, a number of cells labeled with Iba-1 were elevated in the 2VO group (2VO 4W and 2VO 8W vs. 
Sham group, P < 0.05, respectively; Figure 7B) but partially suppressed by RES treatment at both time points (2VO + RES 8w vs. 2VO 8w, 2VO + RES 4w vs. 2VO 4w group, all P < 0.05; Figure 7B). In addition, the model rat sacrificed 4 weeks after 2VO showed less severe gliosis than those sacrificed at 8 weeks (2VO 8w vs. 2VO 4w group, P < 0.05, Figure 7B). A qRT-PCR analysis of the white matter extract showed that the M1 marker CD16 was increased after CCH but significantly decreased by RES treatment, whereas the M2-type genes (CD206 and IL-10 genes) were increased by RES at both post-surgery time points (2VO + RES 8w vs. 2VO 8w, 2VO + RES 4w vs. 2VO 4w group, P < 0.05, Figures 4A, 7C). Resveratrol prohibits neutrophil infiltration in chronic cerebral hypoperfusion Because we found the elevated expression of ICAM-1, VCAM-1, TNF-α, and IL-1β, we further detected the alterations in myeloperoxidase. Specifically, we quantified the number of MPO-positive cells, MPO activity, and MPO expression in white matter. Immunostaining of MPO confirmed the neutrophil presence in white matter after 2VO (Figure 7D). By week 8, neutrophils infiltrated white matter ( Figure 7D). As indicated, the number of MPO-positive cells was decreased in the RES groups at 4 and 8 weeks after 2VO (2VO + RES 8w vs. 2VO 8w, 2VO + RES 4w vs. 2VO 4w group, P < 0.05, Figure 7E). A Western blot analysis demonstrated that hypoperfusion caused an increase in MPO expression in white matter at 4 and 8 weeks after 2VO and that RES treatment attenuated these increases (2VO 4W and 2VO 8W vs. Sham group, 2VO + RES 8w vs. 2VO 8w, 2VO + RES 4w vs. 2VO 4w group, all P < 0.05, Figure 7F). Myeloperoxidase activity is a good indicator of inflammation and neutrophil accumulation and can be quantified via the MPO activity assay. As shown in Figure 7G, MPO activity in white matter was increased at 4 and 8 weeks after 2VO and was also significantly reduced in the rat belonging to the RES group at 8 weeks after 2VO. Here, we observed that, other than causing microglia transformation toward an anti-inflammatory phenotype, RES is capable to dampen neuroinflammatory response by reducing the migration of periphery neutrophils into the CNS. Resveratrol treatment decreases chronic cerebral hypoperfusion-induced endoplasmic reticulum stress-related markers Our immunofluorescence results showed that PERK was located in microglia ( Figure 8B), neurons (Figure 8C), and astrocytes ( Figure 8D). In addition, the protein expression levels of p-PERK were significantly elevated in 2VO groups but deregulated in the RES-treated groups (2VO 4w and 2VO 8w vs. Sham group, 2VO + RES 8w vs. 2VO 8w, 2VO + RES 4w vs. 2VO 4w group, all P < 0.05, Figure 8A). Therefore, our CCH model rat experienced ER stress, and RES treatment inhibited this ER stress. As PERK and STING are both located in neurons, microglia and astrocytes, and PERK protein expression was observed correlations with STING protein expression, STING was probably activated by ER stress in this 2VO model. Discussion This study provides the first demonstration that (1) Activation of the STING/TBK1/IRF3 signaling may be triggered by ER stress during CCH in rats; (2) Increased neutrophilic recruitment, gliosis, and generation of key proinflammatory mediators are positively correlated with activation of Increased expression of stimulators of type 1 interferon gene (STING), p-TANK-binding kinase 1 (p-TBK1), and p-interferon regulatory factor 3 (p-IRF3) in bilateral common carotid artery occlusion (2VO) Rats. 
The expression of STING, p-TBK1, and p-IRF3 was decreased in 2VO rats treated with resveratrol (RES). 2VO promotes the nuclear translocation of IRF3. RES treatment counteracted the nuclear translocation of IRF3. (A) Western blot and quantitative analysis of STING, TBK1, IRF3, p-TBK1, and p-IRF3 at weeks 4 and 8 in hippocampus and white matter extract after 2VO. β-actin was used as an internal control (n = 3 per group). (B) Western blot analysis of nucleus and cytosolic IRF3 in white matter and hippocampus in different groups after 2VO. β-actin was used as an internal control (n = 3 per group). (C) mRNA levels of STING, TBK1, and IRF3 in white matter were detected by qRT-PCR. GAPDH was used as an internal control (n = 4 per group). *p < 0.05 vs. Sham; & p < 0.05, 2VO 8w group vs. 2VO 4w group; # p < 0.05, 2VO + RES 8w group vs. 2VO + RES 4w group; $ p < 0.05, 2VO + RES 4w group vs. 2VO 4w group; p < 0.05, 2VO + RES 8w group vs. 2VO 8w group. Data are presented as mean ± SD. STING/TBK1/IRF3 signaling; (3) Enhanced STING expression, phosphorylation of TBK1 and IRF3 contributes to the progressive cognitive decline and white matter damage observed in CCH over time; (4) Activation of STING/TBK1/IRF3 signaling and subsequent neuroinflammatory processes are markedly attenuated in RES-treated rat after CCH; and (5) RES reduces the number of activated microglia, astrocyte and alleviates myelin loss after CCH. Neuroinflammation is known to be a key instigator of detrimental injury after VaD. The identification of key molecules that regulate neuroinflammatory processes could pinpoint a potential therapeutic target that could result in improved patient outcomes following VaD (O'Brien and Thomas, 2015;Lecordier et al., 2021). STING was initially characterized as a sensor of cytosolic DNA that promotes survival after infection (Moretti et al., 2017). Recent findings suggest that reduced STING/IRF3 signaling is associated with an attenuated neuroinflammatory response and subsequent neuroprotection in neurodegeneration (Quek et al., 2017;Hou et al., 2021), acute cerebral ischemia/reperfusion (I/R) injury (Li Q. et al., 2020;Liao et al., 2020) and TBI (Barrett et al., 2020). We provide the first demonstration that the STING/TBK1/IRF3 axis is robustly activated in a CCH rat model. Expression of STING, TBK1, and IRF3 phosphorylation, as well as the nuclear translocation of IRF3, were significantly elevated in the hippocampus and white matter of rats with CCH, and this elevation was paraleled by upregulated levels of key proinflammatory cytokines and increased neutrophil infiltration. Here, we found that STING, p-TBK1, and p-IRF3 were widely expressed in neurons, microglia, and astrocytes in brain tissues after CCH, corroborating previous findings that the STING pathway is activated in microglia and leads to IRinduced neuroinflammation and brain injury (Liao et al., 2020). STING also drives astrocyte and microglial reactivity in the TBI model (Barrett et al., 2020). STING/TBK1/IRF3 axis is found to be activated in neurons, which induces inflammatory cytokine production and leads to neuronal death in a rat model of ataxia-telangiectasia (Quek et al., 2017). Furthermore, we provide the first confirmation that STING/TBK1/IRF3 axis is possibly associated with deteriorating neuronal death, gliosis, white matter damage, and impairments in learning and memory after CCH in a time-dependent manner. 
Therefore, we infer from these findings that STING/TBK1/IRF3 axis contributes to the detrimental neuroinflammatory environment in a CCH animal model of VaD. However, a study has suggested that stimulation of the STING/IRF3 pathway induces a reduction in neuroinflammation in a transgenic mouse model of Alzheimer's disease . Another study showed that STING-KO mice display exacerbated endogenous retrovirus activation along with pronounced hippocampal neuron loss and gliosis (Sankowski et al., 2019). These discrepancies may be due to the use of different animal species and disease models and to variations in the sampled parts and time points. The biological effect of STING signaling exhibits considerable heterogeneity, varies according to the acute/chronic course, and shows low/high intensity. Thus, more detailed studies of the molecular mechanism underlying STING signaling and its regulators are needed. The mammalian IRFs family includes transcription factors that induce type I IFNs (IRF3, IRF7), propagate type I IFN responses (IRF1, IRF4, and IRF5), and play a central role in innate immunity (Günthner and Anders, 2013). IRF-1 resists cognitive decline under normal conditions, but no obvious effect on cognition was observed in a bilateral common carotid artery stenosis mouse model (Mogi et al., 2018). The transcription factor IRF3 is constitutively expressed in all cell types and is essential for the induction of the IFN genes, IFN-α and IFN-β (Akira et al., 2006). A previous study found that mice lacking IRF3 exhibit protection after acute cerebral I/R injury (Marsh et al., 2009). However, a subsequent study showed that both the mRNA and protein levels of IRF3 are significantly Frontiers in Aging Neuroscience 13 frontiersin.org increased in rats after transient global cerebral ischemia injury (Cui et al., 2013). Similar to the above-mentioned study, IRF3 ablation in rats decreased cerebral I/R-induced inflammation and the expression of some proinflammatory genes, such as IL-1β, TNF-α, and ICAM-1 . Therefore, IRF3 may exhibit functions during ischemic stroke in rats that is different from those found in mice. Consistent with these studies, this study revealed that IRF3 was activated in a 2VO rat model, accompanied by elevated expression of proinflammatory genes, such as IFN-β and IL-1β. Based on our immunofluorescence results, phospho-IRF3 was widely expressed in neurons, microglia, and astrocytes in brain tissues, and after injury, IRF3 was transported to the nucleus, in accordance with previous studies (Li et al., 2019). Our previous study validates that ER stress is one of the major contributors to secondary injuries during hypoperfusion (Niu et al., 2019). The present results revealed that PKR-like ER kinase (PERK) is induced in neurons, microglia and astrocytes, in agreement with former studies (Guthrie et al., 2016;Liao et al., 2016). We identified that an increase in PERK phosphorylation in the hippocampus and white matter initiates ER-stress in rats after 2VO surgery. We demonstrated that ER stress induces tissue damage in hippocampus and white matter regions, STING is abundantly present in the same areas, and PERK protein expression was correlated with STING protein expression, which implies that ER stress might modulate the initiation of STING after CCH. Nevertheless, we are aware that the STING signaling pathway may be triggered by damage-associated molecular patterns other than ER stress after CCH. The generation of 2 3 -cGAMP demonstrates the engagement of cGAS in this CCH model. 
Thus, we cannot exclude other potential activators of STING, in particular cytosolic double-stranded DNA, which was recognized by cGAS. Microglial and astrocyte reactivity is a generally acknowledged neuroinflammatory feature after CCH (Hase et al., 2018). Aberrant M1-type microglial activation participates in white matter injury after cerebral hypoperfusion (Zhang L. Y. et al., 2020), and M2-type microglia exhibit increased phagocytosis of myelin debris and secretion of trophic factors that stimulate oligodendrocytes to facilitate remyelination in white matter (Dudvarski Stankovic et al., 2016;Lee et al., 2019). In this study, we found that increased expression of Iba-1 positive cell staining and an altered microglial phenotype after CCH was correlated with activation of the STING cascade. We also detected elevated GFAP expression 4 weeks after CCH ( Figure 5H). White matter damage was consistent with the expression of microglia and astrocytes. RES-treated rats displayed diminished levels of microglial activation and astrocyte hypertrophy. In addition, RES-treated rats showed upregulation of CD206 mRNA and downregulation of CD16 mRNA, which suggested that RES intervention potentially promotes microglial polarization toward the M2 phenotype. Reductions in glial reactivity and white matter damage after CCH may contribute to the neuroprotective effects observed in the RES group, and these effects may be related to the inhibition of STING/TBK1/IRF3 signaling. Collectively, the results indicate that STING/TBK1/IRF3 signaling contributes to neuroinflammation partly by driving astrocyte and microglial reactivity. RES suppressed inflammation following 2VO by regulating microglial polarization and reduction in the release of inflammatory mediators, partly through inhibition of the STING/TBK1/IRF3 axis. These effects were consistent with those detailed in a recent review, which described that RES might mediate the regulation of microglial phenotypes and functions to control neuroinflammation in neural degenerative diseases and TBI (Maurya et al., 2021). A report also illustrated that RES enhances anti-inflammatory and decreases inflammatory cytokines by affecting the signaling pathways in microglia and astrocytes, and increases oligodendrocyte survival in the stroke model (Ghazavi et al., 2020). However, a recent study showed that STING activation reduces microglial reactivity and confers protection in multiple sclerosis animal models (Mathur et al., 2017). This dual function of STING in regulating microglial reactivity may be attributed to the different disease models used in the studies. Chronic inflammation is one of the major factors in the pathogenesis of VaD (Sinha et al., 2020). In CCH, reactive microglia and astrocytes release proinflammatory cytokines, chemokines, and inflammatory mediators, which results in neurotoxicity (Rosenberg et al., 2014). Neuroinflammation contributes to white matter lesions and neuronal loss and thereby results in compromised learning and memory dysfunction (Farkas et al., 2007). Inflammatory chemokines or cytokines stimulate the expression of adhesion molecules, which leads to the extravasation of neutrophils into the brain parenchyma (Konsman et al., 2007;Kreuger and Phillipson, 2016). During CCH, TNF-α, IL-1β, IL-6, VCAM-1, and ICAM-1 are well-established molecules that mediate the initiation of the inflammatory response and appear to exacerbate the hypo-perfused injury. 
In our study, a certain number of factors related to the inflammatory response, such as inflammatory cytokines (IL-1β and TNF-α), chemokines (CXCL10), and inflammatory mediators (ICAM-1, VCAM-1), were found to be partially reduced by RES. This finding is in line with an earlier study, which demonstrated that RES inhibits the expression of inflammation-associated genes, such as IFN-β, by reducing the kinase activity of TBK1, inhibiting IRF3 and NF-κB activation, and blocking the binding of active IRF3 to target gene promoters in RAW264.7 cells (Youn et al., 2005). Chronic cerebral hypoperfusion induces peripheral neutrophil recruitment, which leads to tissue damage and neurological deficits (Wu et al., 2020). The brain level of MPO was measured as a marker of neutrophil infiltration in the CCH rat model (Shin et al., 2008). We found that CCH increased neutrophil infiltration into the brain, as evidenced by increased MPO activity and expression. The level of MPO was in agreement with the results found for the STING/TBK1/IRF3 axis and pro-inflammatory molecules. This finding was in keeping with previous research that IRF3-knockout mice show abrogated neutrophil recruitment and reduced MPO activity in the pancreas (Akbarshahi et al., 2011). Altogether these results support a pathologic role of STING/TBK1/IRF3 cascade on inflammatory conditions. Our results indicate that RES modifies the chronic inflammatory processes of neutrophil recruitment following cerebral hypoperfusion, which agrees with previous reports that RES reduces neutrophil activation in a dose-dependent manner to attenuate acute lung injury (Tsai et al., 2019). Moreover, RES reduces MPO expression and attenuates brain damage in permanent focal cerebral ischemia (Lei and Chen, 2018). Our results indicate that the numbers of neutrophils and microglia were reduced and the activation of microglia was attenuated by RES treatment (Figure 7). RES can efficiently inhibit the migration of both resident microglia and peripheral neutrophils toward damaged tissue and the proliferation of microglia. There have been several studies that reported that RES has an ameliorative effect on impairments in spatial learning and memory and CNS pathology associated with CCH. Anastácio et al. (2014) found that RES (20 mg/kg, intraperitoneal) treatment to Wistar Rats after modified 2VO protocol significantly attenuated pyramidal cell death in the hippocampus CA1 region, along with maintaining the expression of nerve growth factor (NGF) until 45 days after surgery. Another study has proposed that four weeks of pretreatment with RES (40 mg/kg, intraperitoneal) alleviates LTP inhibition and dendritic spine loss, as well as the decreased expression of synaptic proteins, PSD95, NMDA receptor 2A/B (NR2A/B), and PSD93 induced by CCH. Four weeks after 2VO surgery, RES also increased the activity of protein kinase A (PKA), cAMP levels, and the phosphorylation of cAMP-responsive element-binding (CREB) protein, a critical transcriptional factor in the memory process (Li H. et al., 2016). A recent study indicates, that after RES (50 mg/kg, intragastrical) treatment for 21 consecutive days, reduced pathological damage and brain damage markers S-100β and NSE in the frontal cortex and hippocampal CA1 area were detected in Rats after 3, 6, and 9 weeks of CCH. RES activates autophagy, probably through down-regulation of the PI3K/AKT/mechanistic target of the rapamycin (mTOR) pathway. 
RES inhibited neuronal apoptosis detected by TUNEL staining and protein expression of Bcl-2, Bax, and cleaved-caspase3. Reduced levels of oxidative stress factors MDA, SOD, and GSH were observed in RES administered CCH group (Wang et al., 2019). It is shown in these previous studies that multiple molecular mechanisms are involved in the neuroprotection afforded by RES after CCH, namely activating autophagy, anti-oxidation, anti-apoptosis, and reversing the synaptic plasticity deficits; however, other pathways might also take part in it. Despite differences in drug-dose, drug-administration approach, deliver duration, and observation time points, our observations coincide with previous findings that RES restored cognitive deficits and reduced hippocampal neuronal cell death. We further extend the possible mechanism of RES on neuroinflammation in CCH rats. We reported here that RES promotes the myelin integrity, reduced excess expression of reactive astrocytes and activated microglia, and inhibited the levels of myeloperoxidase and neuroinflammatory mediators, possibly through mitigation of the STING/TBK1/IRF3 pathway. This study has several limitations. First, we find that hypoperfusion-induced ER stress is responsible for the activation of the STING pathway, but the exact molecules or mechanism of action that might be involved in STING activation in the 2VO model warrant further investigation. Given the various biological effects of STING, it will be important to find the optimal therapeutic levels to activate the STING pathway in a beneficial way. Elevation of STING axis expression was observed in neurons, microglia, and astrocytes, while their interplay is not clear. Further studies regarding the effects of STING/TBK1/IRF3 axis on CCH should be performed with a larger number of animals and other higher-order species. Second, although ER stress is proved in our CCH model, the cells which are primarily responsible for inducing ER-stress remain elusive. And the interaction between ER stress and the STING/TBK1/IRF3 axis warrants further study. Third, given the variety of subtypes and morphologies, the role of astrocytes and microglia in VaD warrants a more detailed investigation. Last, based on the above findings, the neuroprotective efficacy of RES was at least partly linked to STING/TBK1/IRF3 signaling inhibition, but some studies indicate that RES might target TBK1 to exert an anti-inflammatory effect. A previous study showed that RES enhances anti-inflammatory activity, decreases inflammatory cytokine levels by affecting microglial and astrocyte signaling pathways, and increases oligodendrocyte survival in stroke models. Based on well-documented evidence, RES exerts potential protective effects against extended and unrestricted ER stress (Ahmadi et al., 2021). Thus, RES may affect multiple targets, and the precise target and mechanism of RES during CCH merits further exploration. In summary, CCH induces a robust neuroinflammatory response that is associated with increased expression of the STING cascade. RES reduces neuroinflammation, resulting in cognitive function recovery. STING pathway might participate in the neuroprotective effect of RES in CCH of Rats. These studies suggest that therapeutic modulation of the STING/TBK1/IRF3 pathway may limit persistent neuroinflammation and the development of cognitive impairment following CCH. Data availability statement The datasets presented in this study can be found in online repositories. 
The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material. Ethics statement The animal study was reviewed and approved by Animal Ethical Committee of Hebei General Hospital. Author contributions PL, NK, and WJ contributed to conception and design of the study. NK, JS, YS, FG, YG, and MF performed the experiments. NK and FG performed the statistical analysis. NK wrote the manuscript. PL revised the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version. Funding This project was supported by the 2020 Hebei Government Top Talent Funding Project (Grant No. 83587216) and 2020 Government-funded Program for Recruit Talents (Grant No. 2020-19-2).
2022-07-22T13:45:12.571Z
2022-07-22T00:00:00.000
{ "year": 2022, "sha1": "18c00a57357a9eb198ca758615ea68f07f28417d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "18c00a57357a9eb198ca758615ea68f07f28417d", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
208526880
pes2o/s2orc
v3-fos-license
Dynamic Energy-Efficient Power Allocation in Multibeam Satellite Systems Power consumption is a major limitation in the downlink of multibeam satellite systems, since it has a significant impact on the mass and lifetime of the satellite. In this context, we study a new energy-aware power allocation problem that aims to jointly minimize the unmet system capacity (USC) and total radiated power by means of multi-objective optimization. First, we transform the original nonconvex-nondifferentiable problem into an equivalent nonconvex-differentiable form by introducing auxiliary variables. Subsequently, we design a successive convex approximation (SCA) algorithm in order to attain a stationary point with reasonable complexity. Due to its fast convergence, this algorithm is suitable for dynamic resource allocation in emerging on-board processing technologies. In addition, we formally prove a new result about the complexity of the SCA method, in the general case, that complements the existing literature where the complexity of this method is only numerically analyzed. Dynamic Energy-Efficient Power Allocation in Multibeam Satellite Systems Christos N. Efrem, and Athanasios D. Panagopoulos, Senior Member, IEEE Abstract-Power consumption is a major limitation in the downlink of multibeam satellite systems, since it has a significant impact on the mass and lifetime of the satellite. In this context, we study a new energy-aware power allocation problem that aims to jointly minimize the unmet system capacity (USC) and total radiated power by means of multi-objective optimization. First, we transform the original nonconvex-nondifferentiable problem into an equivalent nonconvex-differentiable form by introducing auxiliary variables. Subsequently, we design a successive convex approximation (SCA) algorithm in order to attain a stationary point with reasonable complexity. Due to its fast convergence, this algorithm is suitable for dynamic resource allocation in emerging on-board processing technologies. In addition, we formally prove a new result about the complexity of the SCA method, in the general case, that complements the existing literature where the complexity of this method is only numerically analyzed. Index Terms-Satellite communications, unmet system capacity, power consumption, resource allocation, multi-objective optimization, successive convex approximation, complexity analysis. I. INTRODUCTION M ULTIBEAM satellite systems (MSS) provide flexibility and efficient exploitation of the available resources in order to satisfy the (potentially asymmetric) traffic demand of users. Due to the fact that the satellite power is quite limited, resource allocation mechanisms should take into consideration not only the co-channel interference (CCI), but also the satellite power consumption in the downlink transmission. The joint problem of routing and power allocation in MSS is examined in [1], using Lyapunov stability theory. Moreover, the studies [2] and [3] deal with several resource allocation problems in MSS with and without CCI, respectively. In [4], a dynamic power allocation algorithm is proposed exploiting a rain attenuation stochastic model. A comparison between non-orthogonal frequency reuse (NOFR) and beam-hopping (BH) systems is presented in [5], where various capacity optimization schemes are reported. Furthermore, linear and nonlinear precoding techniques are investigated in [6] and [7]. 
Unlike previous works, a multi-objective approach that minimizes the USC together with the satellite power consumption is presented in [8]. In particular, a two-stage optimization is proposed to attain a set of Pareto optimal solutions using metaheuristics. However, these algorithms do not provide any optimality guarantee, and their performance is heavily affected by the optimization parameters. Besides, although this method is suitable for offline power allocation, it is rather inappropriate for online/real-time power allocation since it requires a lot of computation time to find nearly-optimal solutions. In this letter, we introduce a new performance metric, which has not been systematically studied so far, including both the USC and total power consumption. This is in contrast to the majority of recent studies that solely minimize either the former or the latter objective. Moreover, we develop an optimization algorithm which always converges and, assuming appropriate constraint qualifications, achieves a stationary point (first-order optimality guarantee) with relatively low complexity. In addition, numerical results show that the algorithm performance is almost independent of the initialization point. Consequently, the proposed algorithm can be used in dynamic wireless environments where the resource allocation should be decided in a very short time. Finally, a formal proof about the complexity of the SCA method is also given. The rest of this study is organized as follows. In Section II, the optimization problem is formulated and then transformed into an equivalent differentiable form. Afterwards, based on the SCA method, we design an energy-efficient power allocation algorithm in Section III. The performance of this algorithm is analyzed through simulations in Section IV, and some conclusions are provided in Section V. II. PROBLEM FORMULATION AND TRANSFORMATION Consider a multibeam satellite system with a geostationary satellite using N beams (N = {1, 2, . . . , N}) and K subcarriers (SCs) of bandwidth B_SC (K = {1, 2, . . . , K}). For notation simplicity and without loss of generality, it is assumed that: 1) the total bandwidth, B_tot = K B_SC, is reused by all beams, i.e., the frequency reuse factor is equal to 1 (worst-case scenario), and 2) during a specific time slot, each beam serves only one user within its coverage area (user i is served by the i-th satellite beam, ∀i ∈ N). Moreover, we focus on the downlink (data transmission from the satellite to users) considering ideal, without noise and interference, feeder links between the gateways and the satellite. The signal to interference-and-noise ratio (SINR) of the i-th user (i ∈ N) on the k-th SC (k ∈ K) is expressed by γ_i^[k](p) = g_{i,i}^[k] p_i^[k] / (σ_{i,k}^2 + Σ_{j∈N\{i}} g_{j,i}^[k] p_j^[k]), where p_j^[k] is the transmit power of the j-th satellite beam, σ_{i,k}^2 is the thermal noise power of the i-th user, and g_{j,i}^[k] is the channel power gain between the j-th satellite beam and the i-th user, all over the k-th SC. More precisely, g_{j,i}^[k] includes free-space path loss (FSPL), rain attenuation, transmit antenna gain of the satellite beam as well as receive antenna gain of the user. For the sake of convenience, the transmit power vector is denoted by p = [p^[1], p^[2], . . . , p^[K]], where p^[k] = [p_1^[k], p_2^[k], . . . , p_N^[k]]. In addition, the USC [9] is defined by USC(p) = Σ_{i∈N} max(C_i^req − C_i(p), 0), where C_i^req and C_i(p) = B_SC Σ_{k∈K} log2(1 + γ_i^[k](p)) are the i-th user's requested and offered capacity (in bps), respectively^1. Moreover, the total radiated power is given by P_tot(p) = Σ_{i∈N} Σ_{k∈K} p_i^[k]. Focusing on the multi-objective optimization, we study the following nonconvex minimization problem: min_{p∈Z} USC(p) + w P_tot(p), (3) with feasible set Z = {p ∈ R_+^{NK} : Σ_{k∈K} p_i^[k] ≤ P_i^max, ∀i ∈ N, and P_tot(p) ≤ P_tot^max}, where P_i^max is the maximum transmit power of the i-th satellite beam, and P_tot^max is the maximum total radiated power of the satellite^2. The fixed/predefined weight w ∈ [0, +∞) is measured in bps/W, and expresses the priority of the total radiated power with respect to USC. Consequently, a trade-off between the USC and total power consumption (which is proportional to the total radiated power) can be achieved for a specific value of w. In particular, w = 0 corresponds to USC minimization. Moreover, it can be proved that problem (3) is NP-hard by following similar arguments as in [8]. Nevertheless, as will be seen later, we can obtain a stationary point of the equivalent differentiable problem with reasonable complexity. Afterwards, by applying the transformation p = 2^y (i.e., p_i^[k] = 2^{y_i^[k]}, ∀i ∈ N, ∀k ∈ K), where y = [y^[1], y^[2], . . . , y^[K]], we obtain the equivalent nonconvex problem: min_{y∈S} USC(2^y) + w P_tot(2^y), (4) with convex feasible set S = {y ∈ R^{NK} : Σ_{k∈K} 2^{y_i^[k]} ≤ P_i^max, ∀i ∈ N, and Σ_{i∈N} Σ_{k∈K} 2^{y_i^[k]} ≤ P_tot^max}. Notice that the above transformation reduces the number of constraints by NK (lower complexity), since p ∈ R_+^{NK} becomes y ∈ R^{NK}. ^1 In case of adaptive coding and modulation (ACM), the offered capacity can be approximated by a fitted function of the SINR without altering the methodology, where ζ ∈ (0, 1) is obtained through curve fitting (offered capacity versus SINR). ^2 It is possible to have additional minimum-capacity constraints for each user (C_i(p) ≥ C_i^min, ∀i ∈ N) in order to increase the system availability (the methodology remains the same). Algorithm 1. Energy-Efficient Power Allocation 1: Select a starting point p ∈ Z, and a tolerance ε > 0 2: Set ℓ = 0, y = log2(p), t_i = max(C_i^req − C_i(p), 0), ∀i ∈ N, and F_0 = F(y, t) 3: repeat 4: Solve the convex minimization problem (8) with approximation point ȳ = y in order to achieve a global optimum (y*, t*) 5: Set ℓ = ℓ + 1, y = y*, t = t*, p = 2^y, and F_ℓ = F(y, t) 6: until |F_ℓ − F_{ℓ−1}| ≤ ε |F_{ℓ−1}|
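Before moving on to the transformation and the surrogate problem, the quantities defined in this section (SINR, offered capacity, USC, total radiated power, and the weighted objective of problem (3)) can be evaluated numerically. The sketch below mirrors those expressions; the array shapes, the random test data, and the uniform power split used for the demonstration are assumptions made for illustration, not the authors' implementation.

```python
# Numerical sketch of the system metrics used in problem (3).
import numpy as np

def sinr(p, g, noise):
    """SINR of every user on every subcarrier.

    p:     (N, K) transmit powers p_i^[k]
    g:     (N, N, K) channel power gains g[j, i, k] from beam j to user i
    noise: (N, K) thermal noise powers sigma_{i,k}^2
    """
    rx = np.einsum("jik,jk->ik", g, p)          # total received power per user/SC
    signal = np.einsum("iik->ik", g) * p         # useful part g_{i,i}^[k] p_i^[k]
    interference = rx - signal
    return signal / (noise + interference)

def offered_capacity(p, g, noise, b_sc):
    """Offered capacity in bps: C_i(p) = B_SC * sum_k log2(1 + SINR_i^[k])."""
    return b_sc * np.log2(1.0 + sinr(p, g, noise)).sum(axis=1)

def usc(p, g, noise, b_sc, c_req):
    """Unmet system capacity: sum_i max(C_i^req - C_i(p), 0)."""
    return np.maximum(c_req - offered_capacity(p, g, noise, b_sc), 0.0).sum()

def objective(p, g, noise, b_sc, c_req, w):
    """Weighted objective of problem (3): USC(p) + w * P_tot(p)."""
    return usc(p, g, noise, b_sc, c_req) + w * p.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N, K, b_sc = 4, 3, 50e6                      # beams, subcarriers, SC bandwidth (Hz)
    g = rng.uniform(1e-13, 1e-12, size=(N, N, K))
    g[np.arange(N), np.arange(N), :] *= 50.0     # make the direct links dominant
    noise = np.full((N, K), 1e-13)
    c_req = rng.uniform(100e6, 500e6, size=N)    # requested capacities (bps)
    p_upa = np.full((N, K), 100.0 / (N * K))     # uniform split of a 100-W budget
    print(f"USC       = {usc(p_upa, g, noise, b_sc, c_req) / 1e6:.1f} Mbps")
    print(f"objective = {objective(p_upa, g, noise, b_sc, c_req, w=1e6) / 1e6:.1f} Mbps")
```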
Moreover, the total radiated power is given by: Focusing on the multi-objective optimization, we study the following nonconvex minimization problem: is the maximum transmit power of the i th satellite beam, and P max tot is the maximum total radiated power of the satellite 2 . The fixed/predefined weight w ∈ [0, +∞) is measured in bps/W, and expresses the priority of the total radiated power with respect to USC. Consequently, a trade-off between the USC and total power consumption (which is proportional to the total radiated power) can be achieved for a specific value of w. In particular, w = 0 corresponds to USC minimization. Moreover, it can be proved that problem (3) is NP-hard by following similar arguments as in [8]. Nevertheless, as will be seen later, we can obtain a stationary point of the equivalent differentiable problem with reasonable complexity. Afterwards, by applying the transformation p = 2 y (p , where y = y [1] , y [2] , . . . , N , ∀k ∈ K, we obtain the equivalent nonconvex problem: with convex feasible set S = {y ∈ R N K : i P max tot }. Notice that the above transformation reduces the number of constraints by N K (lower complexity), since p ∈ R N K + becomes y ∈ R N K . Finally, in order to remove the non-differentiability of the objective function, we rewrite problem (4) in its 1 In case of adaptive coding and modulation (ACM), the offered capacity can be approximated by without altering the methodology, where ζ ∈ (0, 1) is obtained through curve fitting (offered capacity versus SINR). 2 It is possible to have additional minimum-capacity constraints for each user (C i (p) C min i , ∀i ∈ N ) in order to increase the system availability (the methodology remains the same). Algorithm 1. Energy-Efficient Power Allocation 1: Select a starting point p ∈ Z, and a tolerance > 0 2: Set = 0, y = log 2 (p), t i = max C req i − C i (p), 0 , ∀i ∈ N and F 0 = F (y, t) 3: repeat 4: Solve the convex minimization problem (8) with approximation pointȳ = y in order to achieve a global optimum (y * , t * ) 5: Set = + 1, y = y * , t = t * , p = 2 y and F = F (y, t) 6: until |F − F −1 | |F −1 | equivalent epigraph-form [10] using the auxiliary variable t = [t 1 , t 2 , . . . , t N ]: , ∀i ∈ N and y ∈ S}. Observe that the new objective F (y, t) is convex now, and the first two constraints in Ω are equivalent to t i max (C req i − C i (2 y ), 0), ∀i ∈ N . Furthermore, problem (5) is equivalent to problem (4) in the following sense: (y, t) is a global optimum of (5) if and only if y is a global optimum of (4) and III. ENERGY-EFFICIENT POWER ALLOCATION Subsequently, we utilize the mathematical tool of SCA (refer to the Appendix) in order to tackle problem (5) with relatively low complexity. Firstly, the offered capacity can be written as follows: [k] are convex functions given by (note that the log-sum-exp function is convex [10]): Now, for a given approximation pointȳ ∈ R N K , we can construct the next convex minimization problem: ,ȳ), ∀i ∈ N and y ∈ S}, where: l,i 2ȳ j,i 2ȳ Algorithm 1 presents an iterative process based on the SCA method. In particular, we provide the next proposition which readily follows from Theorems 1 and 2 in the Appendix. Note that the number of variables and constraints of problem (8) is polynomial in N and K (N K + N and 3N + 1, respectively). IV. NUMERICAL SIMULATIONS AND DISCUSSION In this section, we examine a MSS with the parameters given in Table I. 
IV. NUMERICAL SIMULATIONS AND DISCUSSION In this section, we examine an MSS with the parameters given in Table I. Unless otherwise specified, the tolerance and the starting point of Algorithm 1 are selected as ε = 10^−3 and p = (P_tot^max/(NK)) 1_{1×NK}, where 1_{1×NK} is the all-ones 1 × NK vector. Regarding the requested capacities of the users, we have assumed an asymmetric traffic distribution according to the linear model C_i^req = r · i, ∀i ∈ N, where r is the traffic slope measured in bps. Furthermore, each satellite beam antenna has the following radiation pattern [6], [8]: G(θ) = G_max (J_1(u)/(2u) + 36 J_3(u)/u^3)^2, where θ is the angle between the corresponding beam center and the user location with respect to the satellite, G_max is the maximum satellite beam antenna gain (G(0) = G_max), u = 2.07123 sin(θ)/sin(θ_3dB) with θ_3dB the 3-dB angle (G(θ_3dB) = G_max/2), and J_1(u), J_3(u) are respectively the first- and third-order Bessel functions of the first kind. All graphs, except for Fig. 3, present statistical averages derived from 200 independent Monte Carlo simulations, where each user is uniformly distributed within its beam coverage area. For the sake of comparison, we have used a conventional scheme, namely, uniform power allocation (UPA), where p_{i,UPA}^[k] = P_tot^max/(NK), ∀i ∈ N, k ∈ K. Firstly, we investigate the convergence speed of the proposed algorithm for w = 0, 10 Mbps/W and different starting points. As shown in Fig. 1, Algorithm 1 achieves nearly the same convergence rate and final objective value regardless of the starting point. Given the tolerance ε = 10^−3, the proposed algorithm requires about 10 iterations to converge for both values of w and for all the starting points under consideration. Secondly, Fig. 2 illustrates the USC and total radiated power achieved by the conventional scheme and Algorithm 1 (for two different weights) versus the traffic slope. Although the UPA scheme makes full use of the available power, it has the highest USC. On the other hand, for w = 0 (USC minimization) we obtain the lowest USC using less power than UPA. In addition, the proposed scheme with w = 10 Mbps/W achieves a USC that lies between the other two schemes, but with much less power (high energy savings). This is expected because higher priority is given to the total radiated power as the weight w increases. Last but not least, Fig. 3 compares the performance of the proposed method with the two-stage approach [8]. In particular, the 5 operating points attained by the proposed approach belong to the Pareto boundary obtained from [8]. It has been observed that many values of w achieve operating points on the Pareto boundary, but we only present 5 points for better illustration. Therefore, the proposed method shows performance similar to [8]. Note that in multi-objective optimization, there is no objectively optimal solution, but only Pareto/subjectively optimal solutions. In summary, [8] presents an a posteriori method where the network designer selects an operating point after the computation/visualization of the Pareto boundary, while this letter introduces an a priori method where the weight w is specified before any computation, and then a single solution is obtained. Finally, we would like to emphasize that the former approach is appropriate for offline power allocation (no strict limitations on processing time), whereas the latter approach is suitable for online/dynamic power allocation due to its rapid convergence.
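The tapered beam pattern above is straightforward to evaluate with SciPy's Bessel functions. In the sketch below, G_max, θ_3dB, and the angle grid are illustrative values rather than the entries of Table I; checking the -3 dB point is a convenient way to validate the implementation of the pattern.

```python
# Numerical evaluation of the beam radiation pattern used in the simulation
# setup (illustrative parameter values only).
import numpy as np
from scipy.special import j1, jv

def beam_gain(theta_rad, theta_3db_rad, g_max_db):
    """Beam gain G(theta) in linear scale, with G(0) = G_max."""
    g_max = 10.0 ** (g_max_db / 10.0)
    u = 2.07123 * np.sin(theta_rad) / np.sin(theta_3db_rad)
    u = np.where(np.abs(u) < 1e-9, 1e-9, u)   # the bracketed term tends to 1 as u -> 0
    return g_max * (j1(u) / (2.0 * u) + 36.0 * jv(3, u) / u ** 3) ** 2

theta_deg = np.linspace(0.0, 1.2, 481)        # off-boresight angles in degrees
theta_3db_deg = 0.4
gain = beam_gain(np.radians(theta_deg), np.radians(theta_3db_deg), g_max_db=52.0)
idx_3db = np.argmin(np.abs(theta_deg - theta_3db_deg))
print(f"roll-off at theta_3dB: {10 * np.log10(gain[idx_3db] / gain[0]):.2f} dB "
      f"(expected about -3 dB)")
```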
V. CONCLUSION In this letter, we have designed an SCA-based optimization algorithm with high convergence speed, which is suitable for real-time power allocation in MSS with strict computation/processing-time requirements. The proposed multi-objective approach enables network designers to achieve a compromise between the USC and total power consumption. Numerical simulations have also verified the advantage of this approach. Moreover, the complexity of the SCA method, in its general form, has been studied theoretically. APPENDIX SUCCESSIVE CONVEX APPROXIMATION METHOD SCA is an iterative method that attains a stationary point of a nonconvex optimization problem by solving a sequence of convex problems [11]. Despite the fact that the achieved solution may or may not be globally optimal, this technique has reasonable computational complexity. More specifically, the following theorem is provided, where all the functions are assumed to be differentiable (and therefore continuous). Theorem 1 ([11]). Let P be a nonconvex minimization problem with objective ψ_0(x) and a nonempty, compact feasible set, where the v_i(x) are convex functions. Let {P_j}_{j≥1} be a sequence of convex minimization problems with objective ψ_{0,j}(x, x*_{j−1}) and compact feasible set, for all the accumulation/limit points x̄ of the sequence {x*_j}_{j≥0}, and (c) assuming suitable constraint qualifications, all the accumulation points x̄ are stationary points of P (i.e., they satisfy the corresponding Karush-Kuhn-Tucker conditions), and L = lim_{j→∞} ψ_0(x*_j) = ψ_0(x̄), where x̄ is some stationary point of P. Taking advantage of the fact that SCA generates a monotonically decreasing sequence of objective values, and using the property of telescoping sums, Σ_{l=1}^{M} (a_{l−1} − a_l) = a_0 − a_M for any integer M ≥ 1, we introduce and prove the following result concerning the complexity of the SCA method.
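To see how the monotonic decrease and the telescoping identity combine into an iteration bound, one standard argument runs as follows. This is only a generic sketch: the symbol D labels the compact feasible set of P and ψ_0^min its minimum value, both introduced here for the illustration, and the derivation is not a restatement of the letter's own complexity result.

```latex
% Generic telescoping-sum argument (illustration only).
\begin{align*}
&\text{Since } \psi_0(\mathbf{x}^*_j) \text{ is non-increasing and bounded below by }
 \psi_0^{\min} := \min_{\mathbf{x}\in\mathcal{D}} \psi_0(\mathbf{x}),\\
&\sum_{l=1}^{M}\bigl(\psi_0(\mathbf{x}^*_{l-1}) - \psi_0(\mathbf{x}^*_l)\bigr)
   = \psi_0(\mathbf{x}^*_0) - \psi_0(\mathbf{x}^*_M)
   \le \psi_0(\mathbf{x}^*_0) - \psi_0^{\min}, \quad \forall M \ge 1,\\
&\Longrightarrow\;
 \min_{1 \le l \le M}\bigl(\psi_0(\mathbf{x}^*_{l-1}) - \psi_0(\mathbf{x}^*_l)\bigr)
 \le \frac{\psi_0(\mathbf{x}^*_0) - \psi_0^{\min}}{M}.
\end{align*}
% Hence, after at most ceil((psi_0(x_0^*) - psi_0^min)/eps) iterations the
% per-iteration decrease of the objective falls below any prescribed eps > 0,
% i.e., an O(1/eps) iteration bound for this notion of accuracy.
```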
2019-10-31T09:09:17.658Z
2019-12-02T00:00:00.000
{ "year": 2019, "sha1": "41402a075d2c89545c6d7a7c50529980e5c4df66", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1912.00920", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "5d727068e6fff8f08ae21fb7664973f0e193f721", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Engineering", "Mathematics" ] }
237397999
pes2o/s2orc
v3-fos-license
Downregulation of SHCBP1 Inhibits Proliferation, Migration, and Invasion in Human Nasopharyngeal Carcinoma Cells Background. SHC SH2 domain-binding protein 1 (SHCBP1), one of the members of the Src homolog and collagen homolog (Shc) family, has been reported to be overexpressed in several malignant cancers and involved in tumor progression. However, the expression of SHCBP1 in nasopharyngeal carcinoma (NPC) remains unclear, and its clinical significance remains to be further elucidated. Methods. The expression of SHCBP1 mRNA in 35 paired samples of NPC and adjacent normal tissues was detected by RT-qPCR. The expression levels of SHCBP1 protein and mRNA in the selected cells were detected by western blot and RT-qPCR, respectively. The effects of SHCBP1 on NPC in vitro were observed by the MTT method, colony formation assay, apoptosis assay, cell cycle assay, wound healing assay, transwell migration assay, and transwell invasion assay. Results. SHCBP1 was highly expressed in clinical tissues and NPC cell lines, and SHCBP1 knockdown significantly inhibited NPC cell proliferation. Overexpression of SHCBP1 promoted NPC cell proliferation, migration, and invasion in NPC cell lines. Silencing SHCBP1 expression delayed cell cycle progression and promoted cell apoptosis. Conclusion. Our results suggest that SHCBP1 may promote the proliferation and metastasis of NPC cells, indicating that SHCBP1 may act as a new indicator for predicting the prognosis of NPC and a new target for clinical treatment. Introduction Nasopharyngeal carcinoma (NPC) is one of the most common head and neck cancers in China. China accounts for 38.29% and 40.14% of the global incidence and mortality of NPC, respectively, with morbidity and mortality rates higher than the world averages (1.2/10^5 and 0.7/10^5). Due to the concealed onset of NPC and its rapid development and high degree of malignancy, most patients are already at an intermediate or advanced stage when first seen. Despite advances in diagnosis and treatment, including chemotherapy, radiation, and surgery, the 5-year relative survival rate for nasopharyngeal cancer is only 43.8% [2]. Therefore, it is urgent to elucidate the molecular mechanisms involved in the progression of NPC and to identify biomarkers for early diagnosis and potential therapeutic targets. The Src homolog and collagen homolog (Shc) gene encodes p46Shc, p52Shc, and p66Shc [3]. Each of them shares a unique, highly conserved PTB-CH1-SH2 domain structure: a phosphotyrosine-binding domain (PTB), a carboxy-terminal Src homology 2 domain (SH2), and a central proline-rich collagen-homologous region (CH1) [4,5]. In addition, p66Shc contains an amino-terminal region (CH2), which plays an important role in oxidative stress and apoptosis [6]. Multiple signaling pathways, such as those of the insulin-like growth factor receptor (IGFR), insulin receptor (IR), fibroblast growth factor receptor (FGFR), and epidermal growth factor receptor (EGFR), can be activated by Shc [7]. Shc plays an important part in regulating oxidative stress and various physiological functions through activation of the PI3K/Akt and Ras-Raf-MAPK signaling pathways [3,8,9]. SHC SH2 domain-binding protein 1 (SHCBP1), mapped to chromosome 16q11.2, is a key partner protein that binds to the SH2 domain of the Shc protein [10]. Previous studies have suggested that SHCBP1 may be involved in the occurrence and development of cancer [11][12][13][14][15]. Feng et al.
[11] found that SHCBP1 was highly expressed in breast cancer and significantly correlated with the proliferation and apoptosis of human malignant breast cancer cell lines, whereas SHCBP1 knockdown restrained the proliferation of breast cancer cells. SHCBP1 was also found to be remarkably upregulated in human hepatocellular carcinoma (HCC) samples, and downregulation of SHCBP1 inhibited the proliferation and colony formation of HCC cells [12]. Dong et al. [13] found that SHCBP1 is overexpressed in gastric cancer (GC) tissues and significantly correlated with proliferation, metastatic potential, and poor prognosis. Meanwhile, several studies also confirmed that SHCBP1 is highly expressed in synovial sarcomas and may play a critical role in cell proliferation, adhesion, migration, and cell cycle progression [14,15]. Liu et al. [16] found that EGF induces the translocation of SHCBP1 into the nucleus, promotes the binding of β-catenin to CBP, and regulates the progression of non-small-cell lung cancer. In addition, studies on the mechanism of SHCBP1 in tumor growth suggested that the upregulation of SHCBP1 may be related to the activation of the TGF-β/Smad and MEK/ERK signaling pathways [12,15]. Together, these findings revealed that SHCBP1, as an important intracellular signaling protein, plays an important role in regulating the cell cycle and promoting cell migration and invasion. However, little is known about the expression and mechanism of SHCBP1 in NPC. In the current study, we observed that SHCBP1 was significantly upregulated in NPC tissues and cell lines. SHCBP1 expression was positively associated with cell proliferation and apoptosis. Knocking down SHCBP1 expression promoted cell apoptosis and significantly inhibited cell proliferation and invasion in vitro. Our study suggests that SHCBP1 is a tumor-promoting factor in NPC and may be a potential biomarker and therapeutic target for NPC. Human NPC cell lines (CNE-2Z and 5-8F) and the normal nasopharyngeal cell line NP69 were purchased from Hunan Fenghui Biological Technology Co. Ltd. (Hunan, China). 5-8F and NP69 cells were cultured in RPMI 1640 (Gibco, USA) containing 10% fetal bovine serum (FBS, Gibco). NP69 cells were maintained in keratinocyte medium supplemented with 100 µg/ml penicillin-streptomycin (Gibco). All cells were cultured in a humidified incubator at 37°C and 5% CO2. Stable Cell Line Construction. A targeted shRNA sequence against SHCBP1 and a negative control RNA sequence were synthesized and inserted into the lentiviral core vector GV115, which expresses a reporter gene and puromycin resistance. The recombinant lentivirus was provided by Shanghai Genochemistry Co., Ltd. (Shanghai, China). The cells were infected with the corresponding lentivirus and, 72 hours later, screened with 1 µg/ml puromycin for 7 days. The expression level of SHCBP1 in the selected cells was determined by RT-qPCR and western blot. MTT Assay. NPC cells were inoculated into 96-well plates at a density of 1 × 10^3 cells/well, and cell viability was detected by the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay (Sigma, St. Louis, USA) for 5 days. The OD was measured at a wavelength of 450 nm with a microplate reader (BioTek, USA). Colony Formation Assay. The same number of NPC cells was seeded into 6-well plates at a density of 500 cells per well. The cells were then cultured for 2-3 weeks, and cell colonies (>50 cells/colony) were counted, stained with crystal violet, and photographed. 2.6. Apoptosis Assay.
NPC cells were stained with Annexin V according to the manufacturer's instructions. Cells stained with Annexin V were considered apoptotic. The stained cells were analyzed on a FACScan flow cytometer (BD Biosciences, USA).

Wound Healing Assay. NPC cells were seeded in 6-well plates at a density of 3 × 10^5 cells/ml. After 24 hours of culture, cells were infected with shCtrl, shSHCBP1, or left untreated as a control. A wound was scratched with a plastic pipette tip. The cells were washed twice with phosphate-buffered saline (PBS) and then incubated in serum-free RPMI 1640 medium (Gibco, USA). Cells migrating into the wounded empty space were photographed at 0 and 24 hours after wounding. The migration rate was calculated as the ratio of the closed wound distance to the original wound area.

Transwell Migration Assay. After 72 h of infection with shCtrl, shSHCBP1, or control, cells were collected and resuspended in serum-free medium. Cells were then added to the upper chamber at a density of 5 × 10^5 cells/ml (200 μl per chamber), and 600 μl of medium containing 10% fetal bovine serum was added to the lower chamber. Cells were allowed to migrate for 24 h in a CO2 incubator at 37°C. The migrating cells on the bottom surface were then fixed with methanol and stained with 0.1% crystal violet, and 5 microscope fields were randomly selected for counting in each well.

Transwell Invasion Assay. An 8.0 μm pore transwell insert (Millipore, Billerica, MA, USA) was precoated with 100 μl of 200 μg/ml Matrigel (BD Biosciences, Franklin Lakes, NJ, USA) and placed in a 24-well plate at room temperature for 60 minutes. After 72 h of infection with shCtrl, shSHCBP1, or control, the cells were collected and resuspended in serum-free medium. Invading cells were counted in 5 randomly selected fields per chamber, and pictures were taken under a light microscope at 200× magnification.

Statistical Analysis. All experiments were conducted independently at least three times. Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) 20.0 (SPSS Inc., Chicago, IL) and Prism 5.0 (GraphPad Software, La Jolla, CA). Student's t-test was used for comparisons between two groups, and one-way ANOVA was used to compare data consisting of more than two groups. The Pearson χ2 test and log-rank test were used to evaluate statistical differences. Data are expressed as mean ± standard deviation (SD). p < 0.05 was considered statistically significant.

Increased Expression of SHCBP1 in NPC Tissues and Cell Lines. To determine the expression pattern of SHCBP1 in NPC tissues, we compared the relative mRNA levels of SHCBP1 in 35 pairs of matched NPC tissue samples using RT-qPCR. Compared with adjacent nontumor tissues, SHCBP1 in tumor tissues was significantly upregulated (Figure 1(a)). In addition, we found that SHCBP1 expression was higher in CNE-2Z and 5-8F cells than in NP69 cells (Figure 1(b)). These results show that the relative expression of SHCBP1 mRNA was significantly increased in NPC tissues and cell lines. For further functional experiments, we selected 5-8F cells, which showed the lower SHCBP1 expression of the two NPC lines, to construct knockdown models.

SHCBP1 Promoted Proliferation and Colony Formation of NPC Cells. To explore the effect of downregulating the SHCBP1 gene on the proliferation of NPC cells, shRNA was used to downregulate the expression of endogenous SHCBP1 in 5-8F cell lines.
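The relative-expression comparisons above are typically quantified with the 2^-ΔΔCt method and evaluated with Student's t-test, as described in the statistical analysis. The following is a minimal sketch of that calculation in Python; the Ct values, the reference gene, and the sample size are hypothetical illustrations, not data from this study.

import numpy as np
from scipy import stats

# Hypothetical Ct values for SHCBP1 and a reference gene in paired tumor /
# adjacent-normal samples (one entry per patient); illustrative values only.
ct_target_tumor  = np.array([22.1, 21.8, 22.5, 21.9, 22.3])
ct_ref_tumor     = np.array([16.0, 15.9, 16.2, 16.1, 16.0])
ct_target_normal = np.array([24.0, 23.7, 24.4, 23.9, 24.1])
ct_ref_normal    = np.array([16.1, 16.0, 16.3, 16.0, 16.2])

# ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(tumor) - ΔCt(normal)
d_ct_tumor  = ct_target_tumor - ct_ref_tumor
d_ct_normal = ct_target_normal - ct_ref_normal
dd_ct = d_ct_tumor - d_ct_normal
fold_change = 2.0 ** (-dd_ct)   # relative SHCBP1 expression, tumor vs. normal

# Paired t-test on the ΔCt values of matched tumor and normal tissue
t_stat, p_value = stats.ttest_rel(d_ct_tumor, d_ct_normal)
print(fold_change.mean(), p_value)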
The shRNA was transfected into NPC cells, and RT-qPCR was used to evaluate the efficiency of interference. The shRNA interference significantly reduced the SHCBP1 mRNA expression level of 5-8F cells compared with the knockdown control group (shCtrl) (Figure 2(a)). Downregulation of SHCBP1 significantly inhibited the growth of 5-8F cells compared with shCtrl (Figure 2(b)). shCtrl and shSHCBP1 cells were used to assay cellular proliferation: MTT and Celigo assays over 5 days showed that suppressing SHCBP1 expression significantly inhibited 5-8F cell proliferation (Figures 2(c)-2(e)). Therefore, these data suggest that knocking down SHCBP1 can inhibit the proliferation of NPC cells. To explore the effect of SHCBP1 on colony formation in NPC cells, 5-8F cells treated with SHCBP1-shRNA or control-shRNA lentivirus were allowed to grow for 14 days to form colonies. Compared with the control group (shCtrl), the number of cell colonies in the knockdown group (shSHCBP1) was reduced, suggesting that SHCBP1 is strongly correlated with the colony formation ability of NPC cells (Figures 2(f) and 2(g)). These data imply that SHCBP1 may play a key role in colony formation of NPC cells.

SHCBP1 Suppressed Apoptosis in NPC Cells. Using Annexin V-APC staining and FACS analysis of 5-8F cells after lentivirus infection, we explored the effect of SHCBP1 on cell apoptosis (Figure 3(a)). As shown in Figure 3(b), the apoptotic rate of SHCBP1-shRNA lentivirus-infected cells was significantly higher than that of shCtrl lentivirus-infected cells, and the difference between shCtrl and shSHCBP1 was significant (p < 0.05). Flow cytometry analysis thus showed that inhibition of SHCBP1 expression can significantly induce apoptosis of NPC cells.

SHCBP1 Was Required for Migration and Invasion of NPC Cells. To investigate the role of SHCBP1 in NPC cell migration, the effect of SHCBP1 knockdown on migration ability was examined in the wound healing assay. Compared with shCtrl, the migration distance of shSHCBP1 cells was significantly reduced (Figure 4(a)). In the invasion and migration assays, we evaluated the invasion and migration ability of 5-8F cells using transwell chambers. Compared with shCtrl cells, the numbers of invading and migrating shSHCBP1 cells were significantly reduced (Figure 4(b)). These data suggest that reduced SHCBP1 expression inhibits the invasion and metastasis of NPC cells.

SHCBP1 Promoted Cell Cycle Progression with Increased Expression of CDK1 and Cyclin B. To further explore the molecular mechanism by which SHCBP1 regulates the NPC cell cycle, we detected the expression levels of CDK1 and cyclin B, key proteins in cell cycle regulation, by western blot (Figure 5(a)). As shown in Figure 5(b), the expression of CDK1 and cyclin B was lower in the knockdown group (shSHCBP1) than in the knockdown control group (shCtrl) (p < 0.05). These results indicate that SHCBP1 may stimulate cell cycle progression via upregulation of CDK1 and cyclin B.

Discussion
Although the mortality rate of NPC has decreased with improvements in diagnosis and treatment, NPC is still one of the most deadly head and neck malignancies in adults, mainly because its specific molecular pathogenesis has not been fully elucidated and targeted therapies are lacking [18,19]. In recent years, the upregulation of SHCBP1 has been shown to be associated with malignant transformation in many types of tumors [11][12][13][14][15].
In this work, we found that SHCBP1 was upregulated in NPC tissues compared with adjacent nontumor tissues. Furthermore, we found that the expression of SHCBP1 was higher in CNE-2Z and 5-8F cells than in NP69 cells. These results are consistent with previous studies showing that elevated SHCBP1 expression is detected in a variety of cancers and may have implications for early diagnosis [11][12][13][14][15]. Therefore, SHCBP1 may serve as a potential therapeutic target for NPC and may be helpful for the diagnosis and prognosis of NPC. To the best of our knowledge, this is the first time that the role of SHCBP1 in NPC has been revealed. The relationship between SHCBP1 expression and the clinical characteristics of NPC is worth further study.

To explore whether SHCBP1 plays a role in the proliferation and colony formation of NPC cells, we used shRNA to knock down endogenous SHCBP1 in the 5-8F cell line. MTT and Celigo assays were then used to evaluate the effect of interfering with SHCBP1 expression on the proliferation and colony formation of NPC cells. Furthermore, wound healing assays, flow cytometry, and transwell assays were used to evaluate the effect of SHCBP1 knockdown on the migration and invasion of NPC cells. We found that reducing SHCBP1 expression significantly reduced the migration and invasion of NPC cells. Therefore, knockdown of SHCBP1 might help prevent NPC progression by inhibiting the metastasis and invasion of NPC cells.

Defects in apoptosis mechanisms play important roles in tumor pathogenesis, allowing neoplastic cells to survive beyond their intended lifespans [20]. Cyclin B, a key cell cycle regulating protein, accumulates in the S and G2 phases and forms an inactive mitosis-promoting factor (MPF) with CDK1 during the normal cell cycle [21]. In this study, we found that silencing SHCBP1 significantly inhibited the expression of CDK1 and cyclin B in NPC cells. These results suggest that the mechanism of SHCBP1-mediated proliferation and apoptosis may be related to alterations in the expression of cyclin B and CDK1.

The molecular mechanism that underlies the development of NPC is not fully understood. It is traditionally believed that the occurrence and development of NPC are related to Epstein-Barr virus infection, environment and diet, local chronic inflammation, and other factors. Recently, molecular targeted therapies against the vascular endothelial growth factor receptor (VEGFR), the epidermal growth factor receptor (EGFR) family, and PI3K/Akt/mTOR have emerged, but their side effects are considerable and their curative effect is still not ideal. Previous research suggests that the PI3K/Akt, TGF-β/Smad, and NF-κB signaling pathways, as well as EGFR mutations, are involved in promoting the growth, proliferation, metastasis, cell cycle progression, and apoptosis of NPC cells [22][23][24]. Therefore, it is urgent to find new therapeutic targets. Here we report, for the first time, an association between SHCBP1 and NPC. At present, our laboratory is further exploring the possible molecular mechanisms of SHCBP1 in the proliferation and invasion of NPC cells.

Conclusion
SHCBP1 is significantly upregulated in NPC cell lines and clinical tumor samples. Reducing SHCBP1 expression can inhibit cell proliferation and metastasis, induce apoptosis, and suppress cell cycle progression in NPC cell lines, suggesting that SHCBP1 may be linked to NPC progression and may serve as a potential biomarker and therapeutic target for NPC.
Data Availability. The data used and/or analyzed during the current study are available from the corresponding author upon request.
PoCos: Population Covering Locus Sets for Risk Assessment in Complex Diseases Susceptibility loci identified by GWAS generally account for a limited fraction of heritability. Predictive models based on identified loci also have modest success in risk assessment and therefore are of limited practical use. Many methods have been developed to overcome these limitations by incorporating prior biological knowledge. However, most of the information utilized by these methods is at the level of genes, limiting analyses to variants that are in or proximate to coding regions. We propose a new method that integrates protein protein interaction (PPI) as well as expression quantitative trait loci (eQTL) data to identify sets of functionally related loci that are collectively associated with a trait of interest. We call such sets of loci “population covering locus sets” (PoCos). The contributions of the proposed approach are three-fold: 1) We consider all possible genotype models for each locus, thereby enabling identification of combinatorial relationships between multiple loci. 2) We develop a framework for the integration of PPI and eQTL into a heterogenous network model, enabling efficient identification of functionally related variants that are associated with the disease. 3) We develop a novel method to integrate the genotypes of multiple loci in a PoCo into a representative genotype to be used in risk assessment. We test the proposed framework in the context of risk assessment for seven complex diseases, type 1 diabetes (T1D), type 2 diabetes (T2D), psoriasis (PS), bipolar disorder (BD), coronary artery disease (CAD), hypertension (HT), and multiple sclerosis (MS). Our results show that the proposed method significantly outperforms individual variant based risk assessment models as well as the state-of-the-art polygenic score. We also show that incorporation of eQTL data improves the performance of identified POCOs in risk assessment. We also assess the biological relevance of PoCos for three diseases that have similar biological mechanisms and identify novel candidate genes. The resulting software is publicly available at http://compbio.case.edu/pocos/. Introduction Genome-wide association studies (GWAS) have a transformative effect on the search for genetic variants that are associated with complex traits, since they enable screening of hundreds of thousands of genomic variants for their association with traits of interest [1]. Recently published GWAS lead to the discovery of susceptibility loci for many complex diseases, including type 2 diabetes [2], psoriasis [3], multiple sclerosis [4], and prostate cancer [5]. For improved identification of risk variants, researchers draw information from clinical, microarray, copy number, and single nucleotide polymorphism (SNP) data to build disease risk models, which are then used to predict an individual's susceptibility to the disease of interest [6,7]. Several companies, such as deCODE genetics (http://www.decode.com) and 23andme (https://www. 23andme.com) have started using SNPs identified by GWAS, to provide personal genomic test services in the United States and health related genomic test services in Canada and the United Kingdom. An important problem with GWAS is that the identified variants account for little heritability [8,9]. However, empirical evidence from model organisms [10] and human studies [11] suggests that the interplay among multiple genetic variants contribute to complex traits. 
Epistasis among pairs of loci, i.e., significantly improved association with the phenotype when two loci are considered together, is also shown to provide provide further insights into disease mechanisms [12][13][14]. Therefore, recent studies focus on identifying the interactions among pairs of genomic loci, as well as among multiple genomic loci [15][16][17]. These studies suggest that consideration of more than one locus together can better capture the relationship between genotype and phenotype. For this reason, genetic markers that aggregate multiple genomic loci can be used to design effective strategies for risk assessment and guide treatment decisions [18]. The Polygenic score is a commonly used method to identify the joint association of a large mass of the loci to predict disease risk [19]. The first application of polygenic score on GWAS data shows that the genetic risk for schizophrenia is a predictor of bipolar disorder [20]. There are also several studies demonstrating that polygenic risk score is a powerful tool in risk prediction [20][21][22]. However, polygenic score does not make use of prior biological knowledge, which may be useful in generating more robust features by incorporating the functional relationships among individual variants. Furthermore, according to a recent comparative assessment of various classification algorithms, there are no statistically significant differences between state-of-the-art classification algorithms in terms of performance in risk assessment [23]. This observation suggests that research on construction of features for risk assessment can be useful in improving the classification performance of these algorithms. Since detection of epistasis and higher order interactions is computationally expensive, many methods first assess the disease association of individual loci and then use functional knowledge to integrate these associations [24][25][26]. The key idea behind these methods is that functionally related variants, e.g., those that induce dense subnetworks in protein-protein interaction (PPI) networks, can provide stronger statistical signals when they are considered together [27]. Based on similar insights, some researchers integrate GWAS with pathway information to identify statistically significant pathways that are associated with the disease [28,29]. Recently, Azencott et al. propose a method to discover sets of genomic loci that are associated with a phenotype while being connected in an underlying biological network [30]. They use an additive model to integrate the genotypes of loci and use connectivity patterns in the network to select a functionally coherent set of disease associated SNPs. While this method works on a network of genomic loci, the network is constructed based on the interactions among genes and mapping of loci to genes. For this reason, the application of these methods is limited to the variants in coding regions or in regions that are in close proximity to genes. However, 88 percent of genotyped variants in GWAS fall outside of coding regions [31]. Several risk variants are found in non-coding regions of the genome and it is shown that the functional effects of these variants are regulatory (e.g., mRNA expression, microRNA expression) as opposed to directly influencing protein structure or function [32]. In this paper, we propose a new algorithm for the identification of multiple functionally related genomic variants that are collectively associated with a phenotype. 
The proposed method builds on the concept of "Population Covering Locus Sets" (POCOs) [33,34]. A POCO is a set of loci that harbor at least one susceptibility allele in samples with the phenotype of interest. Here, we extend the notion of POCOs to enable adaptive identification of a "susceptibility genotype" (as opposed to a susceptibility allele) for each locus. We also develop a method for aggregating the genotypes of multiple loci in a POCO to compute representative genotypes for use in risk assessment. Finally, in order to capture the functional relationships between genomic loci, we integrate GWAS data with a human protein-protein interaction (PPI) network and regulatory interactions identified via expression quantitative trait loci (eQTL). We use the POCOs identified by the proposed framework to construct features that can be used in risk assessment. We evaluate the performance of POCOs in risk assessment via cross-validation on seven GWAS case-control data sets obtained from the Wellcome Trust Case-Control Consortium (WTCCC). We compare the risk assessment performance of models built using POCOs to that of models built using individual loci and polygenic score. Our experimental results show that POCOs significantly outperform individual loci and polygenic score in risk assessment. Furthermore, we assess the information added by the incorporation of PPI and eQTL data and observe that inclusion of these data leads to more parsimonious models for risk assessment.

In the next section, we describe the proposed procedure for modeling the genotypes and identifying POCOs. Then we describe how we use POCOs to develop a model for risk assessment. Subsequently, we present comprehensive experimental results on GWAS data sets for Type 2 Diabetes (T2D), Psoriasis (PS), Type 1 Diabetes (T1D), Hypertension (HT), Bipolar Disorder (BD), Multiple Sclerosis (MS), and Coronary Artery Disease (CAD). Our results show that the proposed method significantly outperforms individual variant based risk assessment models as well as the state-of-the-art polygenic score. We also observe that integrating prior biological information leads to more parsimonious models for risk assessment.

Methods
In this section, we first present the set-up for genome-wide association studies. We then define "Population Covering Locus Sets" (POCOs) and describe the algorithm we use to identify POCOs. Finally, we describe our feature selection framework for the selection of POCOs to be used for risk assessment. The workflow of the proposed method is presented in Fig 1.

Genome-Wide Association Data. The input to the problem is a genome-wide association (GWA) dataset D = (C, S, g, f), where C denotes the set of genomic loci that harbor the genetic variants (e.g., single nucleotide polymorphisms or copy number variants) that are assayed, S denotes the set of samples, g(c, s) denotes the genotype of locus c ∈ C in sample s ∈ S, and f(s) denotes the phenotype of sample s ∈ S. Here, we assume that the phenotype variable is dichotomous, i.e., f(s) can take only two values: if sample s is associated with the phenotype of interest (e.g., diagnosed with the disease, responds to a certain drug, etc.), s is called a "case" sample (f(s) = 1); otherwise (e.g., not diagnosed with the disease, does not respond to a certain drug, etc.), s is called a "control" sample (f(s) = 0). We denote the set of case samples with S_1 and the set of control samples with S_0, where S_1 ∪ S_0 = S.
While we focus on qualitative traits here for brevity, the proposed methodology can also be extended to quantitative traits (i.e., when f(s) is a continuous phenotype variable).

Identifying Genotypes of Interest. The minor allele for a locus is usually defined as the allele that is less frequent in the population. While it is common to focus on the minor allele as the risk allele, specific genotypes can also be associated with a phenotype [35][36][37]. Different types of encoding may represent different biological assumptions. In an additive model, each genotype is encoded as a single numeric feature that reflects the number of minor alleles (homozygous major, heterozygous, and homozygous minor are respectively encoded as 0, 1, and 2). This model does not capture combinatorial relationships between locus genotypes and phenotype, since the assumption is that one of the alleles quantitatively contributes to risk. In the recessive/dominant model, each genotype is encoded as two binary features (presence of the minor allele and presence of the major allele). This model does not capture the difference between homozygous and heterozygous genotypes, since it only accounts for the presence of an allele. Here, we argue that considering the effect of all possible genotype combinations can provide more information in distinguishing case samples from control samples. The five models proposed here capture all potential relationships, in that differences in heterozygosity vs. homozygosity and presence vs. absence of a specific risk allele are represented by different genotype models. This notion is particularly useful when the genotypes of multiple loci are being integrated. For example, heterozygosity at one locus can be associated with increased susceptibility to a disease, while a homozygous minor allele at another locus may be protective in the presence of heterozygosity at the former locus [38]. In this case, the interaction between the two loci can be detected by considering the association of all possible genotype combinations with the phenotype.

We adaptively binarize the genotypes of each locus by considering all possible allele combinations. Given the genotype of a locus, we consider five different binary genotype models m^(i), i ∈ {1, ..., 5}. Based on each model, we generate a binary genotype profile for each locus. Namely, we consider the following genotype models:

1. Homozygous Minor Allele: This corresponds to the case when the possible effect of the minor allele is "recessive", i.e., the locus is considered to harbor a genotype of interest if both copies contain the minor allele.
2. Heterozygous: The locus is considered to harbor a genotype of interest if the two copies contain different alleles.
3. Homozygous Major Allele: The locus is considered to harbor a genotype of interest if both copies contain the major allele.
4. Presence of Minor Allele: This corresponds to the case when the possible effect of the minor allele is "dominant", i.e., the locus is considered to harbor a genotype of interest if at least one copy contains the minor allele. This is the complement of m^(3).
5. Presence of Major Allele: The locus is considered to harbor a genotype of interest if at least one copy contains the major allele. This is the complement of m^(1).

Note that, although models m^(4) and m^(5) are complements of other models, we consider them separately. This is because, as we discuss in the next section, the 1s and 0s in the binary genotype profiles are treated asymmetrically while integrating the genotypes of multiple loci. Also note that "homozygous minor allele or homozygous major allele" is not considered, since it is not associated with a specific risk allele.
To select a genotype model for each locus, we separately assess the association of the resulting five genotype profiles with the phenotype of interest. Subsequently, we choose the model that leads to the greatest discrimination between cases and controls, and use the respective binary genotype profile as the representative genotype of that locus. This process is illustrated in Fig 2.

(Fig 2 caption) Blue squares indicate the presence of the genotype of interest in the respective sample for each model (respectively, homozygous minor allele, heterozygous, homozygous major allele, presence of minor allele, presence of major allele). The resulting binary genotype profiles for each locus are shown on the right. Red squares indicate the existence of the genotype of interest according to the selected model. In this example, models m^(4), m^(1), m^(5), and m^(2) are respectively selected for the four loci.

For each locus c, binarization according to the five different genotype models produces five |S|-dimensional binary genotype profiles m^(i)(c), i ∈ {1, ..., 5}. For each binary genotype profile m^(i)(c), we compute the difference in the fraction of case and control samples that harbor the genotype of interest as follows:

δ_i(c) = <m^(i)(c), f> / |S_1| − <m^(i)(c), 1 − f> / |S_0|,

where 1 denotes a vector of all 1's and <.,.> denotes the inner product of two vectors. We then determine the binary genotype model for each locus as the model that maximizes the difference of relative coverage between case samples and control samples, i.e.:

κ(c) = argmax_{1 ≤ i ≤ 5} δ_i(c).

Based on the selected model for each locus, we compute the binary genotype profile accordingly:

M(c, s) = m^(κ(c))(c, s).    (8)

Population Covering Locus Sets (POCOs). Once we compute the binary genotype profiles for all loci, we identify Population Covering Locus Sets (POCOs). In previous work, we defined and used POCOs in the context of prioritizing locus pairs for testing epistasis [33]. In this earlier definition, the genotypes of interest are limited to the presence of the minor or major allele; i.e., only the last two models described in the previous section are used to determine the binary genotype profile of each locus. Here, we generalize the concept of POCO to utilize five different models for determining the genotypes of interest, as described in the previous subsection.

A POCO is a set of genomic loci that collectively "cover" a large fraction of case samples while minimally covering control samples. Namely, for a given set P ⊆ C of loci, we define the sets of case and control samples covered by P respectively as

E(P) = ∪_{c ∈ P} {s ∈ S_1 : M(c, s) = 1}    (9)

and

T(P) = ∪_{c ∈ P} {s ∈ S_0 : M(c, s) = 1}.    (10)

We define a POCO as a set P of loci that satisfies |E(P)| = |S_1| while minimizing |T(P)|. Note that, since we are interested in finding all sets of loci with a potential relationship in their association with the phenotype, we do not define an optimization problem that aims to find a single POCO with minimum |T(P)|. We rather develop an algorithm to heuristically identify all non-overlapping POCOs with minimal |T(P)|.
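A small sketch of the genotype-model selection just described is given below: each locus is binarized under the five models, and the profile that maximizes the case-minus-control coverage difference δ is retained. The helper names and the toy data are hypothetical; the logic follows the definitions above.

import numpy as np

def genotype_models(g_c):
    """Five binary genotype profiles for one locus; g_c holds minor-allele counts (0/1/2)."""
    return {
        1: (g_c == 2).astype(int),   # homozygous minor allele
        2: (g_c == 1).astype(int),   # heterozygous
        3: (g_c == 0).astype(int),   # homozygous major allele
        4: (g_c >= 1).astype(int),   # presence of minor allele
        5: (g_c <= 1).astype(int),   # presence of major allele
    }

def select_model(g_c, f):
    """Return the binary profile maximizing delta = coverage(cases) - coverage(controls)."""
    n_case, n_ctrl = (f == 1).sum(), (f == 0).sum()
    best_delta, best_profile = -np.inf, None
    for i, m in genotype_models(g_c).items():
        delta = m[f == 1].sum() / n_case - m[f == 0].sum() / n_ctrl
        if delta > best_delta:
            best_delta, best_profile = delta, m
    return best_profile

# Toy data (illustrative only): 6 loci x 10 samples of minor-allele counts g(c, s),
# with the first 5 samples as cases (f = 1) and the rest as controls (f = 0).
rng = np.random.default_rng(0)
g = rng.integers(0, 3, size=(6, 10))
f = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

# M(c, s): representative binary genotype profile for every locus
M = np.vstack([select_model(g[c], f) for c in range(g.shape[0])])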
This process continues until it is not possible to find a set of loci that covers all case samples. We develop two methods to identify two different types of POCOs. The first type of POCOs (named "network-free POCOs") are identifed using the greedy algorithm described above, without the use of any prior biological information. The second type of POCOs are NETPOCOs, which are identified by restricting the search space to connected subgraphs of a network of potential functional relationships among genomic loci. As we describe below, this network is constructed by integrating established locus-gene associations from eQTL studies and protein-protein interaction (PPI) data that contains functional relationships among genes. Network-free POCOs. For network-free POCOs, the search space for the problem contains all the loci that are genotyped and no restriction is applied on the search space. We use δ(.) to guide the search for POCOs, and require the search to proceed until all case samples are covered. NETPOCOs. Since our aim is to find sets of variants that are related to each other in their association with a phenotype, interaction data can provide a useful functional context for POCOs. This approach is inspired by the NETCOVER algorithm that is used to identify dysregulated subnetworks in the context of cancer [39]. To identify NETPOCOs, in addition to GWAS data, we utilize a heterogeneous network that represents the functional relationships among genomic loci. The network contains two types of nodes: genomic loci and genes/proteins. More precisely, the set U C contains all genomic loci that are genotyped in the GWAS and are located in the gene region of interest or are expression quantitative trait loci. The set V contains all human genes/proteins. The interactions and associations between these nodes are represented by three different sets of edges: • F contains an edge between locus c 2 U and gene v 2 V if c is in the region of interest (RoI; defined as 50Kb up-and down-stream of the coding region in our experiments) of v. We call these edges RoI edges. • Q contains an edge between locus c 2 U and gene v 2 V if c is found to be significantly associated with the expression of v in an expression quantitative trait loci (eQTL) screen. We refer to these as eQTL edges. • E contains an edge between two genes u and v if u and v code for interacting proteins. We refer to these as PPI edges. Note that Azencott et al. [30] also propose the idea of integrating multiple types of networks to drive the search for phenotype-associated genomic loci. However, the heterogenous network model proposed here encapsulates more biological information in a sparser network by allowing nodes and edges to represent different types of biological entities and interactions/associations. Moreover, the incorporation of eQTL links in the network makes this method particularly powerful since these links capture functional associations also for loci that are outside coding regions or RoIs of genes. The algorithm for identifying NETPOCOs is illustrated in Fig 3. This algorithm proceeds similarly to the algorithm for identifying network-free POCOs. However, while growing POCOs, the set of loci that can be added to a growing POCO P is constrained by the network. Namely, at any step of the algorithm, only loci that are at most 3 hops away from at least one locus in P are considered as candidates for addition into P. This ensures that the loci in a NETPOCO are functionally related to each other. 
In other words, reachability within three hops captures all functional association patterns between a pair of loci in this heterogeneous network:

• RoI-RoI association: Two loci that are in the RoI of the same gene are within 2 hops of each other.
• RoI-eQTL association: A locus that is in the RoI of a gene u is 2 hops away from loci that are associated with u's expression.
• RoI-PPI-RoI association: Two loci that are in the RoI of the genes coding for two interacting proteins are within 3 hops of each other.
• RoI-PPI-eQTL association: A locus that is in the RoI of a gene u is 3 hops away from a locus that is associated with the expression of a gene v such that the products of u and v interact with each other.

When the algorithm terminates, it returns the set P of all discovered POCOs. As we discuss in the next section, each identified POCO contains multiple loci, and most of the loci in the dataset are not assigned to any of the POCOs in practice. For this reason, we usually have |P| << |C|.

(Fig 3 caption) Each v_i represents a protein (V) and each c_j represents a genomic locus (U). Blue edges represent the interactions between proteins (E), purple edges indicate that the respective locus is in the RoI of the gene coding for the respective protein, and red edges represent the eQTL links. Initially, P is empty and all loci are considered, and the locus (c_5) that maximizes δ(.) is added to P. After this point, the search space is restricted to loci that are at most three hops away from c_5. We continue this procedure until the set of selected loci covers a sufficient fraction of the case samples. Cyan nodes and gold nodes show the selected loci and proteins, respectively.

Model Development for Risk Assessment. One potential utility of POCOs is risk assessment. By construction, POCOs (NETPOCOs) contain (functionally associated) loci that exhibit improved power in distinguishing cases from controls. Consequently, as compared to individual variants, they may provide more robust and reproducible features to be used in predictive models. To investigate the utility of these multi-locus features in risk assessment, we use POCOs to build a model for risk assessment using an L1-regularized logistic regression classifier.

Representative genotypes of POCOs. To facilitate the use of POCOs for risk assessment, we compute a representative genotype for each POCO. For this purpose, we use the fraction of the loci in the POCO that harbor a genotype of interest in the respective sample. To be more precise, for each POCO P ∈ P, we compute the profile of P as

h(P, s) = |{c ∈ P : M(c, s) = 1}| / |P|

for all s ∈ S. The set of features utilized by the classifier is comprised of h(P, .) for all P ∈ P. Next, we perform feature selection to identify a parsimonious set of POCOs to be used in risk assessment.

Feature selection and model building. High dimensionality is always an important problem in GWAS (a.k.a. "large p, small n"). The large number of features makes feature selection quite challenging. In particular, the models can easily be over-fit if too many features are entered into the model. For this reason, many researchers suggest filtering algorithms for dimension reduction and feature selection [40][41][42]. Furthermore, building the L1-regularized logistic regression model is computationally expensive, and reduction in the number of features can greatly reduce runtime. Motivated by these considerations, to find the optimal set of POCOs to be used for risk assessment, we use a two-step feature selection method. The first step implements filtering-based feature selection, and the second step incorporates feature selection into model building by using an L1-regularized logistic regression classifier that enforces sparsity.
Note that feature selection is applied within a cross-validation framework, so that test samples are not used in the identification and selection of the POCOs that are used in the model.

For filtering-based feature selection, we compute a p-value representing the significance of the association of each POCO with the disease. For this purpose, we use two different methods: (i) the p-value of the coefficient of the POCO profile in a logistic regression model, and (ii) the p-value of the Kolmogorov-Smirnov (KS) statistic for the difference in the distribution of the POCO profile between case and control samples. We then apply a threshold on these p-values to reduce the number of POCOs that are used in model building. Namely, for a given threshold α, we filter out all POCOs with p-value greater than α and retain all other POCOs to be entered into model building. This is done separately for each of the filtering methods.

Let H be the matrix in which rows represent samples and columns represent POCOs that pass the filtering stage, such that H(s, p) = h(p, s). As before, f denotes the vector composed of the phenotypes of the samples. Then the L1-regularized logistic regression classifier computes a vector β to solve the following optimization problem:

minimize_β  −Σ_{s ∈ S} [ f(s) log π_β(s) + (1 − f(s)) log(1 − π_β(s)) ] + λ Σ_{j=1}^{q} |β_j|,
where π_β(s) = 1 / (1 + exp(−Σ_{j=1}^{q} β_j H(s, j))).

Here, q denotes the number of POCOs that are entered into the model and λ is a non-negative regularization parameter. The second term in the objective function is a penalty that enforces sparsity of the model, and the parameter λ controls the number of POCOs selected in the model (i.e., the number of non-zero entries in β). For larger λ, the model is expected to be more sparse.

Performance evaluation for risk assessment. To evaluate the performance of POCOs in risk assessment, we use nested K-fold cross validation. Namely, we divide the set of samples into K subsets {T_1, ..., T_K}, while keeping the proportion of case and control samples fixed across all subsets. For the kth subset of samples, we reserve the samples in this subset as test samples. We divide the training group further into K groups and use this partitioning to perform genotype model identification, POCO identification, filtering-based feature selection, and model building using the L1-regularized logistic regression classifier. Once the model is optimized in the inner fold, we use the resulting model to predict the class of each sample in the kth subset, and evaluate prediction performance on this outer fold. This process is iterated for k = 1, 2, ..., K, and the performance of classification is evaluated based on the predictions across all samples. The typical choices of K are 5 or 10, and here we use 5-fold cross validation in our experiments. We also repeat the randomization of folds five times and report the averages of performance figures across these randomizations.

Risk assessment models produce quantitative predictions of susceptibility to the disease of interest. To evaluate the predictive ability of these risk assessment models, we apply different thresholds on the predicted risk to obtain a binary prediction for each test sample. Using these binary predictions, we obtain the counts of true positives (predicted to be at risk, has the disease), false positives (predicted to be at risk, does not have the disease), and false negatives (predicted not to be at risk, has the disease), and compute the precision (fraction of true positives among all predicted to be at risk) and recall (fraction of true positives among all who have the disease) figures based on these counts.
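A hedged sketch of this cross-validated evaluation is given below, using scikit-learn with the AUC metric discussed next. Note that scikit-learn parameterizes L1 regularization with C, which corresponds roughly to 1/λ, and that in the actual study POCO identification and filtering are performed only on the training folds; both points are simplified here, and all names are illustrative assumptions rather than the study's MATLAB implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def poco_profile(M, poco):
    """h(P, s): fraction of loci in the POCO harboring the genotype of interest in sample s."""
    return M[poco].mean(axis=0)

def evaluate(M, f, pocos, lam=0.001, n_splits=5):
    """Stratified K-fold evaluation of POCO profiles with L1-regularized logistic regression."""
    H = np.column_stack([poco_profile(M, p) for p in pocos])   # samples x POCOs
    aucs = []
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in folds.split(H, f):
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0 / lam)
        clf.fit(H[train], f[train])
        aucs.append(roc_auc_score(f[test], clf.predict_proba(H[test])[:, 1]))
    return float(np.mean(aucs))

# Usage with the toy arrays and POCOs from the earlier sketches:
# mean_auc = evaluate(M, f, pocos)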
We assess the performance of each risk assessment model based on the area under the ROC curve (AUC), which characterizes the ability of the model to trade off true positive rate against false positive rate for varying thresholds on the quantitative prediction.

Polygenic score. We compare the performance of POCO-based risk assessment models against models based on individual loci, as well as the polygenic score. The polygenic score is a commonly used method for risk assessment in GWAS. It is based on the assumption that the joint effect of multiple loci on the phenotype is additive [20]. Based on this assumption, the polygenic score for an individual is defined as the sum of the genotypes of multiple loci, weighted by the effect sizes of the individual loci. To estimate effect sizes, the p-value of the association of each locus with the phenotype is calculated. For a given parameter α, L is defined as the set of loci with p-value less than α. Subsequently, the polygenic score for a sample s is defined as follows:

PS(s) = Σ_{c ∈ L} γ(c) g(c, s).

Here, γ(c) denotes the effect size of locus c, which can be estimated using an appropriate regression model (i.e., logistic for a binary phenotype or linear for a continuous phenotype). An illustrative computational sketch of this score is given below.

Results
To assess the ability of POCOs to produce informative multi-locus features, we evaluate their utility in the context of risk assessment. For this purpose, we use GWAS data from the Wellcome Trust Case-Control Consortium (WTCCC), which includes data from studies for seven complex diseases, namely type 1 diabetes (T1D), type 2 diabetes (T2D), psoriasis (PS), bipolar disorder (BD), coronary artery disease (CAD), hypertension (HT), and multiple sclerosis (MS). On each dataset, we first identify POCOs, select features to build a model for risk assessment, and then evaluate the performance of the resulting model. To control for overfitting and to ensure that the performance figures are not biased, we use cross validation. We first compare the risk assessment performance of the multi-locus features against the standard approach of using individually significant loci. To facilitate fair comparisons, we use the classification and feature selection methods described in the "Performance evaluation for risk assessment" section identically for all types of multi-locus and individual-locus based features. We also compare the performance of NETPOCOs against the polygenic score, which is a commonly used method for risk assessment. Subsequently, to gain insights into the information provided by network data and specifically eQTL-based regulatory interactions, we also compare the performance of NETPOCOs, network-free POCOs, and eQTL-free POCOs. Moreover, we investigate the effect of λ in the L1-regularized logistic regression classifier, i.e., the parameter that controls the parsimony of the model. We also assess the biological relevance of some of the selected POCOs using enrichment analysis and a literature-driven list of genes and processes that have been reported to be associated with the diseases. Finally, we compare the most frequently recruited genes in POCOs across different diseases to gain insights into shared genetic bases of different diseases. This analysis also suggests novel potential susceptibility genes for these diseases.

Experimental Setup
GWAS datasets. We use genome-wide association data for all seven diseases obtained from the Wellcome Trust Case-Control Consortium (WTCCC) [43][44][45]. For each dataset, we use the genotypes generated by the Chiamo algorithm. We filter out the loci with minor allele frequency (MAF) below 5%.
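Following up on the polygenic score defined above, a minimal sketch of its computation is shown here. The use of statsmodels for the per-locus logistic fits and the array conventions are assumptions carried over from the earlier sketches, not details of the original implementation.

import numpy as np
import statsmodels.api as sm

def polygenic_score(g_train, f_train, g_test, pvals, alpha=0.05):
    """g_*: loci x samples minor-allele counts; pvals: per-locus association p-values.
    Returns the weighted sum of test-sample genotypes over the loci passing the threshold."""
    selected = np.where(pvals < alpha)[0]
    scores = np.zeros(g_test.shape[1])
    for c in selected:
        # per-locus logistic regression on the training set to estimate the effect size gamma(c)
        X = sm.add_constant(g_train[c].astype(float))
        gamma = sm.Logit(f_train, X).fit(disp=0).params[1]
        scores += gamma * g_test[c]
    return scores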
While identifying the POCOs, in order to avoid marginal effects of individual loci and reduce the risk of artifacts, we filter out loci whose nominal p-value of individual association is less than 10^-7 (this corresponds to a corrected p-value threshold of 0.05). Since we utilize the PPI network and eQTL data to identify NETPOCOs, we include in our analyses the SNPs that are either within 50 kb upstream or downstream of coding regions or are identified by eQTL analysis to be associated with the expression of a gene. The number of loci and the number of samples for each dataset are shown in Table 1.

Protein-protein interaction (PPI) dataset. We use a human PPI network downloaded from the BioGRID (Biological General Repository for Interaction Datasets) database. The BioGRID PPI network contains 194,639 interactions among 18,719 proteins.

Expression quantitative trait loci (eQTL) datasets. We use an eQTL dataset obtained from RegulomeDB, which aims to annotate noncoding common variants from association studies [46]. This database contains high-throughput datasets from The Encyclopedia of DNA Elements (ENCODE) [47] and other resources, as well as computational predictions and manual annotation. We extract all variants that are identified to have a direct effect on gene expression and have also been shown to lie on transcription factor binding sites through ChIP-seq and DNase assays, with either a matched PWM for the ChIP-seq factor or a DNase footprint.

SNP-gene mapping. To identify network-free POCOs, we do not use gene information. To facilitate the identification of NETPOCOs, we map SNPs to genes by defining the region of interest (RoI) for a gene as the genomic region that extends from 50 kb upstream to 50 kb downstream of the coding region of that gene.

Association analysis for individual loci. We identify individually significant loci using PLINK [48], a well-established toolkit for GWAS analysis. We assess the disease association of all loci in each dataset based on minor allele frequency, obtaining a p-value for the association of each locus with the disease. We adjust the p-values for multiple hypothesis testing using Bonferroni correction.

Performance of POCOs in Risk Assessment
For each dataset, we divide the population into 5 groups while preserving the proportion of case and control samples in each group. We reserve one group for testing and identify NETPOCOs on the remaining four groups. We then use these four groups for feature selection and model building. Finally, we test the performance on the group reserved for testing. All of the reported performance figures are averages across five different cross-validation runs. The number of POCOs identified on each dataset and the sizes of these POCOs are presented in Table 2. Please note that the variance in the number of POCOs does not have a significant effect on the performance (S1 Fig).

Comparison of NETPOCOs against individual loci and polygenic score. To investigate the benefits of using NETPOCOs in risk assessment, we first compare the performance of NETPOCO-based risk assessment models against that of individual-locus based models and the well-established polygenic score. As described in the Methods section, we select NETPOCOs to be used in model building using a filtering-based feature selection method, which uses p-values (of either the coefficient in a logistic regression model or the KS statistic for the difference in the distribution between case and control samples) as the filtering criterion.
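To make the two filtering criteria concrete, the sketch below computes, for a single POCO profile h(P, ·), a logistic-regression coefficient p-value and a Kolmogorov-Smirnov p-value comparing case and control distributions. The library choices (scipy, statsmodels) and function names are illustrative assumptions.

import numpy as np
from scipy import stats
import statsmodels.api as sm

def filter_pvalues(h, f):
    """h: representative genotype profile of one POCO over samples; f: 0/1 phenotype vector."""
    # (ii) KS statistic: difference between case and control distributions of h
    ks_p = stats.ks_2samp(h[f == 1], h[f == 0]).pvalue
    # (i) significance of the coefficient in a univariate logistic regression
    fit = sm.Logit(f, sm.add_constant(h)).fit(disp=0)
    reg_p = fit.pvalues[1]
    return reg_p, ks_p

# POCOs whose p-value exceeds the threshold alpha would be filtered out before model building.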
Similarly, we filter individual loci based on the statistical significance of their association with the disease (after correction for multiple hypothesis testing). Polygenic risk score, which is commonly used in risk assessment, is a sum of the scores of associated loci, weighted by effect sizes, which are estimated using the training set. For polygenic score, the features are also selected using the p-value threshold in training samples and they are used to score the individuals in test samples. To comprehensively understand the effect of filtering, we test all methods using different thresholds on p-value for filtering (α). Namely, for each α 2 {5E − 8, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3}, we build the risk assessment model using the NETPOCOs or loci with p-value less than α. Note that we use p-values to rank and select individual loci or PoCos to be entered into the model as features. As discussed in Methods, the p-values for POCOs reflect the significance of logistic regression coefficients or KS-test, whereas for individual loci, the p-value reflect the significance of case/control association analysis as computed by PLINK. Since p-value are used for ranking, correction for multiple hypotheses does not influence the behavior of the methods. Nevertheless, the p-value thresholds shown in the figure are based on Bonferroni-corrected pvalue. For model building, we use the L1 regularized logistic regression classifier described in the Methods section, for both NETPOCOs and individual locus based features. L1 regularized logistic regression provides a second layer of feature selection through the regularization term in the associated objective function. Polygenic score has its own classification algorithm by definition. The results of cross-validation for using individual loci (using L1 regularized logistic regression), polygenic score, and NETPOCOs with two different filtering criteria (logistic regression pvalues vs. KS-statistic) are shown in Fig 4. The results shown in Fig 4 suggest that filtering of NETPOCOs based on regression p-value provides favorable prediction performance when a strict threshold is used for statistical significance (i.e., for smaller α). However, as the threshold increases (i.e., more NETPOCOs are entered into model building), the performance of regression based filtering declines. On the other hand, the prediction performance of NETPOCOs filtered based on KS p-value is improved with increasing threshold on significance. This observation suggests that, while regression p-value tends to rank the most informative NETPOCOs at the top, KS-statistic based ranking provides a more reliable set of NETPOCOs for L1 regularized logistic regression to choose from when more NETPOCOs are entered into the model (S2 Fig). Comparison of Polygenic Score and NETPOCO-based risk assessment in Fig 4 shows that NETPOCO-based models consistently outperform Polygenic Score for all diseases, perhaps with the exclusion of multiple sclerosis. Overall, Polygenic Score has a peak performance at relatively stricter thresholds on the significance of individual loci included in the model, but this figure remains under the peak performance of NETPOCO-based models. 
Individual locus based classifier performs more favorably when more loci are entered into the model (which is expected since L1 regularized logistic regression effectively performs feature selection), but the performance of the classifier that uses individual locus based features remains below the performance of the classifier that uses NetPoco-based features. These results suggest that NETPOCOs are useful in "feature construction" for risk assessment, i.e., they bring together robust sets of loci to be used together in risk prediction (S3 Fig). It is also possible that, as compared to using standard genotype coding for individual loci, our method for computing representative genotypes for POCOs improves prediction performance, since it potentially captures non-linear relationships among POCOs as well. To facilitate thorough comparison of NETPOCOs, individual locus based features, and Polygenic Score, we also report the best average AUC and the number of features in the final model across all p-value thresholds used for filtering. These results are shown in Fig 5. As seen in the figure, models that use NETPOCO-based features consistently outperform individual locus based features and Polygenic Score in risk assessment for all diseases, and they provide more parsimonious models as compared to Polygenic Score. However, it is interesting to note that PoCoS do not provide significant improvement in risk assessment for MS. This is the dataset that has the smallest number of loci. To this end, this behavior may be indicative of the need for higher coverage to be able to identify more informative POCOs. NETPOCOs vs. network-free POCOs. Many computational methods are developed to integrate the GWAS data with other biological datasets that provide information on the functional relationships between individual biological entities (here, genomic loci). In this study, we integrate PPI data and eQTL data in the identification of NETPOCOs. Since the identified NETPOCOs are guided by the PPI network and eQTL data, we expect that NETPOCOs would be more informative and robust as compared to network-free POCOs, since they are composed of functionally related loci. To investigate whether this hypothesis is supported empirically, we compare the performance of NETPOCOs in risk assessment to that of network-free POCOs. For this purpose, since the computation of network-free POCOs is computationally expensive, we limit our analyses to three diseases: bipolar disorder (BD), type II diabetes (T2D), and coronary-artery disease (CAD). The results of these analyses are shown in Fig 6. Note that, in these analyses, network-free POCOs have been identified using all genotyped loci and the search space is not limited to the loci that can be mapped to gene regions. Therefore, network-free POCOs can include some loci that are out of gene regions as well, providing them with an advantage over NETPOCOs. However, as seen in the figure, NETPOCOs outperform the network-free POCOs for T2D. In contrast, the results for BD and CAD suggest that constraining the search space by functional interactions based on PPIs and eQTL may slightly reduce the predictive power of POCOs. However, importantly, when we consider model size, we observe that NETPOCOs provide more parsimonious final models for all three diseases. We implement the procedure for the identification of POCOs in MATLAB. We assess the runtime of this procedure using Intel(R) Xeon(R) CPU E5-4620 with a 2.2 GHz processor with 50 GB RAM. 
The results of this analysis are shown in Fig 7. These results suggest that incorporating interactions among proteins and eQTL data can effectively improve the quality of POCOs by providing more parsimonious models. Furthermore, using prior knowledge makes the problem computationally feasible since it drastically reduces the running time. Information added by eQTL data. An important limitation of network-based analyses of GWAS data stems from the constraints posed by the lack of regulatory interactions in network models. If the functional relationships that are used to drive the search are limited to proteinprotein interactions (PPIs), the search is limited to loci that are in close proximity to coding regions and regulatory interactions that involve non-coding loci are not considered [31]. One important contribution of this study is the incorporation of eQTL-based interactions along with PPIs to drive the search for NETPOCOs. To assess the benefits of including eQTL-based interactions, we also identify PPI-based POCOs using a network that does not contain eQTL edges, and compare the risk assessment performance of these POCOs against that of NETPOCOs (which are identified using PPI and eQTL data). Note that removal of eQTL edges causes the removal of loci that are connected to the network just by eQTL edges. Such loci are usually those that are not in close proximity of coding regions. The results of this analysis are presented in Fig 8. As see in the figure, the performance of PPI-only POCOs and eQTL+PPI-based NETPOCOs is similar for all three diseases. However, for BD and CAD, the predictive models provided by the incorporation of eQTL data are significantly more parsimonious than the models provided by PPI-only NETPOCOs. For T2D, the incorporation of eQTL edges leads to more complex models, but the prediction performance is enhanced with the inclusion of eQTL edges. These observations suggest that incorporation of eQTL data indeed provides biologically relevant information in the discovery of NETPOCOs. Effect of model complexity. In L1 regularized logistic regression, the parameter λ in Eq 13 is used to tune the trade-off between model fit and model complexity (number of features included in the model). Larger λ forces the model to be more parsimonious. Therefore, as λ grows, the learning task becomes more difficult, in that L1 regularized logistic regression tries to simplify the model by compromising model fit. For this reason, if the features that are input into the classifier are "high-quality" features, the classifier can be expected to be more robust to this parameter. Based on this premise, we assess the "quality" of the features constructed from NETPOCOs by comparing the models based on NETPOCOs and individual loci in terms of their performance as a function of λ. For this purpose, we fix the p-value threshold (0.05) for both NETPOCOs and individual SNPs and compute the AUC in cross-validation for a range of different values of λ. The results of this analysis are shown in Fig 9. As seen in the figure, as lambda gets larger, the risk assessment performance of individual loci quickly becomes equivalent to that of a coin toss. This observation suggests that the classifier needs to incorporate a large number of features to maintain model fit, which may make the classifier vulnerable to overfitting. This is also true for NETPOCOs, but NETPOCOs can tolerate larger lambdas. 
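The λ/sparsity trade-off discussed above can be made concrete with a small sweep that counts the non-zero coefficients retained by L1-regularized logistic regression as the regularization strength grows (again using scikit-learn's C, roughly 1/λ). This is an illustrative sketch, not the study's MATLAB implementation; H and f follow the conventions of the earlier sketches.

import numpy as np
from sklearn.linear_model import LogisticRegression

def sparsity_profile(H, f, lambdas=(1e-4, 1e-3, 1e-2, 1e-1)):
    """Return, for each lambda, the number of features with non-zero coefficients."""
    n_selected = {}
    for lam in lambdas:
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0 / lam)
        clf.fit(H, f)
        n_selected[lam] = int(np.sum(clf.coef_ != 0))
    return n_selected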
In all other results reported in this section, we use λ = 0.001, which provides a reasonable balance between the complexity and the predictive performance of the model.

Biological interpretation of NETPOCOs. We assess the biological relevance of the predictive NETPOCOs using pathway analysis, Gene Ontology enrichment analysis, and literature-driven lists of genes and processes that are reported to be associated with disease. For this analysis, we focus on three diseases (T2D, CAD, and BD), which are shown to have similar molecular mechanisms [49,50] and share common risk pathways [51].

Type II Diabetes (T2D). We focus on the NETPOCOs that have the highest coefficients in the model constructed by the L1-regularized logistic regression classifier. The top two NETPOCOs are shown in Fig 10.

(Fig 10 caption) These NETPOCOs are consistently selected by L1-regularized logistic regression in the final model for risk prediction. The circle nodes represent proteins and the rectangular nodes represent SNPs. Red dashed lines represent the eQTL association between a SNP and a gene, purple lines indicate that a SNP is in the RoI of the respective gene, and the blue edges represent a protein-protein interaction (PPI) between the products of the respective genes. The genes that are previously reported to be associated with T2D are highlighted in gold. (a) A NETPOCO enriched in isocitrate metabolic process and NADH metabolic process, which both contribute to the amplification of insulin secretion; (b) a NETPOCO enriched in calcium ion homeostasis.

The NETPOCO shown in Fig 10(a) induces a subgraph that does not contain any PPI edges.
Therefore, the performance improvement provided by the multi-locus features as compared to the individual-locus features in a genetics-only setting suggests that combining multi-locus genomic features with other factors may lead to even greater predictive performance in risk assessment.

Coronary Artery Disease (CAD). The two NETPOCOs with the highest coefficients in the L1 regularized logistic regression model for CAD are shown in Fig 11. The genes that are highlighted in gold code for proteins that are previously reported to be associated with CAD [59]. The NETPOCO in Fig 11(a) is enriched in positive regulation of STAT protein (p-value = 0.0003), positive regulation of cardiac muscle cell proliferation (p-value = 0.002), cardiac muscle tissue regeneration (p-value = 0.0003), and activation of MAPKK activity (p-value = 0.02). These pathways are previously reported to be associated with susceptibility to CAD [59]. Although ERBB4 is not previously reported to be associated with CAD, it plays a role in the MAPK pathway, which is one of the top pathways for CAD [59]. Therefore, ERBB4 can be a potential candidate gene for CAD as well. The NETPOCO in Fig 11(b) is also enriched in muscle cell proliferation (p-value = 3.3E-6), prostate glandular acinus development (p-value = 5.92E-6), and muscle cell differentiation (p-value = 5.72E-5). This NETPOCO is also enriched in positive regulation of calcineurin-NFAT signaling pathway (p-value = 0.0006) and positive regulation of insulin-like growth factor receptor signaling pathway. IGF1 and RXRA are both involved in a pathway named "Pathways in cancer," which is known to be related to CAD. More than 20 genes in this pathway are known to be associated with CAD [59]. This observation suggests that RXRA may be a novel CAD risk factor.

Bipolar Disorder (BD). The NETPOCO in Fig 12(a) is enriched in regulation of dopamine metabolic process (p-value = 8.67e-6), which plays a central role in bipolar disorder [60]. The NETPOCO in Fig 12(b) is enriched in regulation of neurotransmitter secretion (p-value = 0.0007), cell migration involved in coronary angiogenesis (p-value = 0.0008), and insulin receptor signaling pathway (p-value = 0.003).

Shared molecular bases among diseases. Identifying the links between the molecular etiologies of different diseases can provide insights into the underlying mechanisms of these diseases. Elucidation of such relationships can also help to detect novel candidate genes for diseases. For example, patients with bipolar disorder frequently have coexisting medical conditions such as obesity, cardiovascular disease, and diabetes mellitus [49]. Torkamani et al. [50] also show a strong genetic correlation between BD and the metabolic disorders CAD and T2D. Note that the results of the Gene Ontology enrichment analysis reported above also suggest that NETPOCOs can capture the relationships between diseases. For example, the NETPOCO in Fig 11(b), which is associated with CAD, is enriched in regulation of insulin-like growth factor receptor signaling pathway, which is also associated with T2D.
To gain further insights into the shared molecular bases of T2D, CAD, and BD, we examine the genes that appear most frequently in the NETPOCOs selected by L1 regularized logistic regression in the risk assessment models for these diseases. For each disease, we identify the top 10 most frequent genes. We then assess whether they are previously reported to be associated with T2D [52], CAD [59] and BD [61,62] as well. The results of this analysis are shown in Table 3. [Table 3 caption: for each disease, the ten most frequent genes involved in NETPOCOs selected by L1 regularized logistic regression in risk prediction are listed. Previously reported associations of these genes with the three diseases are indicated with a "Yes" or "No" in the respective column of each row. doi:10.1371/journal.pcbi.1005195.t003] The first ten rows show the most frequent genes in NETPOCOs identified in the CAD dataset. Among these genes, WWOX and CD36 are previously reported candidates for CAD. They are also known to be associated with BD. This result suggests that, for example, GRID1 can also be a potential susceptibility gene for CAD. This hypothesis is also supported by the observation that GRID1 plays a role with 14 other known CAD genes in neuroactive ligand-receptor interaction. WWOX can also be a good candidate for T2D, considering that it plays a role in the apoptosis and autophagy pathway, which is the main form of beta-cell death in T2D [63]. Note that NETPOCOs do not overlap at the SNP level; however, they may overlap at the gene level, since multiple SNPs can be mapped to the same gene. This shows the power of NETPOCOs in identifying molecular bases of diseases, since multiple NETPOCOs can arise from similar functional contexts, providing stronger statistical evidence for the involvement of genes that are associated with these NETPOCOs.

Discussion

In this paper, we propose a novel criterion to assess the collective disease association of multiple genomic loci (POCOs) and investigate the utility of these multi-locus features in risk assessment. We also perform extensive experiments to evaluate the effect of using network information to drive the search for multi-locus features on risk assessment, and we investigate the effect of variants that have regulatory effects (i.e., eQTL data) on risk assessment performance. Moreover, we compare the proposed method with the polygenic score, which has been shown to be successful in different studies. Our results show that our method is significantly more powerful in risk assessment. [Fig 11 caption (notation as in Fig 10): the genes that are previously reported to be associated with CAD are highlighted in gold. (a) A NETPOCO enriched in cardiac muscle tissue regeneration (p-value = 0.0003) and activation of MAPKK activity (p-value = 0.02); (b) a NETPOCO enriched in positive regulation of insulin-like growth factor receptor signaling pathway (p-value = 0.0006). doi:10.1371/journal.pcbi.1005195.g011]

Our results show that multi-locus features improve prediction performance as compared to individual-locus features. We also observe that integrating functional information provided by protein-protein interaction data and expression quantitative trait loci (i.e., eQTL) data leads to more parsimonious models for risk assessment. However, inclusion of functional data does not yield significant improvement in prediction performance. This may be indicative of the limitations of genomic data in risk assessment. [Fig 12 caption (notation as in Fig 10): (a) a NETPOCO enriched in regulation of dopamine metabolic process, which plays a central role in bipolar disorder (p-value = 8.67e-6); (b) a NETPOCO enriched in regulation of neurotransmitter secretion and insulin receptor signaling pathway (p-value = 0.0007). doi:10.1371/journal.pcbi.1005195.g012] Furthermore, since POCOs contain loci that are related to each other in the context of a phenotype, POCOs that are discovered without the inclusion of functional information also likely contain functionally related loci. However, utilization of functional information reduces the search space to render the problem computationally feasible, and brings forward POCOs that are more functionally relevant and robust, thereby leading to more parsimonious models. Based on the success of multi-locus genomic features in risk assessment, we conclude that combining these features with non-genetic risk factors and other biological data may lead to further improvements in risk assessment. The proposed method is implemented in MATLAB and provided in the public domain (http://compbio.case.edu/pocos/) as open source software.
Operationalizing racialized exposures in historical research on anti-Asian racism and health: a comparison of two methods

Background: Addressing contemporary anti-Asian racism and its impacts on health requires understanding its historical roots, including discriminatory restrictions on immigration, citizenship, and land ownership. Archival secondary data such as historical census records provide opportunities to quantitatively analyze structural dynamics that affect the health of Asian immigrants and Asian Americans. Census data overcome weaknesses of other data sources, such as small sample size and aggregation of Asian subgroups. This article explores the strengths and limitations of early twentieth-century census data for understanding Asian Americans and structural racism.

Methods: We used California census data from three decennial censuses spanning 1920–1940 to compare two criteria for identifying Asian Americans: census racial categories and Asian surname lists (Chinese, Indian, Japanese, Korean, and Filipino) that have been validated in contemporary population data. This paper examines the sensitivity and specificity of surname classification compared to census-designated "color or race" at the population level.

Results: Surname criteria were found to be highly specific, with each of the five surname lists having a specificity of over 99% for all three census years. The Chinese surname list had the highest sensitivity (ranging from 0.60–0.67 across census years), followed by the Indian (0.54–0.61) and Japanese (0.51–0.62) surname lists. Sensitivity was much lower for Korean (0.40–0.45) and Filipino (0.10–0.21) surnames. With the exception of Indian surnames, the sensitivity values of surname criteria were lower for the 1920–1940 census data than those reported for the 1990 census. The extent of the difference in sensitivity and the trends across census years vary by subgroup.

Discussion: Surname criteria may have lower sensitivity in detecting Asian subgroups in historical data as opposed to contemporary data, as enumeration procedures for Asians have changed across time. We examine how the conflation of race, ethnicity, and nationality in the census could contribute to low sensitivity of surname classification compared to census-designated "color or race." These results can guide decisions when operationalizing race in the context of specific research questions, thus promoting historical quantitative study of Asian American experiences. Furthermore, these results stress the need to situate measures of race and racism in their specific historical context.

Introduction

Why is historical research important for discussing anti-Asian racism and health?

Scholars have consistently identified gaps in the literature concerning Asian-American health (1,2) and associations between racial discrimination and health for Asian Americans (3). Calls to address these gaps have taken on new urgency in the United States, where scholars and activists have identified a rise of anti-Asian discrimination and hate crimes during the global COVID-19 pandemic (4)(5)(6)(7)(8). This surge in discrimination is not a new phenomenon; it exemplifies the racist association of Asian bodies with disease that originated on the West Coast of the United States in the mid-19th century (9,10). As many other scholars have attested, addressing contemporary racism and its impacts on health requires understanding its historical roots (11)(12)(13).
This paper does not examine a specific health outcome, but rather expands the discussion on methods and assumptions critical to historical health research. We explore the strengths and limitations of two different approaches to operationalizing racialized exposures, surname matching and enumerator racial classification, using historical census data from 1920, 1930, and 1940 (14) as a case study for Asian Americans. Operationalizing racism in different time periods requires carefully considering processes of racialization and the specific origins of different historical data sources (13,15,16). Archival secondary data such as historical census records lend themselves to quantitative analysis of structural dynamics that affect the health of Asian immigrants and Asian Americans. Historical census data overcome common weaknesses of other data sources, such as small sample size and aggregation of Asian subgroups. However, white supremacist and eugenic ideologies informed census enumeration procedures (9,10,17), raising questions about the validity of census racial measures over time. This presents challenges when operationalizing racialized exposures of Asian Americans using historical census data.

A brief outline

The remaining three subsections of our introduction further establish the theoretical groundwork for the comparison of surname matching and enumerator racial classification, synthesizing the varied and sometimes conflicting literature definitions of race and related terms, detailing challenges specific to quantitative historical research on Asian Americans, and outlining how racial classification and surname matching criteria operationalize racism or racialized exposures. The methods section describes the generation of the census datasets and surname lists used in the analysis, how well our populations of interest meet underlying methodological assumptions for application of surname criteria, the definition of validity measures calculated, and the analytical process. In the results section, we first present demographics and descriptive statistics of the three census populations, then tabulate the validity statistics we calculated alongside those calculated by Lauderdale and Kestenbaum with 1990 census data, and finally describe individual results for sensitivity, specificity, and PPV in more detail. Our discussion section compares our results to the validity measures for the 1990 census, contextualizes our validity measures for our populations of interest for analytical applications, offers possible explanations for lower-than-expected validity measures and the disagreement between the two classification methods, and connects our questions about the validity of these methods to literature examining similar questions for other populations or in other time periods. Finally, our conclusion connects our findings back to the broader research implications of our results and highlights the importance of these types of research questions to contemporary health outcomes.

What do we mean when we talk about race?

Many public health studies use racial classification as a proxy for racialized exposures. Unfortunately, many of these same studies fail to provide adequate methodological explanation of how race is conceptualized and operationalized when included in a study.
In fact, a systematic review that examined a stratified sampling of publications from five major epidemiology journals from 1995 to 2018 found that out of 329 studies including data on individuals' race and/or ethnicity, only four studies provided even a working definition of this construct, and the majority of studies were unclear about how they measured race and/or ethnicity (16). As Roberts and Adkins-Jackson et al. assert, researchers who do not sufficiently illustrate their basic conceptualization and operationalization of race in their studies end up "filter[ing] out the impact of race" (18) or reifying "erroneous assumptions about the biological differences between racialized groups" (19). Furthermore, Adkins-Jackson et al. identify problems associated with using race as a variable in place of racism (19), and a growing body of literature investigates more salient methods to measure and analyze racialized exposures and racism at structural, institutional, and interpersonal levels (19)(20)(21)(22)(23)(24)(25). Public health researchers conducting prospective studies should strongly consider incorporating these more nuanced methods into their study design, data collection, and analysis (16,(19)(20)(21)(22)(23)(24)(25). However, racial classifications remain an important if imperfect proxy (18), especially when conducting retrospective and historical research. Operationalizing racism in a meaningful way by using existing classification data requires a thorough understanding of several concepts related to and often conflated with race. Ethnicity, national origin, and ancestry are often incorrectly used as euphemisms for race (18,20,26). Factors such as immigration (9,10) and the collapsing of national or ethnic categories (27,28) require special consideration in the context of Asian racial formation in the United States. The remainder of this subsection outlines the conceptualization of race and related terms that we employ in this study, in line with recommendations for epidemiology and other health fields (16).

Race is now widely recognized as a social and political construct rather than an inherent, biologically-determined characteristic (17-19, 29, 30). Throughout history, varying physical characteristics have been ascribed social and political meaning to enforce hierarchies of power, with whiteness situated at the top (18,29,31). This racialization of bodies is highly context specific (10,29), developing and changing over time and across geographic location in a process Omi and Winant call racial formation (29). Racist scientific rhetoric helped maintain unequal and exploitive power structures, using flawed methodologies developed in the fields of phrenology and eugenics to assert that race was a measure of innate biological superiority or inferiority (17). Rejecting the biologic basis for race does not mean it is immaterial in the realm of health (18,29). Health inequities among racial groups stem from the social consequences of racialization, impacting health through biological mechanisms such as access to health resources and stress associated with institutional and interpersonal racism (18).

Ethnicity and race are not only conflated in meaning, but are often combined into a single term, "race/ethnicity" (20). In some ways a reaction to the externally ascribed nature of race (29), ethnicity is typically conceptualized as self-selected membership in a cultural group (20,22,29).
As with race, it is informed by a mix of nationality, ancestral national origin, and physical appearance (20,29,32). More nuanced definitions of ethnicity have incorporated a relational dimension, acknowledging external hierarchical influences on cultural identity and ethnicity (20). Importantly, ethnicities can also function as subcategories of racial groups (20). For example, Chinese-Americans, Indian-Americans, Japanese-Americans, Korean-Americans, and Filipino-Americans (along with numerous other ethnic groups) would comprise the pan-ethnic racial category of Asian-American (27,29). The current United States census definition of ethnicity incorporates the basic tenets of the cultural definition of ethnicity described above, but differs markedly in that it only delineates two ethnic groups, Hispanic and non-Hispanic (33,34), and allows those with Hispanic ethnicity to fall into any other racial category (21).

National origin refers to a person's country of birth (20,26). Nationality is sometimes used equivalently, but it constitutes a legal status associated with naturalization (10,35) and thus may also refer to a person's country of citizenship after migration. Asian ethnic groups are often condensed in the United States context post-immigration to adhere to national origin boundaries. However, this equivalency of ethnicity and national origin constitutes erasure of multi-ethnic states of origin, consolidating culturally diverse populations (36,37) into single American ethnic groups. For example, the Chinese population is made up of 56 officially recognized ethnic groups and many additional ethnic groups that do not have official government recognition (37). Yet ethnic groups within China such as Han, Zhuang, and Hui (37) rarely translate into hyphenated American identities in the way of Chinese-American identity. Furthermore, equating ethnicity and national origin does not account for international migration of previous generations (18), changes in state borders over time (10,18), and the existence of stateless peoples (10). Ngai argues that the supposedly objective characteristic of "national origin" had differential importance in defining social hierarchies for whites and non-whites when it was first created and defined in the early 20th century. Non-whites were grouped together mainly by race with national origin de-emphasized, whereas the foregrounding of national origins for Europeans served to selectively exclude "undesirable" European immigrants under the Immigration Act of 1924 (10).

Parental national origin or nationality informs ethnic identity and constitutes a component of ancestry. Ancestry typically denotes a person's broadly defined heritage or descent (22). More specifically, ancestry can refer to either ancestral national or cultural origin (20) or genetic or geographic ancestry (18). Roberts cautions against equating genetic or geographic ancestry with race given that the former concepts are biologically-defined and do not map onto discrete, socially created racial categories. This equivalence only serves to reify problematic conceptualizations of races as natural divisions among humans. Ancestry, when applied correctly, is a highly individual characteristic rather than a homogeneous group identity. It has the added conceptual advantage of allowing mixed ancestral nationalities and not needing the delineation of mutually exclusive categories (18).

Immigration plays a key role in the racialization of Asian immigrants and Asian Americans alike.
The racial triangulation theory posits that racialization occurs along two axes: inferior-superior and foreigner-insider (38). Public health narratives and xenophobic, racist rhetoric consistently portrayed Asian populations as unassimilable, perpetual foreigners, creating what Ngai calls "alien citizens" (10). Despite the demonstrated history of racializing immigrants on the basis of their perceived foreignness, research on immigrant populations in the United States tends to prioritize ethnicity at the expense of race. Some researchers have thus called for a "racialization" of immigration studies to incorporate critical race theory (39,40).

What challenges do we face when conducting historical quantitative research on Asian Americans?

Beyond the complexity in defining race and related concepts, historical quantitative research on Asian Americans is further complicated by methodological challenges and characteristics of available datasets. Historical data sources do not always systematically classify race, but racialization processes were nevertheless salient in the lives of the people in the dataset. For example, our analyses of the racialized implementation of California's eugenic sterilization program relied on Spanish surname (39) and Asian nativity (40) rather than explicit racial classification. The use of proxies to operationalize a racialized exposure was motivated by the inconsistent collection of race and ethnicity on the institutional forms that comprised our dataset. Historical research is limited to data that have already been collected and often cannot incorporate the many innovative methodologies that prospective survey data collection can facilitate.

As previously discussed, the boundaries of racial categories changed over time (41) and were politically motivated (10,42). Since researchers operate under their own contemporary racial socialization (15,16), they could potentially generate research questions predicated on contemporary understandings of race rather than the racial environment of the period of study. Unless rooted in the appropriate historical racial context (13,15), a flawed underlying conceptual model or inappropriate terminology could bias the research. Similarly, biases introduced into the data at the time of collection must be thoughtfully considered to properly operationalize the information therein. Determining which people to classify as Asian can be difficult if they are described in discriminatory or anachronistic language rather than as Asian or Asian-American. Various national and ethnic Asian subgroups were ascribed a group racial identity of Asian, "Asiatic," or "Oriental" through the early 20th century (10), but Asian-American only emerged as a named racial identity decades later during the civil rights era (29). In longitudinal studies the racial lexicon and hierarchies of multiple time periods must be taken into account, as well as the processes that produced changes in them over time. Thus, race-related variables may not be directly comparable and could require a harmonization process across time. Aggregation of different Asian subgroups can statistically mask disparate health outcomes (3,28). The aggregation of Asian subgroups into a larger Asian-American or Asian American Pacific Islander category can falsely homogenize the experiences of diverse populations.
As a pan-ethnic group, Asian Americans in some studies have been shown to have better economic outcomes compared to the overall United States population (43) and similar or better health outcomes compared to white Americans (44). However, aggregation can statistically mask important ethnic differences in residential and occupational segregation (45), economic inequality (43), and health disparities (3,28,46). Decisions to aggregate Asian-American subgroups into a single racial category often stem from limitations in data sources, sample size, and feasibility of sampling or analysis rather than from a theoretically salient research question. As is true with contemporary data sources, historical data may lack granular racial or ethnic information. For example, vital statistics compiled by the Los Angeles County Health Department in Annual Health Reports from 1915 to 1926 include only five "racial" categories: White, Black, Mexican, Japanese, and Other. Depression-era reports present vital statistics by the two categories of White and Mexican (47).

How do census racial classification and surname matching operationalize racism or racialized exposures?

Census classification

One approach to operationalizing racialized exposures is by using census racial classification as a proxy for racialized exposures. Self-enumeration did not become the standard until the 1970 census (48); in prior years this variable measures the census enumerator's external and socially-informed judgment of a person's racial identity. Census enumeration instructions (see methods and figures for more detail) did not clarify how the enumerator should make this judgment (49-51), implying that elements of such a classification system were commonly known and accepted. Census procedures in the early 20th century did not preclude racial self-identification, but phenotypic observation, residential proximity to ethnic neighborhood enclaves, national origin, parental birthplace, or a combination of those factors likely also influenced the enumerator's ultimate choice of classification. Thus, this classification method captures the observed and known racial ancestry dimensions of race, with possible influence of self-classification as well (52). Census racial classification has numerous strengths for examining health at the population level, whether by itself or in conjunction with other datasets. Health researchers frequently employ the demographic information provided in the census as exposures (neighborhood-level characteristics, socioeconomic status), outcomes (morbidity, mortality, disease incidence), or covariates (age, sex). In addition, census data can provide population-level denominators; stratifying these denominators by race can reveal racial disparities (53). While self-identification of race is currently the standard in federal data collection (54), Kaplan and Bennet argue that "self-report may not fully capture the effects of discrimination, which is more likely to be based on observers' perceptions than on self-perception" (55) and Cobb et al. illustrate how "socially-assigned" dimensions of race shape health disparities (56). In 1970, the Census Bureau compared self-identified race with enumerator-observed race. Although agreement was fairly high between the two measures for white and black populations (>95% agreement), a much lower level of agreement (73%) was found for Asian and Native American populations (48).
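The kind of agreement comparison reported for the 1970 census can be computed for any pair of classification columns. The sketch below is illustrative only; the column names and toy records are hypothetical and do not come from Census Bureau data.

```python
# Illustrative sketch: percent agreement between enumerator-assigned and
# self-reported race, analogous to the 1970 Census Bureau comparison cited
# above. Column names and records are hypothetical toy data.
import pandas as pd

df = pd.DataFrame({
    "race_enumerator":  ["Chinese", "Japanese", "White", "Korean", "Japanese"],
    "race_self_report": ["Chinese", "Chinese",  "White", "Korean", "Japanese"],
})

agree = df["race_enumerator"] == df["race_self_report"]
print(f"overall agreement: {agree.mean():.0%}")
# Agreement within each enumerator-assigned group (the text reports >95% for
# white and Black respondents but only 73% for Asian and Native American
# respondents in 1970).
print(df.assign(agree=agree).groupby("race_enumerator")["agree"].mean())
```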
The racialized nature of census enumeration means these historical enumerator categorizations may more directly capture some elements of structural racism beyond the proxy level. Ironically, the messy conflation of race, ethnicity, national origin, and even religion (Asian Indians were called "Hindus" regardless of religion) may constitute a relative strength of early twentieth-century census data: rather than a pan-ethnic "Asian" category, the census documented multiple separate "races" (Chinese, Japanese, etc.), which provides disaggregated data for what today would be considered Asian-American subgroups.

Surname criteria

Surname classification has been used to supplement racial classification, or in place of ethnic classification, when racial or ethnic information is absent or limited for many different racial, ethnic, religious and national origin groups. This includes Hispanic or Latino groups (57), people of Arab ancestry (58,59), European ethnic groups or descendants from specific European countries (60), American Jews (61,62), South Asians, Asian Americans (63), and others (64,65). Methods range from matching surnames to existing lists, using surnames in combination with other information such as geographic residence (66), and using hot deck imputation procedures that use surnames in conjunction with racial or ethnic information from similar people in a dataset (67). Although often used as a proxy for race [capturing elements of the interaction-based observed race and known racial ancestry dimensions of race (52)], it is more accurate to say that surnames may provide insight into ethnicity or ancestral national origin. Historically, many surnames have been distinctive to particular language, culture and ethnic groups. Surname lists are sometimes classified by country of origin (e.g., German, Japanese), but may also be used to distinguish multiple ethnic groups within a particular country, or may identify ethnic groups that span multiple countries. As surnames are typically passed down through families, a person's surname may provide information about the cultural origin of at least one line of ancestry. Some experimental studies have found that surnames in themselves can lead people to be exposed to racist discrimination (68,69). At a population level, surnames can provide a valuable clue about the distribution of ancestral national origin and open up analytic possibilities for data sources that do not have reliable information about race and ethnicity. Surname matching could improve sampling when oversampling or restricting to individuals from a specific ethnic group (70). In addition, when data sources such as the census have inconsistent racial categorizations over time, surnames can provide the needed standardization to classify people for longitudinal research (13). However, using surnames as a proxy for ascribed race or ethnicity relies on many assumptions, and the usefulness of surname matching will vary across populations and time periods depending on the prevalence of different ancestral groups in the population, enduring legacies of colonization and enslavement, family name practices (e.g., name order, name changes at marriage, patronymic vs. matronymic surnames), rates of marriage between ancestral groups, and other factors. The use of specific surnames as a proxy for nationality or ethnicity rests upon four main methodological assumptions:
1. Family name was accurately recorded as surname in the source data (though name order varied by culture).
2. There was a low prevalence of intermarriage with other ethnic or racial groups.
3. Second and subsequent generations have similar surnames to those of first-generation immigrants.
4. The population does not contain multiple subgroups with similar surnames.

The validity of surname matching as a proxy for race or ethnicity depends on the extent to which these assumptions are met. Past research, primarily examining Spanish surname criteria, has found that the validity of surname matching criteria varied according to sex (surname criteria had better sensitivity and specificity for men than women) (71,72); social class (surname criteria had better sensitivity and specificity for people of low socioeconomic status compared to high socioeconomic status) (72,73); and colocation of ethnic groups with similar surnames (e.g., Spanish surname criteria are less valid in populations that also have high concentrations of Filipino, Italian, or Portuguese individuals) (71,73,74). This paper evaluates the validity of contemporary Asian surname matching classifications relative to enumerator racial classification in the 1920, 1930, and 1940 censuses.

Methods

Census data

We used restricted Preliminary Complete Count United States census microdata for 1920, 1930, and 1940 from IPUMS USA (75), which includes individual-level name and demographic information. These datasets were generated by IPUMS USA through collaboration with Ancestry.com. Ancestry.com digitized and transcribed the original handwritten census broadsheets (Figure 1), and IPUMS abstracted these transcriptions into a dataset and performed cleaning and quality checks. More details on the production of these datasets are available elsewhere (76)(77)(78)(79)(80)(81). We restricted this analysis to census data from California, a state that has long been home to multiple Asian national origin groups. For the present analysis, we used information on individuals' sex, age, assigned race, and surname. During this historical period, census enumerators collected data on handwritten "population schedules." Instructions to Enumerators documents from 1920, 1930, and 1940 lend insight into the norms and standards for data collection during this time (49-51). For each census, enumerators were instructed to approach each dwelling in their assigned enumeration district and record information on each resident of the household. The instructions do not specify how information is to be obtained, whether through respondent self-report or enumerator assessment. The use of interpreters was not encouraged; the 1920 instructions suggest: "In the case of an occasional family that does not speak English or any language which you speak, you can usually get along without the aid of a paid interpreter. If you cannot make the head of the family understand what is wanted, call upon some other member of the family; and if none of the family can understand, then, if possible, obtain the unpaid assistance of some neighbor of the same nationality." The instructions do describe a process for arranging for interpreter services, but state that "the law does not contemplate that interpreters shall be employed to assist enumerators except in extreme cases" (49). Nearly identical instructions were used in 1930 and 1940 (50,51). Enumerator instructions for sex, age, and place of birth are consistent across the 1920, 1930 and 1940 censuses. Enumerators were instructed to classify sex as "M" or "F"; age in years as of April 1 of the census year; and place of birth (country or U.S. state).
Some census racial categories (i.e., "Mexican" and the terms used for Black Americans) changed across the three decennial census years in this study, but the categories for people of Asian origin remained consistent: "Chinese," "Japanese," "Filipino," "Hindu," and "Korean." Figure 2 depicts excerpts from the instructions to enumerators in each of the census years. Instructions varied across census years regarding respondents who did not fit into the specified categories: in 1920, enumerators were instructed to write "Ot" for other and write the respondent's race in the margin; in 1930 and 1940, they were to "write the race in full." The 1940 instructions further specified that "Any mixture of white and nonwhite should be reported according to the nonwhite parent. Mixtures of nonwhite races should be reported according to the race of the father, except that Negro-Indian should be reported as Negro." Instructions on recording surnames are brief: "Enter first the last name or surname, then the given name in full, and the initial of the middle name, if any" (49-51).

Surname classification

We used Lauderdale and Kestenbaum's validated surname lists for Asian Indian, Chinese, Filipino, Japanese, Korean, and Vietnamese origin groups, which together include a total of 20,693 surnames (70). These six subgroups constituted approximately 90% of Asian individuals in the dataset used to generate the lists. These lists, originally published in 2000, continue to be applied in multiple disciplines, including political science (82,83) and psychology (83). The lists were constructed from administrative data, including data on all persons entitled to social security benefits or enrolled in Medicare, regardless of nativity. Surnames were considered "predictive" if at least 50% of persons with the surname were associated with a specific national origin and "strongly predictive" if at least 75% of persons with the surname were from a specific national origin. The authors generated "conditional" lists for use with surname data that can be restricted to people classified as Asian race, and "unconditional" lists for use in datasets with no race information. To improve the specificity of the Filipino unconditional list, that list excludes all surnames on the Spanish surname list used by the United States Census Bureau. The authors validated the lists against a subfile of 1990 census data, which included a younger population and a higher proportion of United States nativity than the original data sources (70). Lauderdale and Kestenbaum's lists are methodologically strong compared to other Asian surname lists, in that they have the broadest coverage of Asian ethnic groups and were constructed from a reference population of sufficient size (65). To classify surnames in the census data, we matched the unconditional, predictive surname list to the surname field in the census data and created indicator variables for individuals whose surname matched with each one of the origin groups. We used the "unconditional" list (as opposed to the list used conditional on classification of Asian race) because census data did not use an overall "Asian" category but rather used Asian subgroups. We used the "predictive" list (as opposed to the "strongly predictive" list) to expand the sensitivity, or coverage, of the list. We excluded Vietnamese surnames from this analysis because, unlike the other five origin groups, Vietnamese origin was not classified as a "race" in the census years under study.
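The surname-matching step described above amounts to flagging records whose surname appears on each origin-group list and storing the result as indicator variables. A minimal sketch of that step is shown below; the toy census records, the handful of example surnames, and the column names are placeholders rather than the actual Lauderdale and Kestenbaum lists.

```python
# Minimal sketch of the surname-matching step: flag each record whose surname
# appears on one of the unconditional predictive surname lists and store the
# result as indicator variables. The toy records and the few surnames shown
# are placeholders, not the published Lauderdale and Kestenbaum lists.
import pandas as pd

census = pd.DataFrame({"surname": ["Tanaka", "Garcia", "Kim", "Singh", "Wong"]})

surname_lists = {
    "chinese":  {"WONG", "CHAN", "CHEN"},
    "japanese": {"TANAKA", "SATO", "YAMAMOTO"},
    "korean":   {"KIM", "PARK", "CHOI"},
    "indian":   {"SINGH", "PATEL", "SHARMA"},
    "filipino": {"BAUTISTA", "SANTOS", "REYES"},
}

cleaned = census["surname"].str.strip().str.upper()
for group, names in surname_lists.items():
    census[f"surname_{group}"] = cleaned.isin(names).astype(int)

print(census)
```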
Extent to which surname matching assumptions apply to census data on Asian Americans in 1920-1940

Family name was accurately recorded as surname in the source data

While some Asian cultural groups use a so-called "Eastern name order," in which surname precedes individual or given name, name order practices have varied over time and across contexts. For example, Japanese passports used the Eastern order of naming until 1896, when they adopted the Western naming order, which would have been in use in Japanese passports during the period of this study (35). Conversely, Chinese and Korean cultures maintained the practice of listing family names first (35). Meanwhile, though Filipino and South Asian names often followed the "Western order," they may have incorporated multiple family names or surnames, based on maternal maiden names, caste, religion, geography, or honorifics (89,90). This could lead to incorrect segmentation or transposition of multicomponent surnames, as has been observed for two- and three-character Chinese names (91). The discussion section will further elaborate on the potential impact of census enumerators incorrectly entering surnames (Figure 2).

Low prevalence of intermarriage with other ethnic or racial groups

Name change at marriage may be less of an obstacle to validity in the present study for a number of reasons. First, name change at marriage is not the norm in all Asian subgroups (35). Furthermore, intermarriage between whites and Asians was legally prohibited by California's anti-miscegenation law. However, this statute did not regulate Asian interethnic marriages or marriages of Asian individuals to other non-white individuals (92,93). Some interracial couples likely found ways to circumvent the prohibitions on interracial marriage, but the degree to which these unions occurred is largely unknown (93). Research using 1990 census data identified relatively low rates of out-marriage in Asian immigrant adults aged 65 or older. Among Chinese, Filipino, Indian, Japanese, Korean and Vietnamese men, few had married outside of their own Asian subgroup, with prevalence of out-marriage ranging from 4% of older Chinese men to 12% of older Filipino men. For older married women, the proportions of out-marriage ranged from 6% of Chinese women to 13% of Japanese women and 16% of Korean women. It is thus a reasonable assumption that the prevalence of out-marriage was similarly low in 1920-1940 (94).

Second and subsequent generations have similar surnames to those of first-generation immigrants

Although there are certainly cases of Asian immigrants changing or anglicizing surnames after arrival in the United States (95), scholarship suggests that anglicizing names was not as common of an assimilation strategy for Asian immigrants as for some other racialized groups (96). Furthermore, because the surname list comes from administrative data on Asian immigrants, the surname list likely includes anglicized versions of Asian surnames that are prevalent in the immigrant population. As the source data for the surname list come from 1990, an additional assumption is that an Asian immigrant surname list from 1990 would not be missing important surnames of Asian immigrants and Asian Americans in the early 20th century. We do not have reason to believe that a surname list developed in 1990 would be inappropriate to apply to populations from the early 20th century.
Although the distribution of ethnic subgroups among Asian Americans has shifted over time, and early migration from China in the time of the Chinese Exclusion Act often centered around particular clans that shared the same family name (97), a surname list has no indication of the frequency or distribution of specific surnames; it is simply a list of all names, and most names present in the Asian and Asian American population in 1920-1940 would likely still be captured on a surname list in 1990.

The population would not contain multiple subgroups with similar surnames

Of the four assumptions, this one is most doubtful when considering Asian Americans in 1920-1940. First, the authors who developed the surname list excluded six surnames (Ha, Jung, Ko, Lee, Lim, Tan) that are common across multiple Asian subgroups and could not reliably predict a specific subgroup. While most subgroups have quite distinctive surnames, Filipinos in California have substantial surname overlap with Latinos due to their common histories of Spanish colonization, which means a criterion based solely on common Filipino names would falsely identify many people of Spanish or Latin American descent. To avoid this, the creators of the unconditional surname list excluded all Filipino names that are on the Spanish surname list from the 1990 census (57), reducing the number of non-Filipinos who are classified as Filipinos, but also missing many Filipinos with Spanish surnames.

Analysis

All statistical analyses were performed using Stata version 16 (StataCorp LP, College Station, TX). We restricted the analysis to individuals with complete data on assigned race, sex, age, and surname. We calculated descriptive statistics for each census year, presenting frequencies within census year for assigned race, sex, age, and surname categories. To assess the validity of the surname lists in the census data in each census year, we calculated the sensitivity, specificity, and positive predictive value (PPV) of each surname subgroup classification, using census-designated race as the comparison. Though these measures are often used in clinical settings to quantify the validity of diagnostic tests or screenings, they can also be used to examine the validity of a dichotomous exposure variable (98), such as membership in a racialized group. Sensitivity indicates the proportion of "true positives," or people in a census racial group whose surname is on the list for that subgroup. Specificity indicates the proportion of "true negatives," or people who were not assigned a given racial group in the census whose surnames were also not on the surname list for that group. Finally, the positive predictive value (PPV) refers to the proportion of people in a surname group who are also assigned that census racial group (71). The PPV is highly variable across populations because it is influenced by the prevalence of the exposure in the population of interest (98). See Table 1 for the formulas used to calculate these probabilities, using the Japanese surname list as an example. See the supplement for the final two-by-two frequency tables used to calculate these three measures for each of the five surname lists.
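The Table 1 calculations, and the dependence of PPV on group prevalence noted above, can be illustrated with a short sketch. The counts, column names, and the round sensitivity and specificity values in the prevalence demonstration are hypothetical, not estimates from the census data.

```python
# Illustrative sketch of the Table 1 validity measures, using the Japanese
# surname list as the example, followed by a demonstration of how PPV shifts
# with group prevalence at fixed sensitivity and specificity.
# All values, column names, and counts are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "census_japanese":  [1, 1, 1, 0, 0, 0, 0, 0],   # census-designated race
    "surname_japanese": [1, 1, 0, 0, 0, 0, 0, 1],   # surname-list indicator
})

tp = ((df.surname_japanese == 1) & (df.census_japanese == 1)).sum()
fp = ((df.surname_japanese == 1) & (df.census_japanese == 0)).sum()
fn = ((df.surname_japanese == 0) & (df.census_japanese == 1)).sum()
tn = ((df.surname_japanese == 0) & (df.census_japanese == 0)).sum()

sensitivity = tp / (tp + fn)   # census-Japanese records captured by the list
specificity = tn / (tn + fp)   # non-Japanese records correctly left unmatched
ppv = tp / (tp + fp)           # surname matches that are census-Japanese
print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  PPV={ppv:.2f}")

# PPV depends on prevalence even when sensitivity and specificity are fixed:
def ppv_at(sens, spec, prevalence):
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in [0.0005, 0.005, 0.02]:   # a rare vs. a more common subgroup
    print(f"prevalence={prev:.2%}  PPV={ppv_at(0.45, 0.995, prev):.2f}")
```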
Validity calculations require designation of one of the two methods as the "reference standard," more commonly referred to as the "gold standard." However, the term "gold standard" implies credibility even if the validity and accuracy of the reference itself is uncertain. Thus, we emulate others in using the more neutral "reference standard" terminology instead (99). We have chosen to label the census racial categories our reference standard, not because we believe it to be theoretically more valid than a surname match, but because an explicit racial classification in a data source is generally used as the default unless it is unavailable. Census racial designations are not objective truths; rather, census enumerators were subject to their own implicit and conscious biases and played active roles in a racial project of categorizing people. We further comment on issues pertaining to use of census-designated race as a reference (or "gold") standard in the discussion section. We compare the validity measures from the 1920, 1930 and 1940 censuses to those calculated by Lauderdale and Kestenbaum when applying the same unconditional predictive surname classification list to a subsample of the 1990 census.

Results

We began with the complete count of data for California (n = 3,433,668 in 1920, n = 5,669,757 in 1930, and n = 6,879,664 in 1940), and excluded people missing data on assigned race, age, sex and surname for a final sample of n = 3,260,722 in 1920, n = 5,317,087 in 1930, and n = 6,558,462 in 1940. Table 2 displays demographic characteristics of California's population in each decennial census year. California's population grew substantially between 1920 and 1930. All of the Asian subgroups included in this analysis grew as well, but some did not keep pace with statewide population growth such that their percentage of the total population declined (e.g., from 2.03% Japanese in 1920 to 1.78% in 1930). The Filipino population grew dramatically, from 1,619 in 1920 to 21,099 in 1930. Between 1930 and 1940, the number of people classified as Chinese or Filipino stayed relatively constant, while there were decreases in the number of people classified as "Hindu," Japanese and Korean. Across successive census years the age distribution of California's population grew slightly older, and the sex distribution shifted to be more balanced, with a higher proportion of female residents each year. The Asian surname groups are smaller than the corresponding census-assigned race groups for each year. Table 3 presents the sensitivity, specificity and PPV comparing the two classification approaches for each Asian subgroup, by census year. We also include sensitivity and PPV from comparing the surname list with 1990 census data, published elsewhere (70). With the exception of Indian surnames, the sensitivities of surname criteria are lower in 1920-1940 census data than in the 1990 census. The extent of the difference varies by subgroup; the sensitivity of the Chinese surname criteria in 1930 (0.67) is not far from the sensitivity in 1990 (0.70). By contrast, the sensitivity of Japanese surname criteria throughout 1920-1940 is substantially lower than in 1990. The sensitivity of Indian surname criteria was substantially higher in 1920-1940 (0.54-0.61) than in 1990 (0.38). Trends in sensitivity across census years also vary by subgroup.

Discussion

This paper examined the effectiveness of using surnames to classify Chinese, Indian, Japanese, Korean and Filipino subgroups in census data from 1920 to 1940. We found markedly lower agreement between surname category and census-designated race in 1920-1940 compared to the application of the same surname criteria in 1990.
Sensitivity (the proportion of "true positives") indicates the proportion of people in a census racial group whose surname is on the list for that subgroup. Surname criteria identified more than half of people assigned to the Chinese, Indian and Japanese census racial groups across census years 1920-1940. However, the Chinese and Japanese surname lists identified a lower proportion of people classified in those racial groups than when the same lists were used with the 1990 census. The sensitivity of the Korean surname list was lower, identifying 40-45% of people the census categorized as Korean. The sensitivity of the Filipino list was even lower, only identifying 10-21% of people assigned Filipino race on the census. All surname lists had specificity (proportion of "true negatives") greater than 99%, meaning that nearly all the people who were not assigned a given racial group were also not on the surname list for that group. Fewer than 1% of people were falsely identified through surname criteria for that group. Positive predictive value (PPV, the proportion of people in a surname group who are also assigned that census racial group) varied widely across subgroups and census years. While sensitivity and specificity describe the validity of a classification system itself, PPV varies with the population prevalence of the characteristic being measured. This explains much of the variation in PPV in historical census data compared to the 1990 census comparison. For example, the low PPV of the Korean surname list in 1920-1940 corresponds to the much smaller Korean population during those years compared to 1990. By contrast, Chinese and Japanese groups were a larger proportion of the California population in 1920-1940 than of the total United States population in 1990. The Filipino population grew dramatically between the 1920 and 1930 censuses; as expected, the PPV increased in turn. Generally speaking, operationalizing Asian racial subgroups using surname underestimates the size of the groups in historical census data, but minimally misclassifies non-Asian people as members of Asian subgroups. As expected, Filipino surname criteria had the lowest sensitivity of the five subgroups in 1920-1940 and 1990. Korean surnames had very low positive predictive value in 1920-1940. Overall, this raises caution about the use of validated Asian surname criteria as a proxy for racial origin in historical data, particularly for people of Filipino and Korean descent.

Limitations

One key limitation of our study is that our calculation of validity measures demonstrates the level of agreement between the two classification methods but does not reveal whether they are statistically different, because each comparison examines a separate dichotomous variable (i.e., Japanese and non-Japanese as defined by each of the two methods) rather than a complete racial distribution. While the census racial designation dichotomous variables are drawn from a categorical racial distribution, the surname method does not easily generate such a distribution. The creators of these surname lists caution against using a combination of the lists to identify an overall "Asian-American" group, as it would lead to overrepresentation of surname groups whose lists have higher sensitivity (70).
Future research could attempt to adjust for the different sensitivities of each surname list to enable a formal statistical comparison of categorical racial distributions from surname lists and census racial classifications. Another limitation of this study is that we were unable to quantify potential error in the census dataset or fully account for the impact of this error on our validity measures. While errors introduced during the original enumeration might be of interest to researchers in and of themselves, digitization or indexing introduces another layer of error. Transcription errors of Asian surnames at both stages of dataset creation remain underexamined in the literature. One study comparing two independent transcriptions of the 1940 census found that both first name (7.2% vs. 14.3%) and surname (17.0% vs. 31.5%) transcriptions disagreed almost twice as often for individuals born in Italy (chosen to represent the non-English-speaking foreign-born population) as for those born in England (chosen to represent the English-speaking foreign-born population) (100). We found only one paper that considers transposition of family name and individual name for an Asian subgroup specifically (91). Postel identifies three types of issues commonly found in the recording of Chinese names: segmentation, name order, and standardization. These types of mistakes were geographically and temporally inconsistent across enumeration contexts. For example, segmentation errors during indexing led 79% of Chinese individuals to have either their first or last name missing because all the components of their name were allocated to a single variable rather than being split into a personal name and a family name (91). All of these factors could undermine our assumption that family names were accurately recorded under surname for Chinese immigrants. Finally, an important consideration and caveat when comparing validity statistics across census years is that the reference standard, census racial classification, is far from a gold standard, and certainly varied substantially in its accuracy in 1920-1940, when census enumerators assigned race, compared to 1990, when race was supposed to be self-identified. Our study did not account for the enumerator bias present within the census racial classifications themselves, but the work of other scholars (9,101,102) can serve as a guide to future efforts to quantify bias in census racial classifications. As such, differences in validity statistics may reflect inadequacies of census racial classification as well as the appropriateness of surname classification. With this caveat in mind, we believe that the factors contributing to the lower validity of the surname criteria found in our analysis are many and complex, and thus need additional research to disentangle. The following section highlights some possible explanations for the lower validity, each of which represents a promising path for future improvement of the use of surname criteria, census racial classifications, or both.

Possible explanations for disagreement between surname criteria and census classifications

Based on the historical research and limited quantitative analysis of the dynamics at play in the changes in surname patterns and the assignment of race during census enumeration, we can speculate about a few possible factors.
Census enumeration instructions from each decade reinforce the agency given to individual enumerators in assigning race, even within the bounds of their official instructions and training, as well as challenges they faced in their task. The Census Bureau did not prioritize use of translators and instead relied upon the unpaid translation work of family members or neighbors (49-51). Census workers collecting information from more recent immigrants were attempting to communicate with people who may have spoken an unfamiliar language with an unfamiliar alphabet, which likely produced errors in both the spelling and romanization of surnames and the categorization of race. These communication barriers could further interact with imbalanced power dynamics in a variety of congregate living settings, with foremen or institutional authorities of a different race making decisions about racial classification and spelling of names even further removed from the individual being described than in a typical enumerator observation. Beyond the impact of language barriers, both the implicit and conscious biases of census enumerators likely impacted their classifications (101,102). These studies of census enumeration in Puerto Rico provide evidence that census instructions and procedures sometimes conflicted with enumerators' socially-defined conceptualization of race based on appearance or phenotype, and they emphasize the active role census enumerators played in this act of racialization. They encountered thousands of instances across both censuses where a small group of census supervisors "corrected" the racial categorizations in post-enumeration edits of the census broadsheets. These edits to an individual's race were usually performed on the basis of parental race or similar rules of racial heritability or "racial logic," suggesting contested racialization processes and a degree of error inherent in attempting to impose simplistic logic onto ambiguous sociopolitical categories (101,102). While the Asian population of Puerto Rico was small and thus did not feature in their commentary, it stands to reason that the enumerators hired and trained under the same federal agency, the United States Census Bureau, were able to play similarly active roles in the racialization of the people they enumerated, albeit under a different regional and sociopolitical context. It is unclear whether a similar editing process took place in the California censuses, but investigating the original census broadsheets could be a rich avenue for future study if data use agreements allow. Shah presents an earlier relevant example of the role of enumerator racial bias, specifically anti-Chinese bias, on census data collection. In the 1870 United States census, two census enumerators with known biographical information produced vastly different counts of the number of Chinese women with an occupation of "prostitute" in their respective enumeration districts in San Francisco's Chinatown. One enumerator often divided the Chinese residents in congregate living situations into two families by sex and listed the occupation of all the men as "laborer" and all the women as "prostitute." This resulted in 90% of Chinese women over the age of 12 being designated prostitutes in his district. The other enumerator, who was more sympathetic to Chinese immigrants, recognized more complex family delineations and only designated 53% of Chinese women over the age of 12 as prostitutes (9).
As briefly mentioned in the introduction, ethnic and regional diversity within the Asian countries of origin may contribute to the lack of agreement between census racial categorization and surname match. In the age of Chinese exclusion, which covers the entirety of our study period, Chinese immigrants navigated a complex array of United States immigration and naturalization laws. Thus, continued migration, though occurring at lower numbers than before, was often facilitated by clan (i.e., surname) associations of Chinese immigrants and their American-born children already in the United States (97). While the sensitivity of the Chinese surname list as applied in our analysis was comparable to the sensitivity reported by Lauderdale and Kestenbaum (70), the contextual knowledge of prevalence of specific Chinese surnames could perhaps be used to further improve sensitivity in future applications. In contrast, our sensitivity values for the Japanese and Filipino surname lists were much lower overall than those reported by Lauderdale and Kestenbaum. There is also evidence of regional emigration patterns and labor recruitment practices in Japan and the Philippines (103,104) and migrants from these regions could have had distinct surname patterns that changed more over time than Chinese surname distributions. Global power structures, especially in an age of imperialism, had a significant impact on surnames in certain contexts. Of particular note, our period of study coincides with periods of colonial oppression of two national origin groups: the Japanese occupation of Korea from 1910 to 1945 and American control of the Philippines from the late 1890s to 1934. In 1939 the Japanese government enacted legislation pressuring Koreans to assimilate to Japanese society by changing their surnames, resulting in many ethnic Koreans possessing Japanese surnames in the later period of Japanese occupation (104). This practice likely occurred too late to affect Korean immigrants or Korean-Americans in our study population; however, some scholars claim this practice began earlier (105), both involuntarily and voluntarily, with some upper-class Koreans adopting Japanese surnames to increase social status (106). Furthermore, Korea had only recently attained independence from Chinese rule in the late 1890s, so the influence of Chinese rule on surnames likely persisted as well (106). Unlike in the case of shifting political boundaries in Europe (e.g., German Poland, Russian Poland, Alsace-Lorraine, Bavaria etc. in 1920), the census enumeration instructions did not specify how the Japanese occupation of Korea would affect the recording of race or birthplace for either Japanese or Korean individuals (49-51). The United States occupation of the Philippines followed several centuries of Spanish colonization of the archipelago. Early Filipino immigrants to the United States were largely pensionados, or government-sponsored scholarship students from upper-class Filipino families, and self-supporting students from middle-class families. The drastic increase in the number of Filipinos in the United States through the 1920s (especially after the Immigration Quota Act of 1924 barred immigration from other Asian countries that had previously provided a steady source of immigrant workers) was driven primarily by laborers (10). 
If this shift in socioeconomic status of Filipino immigrants manifested in differential surname patterns, it may have contributed to the jump in sensitivity from 10 to 21% between the 1920 and 1930 censuses. Implications and conclusion This paper adds to the literature by extending Asian surname criteria matching to historical data. While Asian surnames have been used in multiple health studies (86)(87)(88), we encountered only a few examples of applying surname criteria in the context of historical research or historical health research (107)(108)(109)(110)(111). Spanish surname lists have been used extensively, but the potential to identify Asian persons in data sources without information on race, and to differentiate among Asian subgroups within historical data sources with less specific racial classifications remains relatively untapped. Historical data sources present rich opportunities to document and analyze dynamics of anti-Asian racism that underpin current inequities. Historical events still affect contemporary health outcomes, whether they manifest through intergenerational trauma or in the biases of the very data relied upon for longitudinally assessing population-level health (13). Public health scholars can heed calls to examine our history in order to understand and dismantle contemporary injustices (11)(12)(13). A growing literature uses historical data to examine structural drivers of Black-white racial inequalities in health (11,(112)(113)(114)(115), but research extending this approach to other racialized groups is limited, partly because of the inconsistency or unavailability of historical data on these populations. The lower level of agreement between the surname-criteria and census designation in measuring race does not mean the data are not useful or valid, only that one method may be more valid for specific research questions and that each has its own limitations that should be accounted for in discussing results. In fact, some of the possible biases in the census racial data and their effects on the dataset pose interesting research questions in and of themselves. Additional research could explore multifactor measures of race and ethnicity (17,20) and explicitly test the underlying assumptions of surname analysis. Class-based paradigms of race (29) suggest occupation in the context of exploitative labor practices could be one census variable used in such a multifactor measure. Molina's analysis of discrimination against Chinese launderers in the name of "public health" in early 1900s Los Angeles further supports this suggestion (47). Quantitative researchers may shy away from the complexity of conducting historical research about racism and health, but we hope this study exemplifies how variables can be used thoughtfully and contextually while still producing categories feasible for analysis. Demography and statistics were once used by white supremacists and eugenicists as tools to "prove" the biological inferiority of non-white people. The census was not only used in service of racist research, but was in turn shaped by the research goals of those same political actors. Unless we adequately interrogate our usage of this same data, we risk reproducing the harm of racist power structures.
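For readers who wish to reproduce this type of comparison, a minimal sketch follows of how the sensitivity and positive predictive value of a surname-list match against the census racial classification could be computed for one group. It is not the code used in this study; the column names and toy records are invented for illustration only.

    import pandas as pd

    # Toy records standing in for linked census rows; column names are hypothetical.
    df = pd.DataFrame({
        "census_race":   ["Chinese", "Chinese", "White", "Japanese", "Chinese", "White"],
        "surname_match": [True,      False,     False,   False,      True,      True],
    })

    is_group = df["census_race"] == "Chinese"
    tp = ( df["surname_match"] &  is_group).sum()   # surname match and census-classified Chinese
    fp = ( df["surname_match"] & ~is_group).sum()   # surname match but not census-classified Chinese
    fn = (~df["surname_match"] &  is_group).sum()   # census-classified Chinese missed by the list

    sensitivity = tp / (tp + fn)   # share of the census-classified group captured by the surname list
    ppv         = tp / (tp + fp)   # share of surname matches who are census-classified in that group
    print(sensitivity, ppv)

Computed per census year and per surname list, these are the quantities whose disagreement is discussed above.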
Data availability statement The data analyzed in this study is subject to the following licenses/restrictions: Researchers can access restricted complete count data (including names and string variables) for United States censuses 1870-1940 through a research agreement with IPUMS United States. Requests to access these datasets should be directed to ipums@umn.edu. Author contributions MK, NN, and SG conceptualized the study and developed the methodology. MK developed the background and theory in consultation with MK, NN, SG, SH, and AS. MK and SG cleaned the data. SG and NN conducted the statistical analysis. MK and NN prepared the manuscript draft. All authors contributed to the article and approved the submitted version.
2023-07-11T15:52:48.340Z
2023-07-06T00:00:00.000
{ "year": 2023, "sha1": "9f7a310c44d8a58681afd2b4ea551f3a68a80aec", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2023.983434/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0af2e9e365dd9f7a350735f10f6fe8c381c331b6", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [] }
252155954
pes2o/s2orc
v3-fos-license
Design, Synthesis, and Bioactivities of Novel Tryptophan Derivatives Containing 2,5-Diketopiperazine and Acyl Hydrazine Moieties Based on the scaffolds widely used in drug design, a series of novel tryptophan derivatives containing 2,5-diketopiperazine and acyl hydrazine moieties have been designed, synthesized, characterized, and evaluated for their biological activities. The bioassay results showed that the target compounds possessed moderate to good antiviral activities against tobacco mosaic virus (TMV), among which compounds 4, 9, 14, 19, and 24 showed higher inactivation, curative, and protection activities in vivo than that of ribavirin (39 ± 1, 37 ± 1, 39 ± 1 at 500 mg/L) and comparable to that of ningnanmycin (58 ± 1, 55 ± 1, 57 ± 1% at 500 mg/L). Thus, these compounds are a promising candidate for anti-TMV development. Most of these compounds showed broad-spectrum fungicidal activities against 13 kinds of phytopathogenic fungi and selective fungicidal activities against Alternaria solani, Phytophthora capsica, and Sclerotinia sclerotiorum. Additionally, some of these compounds exhibited larvicidal activities against Tetranychus cinnabarinus, Plutella xylostella, Culex pipiens pallens, Mythimna separata, Helicoverpa armigera, and Pyrausta nubilalis. Introduction Plant viruses, which are composed of nucleic acids and proteins [1], cause global economic losses as high as USD 60 billion every year [2][3][4][5][6]. They can change the normal metabolic process of host plants, interfere with or destroy the activity of respiratory photosynthetic enzymes and the metabolism of auxin and other hormones, in addition to robbing some nutrients of infected plants. Thus far, about 1100 kinds of viruses have been found. TMV (tobacco mosaic virus) is one of the oldest known plant viruses and ranks first among the top 10 plant viruses, causing economic losses of more than USD 100 million per year. There is no antiviral agent that can completely inhibit plant viruses, and the development of novel and more practical antiviral reagents is sorely needed [7,8]. Natural products are secondary metabolites retained by natural selection after a long time of evolution. Natural products are often characterized by chemical structure and biological activity diversity, which makes them of great value in drug development and utilization [9,10]. By September 2019, among the 185 small molecule anticancer drugs approved for sale by the FDA, 120 are related to natural products, accounting for 64.9% [11]. Tryptophan is a biosynthetic precursor in notable bioactive compounds [12][13][14][15], it also has a central role in metabolism, protein structure, and signaling, and analogs are frequently used to probe enzyme function or alter enzyme properties. In our previous work, we found, for the first time, that tryptophan showed moderate anti-plant virus activity [16], which can be used as an antiviral lead for subsequent studies. The acyl hydrazone structure is a complex of hydrogen bond donors and receptors. In our previous work, it was found that the acyl hydrazone structure could enhance the anti-TMV activity of the compound, possibly because the hydrogen bond receptor or donor of the acyl hydrazone enhanced the interaction with the amino acid residues of TMV CP, thus preventing the assembly of the virus [21][22][23]. 
In this work, to improve the antiviral activity of tryptophan, we designed and synthesized a series of novel tryptophan derivatives containing diketopiperazine (DKP) and acyl hydrazone moieties and evaluated their biological activities for the first time (Figure 2). In addition, the fungicidal and larvicidal activities of the newly synthesized tryptophan derivatives were also studied to expand their potential agricultural applications. For the acylhydrazone derivatives 4-25, the type, position, and number of substituents on the benzene ring had an important influence on the anti-TMV activity. The introduction of strong electron-withdrawing groups on the benzene ring, such as nitro (5, 17) and trifluoromethyl (10), was detrimental to the activity. For the substituents at the para position of the benzene ring, electron-donating groups (6, 9) and a weak electron-withdrawing group (8) were favorable for maintaining the activity. The position of the substituents on the benzene ring had a significant effect on the activity and showed a pronounced ortho-position effect; that is, the activities of the ortho-substituted derivatives were significantly better than those of the derivatives substituted at other positions (14 versus 8 and 13, and 19 versus 9 and 18). For example, when a methoxy group was substituted on the benzene ring, the order of bioactivity was 19 (2-OMe) > 9 (4-OMe) > 18 (3-OMe); when the substituent was chlorine, the order changed to 14 (2-Cl) > 13 (3-Cl) > 8 (4-Cl). The anti-TMV activities of 14 (inhibition rates for inactivation, curative, and protection activities in vivo: 54 ± 3, 50 ± 3, 45 ± 2% at 500 mg/L) and 19 (53 ± 2, 48 ± 4, 45 ± 2% at 500 mg/L) were better than those of ribavirin (39 ± 1, 37 ± 1, 39 ± 1% at 500 mg/L) and comparable to those of ningnanmycin (58 ± 1, 55 ± 1, 57 ± 1% at 500 mg/L). (* When the inactivation effect of a compound was less than 40%, its protection and curative effects were not determined.) These two compounds could be further developed as antiviral drug candidates. To investigate the role of R in bioactivity, we designed and synthesized compound 25, which bears a methyl group at the imine moiety. It showed lower antiviral activities (43 ± 3, 38 ± 2, 40 ± 4% at 500 mg/L) than compound 4 (R = H; 51 ± 1, 46 ± 2, 48 ± 3% at 500 µg/mL), which supports the rationality of our choice of aldimines. When the benzene ring was changed to alkyl groups (31 and 32), the activity decreased obviously. Larvicidal Activities We then studied the larvicidal activities of the synthesized derivatives; pests from different orders were selected for the research, such as T. cinnabarinus, P. xylostella (Lepidoptera), and C. pipiens pallens (Diptera) (Table 4).
In general, some derivatives showed larvicidal activities against these pests, and at the same time, these derivatives showed obvious selectivity. The derivatives containing the structure of benzyl imines 18 (3-OMe) and 21 (1,3-dioxol) showed good larvicidal activity against T. cinnabarinus. Hydrazide derivative 3 showed no activity against T. cinnabarinus. For the lepidopteran pest P. xylostella, the overall activity was better than that against T. cinnabarinus, and most of the derivatives showed larvicidal activities. Likewise, hydrazide derivative 3 did not exhibit larvicidal activity against P. xylostella. Derivatives containing the structure of benzyl imines 4 (no substituent), 23 (4-bromo-2,6-difluoro), and heteroarylmethyl imines 29 (imidazolyl) showed >50% larvicidal activities against P. xylostella at 200 mg/L. In contrast to the activity patterns against the former two pests, hydrazide derivative 3 showed larvicidal activity against C. pipiens pallens, and its activity against C. pipiens pallens larvae was 50 ± 0% at the concentration of 2 mg/L. Derivatives containing the structure of benzyl imines 9 (4-OMe), 21 (1,3-dioxol), 23 (4-bromo-2,6-difluoro), and heteroarylmethyl imines 28 (furyl) showed >60% larvicidal activities at 5 mg/L. To further study the larvicidal activities of these derivatives against other lepidopteran pests, the larvicidal activities against M. separata, H. armigera, and P. nubilalis were also studied (Table 5). In general, most derivatives showed larvicidal activities against these three lepidopteran pests. The structure-activity relationship differed from that observed against P. xylostella: derivative 3, which showed no larvicidal activity against P. xylostella, exhibited larvicidal activity against these three lepidopteran pests. Derivatives containing the structure of benzyl imines 12 (4-Ph), heteroarylmethyl imines 28 (furyl), and alkyl imines 31 (t-Bu) and 32 (cyclohexyl) showed >60% larvicidal activities at 600 mg/L; the larvicidal activities of derivatives 31 and 32 against these three pests were 100% at 600 mg/L. This indicates that a fat-soluble (lipophilic) alkyl substituent was beneficial to larvicidal activity. Derivatives 31 and 32 can be used as insecticidal leads for further study. Materials The hydrazinolysis reaction was carried out in a microwave synthesis system (100 °C, 100 W, Discover S-Class, CEM). 1H and 13C nuclear magnetic resonance (NMR) spectra were obtained at 400 MHz using a Bruker AC-P 400. Chemical shift values (δ) were given in parts per million (ppm) and were reported downfield from internal tetramethylsilane. High-resolution mass spectra (HRMS) data were obtained on an FTICR-MS instrument (Ionspec 7.0 T). The melting points were determined on an X-4 binocular microscope melting point apparatus and were uncorrected. Reaction progress was monitored by thin-layer chromatography on silica gel GF-254 with detection by UV. General Synthesis The synthetic routes of target compounds 3-32 are depicted in Scheme 1. The spectra of target compounds 3-32 are depicted in the Supplementary Materials.
2022-09-09T16:21:40.980Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "313f7acd4eae563a8aa3eedf8d171f9bd39b9467", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/27/18/5758/pdf?version=1662474491", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "46ccb939d18ee1c2ce0f79523bb9f220a8acaa89", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [] }
219556117
pes2o/s2orc
v3-fos-license
Nonlinear Processing of Wrist Pulse Signals to Distinguish Diabetic and Non-Diabetic Subjects In pulse diagnosis, the pulse signals obtained at the wrist have been used for analysis of certain diseases in ancient systems of medicine, in which the practitioner feels the pulse of the subject by placing his three fingers on the subject's wrist at three distinct radial pulse point locations. The preliminary studies show that there are many conventional linear techniques applied to analyze the wrist pulse signals and less focus on non-linear techniques. Hence, the main aim of this research is to apply Recurrence Plot and Recurrence Quantification Analysis (RQA), a nonlinear technique, to analyze the wrist pulse signals for distinguishing between diabetic and non-diabetic subjects. Wrist pulse signals from 32 subjects were recorded during the morning hours and were analyzed using RQA techniques. The results show significant differences in the RQA parameters of the wrist pulse signals, as they are obtained from the recurrences occurring in the phase space plots of the wrist pulse signals. It was found that parameters like entropy, divergence and average diagonal line length showed significant variations for diabetic and nondiabetic subjects. Therefore, it can be concluded that RQA parameters can be used effectively to identify diabetic and nondiabetic subjects and thus may be applied on the wrist pulse signals for early detection of various diseases. I. INTRODUCTION Pulse-based diagnosis has had an important role for thousands of years in ancient literatures, be it Ayurveda, Unani, Chinese or Greek. The pulses are felt by placing the practitioner's hand in a certain orientation at proper positions on the patient's wrist. The pressure applied on the wrist is varied to obtain the maximal pulse and also the different phases of the pulse [1]. Just like most other physiological signals, the wrist pulse also contains some singularities which vary rapidly for a very small change in time [2]. Usually the radial pulse is chosen as the site to read the pulse as it is the most convenient position to read and also more easily accessible than any other pulse sites. The blood flow through the artery and the change in vessel diameter, along with the cardiac activity, result in the pulse signals obtained on the radial artery at three distinct positions. This makes pulse diagnosis effective for analyzing both cardiac and non-cardiac diseases [3]. The heart generates a forward traveling wave-front in complex patterns of blood flow. There will be pressure changes at different points in the arterial circulation. Physiological signals such as heart sound, electrocardiogram, and wrist pulse waveform are recorded at these locations non-invasively for the assessment of the physiological condition of the human body [4]. Palpation is used as an important diagnostic method in traditional medicine to know the nature of illness and identify effective treatments. The connection of pulse signals with several diseases has been verified by preliminary studies [5], [3], [6]. Literature also shows that when certain external pressure is applied it is possible to analyze certain semi-solid and hollow organs by feeling the pulse at different layers and depths of the radial artery. Evidence has been found that the harmonic parameters of the pressure pulse waveform are affected by pathological changes in different organs, which has led to pulse wave analysis becoming an effective tool for diagnosis [7].
The radial artery can be compared to a cylinder having three dimensions. Three axes are required for the location of measuring points, and they are considered as shown in Fig. 1. The Y axis is taken to be parallel to the radial artery, the Z axis is perpendicular to the radial artery (i.e., vertical), and the X axis is considered to be perpendicular to the forearm (i.e., horizontal). The pressure exerted by the sensor on the artery acts at the location on the Z axis and can be viewed as the transmural pressure. It is reported that the radial pulse amplitude increases when greater pressure is exerted on the skin [8]-[10]. But when the exerted pressure exceeds a certain level, the wave velocity and the pulse amplitude decrease. The variation in transmural pressure of the blood vessel is acquired by the pressure sensor [3]. Based on the pressure sensors, the pulse signal is acquired by exerting pressure on the radial artery of the patient's wrist, and the pressure variation at that position is measured. Pulse waves are generated when the heart contracts, as there is an expulsion of blood into the aorta which causes dilation of that blood vessel. Any generated pressure will affect the complete physiological system, and hence any changes in the pressure of the radial artery can be measured on the wrist non-invasively. The physiological status of the entire human body is obtained from these pulsations as their changes are sensed by the three fingers (index, middle and ring) of a pulse examiner [11]-[12]. The change in frequency, size, shape, width, strength and power of the pulse can be determined by palpating the radial artery on the patient's wrist using the three fingers. This approach is subjective in nature and takes years of practice to master [13]. This is a tedious and inconvenient process. The pulse is classified based on its characteristics like rate, rhythm, volume, regularity, force and the contact pressure required to feel the pulse. Further, the pulse can be classified as tender or tense, sinking or floating, large or small, fast or slow, empty or full, etc., where each of these parameters reflects the personal condition of the body. To obtain the pulse conditions of patients, various pulse diagnosis instruments have been proposed [14]-[16]. Several pulse-taking platforms have been invented, but their complexity and insufficiency restrict their applications. Usually the pulse is transmitted from the heart by blood flow through the arteries, and this is affected by several conditions of organs, muscles, nerves and so on [17]. More body information is obtained from the wrist pulse based on this principle, giving it broader applications in health status analysis [18]. One of the major disorders is Diabetes Mellitus (DM), which is a group of metabolic diseases. The prevalence of diabetes is increasing globally. There has been a growing research interest in finding alternate indicators for early identification of diabetes [19]. In the recent past, wrist pulse signal analysis has gained much importance in research and is said to be a good predictor of diabetes. Various pulse acquisition systems have been reported with a single sensor, which are larger in size and not user friendly. By using a single sensor it is inconvenient to locate the position. Moreover, for pulse diagnosis, the signals from the three positions of the wrist are required. Hence, having a single sensor makes the sampling procedure very time consuming, as one needs to sample the three signals individually.
Hence a multichannel wrist pulse system with different sensor arrays can be used to reduce information loss. In multichannel wrist pulse acquisition, three independent channels are used in the pulse probe. The center of the radial artery and pulse width information is detected using a photoelectric sensor array. The fluctuations of the blood vessel are reflected in the signals collected using pressure sensors, and the changes in blood volume are reflected in the signals collected using photoelectric sensors. Both these sensors are combined to get detailed pulse information. Since the pressure applied on the wrist to acquire these signals differs at the three positions, prior knowledge is needed to ensure the correct position for acquiring the pulse signal. The pulse signals are processed using time domain and frequency domain analysis. The main pulse features in time domain analysis are the tidal wave, the prominent peak amplitude, etc. In frequency domain analysis, spectral features such as the energy ratio and the power spectrum are extracted from the pulse signals. Biomedical signals are generally analyzed as the output of a stochastic process. There are several approaches based on linear models. Their limitation is that these models do not consider the nonlinear process from which the biomedical signals are generated. Hence, linear models cannot give a complete representation of the biological system considered. Therefore, a different approach is required to analyse biomedical signals, and one such approach is to use a nonlinear technique for analysis rather than conventional linear techniques. The maximal Lyapunov exponent and the correlation dimension are among the few signal processing techniques which capture the properties of a nonlinear system and can be used for the dynamic analysis of biological signals [20]. Phase space is used by both these techniques to demonstrate the changes in the nonlinear system. The complexity of a time series is estimated using the correlation dimension, and the amount of chaos in the system is estimated using the maximal Lyapunov exponent. The correlation dimension also determines the embedding dimension while reconstructing the phase space. The method used in this work utilizes one of the fundamental properties of recurrences. It was Poincaré who first proved that in phase space the trajectory of a chaotic system will return arbitrarily close to any former point of its route, with probability one, after a sufficiently long time. Then, Lorenz discovered three ordinary differential equations that can exhibit chaotic behavior [21]. Later, the method of recurrence plots (RPs) was introduced to visualize the recurrences of a dynamical system [22]. This method gained importance for its applicability to short and non-stationary time series data. In addition, researchers have studied the relationship between RPs and the properties of dynamical systems [23]. In this work, our main concern is the application of a nonlinear technique for distinguishing between diabetic and nondiabetic subjects. II. MATERIAL AND METHODS The main objective of this study is to apply the nonlinear signal processing technique to infer the variations in the wrist pulse signals obtained from diabetic and non-diabetic subjects at different positions. A. Acquisition of Wrist Pulse Signals An experimental set up was designed to record the wrist pulse signals at two positions.
A differential pressure sensor is used to acquire the wrist pulse signal, which is then passed through an instrumentation amplifier. The amplified signal is filtered using a high pass filter to remove motion artifacts and other noise components. The signal is further passed through a low pass filter to remove ripples. This signal is given as input to a microcontroller for conversion from analog to digital form and then transmitted to PLX-DAQ software to store the data in Excel format. The data stored in Excel format are then read into MATLAB (Mathworks Inc.) for further processing and analysis. B. Subjects Thirty two subjects were recruited for the current study. All the subjects identified are above 35 years of age, including men and women. The wrist pulse signals were recorded in the morning on the left hand of the subjects. C. Signal Processing The use of nonlinear methods to analyze wrist pulse signals is becoming popular, as they are sensitive enough to detect the early phases of health damage. With the application of this method there can be an improvement in healthcare, as it helps in understanding the processes occurring in the human body. One such method that can describe the basic dynamics of a system with chaotic behavior, which is found in every biological system, is recurrence analysis. The signal obtained from the natural system is used to construct the phase space trajectory. The procedure followed to map a discrete signal x into phase space is as follows. Given a signal assumed to have N_s terms, x_1, x_2, x_3, ..., x_{N_s}, vectors y_i of dimension D with a lag (delay) d are defined as y_i = (x_i, x_{i+d}, x_{i+2d}, ..., x_{i+(D-1)d}). This is referred to as embedding the signal x into a phase space of dimension D with lag d, and the variable y gives the trajectory of x in the phase space of dimension D. For a signal of duration N_s there are N = N_s - (D-1)d such vectors. The original state space is topologically represented in the phase space, with all its properties preserved, as stated by Takens' theorem. The phase space preserves the recurrences, as they are a topological property of the state space [24]. The recurrence of states in phase space is visualized using a tool called the recurrence plot (RP). The D-dimensional phase space trajectory can be investigated through a two-dimensional representation of its recurrences by using an N×N matrix whose elements take values 1 or 0. The recurrence plot graphically shows a recurrence of a state at time i and at time j by a square of size N×N pixels. A unit black or white pixel is mapped to each element of the matrix at the corresponding location [22]. The higher dimensional trajectory is mathematically expressed in this two-dimensional representation as an N×N matrix calculated as R_{i,j} = Θ(ε − ||y_i − y_j||), i, j = 1, ..., N, where ||·|| represents a norm, ε is the recurrence threshold, and Θ(·) is the Heaviside step function forcing R_{i,j} to be either 0 or 1. D. Recurrence Quantification Analysis Even though graphical appeal is the main advantage of the RP, it often contains subtle patterns. A visual inspection or qualitative analysis is not sufficient to detect these patterns in practical applications. Hence, the concept of Recurrence Quantification Analysis (RQA) was introduced to solve this problem by quantifying RPs [25]. RQA measures are defined for signals of finite duration. The various RQA parameters considered in this paper are recurrence rate, determinism, diagonal line length, laminarity, divergence and entropy.
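As a minimal illustration of the embedding, the recurrence matrix and two of the RQA measures named above, a short Python sketch is given below. It is not the implementation used in this study; the embedding parameters, the threshold ε, and the surrogate test signal (standing in for a recorded wrist pulse segment) are arbitrary choices for illustration.

    import numpy as np

    def embed(x, D, d):
        """Time-delay embedding: y_i = (x_i, x_{i+d}, ..., x_{i+(D-1)d})."""
        n = len(x) - (D - 1) * d
        return np.column_stack([x[i * d:i * d + n] for i in range(D)])

    def recurrence_matrix(y, eps):
        """R[i, j] = 1 if ||y_i - y_j|| <= eps (Euclidean norm), else 0."""
        dist = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
        return (dist <= eps).astype(int)

    def recurrence_rate(R):
        """Density of recurrence points in the plot."""
        return R.mean()

    def determinism(R, lmin=2):
        """Fraction of recurrence points on diagonal lines of length >= lmin
        (the main diagonal, the line of identity, is excluded)."""
        n = R.shape[0]
        lengths = {}
        for k in range(-(n - 1), n):
            if k == 0:
                continue
            run = 0
            for v in np.append(np.diagonal(R, offset=k), 0):  # trailing 0 flushes the last run
                if v:
                    run += 1
                elif run:
                    lengths[run] = lengths.get(run, 0) + 1
                    run = 0
        total = sum(l * c for l, c in lengths.items())
        on_lines = sum(l * c for l, c in lengths.items() if l >= lmin)
        return on_lines / total if total else 0.0

    # Surrogate quasi-periodic signal in place of a recorded wrist pulse segment
    t = np.linspace(0, 10, 500)
    x = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
    y = embed(x, D=3, d=5)
    R = recurrence_matrix(y, eps=0.5)
    print(recurrence_rate(R), determinism(R))

In practice the embedding dimension and delay would be chosen from the data, for example via the correlation dimension mentioned above, and the recorded wrist pulse samples would replace the surrogate signal.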
III. RESULTS AND DISCUSSION The main focus of this study was to identify how the wrist pulse signals of the subjects vary between the diabetic and non-diabetic subjects. An experimental set up is used for acquisition of wrist pulse signals from different subjects. Recurrence Quantification Analysis (RQA), which is a nonlinear technique, was applied to the wrist pulse signals to determine the parameters that show the differences between the diabetic and non-diabetic subjects. Also, previous studies have shown that processing of wrist pulse signals will help in the identification and diagnosis of various diseases. For these reasons, wrist pulse signals acquired at different positions were considered for the identification of diabetic subjects. Recurrence plots were constructed and various RQA parameters such as laminarity, divergence, diagonal line length, entropy, recurrence rate, and determinism were extracted from the signals. The results obtained showed significant differences in the recurrence plots and RQA parameters extracted from the wrist pulse signal for diabetic and non-diabetic subjects. The wrist pulse signals are recorded simultaneously at two positions (position 1 is the index finger and position 2 is the middle finger) on the radial artery located at the wrist of the subject. Fig. 2 shows the signals obtained for non-diabetic subjects. Fig. 2: Wrist pulse signal of non-diabetic subject Similarly, Fig. 3 shows the wrist pulse signal obtained for two positions on the radial artery located at the wrist. The wrist pulse signal obtained from diabetic subjects shows drastic variations compared to non-diabetic subjects. We can see that several parameters show changes for diabetic subjects. We can conclude that the recurrence quantification analysis technique applied on wrist pulse signals is suitable for identifying the diabetic and nondiabetic subjects. IV. CONCLUSION This paper is concerned with distinguishing between diabetic and non-diabetic subjects by using the wrist pulse signals. Biomedical signals are inherently non-linear in nature. It is more suitable to use a non-linear approach to study wrist pulse signals and hence the systems from which they arise. The use of linear techniques leads to the loss of some information which could be of potential use. In general, non-linear techniques are computationally intensive and require a large number of data points for analysis. Some of the non-linear techniques in use are correlation dimensions and Lyapunov exponents. Due to the computational difficulties faced in using these techniques, we have applied here the concepts of recurrence plots and recurrence quantification analysis to wrist pulse signals. RQA is found to be computationally less intensive and requires only a small number of data points for analysis. The advantage of recurrence plots is that the pictorial representation of the signal gives us a better insight into the changes happening in its trend and thereby simplifies understanding complex non-linear systems. In this paper, we have used recurrence plots and RQA for identification of diabetic and non-diabetic subjects. The study has shown that RQA is effective in distinguishing diabetic from non-diabetic subjects. In addition to this, the RQA parameter values show effective variations in distinguishing diabetic and non-diabetic subjects. It is found that the wrist pulse signal obtained from position 1 shows variations for average entropy, average diagonal line length and average divergence.
The average entropy and average diagonal line length for diabetic subjects was less compared to non-diabetic subjects. Similarly, the average divergence for diabetic subjects was more compared to non-diabetic subjects. The results obtained for position 2 also shows the similar variations for diabetic and non-diabetic subjects. Hence, both the positions show variations for the same parameters. Therefore, nonlinear technique can be used effectively for analysis of wrist pulse signals. In future, it may be useful to extend the study of wrist pulse signals for the third position for better identification of the abnormalities at the early stages. Dr. D. Narayana Dutt has his Ph.D from Indian Institute of Science, Bangalore. He has been working in the E.C.E department, Indian Institute of Science for the past several decades. He has been working in the area of digital signal processing with particular reference to signals from the brain (mainly EEG) and the cardiovascular system (mainly ECG, HRV and wrist pulse signals).
2019-11-22T00:41:34.828Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "342c272fe38629e549b59c5b912c23f0e20ce85e", "oa_license": null, "oa_url": "https://doi.org/10.35940/ijeat.a1854.109119", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ef2fb92c41d2c24ba7740d4f8d4091c46104b855", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [] }
254926886
pes2o/s2orc
v3-fos-license
What sources are the dominant Galactic cosmic-ray accelerators? Supernova remnants (SNRs) have long been considered to be the dominant source of Galactic cosmic rays, which implied that they provided most of the energy to power cosmic rays as well as being PeVatrons. The lack of evidence for PeV cosmic rays in SNRs, as well as theoretical considerations, has made this scenario untenable. At the same time, the latest LHAASO and other gamma-ray results suggest that PeVatrons lurk inside starforming regions. Here I will discuss why SNRs should still be considered the main sources of Galactic cosmic rays at least up to 10 TeV, but that the cosmic-ray data allow for a second component of cosmic rays with energies up to several PeV. This second component could be a subset of supernovae/SNRs, reacceleration inside starforming regions, or pulsars. As a special case I show that the recent observations of Westerlund 1 by H.E.S.S. suggest a low value of the diffusion coefficient inside this region, which is, together with an Alfvén speed > 100 km/s, a prerequisite for making a starforming region collectively a PeVatron due to second order Fermi acceleration. Introduction The question about the origin of cosmic rays has been with us since their identification as an extraterrestrial source of ionization. 1) Cosmic rays have an energy distribution that is nearly a power law with index q ≈ −2.7 from ∼ 10^9 eV to ∼ 10^20 eV. The deviations in the spectrum from a pure, single power law, and compositional changes as a function of energy, provide information on the origin, transport and acceleration physics of cosmic rays. Important features are the steepening at ∼ 3 × 10^15 eV (the "knee") and the flattening at ∼ 3 × 10^18 eV (the "ankle"). The "knee" has long been thought to be the maximum energy that protons can be accelerated to by the dominant class of Galactic cosmic-ray sources, whereas the "ankle" is regarded to mark the transition from Galactic to extragalactic cosmic rays. Although the sources of both Galactic and extragalactic cosmic rays remain topics of debate, supernova remnants (SNRs) are commonly considered to be the dominant source of Galactic, and active galactic nuclei (AGN) of extragalactic cosmic rays. Here I will discuss that SNRs likely provide the bulk of Galactic cosmic rays, but that they do not accelerate up to the "knee". Starforming regions provide observationally and theoretically good conditions for acceleration of PeV cosmic rays, as illustrated using new gamma-ray results on Westerlund 1. Supernovae and the Galactic cosmic-ray energy budget SNRs have long been known to be radio synchrotron sources, and over the last 30 yr many have been identified as X-ray-synchrotron and very-high-energy (VHE) gamma-ray sources, indicating the presence of particles with energies of 10^8 eV to 10^14 eV. 2) The theory of diffusive shock acceleration 3) (DSA) provides the theoretical framework to interpret the acceleration properties of SNRs. For SNRs to be the dominant source class of Galactic cosmic rays, they must be able to transfer a substantial fraction of their energy (∼ 10^51 erg) to cosmic rays. The Galactic energy density of cosmic rays is estimated to be U_cr ≈ 1 eV cm^-3, mostly concentrated around energies of ∼ 1 GeV. 4) Composition measurements indicate that the typical escape time of cosmic rays with these energies is τ_esc ≈ 1.5 × 10^7 yr, whereas the Galactic diffusion coefficient is D ≈ 3 × 10^28 (R/4 GV)^δ cm^2 s^-1, with δ ≈ 0.3-0.7. 5)
The one-dimensional diffusion length scale of cosmic rays is associated with the scale height of the cosmic-ray populations above and below the Galactic plane: H_cr = √(2Dτ_esc). The total energy budget to maintain the cosmic-ray energy density in the Galaxy is then Ė_cr ≈ 2πR_disc^2 H_cr U_cr/τ_esc (1), with R_disc the typical radius of the Milky Way. The PeVatron problem SNRs as the Galactic cosmic-ray sources are problematic when it comes to explaining the cosmic-ray "knee", which is often taken as evidence that the dominant cosmic-ray sources should be able to accelerate protons beyond 10^15 eV, i.e., they should be PeVatrons. However, SNRs are unlikely to be PeVatrons, both from an observational as well as from a theoretical point of view. Gamma-ray spectra of SNRs are reasonably well described by power-law spectra with a break or a cutoff. The youngest known SNRs have gamma-ray spectra extending up to ∼10 TeV-100 TeV, but show a turnover in their spectra below 10 TeV. This is true both for gamma-ray sources that are best modeled as hadronic gamma-ray sources, i.e. caused by pion production, such as Cas A 6) and Tycho's SNR 7), and for leptonic gamma-ray sources. Since typically the gamma-ray photon energy is ∼ 10% of the energy of the primary particle, the gamma-ray spectra of young SNRs indicate that the cosmic-ray spectrum inside young SNRs cuts off below 100 TeV, well below the "knee". Mature SNRs (∼ 2000-20,000 yr), such as W44 and IC443, have breaks in their gamma-ray spectrum around 10-100 GeV, indicating the lack of cosmic-ray particles with energies in excess of 1 TeV. In these SNRs acceleration beyond 1 TeV has apparently stopped, and most previously accelerated particles have diffused out of the SNRs. Extended gamma-ray emission beyond the shock in the ∼ 2000 yr old SNR RX J1713.7-3946 may indeed reveal escape of cosmic rays caught in the act. 8) An interesting, but peculiar, counter example is the relatively mature and luminous SNR N132D (∼ 2500 yr) in the Large Magellanic Cloud, for which no gamma-ray break or cutoff is detected below 10 TeV. 9) This is in contrast to the much younger, but in many other ways comparable, SNR Cas A, which has a cutoff at ∼ 3 TeV. 6) The theoretical problems of acceleration of cosmic rays by SNRs beyond 1 PeV, using reasonable assumptions, are at least four decades old. 10,11) The acceleration timescale according to DSA corresponds to the timescale for a shock-crossing cycle around the maximum energy: 12) τ_acc ≈ 8D_1/V_s^2 (2), with V_s the shock velocity and D_1 the upstream diffusion coefficient. The diffusion coefficient for relativistic particles can be expressed as D = (1/3)λ_mfp c = (1/3)η r_g c (3), with c the particle speed/the speed of light, r_g the gyroradius, and λ_mfp = ηr_g a parametrisation of the mean-free path in terms of the gyroradius. The smallest realistic value for D is for η = 1, so-called Bohm diffusion, requiring a very turbulent magnetic field (δB/B ≈ 1). For SNRs we can approximate V_s = mR_s/t_snr, with R_s the shock radius and t_snr the SNR age. We can take Cas A as an example of a young cosmic-ray accelerator with V_s ≈ 5500 km/s 13), R_s ≈ 3 pc, m = 0.7 13), and B_1 ≈ 100 µG 14,15). For the acceleration timescale we write τ_acc = f t_snr, with f < 1 (typically f = 10%). Rewriting eq. (2) gives E_cr,max ≈ (3/8)(f/η) eB_1V_s^2 t_snr/c = (3/8)(f/η) eB_1 m V_s R_s/c (4). Optimistically taking η = 1, we see that Cas A cannot accelerate to PeV energies. The situation is better than theorized 30 yr ago 10), because X-ray synchrotron filamentary widths in Cas A, as measured by the Chandra X-ray Observatory, provide evidence for amplified magnetic fields. 14,16)
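To put rough numbers on the two estimates above, a short sketch follows; it evaluates the Galactic cosmic-ray power of eq. (1) and the Cas A maximum energy of eq. (4). It assumes the τ_acc ≈ 8D_1/V_s^2 form of eq. (2), Gaussian cgs constants, a Galactic disc radius of 15 kpc and an age of roughly 350 yr for Cas A; these specific values are assumptions of the sketch, not numbers quoted in the text above.

    import math

    yr, pc, eV = 3.156e7, 3.086e18, 1.602e-12     # seconds per year, cm per pc, erg per eV
    e, c = 4.803e-10, 3.0e10                      # elementary charge [esu], speed of light [cm/s]

    # Galactic cosmic-ray energy budget, eq. (1)
    D_gal   = 3e28            # diffusion coefficient at ~4 GV [cm^2/s]
    tau_esc = 1.5e7 * yr      # escape time [s]
    U_cr    = 1.0 * eV        # cosmic-ray energy density [erg/cm^3]
    R_disc  = 15e3 * pc       # assumed disc radius (~15 kpc)
    H_cr    = math.sqrt(2 * D_gal * tau_esc)
    E_dot   = 2 * H_cr * math.pi * R_disc**2 * U_cr / tau_esc
    print(f"H_cr ~ {H_cr / pc / 1e3:.1f} kpc, required CR power ~ {E_dot:.1e} erg/s")

    # Maximum proton energy for Cas A, eqs. (2)-(4), Bohm diffusion (eta = 1)
    B1, Vs  = 100e-6, 5500e5  # upstream field [G], shock speed [cm/s]
    t_snr   = 350 * yr        # assumed age of Cas A [s]
    f, eta  = 0.1, 1.0
    E_max   = 3 * e * B1 * Vs**2 * f * t_snr / (8 * eta * c)      # [erg]
    print(f"E_cr,max(Cas A) ~ {E_max / eV:.1e} eV")   # ~1e14 eV, i.e. ~100 TeV, well below 1 PeV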
Moreover, X-ray synchrotron radiation by itself requires η ≈ 1. 17) Although two ingredients for large E_cr,max, magnetic-field amplification and turbulence, appear to be present, they are not sufficient to make SNRs PeVatrons. In fact, the optimism regarding E_max in SNRs is tempered by the fact that the measured gamma-ray cutoff energy for Cas A is consistent with E_cr,max ≈ 10 TeV, 6) rather than the expected ∼100 TeV. This is peculiarly low for hadrons, as unlike electrons they do not suffer radiative energy losses, but their maximum energy appears to be similar to the inferred maximum electron energy. 14) 4 Alternative Galactic cosmic-ray source candidates Which energetic sources, other than SNRs, could be sources of Galactic cosmic rays? Clearly, Galactic PeVatrons do exist, as LHAASO recently has reported the detection of PeV photons from various regions along the Galactic plane, including from the Crab Nebula or its pulsar. 18) Alternative source classes often discussed are pulsars 19), microquasars 20), stellar winds 21), supernovae 22), superbubbles 23,24,25), and the supermassive black hole Sgr A* 26). Most of these source classes are advocated based on the idea that they can accelerate particles up to the "knee", but not all of them are capable of explaining the Galactic cosmic-ray energy density, except "supernovae" and "superbubbles", which are grosso modo powered by the same source of energy as SNRs. We ignore below Sgr A*, which may indeed be a PeVatron, and may have been more powerful in the past. However, the LHAASO results require the presence of PeVatrons throughout the Galactic plane, and not just in the Galactic center. Supernovae The supernova hypothesis usually assumes that a subset of supernovae, those exploding in a dense stellar wind, start accelerating almost immediately after the explosion. 22) Typically these are Type IIb and Type IIn supernovae, which comprise ∼ 10% of all supernovae. An important example of a potentially powerful accelerator was SN1993J, whose magnetic field at the shock was estimated to be ∼ 10 G 27) with an initial shock velocity of 20,000 km s^-1. Essentially the supernova hypothesis is a "very young SNR" hypothesis, as in supernovae such as SN1993J a bright SNR shell immediately develops in the dense wind of the progenitor. To get an idea of the maximum energy that can be reached under the right conditions, consider that the maximum distance traveled by a 15,000 km/s shock is R_s = 4.7 × 10^16 cm in one year, and if the wind velocity and densities are high, Bell's instability 16) could maintain a magnetic field of ∼ 1 G near the shock. Eq. (4) then gives E_cr,max ≈ 1.3 × 10^16 eV reached within one year. Observational proof for this hypothesis would be the detection of VHE gamma rays from a radio-emitting supernova, but so far only upper limits have been reported. 28) Superbubbles and starforming regions A substantial fraction of core-collapse supernovae probably explode inside of starforming regions. Collectively these regions have, therefore, somewhat less supernova energy available than SNRs, but this is offset by the energy input provided by stellar winds (sect. 4.3). The recent LHAASO detection of PeV photons associated with starforming regions 18) provides observational evidence for the hypothesis that starforming regions/superbubbles are PeVatrons. 25)
However, what needs to be proven is that the responsible multi-PeV cosmic rays are not originating from the sources contained in starforming regions (supernovae, SNRs and stellar winds), but that there are collective effects that keep on accelerating cosmic rays within the region as a whole. In other words, are starforming regions, from a cosmic-ray-acceleration point of view, more than the sum of their parts? These "collective" effects are in all likelihood due to second order Fermi acceleration 29), which states that collisions of charged, relativistic cosmic rays with moving magnetic-field disturbances lead to energy gains per collision of ⟨ΔE/E⟩ ≈ ξ(v/c)^2, with ξ ≈ 1 a parameter hiding the details of the interactions, v the velocity of the disturbances, and c the speed of the relativistic particles. Since the magnetic disturbances are moving with the Alfvén speed we can set v = V_A. Second order Fermi acceleration takes into account gains due to head-on collisions, as well as losses due to head-tail collisions. Although second order Fermi acceleration is slower than DSA, acceleration in starforming regions can take up to millions of years, rather than the few thousand year timescale of SNRs. There have been some calculations of the expected spectra of cosmic rays due to this mechanism, taking into account diffusion in phase-space. 30) Here I present a heuristic approach to obtain E_max,cr. First note that the average "collision time" is Δt = λ_mfp/c, with λ_mfp the mean free path. In reality, there are no discrete collisions, but we use the same approach as when we use a spatial diffusion coefficient, eq. 3. We can get rid of λ_mfp by stating Δt = 3D/c^2, so that, with the aforementioned energy scaling D(E) = D(E_0)(E/E_0)^δ, the gain rate becomes dE/dt ≈ ξ(V_A/c)^2 E/Δt = ξV_A^2 E/(3D(E)), which for 0 < δ < 1 has the solution E^δ(τ_acc) = E_1^δ + δξV_A^2 E_0^δ τ_acc/(3D(E_0)), with τ_acc the timescale available for acceleration, and E_1 the injection energy of the particle. We can parameterize this for δ = 1/2 and E_1 ≪ E_max,cr as E_max,cr ≈ E_0 [ξV_A^2 τ_acc/(6D(E_0))]^2. Note that we need a large Alfvén velocity (∼ 150 km s^-1) and a high level of magnetic-field turbulence, D(10 TeV) ≲ 10^26 cm^2 s^-1, to create a PeVatron. 1 Moreover, the particles need to be contained sufficiently long to reach PeV energies. I come back to this when discussing Westerlund 1 (sect. 6.1). Stellar winds It is sometimes said that stellar winds may provide as much kinetic energy as a supernova explosion. The reality is somewhat more complicated. Best understood are the wind properties of massive main-sequence stars. 31) Fig. 4.2 provides the time-integrated wind-energy of main-sequence stars. Only for stellar masses approaching 100 M_⊙ does the wind-energy approach the kinetic energy of supernova explosions. But given the steepness of the initial mass function, the fraction of these massive stars is very small. More common stars have M_ms ≲ 25 M_⊙, which provide ≲ 2 × 10^49 erg. Things are more confusing beyond the main sequence. Most massive stars will become a red supergiant, or sometimes a yellow supergiant. So how much kinetic energy is associated with Wolf-Rayet stars? The measured wind velocities are in the range v_WR ≈ 700-3000 km s^-1, with mass loss rates in the range Ṁ_w ≈ (0.5-6) × 10^-5 M_⊙ yr^-1. 32) Typically a Wolf-Rayet star phase lasts a few 100,000 yr, implying a total time-integrated energy of E_w = 2 × 10^50 (Ṁ_w/10^-5 M_⊙ yr^-1)(v_w/2000 km/s)^2 (τ_WR/500,000 yr) erg. This is about ∼ 20% of the canonical supernova explosion energy. If we take Type Ibc supernovae to be explosions of Wolf-Rayet stars, we can use the Type Ibc supernova rate of 19% of the overall rate 33) to suggest that ∼ 20% of all massive stars become Wolf-Rayet stars.
This implies that the Galactic power budget of Wolf-Rayet stars is ∼ 4% of the supernova power-small but not negligible. Moreover, in young (few Myr) stellar clusters like Westerlund 1 (see below), the power of Wolf-Rayet stars precedes the supernova power, as the most-massive stars are in the Wolf-Rayet star phase, and the many less massive stars still need 5-20 Myr to evolve to the point of core collapse. 4.4 Microquasars Microquasars are X-ray binaries containing an accreting neutron star or black hole that develop jet outflows during certain accretion phases. They are regarded as nearby analogues of radio galaxies and quasars (i.e. AGN). Given that AGN are the most likely sources for extragalactic cosmic rays, detected with energies up to ∼ 10 20 eV, it is not unreasonable to assume that microquasars, too, are good Galactic accelerators. Indeed, radio and gamma-ray emission shows that they do accelerate at least electrons. 34) The shell found around Cygnus X-1 has also been used as evidence for energetic jets from these systems. 20) However, microquasars are a much less abundant gamma-ray source class than SNRs and pulsar-wind nebulae. 35) Moreover, the number of systems available at any given moment seems to be 50-100. 36) Together with a typical jet power of ∼ 10 38 erg/s, this implies a typical Galactic kinetic power to be attributed to microquasars of Ė µq ≈ 10 40 erg/s. If 10% of that power is transferred to cosmic rays, microquasars fall a factor hundred short of maintaining the Galactic cosmic-ray energy density. 4.5 Pulsars Pulsar wind nebulae are among the most common Galactic gamma-ray sources. However, the canonical theory is that pulsar wind nebulae (PWNe) contain mostly electrons/positrons created by pair creation in the pulsar magnetospheres. Clearly they are efficiently accelerating electrons/positrons, and the archetypal Crab pulsar/PWN is even a confirmed PeVatron, given the detection of PeV photons from this source by LHAASO. 18) It is possible that the pulsar winds do not solely consist of electrons/positrons and Poynting flux, but may also contain hadrons 37) , potentially making pulsars hadronic PeVatrons. However, they are unlikely to be the dominant source of Galactic cosmic rays from an energy-budget point of view. I illustrate this by pointing out that the pulsar birth rate is similar to, but somewhat smaller than, the supernova rate, i.e. about 2 per century. For normal pulsars the energetic output comes at the expense of the rotational energy of the neutron star. The total initial rotational energy available is E rot = ½ I Ω 0 ² = 2π² I/P 0 ², with P 0 = 2π/Ω 0 the initial spin period and I the moment of inertia. For pulsars to compete energetically with supernovae, they need P 0 ≲ 5 ms. However, the initial spin periods inferred from population synthesis models indicate much longer values, 50-100 ms, or even 300 ms. 38) Taking P 0 ≳ 50 ms gives Ė psrs ≲ 5 × 10 39 erg/s, insufficient for powering Galactic cosmic rays. The above considerations suggest that pulsars are not prominent sources of Galactic cosmic rays, even if they accelerate hadrons. But in the latter case they could be a source of PeV protons. Pulsars are likely an important, perhaps even dominant, contributor to the electron/positron cosmic-ray population. Moreover, the pulsar wind nebulae (PWNe) and pulsar wind haloes 39) constitute an important class of VHE gamma-ray sources.
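The pulsar energy-budget argument can be checked with a few lines of arithmetic. The sketch below combines the rotational energy E rot = 2π² I/P 0 ² with a birth rate of ∼ 2 per century; the moment of inertia I = 10 45 g cm² is the standard fiducial neutron-star value and is an assumption, not a number stated explicitly in the text.

```python
# Back-of-the-envelope check of the pulsar energy budget quoted above:
# rotational energy E_rot = 2*pi^2*I/P0^2, multiplied by a birth rate of ~2 per century.
# I = 1e45 g cm^2 is the standard fiducial neutron-star moment of inertia (an assumption here).
import math

I_ns = 1e45                      # moment of inertia [g cm^2]
yr = 3.156e7                     # seconds per year
birth_rate = 2 / (100 * yr)      # ~2 pulsars per century [1/s]

def spin_power(P0_ms):
    """Galactic power [erg/s] injected by pulsars born with initial period P0 [ms]."""
    E_rot = 2 * math.pi**2 * I_ns / (P0_ms * 1e-3)**2   # erg per pulsar
    return E_rot * birth_rate

for P0 in (5, 50, 300):
    E_rot = 2 * math.pi**2 * I_ns / (P0 * 1e-3)**2
    print(f"P0 = {P0:3d} ms -> E_rot = {E_rot:.1e} erg, Edot = {spin_power(P0):.1e} erg/s")
# P0 = 5 ms gives ~8e50 erg per pulsar (supernova-like), whereas P0 = 50 ms gives ~5e39 erg/s,
# roughly two orders of magnitude below the ~1e41 erg/s implied above for Galactic cosmic rays.
```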
35) There is no contradiction between being prominent gamma-ray sources and not providing enough energy to power Galactic cosmic rays: energetic electrons/positrons are radiatively much more efficient than energetic protons. 5 Do the sources of Galactic cosmic rays need to be PeVatrons? For a long time it was considered that the dominant sources of Galactic cosmic rays must both fulfill the cosmic-ray energy budget and be PeVatrons. The reason was a lack of cosmic-ray spectral features between ∼ 1 GeV and 3 × 10 15 eV, whereas if there were two or more source classes fulfilling these criteria together, we would expect some breaks in the cosmic-ray spectrum. It is now time to reconsider that idea, because of evidence that the cosmic-ray spectrum below the "knee" is not so featureless. First of all, the proton cosmic-ray spectrum has a different slope than the helium cosmic-ray spectrum. 40) This suggests two different origins, although both could originate from SNRs in different environments, or from forward-shock versus reverse-shock acceleration. 41) Secondly, the latest cosmic-ray measurements 42,43,44) indicate that the proton cosmic-ray spectrum hardens around ∼ 0.7 TeV and softens again around 15 TeV. 45) The situation regarding the proton spectrum around the "knee" is not clear. Certainly the break around 15 TeV is consistent with the maximum energy young SNRs can accelerate protons to. These results open up the possibility that SNRs indeed provide the bulk of Galactic cosmic rays, a scenario that agrees with the cosmic-ray energy budget, and that other source classes-or subclasses of SNRs, including supernovae-are responsible for cosmic rays from 15 TeV up to the "knee". Note that this requires PeVatrons to have harder (flatter slope) spectra than SNRs, in order for these PeVatron sources to be subdominant around 1 GeV. 6 Observational candidate PeVatrons: starforming regions Since the detection of PeV photons from Galactic plane sources by LHAASO 18) and some associations with starforming regions, starforming regions demand more attention as sources of cosmic rays and as potentially being PeVatrons themselves (sect. 4.2)-as opposed to merely containing PeVatrons. A notable source of PeV photons is LHAASO J2032+4102, which is positionally associated with the Cygnus Cocoon surrounding the Cyg OB2 star cluster. Imaging atmospheric Cherenkov telescope arrays also provide evidence for the existence of PeVatrons associated with starforming regions, and provide a better angular resolution. For example, recently H.E.S.S. detected VHE gamma-ray emission from HESS J1702-420(A) up to energies of 100 TeV, implying primary particles of up to or beyond 1 PeV. 46) But the nature of this source, which has a complex gamma-ray morphology, is not entirely clear. Although not a confirmed PeVatron, another interesting starforming region is Westerlund 1/HESS J1646-458, for which the gamma-ray spectrum extends at least up to 50 TeV. 47) 6.1 Very-high energy gamma-ray emission from Westerlund 1 Westerlund 1 is the most massive of the known young massive star clusters in the Milky Way, ∼ 10 5 M ⊙ , and hosts many Wolf-Rayet stars and other evolved massive stars. Its age has been estimated to be ∼ 4 Myr, but recent work suggests an age spread up to 10 Myr. 2 The VHE gamma-ray emission does not originate from the stellar cluster itself, but from a large shell-like region surrounding it, with a peak radius of ∼ 0.5° (fig. 2).
At a distance of 4 kpc the shell has a physical radius of ∼ 35 pc, whilst Westerlund 1 itself is much more compact, ∼ 1 pc. In the H.E.S.S. publication an interpretation is favored in which the shell-like emission is associated with a collective-wind termination shock, 47,48) as a physical shell, consisting of swept-up gas, was considered unlikely: the gamma-ray structure is too small given the energy input from Wolf-Rayet-star winds (Ė WR ≈ 10 39 erg s −1 49) ) and the age of the cluster, based on the stellar-bubble expansion model by Weaver et al. 50) However, some of the assumptions used to calculate the putative shell size may not be correct. The cluster age of ∼ 4 Myr is a poor indicator of the total energy and of the timescale available for creating a shell. As noted in sect. 4.3, the Wolf-Rayet star phase typically lasts for a few 100,000 yr, consistent with being about 2-10% of the stellar lifetime. Taking τ SB ≈ 200,000 yr reduces the radius of the collective wind bubble from R SB = 185(n H /5 cm −3 ) −1/5 pc to R SB = 31(n H /5 cm −3 ) −1/5 pc. There is another reason to be suspicious about the predicted size of the bubble: observationally, superbubbles appear to be smaller than predicted by wind-bubble expansion theories. This may be due to internal dissipation and radiative losses, as well as back pressure from the ambient medium. 51,52) The lack of an association of the VHE gamma-ray shell-like structure with an HI or CO structure is intriguing, but should also not be taken as evidence against the presence of a physical shell, as the intense UV light from the OB and Wolf-Rayet stars in Westerlund 1 has likely photo-dissociated and ionized a large part of the surrounding CO and neutral hydrogen. Assuming that Westerlund 1 itself is indeed accelerating cosmic rays, or further accelerating particles pre-accelerated by the colliding winds and past SNRs, what can we learn from the VHE gamma-ray spectrum and morphology measured by H.E.S.S.? The fact that the VHE gamma-ray morphology seems to have little or no dependence on gamma-ray energy suggests that the cosmic-ray particles have been well mixed within the emitting region and may have been accelerated throughout the bubble. The best-fit cutoff energy of ≈ 40 TeV-corresponding to proton energies above 100 TeV-indicates that escape is only important for particles above ∼ 100 TeV. Equating the radius of the shell to the three-dimensional diffusion length scale, R shell = √(6Dτ), and using τ ≈ 200,000 yr, we can estimate the diffusion coefficient at ∼ 100 TeV to be D ≈ R shell ²/(6τ) ≈ 3 × 10 26 (τ/200,000 yr) −1 cm 2 s −1 , corresponding to D ≈ 10 26 cm 2 s −1 at 10 TeV. This is close to the value for which considerable energy gain due to second order Fermi acceleration is expected (eq. 8)! The maximum energy as a function of the energy dependence of the diffusion coefficient, i.e. δ, is shown in fig. 2 (right) for different values of D and V A . It is clear that V A ≳ 100 km s −1 . From V A = B/√(4π · 1.4 n H m p ) we find that we need a low internal density: n H ≲ 0.2(B/10 µG)(V A /100 km s −1 ) −1 cm −3 . However, even densities of 10 −3 cm −3 are possible in superbubbles a few Myr old. 53) Moreover, 10 µG is rather modest and corresponds to an internal magnetic-field energy within the shell of 2 × 10 49 erg. Interestingly, using eq. 3 with B ≈ 10 µG and D(10 TeV) = 10 26 cm 2 s −1 provides a value of η ≈ 3, which is very close to Bohm diffusion.
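The diffusion-coefficient and η estimates for the Westerlund 1 shell can be reproduced numerically as follows. The sketch assumes the conventional Bohm form D Bohm = c r L /3 and a δ = 1/2 energy scaling of D; both are consistent with, though not spelled out in, the text.

```python
# Numerical check of the diffusion-coefficient estimate for the Westerlund 1 shell:
# D ~ R_shell^2/(6 tau), scaled from 100 TeV to 10 TeV assuming D ~ E^(1/2),
# and compared to Bohm diffusion D_Bohm = c*r_L/3 (the standard form assumed here).
pc = 3.086e18               # cm
yr = 3.156e7                # s
c  = 3.0e10                 # cm/s

R_shell = 35 * pc           # shell radius at 4 kpc
tau     = 2e5 * yr          # ~200,000 yr of Wolf-Rayet activity

D_100TeV = R_shell**2 / (6 * tau)          # diffusion coefficient at ~100 TeV
D_10TeV  = D_100TeV * (10 / 100)**0.5      # scale down assuming delta = 1/2

B = 10e-6                                  # G
r_L = (10e12 / 300) / B                    # Larmor radius of a 10 TeV proton [cm], r_L = E[eV]/(300*B[G])
D_Bohm = c * r_L / 3
eta = D_10TeV / D_Bohm

print(f"D(100 TeV) ~ {D_100TeV:.1e} cm^2/s")   # ~3e26, as in the text
print(f"D(10 TeV)  ~ {D_10TeV:.1e} cm^2/s")    # ~1e26
print(f"eta = D/D_Bohm ~ {eta:.1f}")           # ~3, i.e. close to Bohm diffusion
```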
Stronger magnetic fields are still consistent with the energy budget, the required Alfvén speeds and a PeVatron hypothesis. Much lower strengths lead to inconsistencies, such as η < 1, or D > 10 26 cm 2 s −1 . To summarize, the VHE gamma-ray spectra and morphology of Westerlund 1 imply a small diffusion coefficient, whereas a low density inside the shell and an amplified, turbulent magnetic field likely result in a high Alfvén velocity, setting the right conditions to (re)accelerate cosmic rays injected by primary sources to well beyond 100 TeV. As such, Westerlund 1 may provide a model for other starforming regions as PeVatrons. Conclusion I have argued here that SNRs likely are the dominant sources of Galactic cosmic rays below ∼ 10 TeV, as both observational and theoretical results are consistent with young SNRs being able to accelerate to at least this energy. Moreover, the latest cosmic-ray measurements indicate that there is a break in the proton cosmic-ray spectrum around ∼ 10-15 TeV, indicating the presence of additional sources of Galactic cosmic rays, which may be responsible for cosmic rays up to the "knee". If these additional sources accelerate cosmic rays with a relatively hard spectrum, this source class does not violate the energetic constraints. Like SNRs, this PeVatron source class may rely on the power input of supernovae, be it a subclass of SNRs, core-collapse supernovae exploding inside the dense stellar wind of the progenitor star, or the collective power of supernovae and stellar winds in starforming regions/superbubbles. Energetic pulsars should not be discarded as hadronic PeVatrons, but it first remains to be proven that pulsars are hadronic-and not just leptonic-accelerators. Both pulsars and starforming regions/superbubbles as PeVatron sources are consistent with the latest detections of Galactic PeV gamma-ray sources by LHAASO. Several starforming regions/superbubbles have been associated with PeVatron candidate sources. It is not yet clear whether these are PeVatrons by themselves, or whether they merely contain(ed) PeVatron sources (in the past), such as the aforementioned subclass of SNRs/supernovae and hadronic pulsar accelerators. I argue that the recently reported VHE gamma-ray properties of HESS J1646-458-associated with Westerlund 1-indicate a small internal diffusion coefficient; small enough to accelerate protons up to the "knee" in a few 100,000 yr, provided that the Alfvén speed is ≳ 100 km s −1 . This suggests that starforming regions/superbubbles could themselves be PeVatrons. As cosmic-ray sources, starforming regions may be more than the sum of their parts.
2022-12-22T06:42:30.780Z
2022-12-20T00:00:00.000
{ "year": 2022, "sha1": "7e030ead1d2efa3ab49858dc23a5d919ff5a4a04", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9b72f9582105559c2f7f2564ad63e2680064b50f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
155614774
pes2o/s2orc
v3-fos-license
Sustainable Development Strategy For Ecotourism at Tangkahan, North Sumatera Ecotourism Destination of Tangkahan is located at the edge of Gunung Leuser National Park, within the Sub-regency of Batang Serangan, Regency of Langkat, Province of North Sumatera, Indonesia. The Ecotourism Destination of Tangkahan relies upon a distinctive tourist attraction, namely elephant trekking that is undertaken along the edge of the river and in the Gunung Leuser National Park (GLNP), as well as the diversity of flora and fauna available at the GLNP. There are many activities can be undertaken by visitors at this destination, such as: elephant trekking, wildlife watching at the GLNP, trekking at the edge of Buluh river and come back by swimming wearing life jacket, tubing (traditional rafting) and canoeing at Batang Serangan river, swimming at Buluh river, camping and outbound activities at the camping ground, village tour at sub-village of Kuala Buluh, and traditional massage ( pijat / kusuk ) by local therapist. The research was undertaken to develop strategies which could be used as guidance in managing and developing this ecotourism destination. The proposed strategies were based upon the results of SWOT analysis. Data were assembled from the visitors’ survey, focus group discussions and workshop involving tourism stakeholders and several interested groups. Based upon the analysis of existing tourist attractions offered at the Ecotourism Destination of Tangkahan, it could be said that the nature based tourist attractions were considered to be interesting up to very interesting. The uniqueness of elephant jungle trekking in the GLNP was the tourism icon of the Ecotourism Destination of Tangkahan. Camping ground, plant nursery Introduction Eco-tourism is getting popular as an alternative form of tourism activity which is expected to bring benefits for the regional economy and to contribute to the nature conservation. Some countries have implemented eco-tourism in managing national park. Eco-tourism has been recognized to contribute to nature conservation, and to bring sustainable benefits to the park management, local community, and government (Sudarto, 1999). The eco-tourism type of national park management is also possible to be implemented in the area of Gunung Leuser National Park (GLNP) that covers an area of 1,094,692 hectares within the region of two provinces, namely North Sumatera and Nanggroe Aceh Darusalam (NAD). The GLNP has various tourist attractions which also consider being exotic, such as orangutan at Bukit Lawang (North Sumatera), and elephants at Conservation Response Unit (CRU) Tangkahan (North Sumatera). This paper focuses on the eco-tourism at Tangkahan in which the elephants are the main attraction. Some elephants have been taken care by the CRU Tangkahan for jungle patrol and to protect the agricultural area near the forest from the wild elephants. However, these elephants have also become attractions for tourists, such as elephant trekking (Gunawan, 2008). The tourist areas of Tangkahan has been known as one of the popular eco-tourism destinations at North Sumatera Province. However, eco-tourism development at Tangkahan was cosidered to face some challenges, such as: lack of community awareness on nature conservation and environmental sanitation, lack of accessibility and tourism facilities, and lack of supports from local government (Langkat Regency) and provincial government (North Sumatera Province). 
Regarding the above challenges, this paper aims to provide some directions which were obtained from an academic study and analysis in order to develop sustainable ecotourism development at Tangkahan. The objectives of the study were: (i) to analysis of existing tourist attractions and facilities available at Tangkahan; (ii) to undertake an analysis of internal and external factors related to eco-tourism development at Tangkahan; and (iii) to establish the strategies and programs that could be used as guidance in managing and developing ecotourism destination at Tangkahan by the tourism stakeholders. The study was undertaken in several steps, namely: (Rangkuti, 1998). (iv) workshop: undertaken in order to develop a recommendation of eco-tourism development strategies and programs, involving tourism industry, local government tourism authorities, local community and NGOs at the study area. Destcription of the study areas Eco-tourism Destination of Tangkahan has been relied upon a distinctive tourist attraction, namely elephant trekking that is undertaken along the river edge and in the GLNP, as well as the diversity of flora and fauna available at the GLNP. Other tourist attractions at Tangkahan, such as:  Nature based attractions, such as: Batang Serangan river and Buluh river, Alur Garut and Gelugur water fall, Buluh river and Gelugur hot springs, caves (Gua Kalong, Gua Kambing Hutan, and Gua Langkup Gendek), and palm oil plantation. Tourism facilities and services were also already available at Tangkahan, but they were considered to be insufficient yet. The facilities and services were including: 6 units accommodation (bungalow and homestay) about 42 rooms with simple facilities, 5 units restaurant / rumah makan with 25 tables, one travel agent namely Community Tour Operator (CTO) owned by Lembaga Pariwisata Tangkahan (LPT), and 11 tourist guides who can speak English. Eco-tourism Tangkahan was supported by several supporting facilities, such as : a visitor centre, a traditional water-based transport made from wood and bamboo (rakit), elephants for elephant trekking activities, foot path for trekking in the GLNP, elephant trail, rubber tube that can be rent by tourist for tubing in the river, elephant garden which has been planted for sugar cane and grass (elephant feeds) with organic fertilizer from elephant's manure, camping ground, and 2 units public toilets located at the back of LPT's Office. However, there was no currency exchanger, and no electricity network by PLN and no telephone line by Telkom to the tourism area of Tangkahan. Communication can be made through seluler phone only. Parking area was also considered to be limited. Anaysis of tourist attractions and facilities Based upon the analysis of existing tourist attractions and facilities at Tangkahan, it could be said that:  Nature based tourist attractions at Tangkahan were considered to be interesting up to very interesting. The uniqueness of elephant jungle trekking in the GLNP is the tourism icon of Tangkahan.  Culture based tourist attractions at Tangkahan were considered to be interesting. Traditional foods, traditional medicines and local tradition were considered to be potential to be promoted as tourist attractions in order to support eco-tourism at Tangkahan.  Man-made tourist attractions at Tangkahan were considered to be interesting up to very interesting. Camping ground, plant nursery, and agriculture plantation were potential to be promoted as tourist attractions at Tangkahan. 
• Accessibility to Tangkahan was considered to be not good, particularly in terms of transportation and communication facilities. • Tourism facilities and services at Tangkahan were considered to be good and sufficient, especially restaurant/rumah makan. However, accommodation and travel agent, as well as banking facility, public toilet and parking area, were considered not yet sufficient. Based upon the results of the analysis of the level of importance of tourist attractions and facilities, it could be said that: • There were some tourist attractions considered to be very important, namely: diversity of fauna, uniqueness of fauna, view of nature, and tourist activities available at the destination. • There were several tourism facilities and services considered to be very important, namely: restaurant/rumah makan and public toilet. • Some tourist attractions were considered to be important, namely: diversity and uniqueness of flora, handicrafts, traditional arts, traditional foods, traditional architecture, agriculture plantation, and camping ground. • Several tourism facilities and services were considered to be important, namely: accommodation, travel agent, and banking facilities (including currency exchanger). • Several accessibility indicators were considered to be important, namely: transportation and communication facilities. Analysis of Internal and External Factors Some internal factors were considered to be the strengths of Tangkahan as an eco-tourism destination, namely: 1) Tourist attractions (nature, culture, and man-made) 2) Tourist activities available at the tourist destination 3) Tourism facilities (accommodation and restaurant) 4) Land zoning of the tourism area 5) Institutions related to tourism management (community or private) 6) Community awareness on tourism (sadar wisata) 7) Community and tourism industry awareness on nature conservation. On the other hand, some internal factors were also considered to be the weaknesses of Tangkahan as an eco-tourism destination, namely: 1) Accessibility (land transportation) 2) Tourism supporting facilities (public toilet, parking area, health-care facilities) 3) Human resources for tourism 4) Tourism marketing 5) Capability of the local community to invest in tourism 6) Cleanliness and environmental sanitation 7) Risk of accident during tourist activities. Based upon the analysis of internal factors (IFAS), the overall internal factors at Tangkahan got a score of 2.66. This means that these internal factors were considered as strengths of Tangkahan as an eco-tourism destination. Several external factors were considered to be the opportunities for Tangkahan as an eco-tourism destination, namely: 1) Interest of tourists to visit a tourism destination 2) World trend of 'back to nature' 3) Situation of politics and security (global and national) 4) Government role in tourism development (regency, province and central government) 5) Collaboration with other institutions. However, some external factors were considered to be the threats for Tangkahan as an eco-tourism destination, namely: 1) Economic situation (global and national) 2) Competitors from similar types of destinations 3) Illegal logging 4) Sustainability of the tourist attraction (elephant trekking). Based upon the analysis of external factors (EFAS), the overall external factors at Tangkahan got a score of 2.60. This means that these external factors were considered as opportunities for Tangkahan as an eco-tourism destination.
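For readers unfamiliar with the IFAS/EFAS scores quoted above, the sketch below shows the standard weighted-rating computation behind such scores (each factor receives a weight, with weights summing to 1, and a rating on a 1-4 scale). The factor names echo the lists above, but the weights and ratings are placeholders, since the paper does not publish its scoring table.

```python
# Illustrative IFAS/EFAS-style computation (weights sum to 1, ratings on a 1-4 scale;
# total score = sum of weight*rating). Weights and ratings below are placeholders only.
def factor_score(factors):
    assert abs(sum(w for w, _ in factors.values()) - 1.0) < 1e-6, "weights must sum to 1"
    return sum(w * r for w, r in factors.values())

ifas_example = {
    "Tourist attractions":   (0.20, 4),   # strength
    "Tourist activities":    (0.15, 3),   # strength
    "Community awareness":   (0.15, 3),   # strength
    "Accessibility":         (0.20, 2),   # weakness
    "Supporting facilities": (0.15, 2),   # weakness
    "Tourism marketing":     (0.15, 2),   # weakness
}
print(f"IFAS score: {factor_score(ifas_example):.2f}")
# A score above 2.5 is read as "strengths outweigh weaknesses", which is how the
# reported IFAS value of 2.66 is interpreted in the text.
```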
Recommendation for Ecotourism Development The strategies and programs that are recommended for tourism development at Tangkahan as an eco-tourism destination include, among others: to build more public toilets as well as to establish a management scheme for the public toilets at Tangkahan, and (k) to establish an electricity network into the Tangkahan tourist area.
2019-05-17T14:40:43.261Z
1970-01-01T00:00:00.000
{ "year": 1970, "sha1": "d38cbf3d037182c39da6f5b9a7a016211c0ef478", "oa_license": "CCBY", "oa_url": "https://ojs.unud.ac.id/index.php/eot/article/download/19437/12877", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "3f040ad24b3726cefd914a35a02bcd22ac875045", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Geography" ] }
136307790
pes2o/s2orc
v3-fos-license
Electroless Deposition and Ignition Properties of Si/Fe2O3 Core/Shell Nanothermites Thermite, a composite of metal and metal oxide, finds wide applications in power and thermal generation systems that require high-energy density. Most of the researches on thermites have focused on using aluminum (Al) particles as the fuel. However, Al particles are sensitive to electrostatic discharge, friction, and mechanical impact, imposing a challenge for the safe handling and storage of Al-based thermites. Silicon (Si) is another attractive fuel for thermites because of its high-energy content, thin native oxide layer, and facile surface functionality. Several studies showed that the combustion properties of Si-based thermites are comparable to those of Al-based thermites. However, little is known about the ignition properties of Si-based thermites. In this work, we determined the reaction onset temperatures of mechanically mixed (MM) Si/Fe2O3 nanothermites and Si/Fe2O3 core/shell (CS) nanothermites using differential scanning calorimetry. The Si/Fe2O3 CS nanothermites were prepared by an electroless deposition method. We found that the Si/Fe2O3 CS nanoparticles (NPs) had a lower reaction onset temperature (∼550 °C) than the MM Si/Fe2O3 nanothermites (>650 °C). The onset temperature of the Si/Fe2O3 CS nanothermites is also insensitive to the size of the Si core NP. These results indicate that the interfacial contact quality between Si and Fe2O3 is the dominant factor for determining the ignition properties of thermites. Finally, the reaction onset temperature of the Si/Fe2O3 CS NPs is comparable to that of the commonly used Al-based nanothermites, suggesting that Si is an attractive fuel for thermites. ■ INTRODUCTION Thermite is an energetic composite of metals as fuels (e.g., Al, B, and Si) and metal oxides as oxidizers (e.g., CuO, Fe 2 O 3 , and WO 3 ). Once ignited, thermite releases a large amount of heat and thus finds applications, ranging from aerospace propulsion, thermal batteries, waste disposals, and power generation for microsystems to material synthesis. 1−4 Most of the researches on thermites have focused on using aluminum (Al) as the fuel. 5−11 However, Al, especially Al nanoparticles (Al NPs), is sensitive to electrostatic discharge, friction, and mechanical impact, 12 imposing a challenge for the safe handling and storage of Al-based thermites. In addition, Al particles have a native oxide layer of 2−6 nm, which acts as a dead mass. 13 Among other fuels such as red phosphorus 14 and boron, 15,16 silicon (Si) is another attractive fuel for thermites because it has high volumetric and specific energy densities (75.1 MJ/L and 32.2 MJ/kg, respectively) similar to those of Al (83.8 MJ/L and 31.0 MJ/kg, respectively). Si also has a thinner native oxide layer of 1−3 nm than Al, 17 and it is resistant to electrostatic discharge. Importantly, the surface of Si can be easily functionalized, 18,19 facilitating the use of coating and surface modification to tailor its ignition and burning properties. Several previous studies have shown that Si-based thermites have comparable combustion properties as Al-based thermites, 18,20,21 in terms of adiabatic combustion temperatures (∼3000 K) and burning rates (40−530 m/s). 22 However, little is known about the ignition properties of Si-based thermites and their dependence on the structure of thermites. Herein, we study the ignition properties, in terms of reaction onset temperatures, of Si and Fe 2 O 3 thermite mixtures. 
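The quoted specific and volumetric energy densities of the two fuels can be cross-checked against their bulk densities, as in the short sketch below; the densities used (2.33 g/cm³ for Si, 2.70 g/cm³ for Al) are standard handbook values and are assumed here rather than taken from the paper.

```python
# Quick consistency check of the fuel energy densities quoted above: a specific energy
# (MJ/kg) multiplied by the bulk density (g/cm^3, a standard handbook value assumed here)
# should reproduce the quoted volumetric energy density (MJ/L).
fuels = {
    #        specific [MJ/kg], density [g/cm^3]
    "Si": (32.2, 2.33),
    "Al": (31.0, 2.70),
}
for name, (e_mass, rho) in fuels.items():
    e_vol = e_mass * rho   # MJ/kg * kg/L = MJ/L  (1 g/cm^3 = 1 kg/L)
    print(f"{name}: {e_mass} MJ/kg x {rho} g/cm^3 = {e_vol:.1f} MJ/L")
# Si: ~75.0 MJ/L and Al: ~83.7 MJ/L, matching the 75.1 and 83.8 MJ/L quoted in the text.
```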
The reason for choosing Fe 2 O 3 as the oxidizer is that Al/Fe 2 O 3 is one of the most studied thermite systems. 23 Moreover, the thermite reaction between Si and Fe 2 O 3 (3Si + 2Fe 2 O 3 → 3SiO 2 + 4Fe, 12.9 MJ/kg of Si) has the least amount of gaseous byproducts among thermites, 20 allowing us to focus on the structural effect on the oxygen diffusion process that is critical to ignition. In particular, we compared the reaction onset temperatures of Si/Fe 2 O 3 core/shell (CS) particles with those of the mechanically mixed (MM) Si/Fe 2 O 3 powders ( Figure 1). We found that the Si/Fe 2 O 3 CS NPs had a lower reaction onset temperature (∼550°C) than the MM Si/Fe 2 O 3 nanothermites (>650°C). These results indicate that the interfacial contact quality between Si and Fe 2 O 3 is the dominant factor for determining the ignition properties of thermites, a phenomenon similar to that in Al-based thermites. 3 Finally, the reaction onset temperature of the Si/Fe 2 O 3 CS NPs is comparable to that of the commonly used Al-based nanothermites, suggesting that Si is an attractive fuel for thermites. We investigated the ignition properties of Si/Fe 2 O 3 nanothermites by using Si NPs of two different sizes (20−30 nm, US Research Nanomaterials, and 100 nm, Skyspring Nanomaterials Inc.). The oxygen content of these particles is summarized in Table 1 SEM inspection over a wide range of samples shows that the surface of the Si particle is coated with a shell. The coating of Fe 2 O 3 over the Si surface is also confirmed by transmission electron microscopy (TEM) and energy-dispersive X-ray spectroscopy (EDXS) mapping. It should be noted that the SEM images were taken for the 100 nm Si NPs because of the spatial resolution limit and that the TEM images were taken for 20−30 nm Si particles for better electron transmission. Both the TEM and standing TEM (STEM) images (Figure 3a,b) suggest that the final particle has a CS structure. The EDXS element mapping of the boxed regime in Figure 3a shows the coexistence of strong Fe element and O element signals and relatively weak Si signals, indicating that a layer of Fe 2 O 3 has been deposited on the surface of Si. However, because the initial sizes and shapes of Si particle have an inherent distribution, it is difficult to determine the exact thickness of iron oxide. Instead, we have estimated the Si/Fe 2 O 3 equivalence ratios using an inductively coupled plasma optical emission spectroscopy (Thermo Scientific ICAP 6300 Duo view spectrometer) system. The estimated equivalence ratios of 20−30 nm and 100 nm Si/Fe 2 O 3 CS particles are 24.1 ± 0.15 and 6.07 ± 0.02, respectively. In addition, the X-ray photoelectron spectroscopy (XPS) analysis of the Si/Fe 2 O 3 CS NP in Figure 3f shows the characteristic iron oxide peaks Next, we used differential scanning calorimetry (DSC) to determine the reaction onset temperatures of Si/Fe 2 O 3 CS NPs and compare with those of the MM NPs. Figure 4 shows the representative baseline-corrected DSC profiles of Si/Fe 2 O 3 CS and MM samples with two different sizes of the Si particle. First, the onset temperature of the Si/Fe 2 O 3 CS NPs is about 550°C, which is about 100−150°C lower than that of the MM samples. This confirms that the CS structure has more facile oxygen diffusion process because of its better interfacial contact between Si and Fe 2 O 3 . Second, the onset temperature of the 20 nm Si/Fe 2 O 3 (MM) is about 50°C lower than that of the 100 nm Si/Fe 2 O 3 (MM), confirming the importance of size effect. 
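The reaction energy quoted earlier for 3Si + 2Fe2O3 → 3SiO2 + 4Fe (12.9 MJ per kg of Si) can be verified from standard-state enthalpies of formation, as sketched below; the ΔHf values used are common handbook numbers and are assumptions, not data from this work.

```python
# Rough verification of the quoted reaction energy for 3Si + 2Fe2O3 -> 3SiO2 + 4Fe
# (12.9 MJ per kg of Si), using standard-state enthalpies of formation at 298 K.
dHf = {              # kJ/mol (handbook values, assumed)
    "Fe2O3": -824.2,
    "SiO2":  -910.7,  # alpha-quartz
    "Si":       0.0,
    "Fe":       0.0,
}
M_Si = 28.086        # g/mol

# Reaction enthalpy per 3 mol of Si consumed
dH_rxn = 3 * dHf["SiO2"] + 4 * dHf["Fe"] - (3 * dHf["Si"] + 2 * dHf["Fe2O3"])
energy_per_kg_Si = -dH_rxn / (3 * M_Si)   # kJ/g, numerically equal to MJ/kg

print(f"dH = {dH_rxn:.0f} kJ per 3 mol Si -> {energy_per_kg_Si:.1f} MJ/kg of Si")
# ~12.9 MJ/kg of Si, in line with the value quoted in the text.
```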
A smaller size NP has a larger specific surface area and a shorter diffusion distance for oxygen, and both factors contribute to the lower onset temperature. By contrast, the onset temperature of the 20 nm Si/Fe 2 O 3 (CS) is about the same as that of the 100 nm Si/Fe 2 O 3 (CS), indicating that intimate interfacial contact between Si and Fe 2 O 3 in the CS structure has already effectively minimized the oxygen diffusion length. Hence, the initial size of the Si NP plays a negligible role here. In other words, the reaction onset temperature of Si/Fe 2 O 3 nanothermites is mostly affected by their interfacial contact quality. Finally, we compared the reaction onset temperatures of our Si/Fe 2 O 3 CS NPs with those of Al/metal oxide nanothermites in the literature. We chose the Al/metal oxide nanothermites that have comparable Al size with our Si NPs for fair comparison, and the results are listed in Table 2. It is clear that Si/Fe 2 O 3 CS NPs have comparable reaction onset temperatures as other Al/metal oxide nanothermites (460− 580°C). These data suggest that replacing Al with Si in thermites will have little change in terms of reaction onset temperatures. ■ CONCLUSIONS We have shown that the easy surface functionality of Si allows a facile synthesis of Si/Fe 2 O 3 CS NPs using a simple electroless plating method. The formed Si/Fe 2 O 3 CS NPs have a reaction onset temperature of about 550°C, which is about 100−150°C lower than that of the MM samples. The lowering of the reaction onset temperature is caused by the enhanced intermolecular oxygen diffusion processes in the CS structure. In addition, the reaction onset temperature of the Si/Fe 2 O 3 CS NPs is comparable to that of the commonly used Al-based nanothermites. Considering the easy surface functionalization property and comparable ignition properties, our study suggests that Si is another attractive fuel for thermites. Our synthesis methodology can be applied to form a diverse range of Si/metal oxide thermites for tailored applications. Synthesis of Si/Fe 2 O 3 CS NPs. First, 100 mg of Si NPs was dispersed into 100 mL of isopropyl alcohol and sonicated for 15 min to break up the Si particle clusters. (3-Aminopropyl)triethoxysilane (APTES, 1 mL, 99%, Sigma-Aldrich) and 0.5 mL of Milli-Q water were sequentially added to the Si NP solution and stirred at 70°C for 1 h to functionalize the surface of Si with APTES. Second, the functionalized Si NPs were washed with deionized (DI) water and collected by centrifugation, and then, they were immersed in 40 mL of palladium chloride aqueous solution (20 mg of PdCl 2 , Sigma-Aldrich, and 0.1 mL of reagent grade 37% HCl, Fisher Scientific) and stirred for 20 min. This electrostatic plating step coats the Si NPs with a thin layer of Pd film. Again, the Pdcoated Si NPs were washed with DI water and collected by centrifugation. The third step is to replace Pd with Fe by electroless plating. Specifically, the Pd-coated Si NPs were stirred for 5 min in 100 mL of DI water solution that contained 11.8 g of sodium citrate, 3.1 g of ammonium iron sulfate, 0.37 g of boric acid, 0.4 g of saccharin, and 0.1 g of lysine (all from Sigma-Aldrich). Then, 0.3 g of sodium borohydride (98%, Sigma-Aldrich) was added to the solution to initiate the electroless plating reaction. The resulted Si/Fe NPs were magnetic, so they were collected with a magnet. After rinsing with isopropyl alcohol and fully drying in vacuum, the Si/Fe NPs were annealed in a furnace in air at 450°C for 5 h at a ramping time of 3 h. 
After annealing, the color of the NPs changed from black to red, indicating that the Fe shell was oxidized to Fe 2 O 3 . DSC Measurement of Si/Fe 2 O 3 CS and MM NPs. For a typical DSC measurement, a Si/Fe 2 O 3 sample of about 5 mg (either CS or MM NPs) was placed in a 100 μL alumina crucible. The sample was heated in an inert argon environment (40 sccm) from room temperature to 900°C at a heating rate of 5°C/min. After the samples were cooled down to room temperature, they were heated with the same process again. The second-round heat flow traces were used to correct the baseline of the first-round heat flow traces following the method described previously. 25
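The baseline-correction step described above can be summarized in a few lines of code. The sketch below interpolates the second-run heat flow onto the first-run temperature grid and subtracts it, and adds a crude threshold-based onset estimate; the array names are illustrative, and the simple threshold criterion is only a stand-in for the onset-determination method used in the cited procedure.

```python
# Minimal sketch of the DSC baseline correction described above: the heat flow of the
# second heating run (no remaining reaction) is interpolated onto the temperature grid
# of the first run and subtracted. Array names are assumptions for illustration.
import numpy as np

def baseline_correct(T1, q1, T2, q2):
    """Return baseline-corrected heat flow of run 1, using run 2 as the baseline.
    Assumes the temperature arrays are monotonically increasing."""
    q2_on_T1 = np.interp(T1, T2, q2)     # resample run 2 onto run 1's temperatures
    return q1 - q2_on_T1

def onset_temperature(T, q, threshold=0.05):
    """Crude exotherm-onset estimate: first temperature where the corrected signal
    exceeds a fixed fraction of the peak height (a stand-in for the tangent method)."""
    idx = np.argmax(q > threshold * q.max())
    return T[idx]

# usage with two measured traces (temperature in deg C, heat flow in mW):
# q_corr = baseline_correct(T_run1, q_run1, T_run2, q_run2)
# print(onset_temperature(T_run1, q_corr))
```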
2019-04-29T13:17:26.188Z
2017-07-13T00:00:00.000
{ "year": 2017, "sha1": "bbf3c173b4e1af1366919f6195ff7256741bd45a", "oa_license": "acs-specific: authorchoice/editors choice usage agreement", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.7b00652", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c4a90f4f0098522cb65a4b44c8efe6b18a72ad89", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
253211464
pes2o/s2orc
v3-fos-license
CFR-PEEK Pedicle Screw Instrumentation for Spinal Neoplasms: A Single Center Experience on Safety and Efficacy Simple Summary Advances in screening methods and new therapeutic strategies have lead to a continuous decline in cancer death rates, especially over the last ten years. As a consequence, the number of patients with spinal metastases is increasing. In modern oncological treatment surgery followed by postoperative radiotherapy for spinal metastases has gained a decisive role. For spinal stabilization, pedicle screws and rods are used. They used to be made of titanium or cobalt–chrome alloys. Recently, carbon-fiber-reinforced (CFR) polyethyl-ether-ether-ketone (PEEK) was introduced as a new material reducing artifacts on imaging and showing less perturbation effects on photon radiation. The aim of this study is to report on the safety and efficacy of CFR-PEEK pedicle screw systems for spinal neoplasms in a large cohort of consecutive patients. We could show that implant-related complications, such as intraoperative screw breakage and screw loosening, were rare. So, we conclude that CFR-PEEK is a safe and efficient alternative to titanium for oncological spinal instrumentation. Abstract (1) Background: Surgery for spinal metastases has gained a decisive role in modern oncological treatment. Recently, carbon-fiber-reinforced (CFR) polyethyl-ether-ether-ketone (PEEK) pedicle screw systems were introduced, reducing artifacts on imaging and showing less perturbation effects on photon radiation. Preliminary clinical experience with CFR-PEEK implants for spinal metastases exists. The aim of this monocentric study is to report on the safety and efficacy of CFR-PEEK pedicle screw systems for spinal neoplasms in a large cohort of consecutive patients. (2) Methods: We retrospectively analyzed prospectively the collected data of consecutive patients being operated on from 1 August 2015 to 31 October 2021 using a CFR-PEEK pedicle screw system for posterior stabilization because of spinal metastases or primary bone tumors of the spine. (3) Results: We included 321 patients of a mean age of 65 ± 13 years. On average, 5 ± 2 levels were instrumented. Anterior reconstruction was performed in 121 (37.7%) patients. Intraoperative complications were documented in 30 (9.3%) patients. Revision surgery for postoperative complications was necessary in 55 (17.1%) patients. Implant-related complications, such as intraoperative screw breakage (3.4%) and screw loosening (2.2%), were rare. (4) Conclusions: CFR-PEEK is a safe and efficient alternative to titanium for oncological spinal instrumentation, with low complication and revision rates in routine use and with the advantage of its radiolucency. Introduction In 2005, a milestone study demonstrated a significant advantage of patients with spinal metastases (SM) treated with surgery followed by radiotherapy over patients treated with radiotherapy alone regarding their functional status [1]. Moreover, it was shown that Study Design We retrospectively analyzed prospectively the collected data of consecutive patients being operated on using a CFR-PEEK pedicle screw system for posterior stabilization (Icotec, Altstätten, Switzerland) because of spinal metastases or primary bone tumors of the thoracic or lumbar spine. 
Pedicle screw placement was performed and navigated Study Design We retrospectively analyzed prospectively the collected data of consecutive patients being operated on using a CFR-PEEK pedicle screw system for posterior stabilization (Icotec, Altstätten, Switzerland) because of spinal metastases or primary bone tumors of the thoracic or lumbar spine. Pedicle screw placement was performed and navigated using an operating room-based sliding gantry CT (Brilliance CT Big Bore, Philipps, Amsterdam, The Netherlands), a mobile cone-beam CT (O-arm II, Medtronic, Minneapolis, MN, USA) [21] or a C-arm with 3-dimensional scanning (Arcadis Orbic, Siemens, München, Germany). Cement augmentation was used, depending on the quality of the cancellous bone. Indication of surgery was discussed in an interdisciplinary neurooncological board consisting of certified neurosurgeons, oncologists, radiooncologists and neuroradiologists. Aspects of spinal instability or deformity, epidural compression, the patient's functional status, comorbidities and the oncological burden of the disease were evaluated. Reconstruction of the anterior column was performed when needed, depending on preoperative imaging, the degree of instability and systemic tumor burden, either in the same surgery or as a staged second surgery. For vertebral body replacement, either an expandable PEEK cage (XRL, DePuySynthes, Solothurn, Switzerland), an expandable CFR-PEEK cage (Kong, Icotec, Altstätten, Switzerland) or an expandable titanium alloy cage (Obelisc, Ulrich Medical, Ulm, Germany) was used. All the surgeries were performed by six senior surgeons. Population The data comprise anonymized records of patients operated on in the period from 1 August 2015 until 31 October 2021 in the Department of Neurosurgery of a tertiary care hospital. Baseline demographic data, the Karnofsky performance status scale (KPS), surgical details, complications and the outcome of patients were analyzed. Ethical Agreement The study was approved by the ethical committee of our university (reference number 96/19 S) and conducted in accordance with the Declaration of Helsinki. Demographic Background We included 321 patients, 306 with SM and 15 with primary bone tumors of the spine, of a mean age of 65 ± 13 years (Table 1). Most patients were of a KPS of 80% or better ( Table 1). The most frequent primary tumor site for metastatic patients was the prostate followed by the breast and non-small-cell lung cancer (NSCLC) ( Table 1). Primary bone tumors were chordoma (five cases), aneurysmatic bone cyst (four cases), fibrous dysplasia (three cases), angiosarcoma (one case), cavernous haemangioma (one case) and osteosarcoma (one case). Symptoms of patients were, in most cases, pain without neurological impairment (Table 2). Surgical Details In the majority, posterior stabilization was performed in the thoracic spine, followed by the lumbar spine. On average, 5 ± 2 levels were instrumented. In 257 cases (80.1%), a standard open approach via midline skin incision was used, while in 64 cases (19.9%), pedicle screws were inserted minimally invasively; i.e., transmuscular (Table 3). Additional decompression was performed in 248 cases (77.3%). Cement augmentation of pedicle screws was used in 77 cases (24.0%). Anterior reconstruction was performed in 121 patients (37.7%) (Figures 3 and 4). The mean blood loss was 1104 mL (± 1146 mL). Intraoperative red blood cell transfusion was necessary in 133 (41.4%) patients. 
For patients with primary bone tumors, in almost all cases (except for one palliative case), an extensive tumor resection was performed. In eleven cases (73.3%), a total vertebrectomy with vertebral body replacement was performed. In five cases, the tumor was embolized preoperatively. Complications and Revision Surgery Intraoperative complications were documented in 30 out of 321 (9.3%) patients (Table 4). Direct implant-associated complications such as screw breakage were rare (eleven out of 321 cases, 3.4%). In six cases, pedicle screws broke during insertion; in two cases, during intraoperative revision; and in three cases, during implant removal. The tips of the broken screws were left in the vertebral body. During insertion, in five cases, another pedicle screw was inserted in the same level in a different trajectory, and in three cases, the level was skipped on the side of the broken screw without the need of extension of the construct, as all these screws broke in the middle part of the fusion. The revision rate because of pedicle screw loosening was low (seven out of 321 patients, 2.2%) (Table 5). The reasons for screw loosening were low-grade infection (three cases), acute putrid infection (two cases), mechanical screw pullout (one case) and tumor recurrence (one case). In one case, revision surgery was necessary because of rod breakage. However, this rod was made of titanium. In one case, which was revised multiple times because of an acute infection, a postoperative screw breakage was registered. Outcome In total, 258 (80.4%) patients were treated with radiotherapy postoperatively. Nine patients with spinal metastases needed reoperation due to local tumor recurrence. The median time to tumor recurrence was 417 days (range: 301-1261 days). For six patients with primary bone tumors, the first operation in our department was already a revision surgery because of a recurrent tumor, with a median time interval of 389.5 days (range: 28-1836 days). One of these patients had another revision surgery because of tumor recurrence after 141 days, and another patient was operated twice (after 266 days and after another year). One patient with a first-time diagnosis of a spinal primary bone tumor was operated after 182 days for tumor recurrence. The median follow-up for all the patients was 97 days (range: 7-1888 days). Seven patients died during the same hospital stay. The reasons were respiratory insufficiency (five), cardiopulmonary decompensation (one) and palliative situation (one).
The majority of patients preserved or even improved their neurological function postoperatively (Table 6). Analogously, for the majority, the postoperative KPS was equal or even better (Table 6). Discussion In this study, we report about the safety and efficacy of CFR-PEEK pedicle screw systems for patients with SM and primary bone tumors of the spine. The rate of intraoperative complications of our study was comparable with other series of spinal instrumentation for spinal neoplasms [22]. The rate of intraoperative implantassociated complications was low. In eleven cases, screw breakage was reported: six during insertion, two during intraoperative revision and three when removing the implants. In four cases of screw breakage during insertion, an osteoblastic bone was documented and in two cases, the reason remained unclear. Biomechanical studies have shown that CFR-PEEK stabilization constructs resist the same static and cyclic axial compression loading and pull-out forces as titanium does [16,23]. However, torsion forces during screw insertion have not been analyzed in these studies. The rate of CFR-PEEK screw breakage in our study was comparable to what has been reported before in a smaller cohort of patients [18]. The major reasons of postoperative complications requiring revision surgery were surgical site infections and wound healing disorders, which were regarded not to be attributed to the use of CFR-PEEK. Their number was comparable to the rates of this type of complication of patients treated for SM previously reported by other studies [24,25]. In seven cases (2.2%), revision surgery for screw loosening was performed. This rate is lower compared to what has been described for titanium alloy systems (16%) [26,27]. It is also significantly lower compared to the rate of pedicle screw loosening after CFR-PEEK instrumentation for spondylodiscitis (35%), which we had examined in another study [28]. The mean time of the diagnosis of screw loosening in the latter study was 110 days, while the median follow-up for patients with spinal metastases in this study was 79 days and for patients with a primary bone tumor, 349 days. So, the shorter follow-up interval of patients with spinal metastases in this study could bias the rate. However, pedicle screw loosening after spinal instrumentation for infectious indication and for oncological indication cannot be compared directly because of different factors influencing the loosening process, such as the use of cement augmentation, different bone quality, the role of biofilm-producing bacteria and different surgical strategies. The preoperative KPS of patients in our study was within the range of other recent studies about surgical treatment for spinal neoplasms [24,29]. A strong association between KPS and survival after surgery for spinal metastases was shown before [30]. In our study, the majority of patients preserved or even improved their KPS and neurological function postoperatively. Modern oncological therapeutic concepts and better screening methods have led to a prolonged survival of cancer patients [7]. As a consequence, modern spinal tumor surgery not only aims on spinal stabilization, prevention of or recovery from neurological deficits and pain reduction, but also on long-term symptom control. Therefore, durable constructs are required, enabling an optimal application of adjuvant radiotherapy and an optimal long-term follow-up imaging. 
These requirements become even clearer given the fact that the majority of patients in our study presented with pain without neurological impairment, while the percentage of patients with neurological symptoms was higher in older studies [31]. It has been shown in vitro [32] and in vivo [13] that CFR-PEEK reduces artifacts on CT and MR imaging and shows less perturbation effects on radiotherapy dose distributions [15] than titanium, fulfilling the requirements for an optimal application of radiotherapy (Figures 5 and 6) and optimal long-term follow-up imaging. The advantages of CFR-PEEK on follow-up imaging have already been shown in the field of pyogenic spondylodiscitis [33]. Strengths of This Study This is the largest study, to the best of our knowledge, of consecutive patients with spinal neoplasms routinely operated on using a CFR-PEEK pedicle screw system that reports on safety and efficacy. Limitations of This Study There are several limitations of this study. (1) It is a retrospective study without a randomized control group; thus, CFR-PEEK cannot be compared to titanium directly. (2) No standardized follow-up examinations were performed and the follow-up period was comparably short. (3) In some cases, titanium cages were used for reconstructing the anterior column, and in some cases titanium rods were used for posterior stabilization, degrading the advantages of CFR-PEEK pedicle screws regarding artifacts. However, this study did not aim at a qualitative evaluation of postoperative imaging and radiotherapy planning. Conclusions CFR-PEEK is a safe and efficient alternative to titanium for spinal instrumentation for spinal neoplasms, with low complication and revision rates in routine use. We recommend using CFR-PEEK for spinal oncological surgery in the context of modern cancer therapy so that patients can benefit from an optimized application of radiotherapy and from an earlier detection of tumor recurrence. To further prove these clinical advantages, prospective studies with long-term follow-up are necessary.
2022-10-30T15:18:40.373Z
2022-10-27T00:00:00.000
{ "year": 2022, "sha1": "035f0559773e88b76d6ce0248af2ce96f4006e1d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/14/21/5275/pdf?version=1666862630", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9a55158862d835f5d6d12f58d8c4c931c82285da", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
246016161
pes2o/s2orc
v3-fos-license
A non-parametric Plateau problem with partial free boundary We consider a Plateau problem in codimension $1$ in the non-parametric setting. A Dirichlet boundary datum is given only on part of the boundary $\partial \Omega$ of a bounded convex domain $\Omega\subset\mathbb{R}^2$. Where the Dirichlet datum is not prescribed, we allow a free contact with the horizontal plane. We show existence of a solution, and prove regularity for the corresponding minimal surface. Finally we compare these solutions with the classical minimal surfaces of Meeks and Yau, and show that they are equivalent when the Dirichlet boundary datum is assigned in at most $2$ disjoint arcs of $\partial \Omega$. Introduction Let Ω ⊂ R 2 be a bounded open convex set; in this paper we look for an area-minimizing surface which can be written as a graph over a subset of Ω, and spanning a Jordan curve Γ σ = γ ∪ σ ⊂ R 2 × [0, +∞). Here γ is fixed (Dirichlet condition) and is given by a family {γ i } n i=1 ⊂ ∂Ω × [0, +∞) of n ∈ N curves each joining pairs of points {(p i , q i )} n i=1 of ∂Ω. Whereas σ, which represents the free boundary, consists of (the image of) n curves (σ 1 , . . . , σ n ) sitting in the plane containing Ω (also called free boundary plane), and joining the endpoints of γ in order that γ ∪ σ forms a Jordan curve Γ σ in R 3 . We assume that each γ i is Cartesian, i.e., it can be expressed as the graph of a given nonnegative function ϕ defined on a corresponding portion of ∂Ω. This allows to restrict ourselves to the Cartesian setting, and to assume that the competitors for the Plateau problem are expressed by graphs of functions ψ defined on a suitable subdomain of Ω depending on σ. Our prototypical example is given by the catenoid. Consider a cylinder in R 3 with a circle of radius r as basis, and height l. Choose a system of Cartesian coordinates in which the x 1 x 2 -plane contains the cylinder axis, and restrict attention to the half-space {x 3 ≥ 0} as in Figure 1, where Ω = (0, l) × (−r, r) and n = 2. Write where ∂ 0 1 Ω = (0, l) × {r}, ∂ 0 2 Ω = (0, l) × {−r}, ∂ D 1 Ω = {0} × (−r, r), and ∂ D 2 Ω = {l} × (−r, r). On the Dirichlet boundary ∂ D Ω = ∂ D 1 Ω ∪ ∂ D 2 Ω we prescribe a continuous function ϕ whose graph consists of the two half-circles γ 1 and γ 2 . The endpoints of γ 1 and γ 2 live on the free boundary plane (the horizontal plane) and are p 1 = (0, −r), q 1 = (0, r), and p 2 = (l, r), q 2 = (l, −r) respectively. The free boundary σ consists of two curves σ 1 and σ 2 with endpoints q 1 , p 2 , and q 2 , p 1 , respectively, constrained to stay in Ω. The concatenation of γ = γ 1 ∪ γ 2 and σ forms a Jordan curve in R 3 Therefore we proceed to look for an area-minimizer among all Cartesian surfaces S with boundary Γ σ keeping σ free, i.e. we minimize the area among all pairs (σ, S). In this particular case of the catenoid, a minimizing sequence ((σ k , S k )) tends (in a suitable way specified in the sequel) to a minimizer (σ, S) which allows for two different possibilities. If l is small, σ 1 and σ 2 remain disjoint and the classical catenoid (half of it, namely the intersection between the catenoid and the half-space {x 3 ≥ 0}) is the surface S, in turn coinciding with the graph of a function ψ defined on the region of Ω "enclosed" by σ. If instead l is large, the two curves σ 1 and σ 2 merge and the region of Ω enclosed by σ tends to become empty (it reduces to the two segments ∂ D 1 Ω ∪ ∂ D 2 Ω). This describes the solution given by two (half) discs. 
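The transition between the catenoid-type solution and the two-disc solution sketched above can be made quantitative for the rotationally symmetric model case. The short computation below finds the classical critical ratio l/r beyond which no catenoid spanning the two circles exists; it is meant only as an illustration of the small-l versus large-l dichotomy described above, and does not use the paper's notation or free-boundary formulation.

```python
# Numerical illustration of the catenoid example above: a catenoid spanning two coaxial
# circles of radius r at distance l exists only if l/r is below a critical ratio, found by
# maximizing 2t/cosh(t), where r = a*cosh(l/(2a)) and t = l/(2a). Beyond that ratio only
# the two-disc configuration survives, which is the "large l" regime described in the text.
import numpy as np

t = np.linspace(0.01, 3.0, 30001)
ratio = 2 * t / np.cosh(t)          # l/r as a function of t = l/(2a)
critical = ratio.max()

print(f"critical l/r ~ {critical:.4f}")   # ~1.3255
# For l/r above this value no catenoid exists and the minimizer degenerates into two discs;
# already somewhat below it the two-disc ("Goldschmidt") configuration has the smaller area,
# consistent with the transition described for the prototypical example.
```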
A peculiarity of our problem is the presence of a free boundary. The problem of Plateau with partial free boundary has been exhaustively studied (see for instance [10]) but never investigated, to our best knowledge, with the non-parametric approach. Referring to Section 2 for the precise description of the mathematical framework, here we just describe it with few details. We fix some distinct points p 1 , q 1 , p 2 , q 2 , . . . , p n , q n ∈ ∂Ω taken in clockwise order. The part of ∂Ω between the points p i and q i is noted by ∂ D i Ω, and the part between q i and p i+1 by ∂ 0 i Ω. We fix a nonnegative continuous function ϕ : ∂Ω → [0, +∞) which is positive on ∂ D Ω = ∪ n i=1 ∂ D i Ω and vanishes on {p i , q i } n i=1 ∪ ∂ 0 Ω with ∂ 0 Ω = ∪ n i=1 ∂ 0 i Ω , and we consider Lipschitz injective and mutually disjoint curves σ i in Ω, i = 1, . . . , n, joining p i to q i+1 . We suppose the graph of ϕ on ∂ D Ω to be a Lipschitz curve in R 3 . We define E(σ) := ∪ n i=1 E(σ i ), with E(σ i ) the planar closed region enclosed between ∂ 0 i Ω and σ i . We define the two classes On the boundary of the convex set Ω we have fixed the points p i , q i ; the arc of ∂Ω joining p i to q i is ∂ D i Ω, while the arc joining q i to p i+1 is ∂ 0 i Ω (p 4 := p 1 ). On ∂ D Ω the Dirichlet boundary datum ϕ is imposed, whose graph has been depicted. The dotted arcs are the free planar curves σ i joining the pairs (q i , p i+1 ). We want to find a solution to the following minimum problem: inf (σ,ψ)∈Xϕ A(ψ; Ω \ E(σ)), (1.3) where A denotes the classical area integral, i.e., A(ψ; Ω \ E(σ)) := Ω\E(σ) » 1 + |∇ψ| 2 dx. (1.4) Since, in general, existence of minimizers is not guaranteed in the class X ϕ , we need to formulate this problem to a more suited space of admissible pairs. Specifically, a standard relaxation procedure leads one to analyse the problem above for pairs (σ, ψ) belonging to Σ×BV (Ω), where Σ is a suitable class containing Σ but which also allows for partial overlapping of the curves σ i (a precise definition is given in Section 2.3). Therefore we shall be concerned with the study of the functional F ϕ defined as where (σ, ψ) ∈ W ⊂ Σ × BV (Ω), W is the space of pairs (σ, ψ) ∈ Σ × BV (Ω) such that ψ = 0 a.e. on E(σ), and A(ψ; Ω) is the relaxed area functional defined as in (2.1), which accounts for the area of the generalized graph of the map ψ on Ω. The functional F ϕ extends the area integral A to the larger class W. We then prove the following result, accounting for existence and regularity of minimizers of F ϕ . Theorem 1.1. There exists a minimizer of F ϕ on W. Moreover, any minimizer (σ, ψ) ∈ W of F ϕ satisfies the following regularity properties: (1) The region E(σ) consists of a family of closed convex sets. The boundary ∂E(σ) is given by the union of the arcs ∂ 0 Ω and a family of disjoint Lipschitz curves in Ω (joining the points p i and q j , in some order). Moreover, if ∂ D i Ω is not a straight segment, then ∂ D i Ω ∩ ∂E(σ) = Ø. If instead ∂ D i Ω is a straight segment, then either ∂ D i Ω ∩ ∂E(σ) = Ø or ∂ D i Ω ∩ ∂E(σ) = ∂ D i Ω. (3) If Ω ∩ ∂E(σ) = Ø, there is at least a minimizer (σ, ψ) such that ψ is continuous and null on Ω ∩ ∂E(σ), and moreover Ω ∩ ∂E(σ) consists of a family of mutually disjoint smooth curves (joining p i and q j in some order). A comparison with classical solutions of the Plateau problem in parametric form is in order. 
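This comparison is based on an elementary doubling identity, which we record here as a sketch in our notation (valid, say, for a Lipschitz ψ which is positive in Ω \ E(σ)): if G_ψ denotes the graph of ψ on Ω \ E(σ) and sym(G_ψ) its mirror image with respect to the plane containing Ω, then
\[
\mathcal H^2\big(\mathcal G_\psi\cup\mathrm{sym}(\mathcal G_\psi)\big)\;=\;2\,\mathcal H^2(\mathcal G_\psi)\;=\;2\int_{\Omega\setminus E(\sigma)}\sqrt{1+|\nabla\psi|^2}\,dx,
\]
since the two copies meet only along the free boundary curves. In other words, the graph of a competitor ψ together with its reflection is a candidate surface for the Plateau problem spanning the contour obtained by doubling γ across the free boundary plane; this is the mechanism behind the comparison carried out in Section 6.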
Denoting by γ i the graph of the map ϕ on ∂ D i Ω, we consider also sym(γ i ), namely the graph of −ϕ on ∂ D i Ω, which is symmetric to γ i with respect to the plane containing Ω. Setting Γ i := γ i ∪sym(γ i ), this turns out to be a simple Jordan curve in R 3 , for all i = 1, . . . , n. Hence we can consider the classical Plateau problem for the curves Γ i . In the case n = 1 it is intuitive that a disc-type minimal surface S spanning Γ = Γ 1 will be symmetric with respect to the plane containing Ω, and that S + := S ∩ {x 3 ≥ 0} will be a minimal disc with partial free boundary on Ω. It is interesting to compare such a minimal disc with the graph of ψ, where (σ, ψ) ∈ W is a minimizer as in Theorem 1.1. Actually, in this simple case n = 1, it is not difficult to see that S + is Cartesian, and it is the graph of a function ψ which is positive outside the convex region E(σ) enclosed by σ and ∂ 0 Ω, and further (σ, ψ) is a minimizer as provided by Theorem 1.1. Also the converse is true: Any minimizer (σ, ψ) that satisfies (1)-(3) of Theorem 1.1 has as graph of ψ a disc-type surface S + whose double S = S + ∪ S − is a classical solution to the Plateau problem for the curve Γ. This result is rigorously stated in Theorem 6.1 of Section 6.1. In Section 6.2 we instead analyse the case n = 2. In this case one might look for minimal surfaces obtained as union of two discs spanning Γ 1 and Γ 2 , or else for a catenoid-type surface spanning Γ = Γ 1 ∪ Γ 2 together. Appealing to an existence result due to Meeks and Yau [18], we are able to show the counterpart of Theorem 6.1: Theorem 6.5, that essentially states that any minimizer (σ, ψ) ∈ W of F ϕ satisfying properties (1)-(3) of Theorem 1.1 is (the nonnegative half of) a Meeks-Yau solution, and vice-versa. In order to prove Theorem 6.5 we will strongly use the convexity of the domain Ω, which implies that the cylinder Ω × R, which contains Γ on its boundary, is convex, and so the results of Meeks and Yau are applicable. Due to the highly nontrivial arguments used to prove this result, we restrict our analysis to the case n = 2, since a generalization to the case n > 2 probably requires heavy modifications. Indeed, some of the lemmas needed to prove Theorem 6.5 employ crucially the fact that ∂ 0 Ω consists of only two connected components. For this reason we leave the case n > 2 for future investigations. Let us now come to the reasons for our study. One motivation is the description of a cluster of soap films which are constrained to wet a given system of wires γ emanating from a given free boundary plane. The soap films are expected to arrange in such a way to form a free boundary on the plane. Therefore, the questions of existence of a minimal configuration and its regularity naturally arise. A second motivation is related to the description of the singular part of the L 1relaxation of the Cartesian 2-codimensional area functional computed on nonsmooth maps. The L 1 -relaxed area functional [1,14], denoted by A(·; U ), is mostly unknown, up to a few exceptions, see [1,[5][6][7]20]. One of the remarkable exceptions is the case of 3 Ω are the set where ϕ is prescribed and positive. In the set ∂ 0 Ω =q 1 p 2 ∪q 2 p 3 ∪q 3 p 1 and on E(σ) = E(σ 1 ) ∪ E(σ 2 ) ∪ E(σ 3 ) we prescribe ψ = 0. The curves σ i joining q i to p i+1 (with the corresponding set E(σ i )) are indicated. On the dotted segment σ 1 and σ 2 overlaps with opposite orientations. The emphasized region Ω \ E(σ) is the one where ψ is not necessarily null. 
|x| : in this case it can be proved that where the infimum is taken over all pairs (σ, ψ) ∈ Σ × BV (R 2l ) with ψ = 0 a.e. on E(σ). Here the setting is the following: 1)), p = (0, 1), q = (2l, 1), and σ is a unique curve in R 2l joining p to q. The Dirichlet datum ϕ : This setting is similar to the catenoid case, with the difference that the Dirichlet boundary is here extended to include the basis (0, 2l) × {−1} and the free curve σ is just one simple curve (see Figure 4). In order to construct a recovery sequence for the relaxed area (1.6) of the vortex map, it is essential to analyse the existence and regularity of minimizers of F ϕ . In particular, it is necessary to show that there is at least one sufficiently regular 1 minimizer (σ, ψ). The shape of the curve σ and the graph of ψ are related to the vertical part of a Cartesian current in B l (0) × R 2 which arises as limit of (the graphs of) a recovery sequence ( According to what happens for the catenoid, also in this case we have a dichotomy for the behaviour of minimizers (σ, ψ). When l is small, the solution (σ, ψ) consists of a curve σ joining p and q whose interior is contained in R 2l , and its shape is so that E(σ) is convex; at the same time the graph of ψ on R 2l \ E(σ) is a sort of half-catenoid, so that if we double it considering also its symmetric with respect to the plane containing R 2l , it becomes a sort of catenoid spanning two radius one circles, and constrained to contain the segment (0, 2l) × {−1}. When instead l is larger Figure 4: The domain R 2l of the vortex map. The graph of ϕ on ∂ D R 2l is emphasized (in particular ϕ = 0 on the lower horizontal side), together with an admissible curve σ, which in this specific case partially overlaps the Dirichlet boundary. In this example n = 1. than a certain threshold, then the solution reduces to two circles spanning the two radius one and parallel circles. The structure of the paper is as follows. In Section 2 we introduce the setting of the problem in detail. In order to prove existence of minimizers of F ϕ we first restrict ourselves to prove the result in a smaller class W conv ⊂ W of admissible pairs (σ, ψ), where compactness is easier and allows to make use of the direct method. Roughly speaking, the class W conv accounts only for specific geometries of the free boundary σ, namely, it considers configurations for which each set E(σ i ) is convex. In Section 3 we prove the existence of minimizers of F ϕ in W conv . Next, in Section 4, we show the existence of minimizers in the wider class W where, essentially, σ is not constrained to the previous geometric features; this result is contained in Corollary 4.3. To show this we consider a minimizing sequence in W and we modify it, by a cut and paste procedure, in order to construct a minimizing sequence in W conv . In Section 5 we study the regularity properties of minimizers. Specifically, we state and prove Theorem 5.1, which rephrases in a more precise way the results contained in Theorem 1.1. Theorem 1.1 follows from Theorem 3.1, Corollary 4.3, and Theorem 5.1. Eventually in Section 6 we compare the solutions we found with the classical minimal surfaces spanning Γ. Here, as anticipated, we restrict our analysis to the case n = 1, 2, the case n = 2 essentially giving rise to either a catenoid-type minimal surface, or two disc-type surfaces spanning Γ 1 and Γ 2 . The main theorems here are Theorems 6.1 and 6.5. 
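Concerning the motivation coming from the vortex map, it may help to recall the integrand of the Cartesian 2-codimensional area mentioned above; this is a standard computation, stated here for convenience. For a smooth map u = (u^1,u^2) : U ⊂ R² → R², the area of its graph in U × R² is
\[
\int_U\sqrt{1+|\nabla u^1|^2+|\nabla u^2|^2+(\det\nabla u)^2}\,dx,
\]
and A(u;U) denotes the L¹-relaxation of this functional over nonsmooth maps. For the vortex map u(x) = x/|x| the integral above is finite on B_l(0), but the relaxed functional carries an additional singular contribution concentrated over the origin, and it is this singular contribution that formula (1.6) describes through an infimum over pairs (σ,ψ) on the rectangle R_{2l}. Coming back to Theorems 6.1 and 6.5: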
The proof of the former, for the case n = 1, is quite simple, whereas Theorem 6.5, for the case n = 2, requires a series of lemmas. In particular, if S is a Meeks-Yau catenoid-type minimal surface, at one step, we need to employ a Steiner symmetrization of the 3-dimensional finite perimeter set in Ω × R enclosed by S. In turn, using standard results on the condition of equality for the perimeters of a set and its symmetrization, we are able to show that the starting surface S were already symmetric with respect to the plane containing Ω, and already Cartesian, and the conclusion of the proof of Theorem 6.5 is achieved. Area of the graph of a BV function Let U ⊂ R 2 be a bounded open set. For any ψ ∈ BV (U ) we denote by Dψ its distributional gradient, so that where ∇ψ is the approximate gradient of ψ and D s ψ denotes the singular part of Dψ. We recall that the L 1 -relaxed area functional reads as [15] A(ψ; U ) := In what follows we denote by ∂ * A the reduced boundary of a set of finite perimeter A ⊂ R 3 (see [2]). For any ψ ∈ BV (U ) we denote by R ψ ⊂ U the set of regular points of ψ, namely the set of points x ∈ U which are Lebesgue points for ψ, ψ(x) coincides with the Lebesgue value of ψ at x and ψ is approximately differentiable at x. We define the subgraph SG ψ of ψ as This turns out to be a finite perimeter set in U ×R. Its reduced boundary in U ×R is the generalised denotes the integral current given by integration over SG ψ and ∂〚SG ψ 〛 ∈ D 2 (R 3 ) is its boundary in the sense of currents, then with 〚G ψ 〛 denoting the integer multiplicity 2-current given by integration over G ψ (suitably oriented; see [13] for more details). Hausdorff distance If A, B ⊂ R 2 are nonempty, the symbol d H (A, B) stands for the Hausdorff distance between A and B, that is where d F (·) is the distance from the nonempty set F ⊆ R 2 . If we restrict d H to the class of closed sets, then d H defines a metric. Moreover: (H4) If A ∈ K is convex, then there exists a sequence (A n ) n ⊂ K of convex sets with boundary of class C ∞ such that d H (A n , A) → 0 as n → ∞; (H5) Let (A n ) n be a sequence of closed convex sets in R 2 , A ⊂ R 2 and d H (A n , A) → 0 as n → +∞. Then A is convex as well; (H6) Let (A n ) n and A be compact convex subsets of R 2 such that d H (A n , A) → 0 and let x ∈ int(A); then x ∈ A n definitely in n; (H7) Let A and B be closed subsets of 1. Property (H1) is straightforward, while (H2) is well-known. Also property (H3) is easily obtained (see, e.g. [21]). Concerning property (H4) we refer to, e.g., [4,Corollary 2]. To see (H5), from (H1) we have that d An → d A pointwise, and therefore since d An is convex, also d A is convex, which implies A convex 2 . Let us now prove (H6) by contradiction; assume that there exists a subsequence (n k ) such that d An k (x) > 0 for all k ∈ N; then x ∈ R 2 \ A n k , d An k (x) = d ∂An k (x), and using (H1) twice, the first equality following from (H3). This implies x ∈ ∂A, a contradiction. Setting of the problem We fix Ω ⊂ R 2 to be an open bounded convex set (strict convexity is not required) which will be our reference domain. Given two points p, q ∈ ∂Ω in clockwise order, Ù pq stands for the relatively open arc on ∂Ω joining p and q. Let n ∈ N, n ≥ 1, and let {p i } n i=1 be distinct points on ∂Ω chosen in clockwise order; we set p n+1 := p 1 . For all i = 1, . . . , n let q i be a point inṗ i p i+1 ⊂ ∂Ω. We set and Since ∂ D i Ω and ∂ 0 i Ω are relatively open in ∂Ω, so are ∂ D Ω and ∂ 0 Ω. 
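Before completing the description of the setting, we record for later use the standard forms of two objects introduced above; the displays are the usual definitions from the literature (cf. [15] for the first), stated here in our notation rather than quoted verbatim. The L¹-relaxed area functional of Section 2.1 is
\[
\mathcal A(\psi;U)\;:=\;\inf\Big\{\liminf_{k\to\infty}\int_U\sqrt{1+|\nabla\psi_k|^2}\,dx\;:\;\psi_k\in C^1(U),\ \psi_k\to\psi\ \text{in }L^1(U)\Big\},
\]
and for a scalar ψ ∈ BV(U) it admits the representation
\[
\mathcal A(\psi;U)\;=\;\int_U\sqrt{1+|\nabla\psi|^2}\,dx+|D^s\psi|(U).
\]
The Hausdorff distance of Section 2.2 between nonempty sets A, B ⊂ R² is
\[
d_{\mathcal H}(A,B)\;:=\;\max\Big\{\sup_{a\in A}d_B(a),\ \sup_{b\in B}d_A(b)\Big\},
\]
with d_F(·) the distance function from the nonempty set F. Coming back to the decomposition of ∂Ω: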
It follows that ∂Ω is the disjoint union We will make a further regularity assumption on ϕ: we require that the graph i Ω is a Lipschitz curve in R 3 , for all i = 1, . . . , n. Remark 2.4. The hypothesis ϕ > 0 on ∂ D Ω excludes from our analysis the example in Figure 4 of the introduction. We will further comment on this later on (see Section 5.1); the presence of pieces of ∂ D Ω where ϕ = 0 will bring to some additional technical difficulties that we prefer to avoid here. However, the setting in Figure 4 can be easily achieved by an approximation argument. Namely, one considers a suitable regularization ϕ ε of ϕ on ∂ D Ω such that ϕ ε > 0, and then letting ε → 0 one obtains a solution to the problem with Dirichlet datum ϕ. We will analyse the functional F = F ϕ defined in (1.5), namely where the pair (σ, ψ) belongs to the admissible class W, defined as follows: where (i') σ = (σ 1 , . . . , σ n ) with σ i injective, σ i (0) = q i and σ i (1) = p i+1 , for all i = 1, . . . , n; (ii') For i = 1, . . . , n, denoting by E(σ i ) ⊂ Ω the closed region enclosed between ∂ 0 i Ω and σ i ([0, 1]), we assume int(E(σ i )) ∩ int(E(σ j )) = Ø for i = j where int denotes the interior part; we also set (2.7) Remark 2.5. The injectivity property in (i') guarantees that the sets E(σ i ) are simply connected (not necessarily connected). The assumption that the interior int(E(σ i )) of the sets E(σ i ) are mutually disjoint is an hypothesis on the curves σ i , which essentially translates into the fact that these curves cannot cross transversally each other, but might overlap. Notice that int(E(σ i )) might be empty, as the case ∂ 0 i Ω = σ i ([0, 1]) is not excluded. The strategy to show existence and regularity of minimizers of the functional (2.5) (see (3.1)) is to reduce to study the same functional on a restricted class of competitors, more precisely to reduce our analysis to the case where the sets E(σ i ) are convex. Specifically, we define: (i) For all i = 1, . . . , n the set E(σ i ) is convex. As we have already said, the sets int(E(σ i )) might also be empty, since from assumption (i') we cannot exclude that σ i overlaps ∂ 0 i Ω: Recalling that Ω is convex, this can happen, by (ii') and (i), only ifq i p i+1 is a straight segment 3 . Clearly, (2.9) Remark 2.6. Exploiting the characterization of the boundaries of convex sets given in Corollary 2.3, we see that conditions (i'),(ii') and (i) for the curves in Σ conv imply the following: (P) For all i = 1, . . . , n there is a nondecreasing function θ i : [0, 1] → R with θ i (1) − θ i (0) ≤ 2π, and such that, setting γ i (t) := (cos(θ i (t)) , sin(θ i (t))) for all t ∈ [0, 1], we have Here we have denoted the length of σ i by (σ i ). Existence of minimizers of F in W conv The main result of this section reads as follows. Then ((σ) k , ψ k ) k admits a subsequence converging to an element of W conv . Proof. We divide the proof in two steps. Indeed we have Now taking the limit as k → +∞ in (3.2) we conclude. Thus lim k→+∞ σ ik = σ i uniformly, hence we also conclude that σ i takes values in Ω. It remains to show that E(σ i ) is convex for any i ∈ {1, . . . , n}. The uniform convergence of (σ ik ) yields lim This, together with property (H3), gives for h ≥ k, and so (E(σ ik )) k∈N is a Cauchy sequence in the space of compact subsets of R 2 endowed with the Hausdorff distance (see (H2)). We find K ⊂ R 2 convex compact such that d H (E(σ ik ), K) → 0. Eventually from (H3) we get is convex by property (H5). Proof. 
By a standard argument [15], the functional The converse inequality is a consequence of Fatou's Lemma and (H6), indeed The assertion of the lemma follows. Proof of Theorem 3.1. By Lemma 3.4 and Lemma 3.5 we can apply the direct method and conclude. Existence of a minimizer of F in W In this section we extend the previous results to the minimization of F in the larger class W of competitors. One issue we find in minimizing the functional F on W, is that the class Σ in (2.6) is not closed under uniform convergence, since a uniform limit of elements in Σ needs not be formed by injective curves. To overcome this difficulty, in Theorem 4.1 we prove that the infimum of F over W coincides with the infimum of F over W conv . Thus in particular, by Theorem 3.1, we derive the existence of a minimizer for F in W (Corollary 4.3). Moreover every connected component of E(σ) is convex. Remark 4.2. Since the σ i 's may overlap, the assumption that every E(σ i ) is convex does not imply in general that every connected component of As a direct consequence of Theorem 4.1 we have: Let (σ, ψ) ∈ W conv be a minimizer as in Theorem 3.1. Then (σ, ψ) is also a minimizer of F in the class W. For the reader convenience we split the proof of Theorem 4.1 into a sequence of intermediate results: Lemmas 4.4,4.5,4.6, and the conclusion. First we need to introduce some notation. Let (σ, ψ) ∈ W. We fix an extension ϕ ∈ W 1,1 (B) of ϕ on an open ball B ⊃ Ω. Extending ψ in B \ Ω as ϕ (still denoting by ψ such an extension), we can rewrite F(σ, ψ) as Notice that the function u is defined only on the half-plane R × (0, +∞), and in (4.2) the symbol u(s) denotes its trace on the line R × {0}. where R u is the set of regular points of u. We have, recalling the notation in Section 2.1, Then, looking at G u as an integral current, a slicing argument yields where the last inequality follows from the following fact: If we denote by 〚G u 〛 t the slice of the current 〚G u 〛 on the line {x 1 = t}, then , and in writing δ (t,st,0) we are using that u has compact support in B r . This can be seen, for instance, by approximation of u by smooth maps 4 . Therefore This justifies the last inequality in (4.4), and the proof is achieved. We now turn to two technical lemmas which are necessary to prove Theorem 4.1. We need to introduce a class of sets whose boundaries are regular enough so that the trace of a BV function on them is well-defined. Precisely we say that an open subset of R 2 is piecewise Lipschitz if it can be written as the union of a finite family of (not necessarily disjoint) Lipschitz open sets. Notice that, by (2.1) if V ⊂⊂ U is a piecewise Lipschitz subset of an open and bounded U ⊂ R 2 , then Then, for any i ∈ {1, . . . , N }, Proof. Fix i ∈ {1, . . . , N }. By the convexity of Ω, we have ψ = ψ i in B \ Ω, hence it suffices to show that We start by observing that we may assume F i to be simply connected. Indeed, if not, we can replace it with the set obtained by filling the holes of F i , and by setting ψ equal to zero in the holes. This procedure reduces the energy. Indeed, since F i is piecewise Lipschitz, any hole H of it satisfies ∂H ⊂ ∪ n j=1 ∂A j where A j 's are the Lipschitz sets whose union is F i . Hence the trace of ψ H on ∂H is well-defined, and the external trace ψ (B \ H) vanishes. We have that (∂conv(F i )) \ ∂F i is a countable union of segments. We will next modify ψ by iterating at most countably many operations, setting ψ = 0 in the region between each of these segments and ∂F i . Step 1: Base case. 
Let l be one of such segments, and U be the open region enclosed between ∂F i and l. We define ψ ∈ BV (Ω) as We claim that where G := G ∪ U . To prove the claim we introduce the sets Note that H is a piecewise Lipschitz set. By construction and (4.8) will follow if we show that this can also be written as In turn A(ψ ; B) = A(ψ ; U ) + A(ψ ; B \ U ) (and similarly for ψ), so we have reduced ourselves with proving In view of the definition of ψ which is zero in U , we have 5 A(ψ ; U ) = l |ψ + |dH 1 +|U | (ψ + denoting the trace of ψ (B \ U ) on the segment l) implying that (4.9) is equivalent to , and the expression above is equivalent to We now prove (4.10). Fix a Cartesian coordinate system (x 1 , x 2 ) so that l belongs to the x 1 -axis and U belongs to the half-plane {x 2 > 0}. Let u be an extension of ψ in R×(0, +∞) which vanishes outside U . Lemma 4.4, applied to u with the ball B r = B, implies Here the last inequality follows by recalling that ψ (and thus u) vanishes on V . From this and the inequality l |ψ + |dH 1 ≤ l |ψ + − ψ U |dH 1 + l |ψ U |dH 1 the proof of (4.10) is achieved, so that (4.8) follows. Step 2: Iterative case. We set ∂(conv(F i )) \ ∂F i = ∪ ∞ j=1 l j with l j mutually disjoint segments. For every h ≥ 1 we define the pair (ψ h , G h ) as follows: where U 1 is the open region enclosed between ∂F i and l 1 . We also define where U h is the open region enclosed between ∂H h−1 and l h and H h : By construction each H h is simply connected and piecewise Lipschitz, For any h ≥ 2 we apply step 1, and after h iterations we get (4.12) In particular, for all h ≥ 1, and then we easily see that, up to a subsequence, (4.13) Finally, gathering together (4.11)-(4.13) we infer This concludes the proof. and Proof. Base case: (h = 1). We take the sets and let Then by Lemma 4.5, The next step is not necessary if N = 1. Iterative step: (h > 1). Suppose N > 1. Let 1 < h ≤ m ≤ N be natural numbers, and let F 1,h , . . . , F m,h be connected closed subsets of Ω with nonempty interior that satisfy the following property: There exists 1 ≤ k < h such that: Notice that, if m > 1, for h = 2 the sets in the base case satisfy (1), (2) with m = N and k = 1. We then set (a) If I k = Ø we define the sets (b) if I k = Ø, up to relabelling the indices, we may assume that Then we set In both cases (a) and (b) a direct check shows that the produced sets satisfy properties (1) and (2). We define also the function Then, by induction, for all h we use Lemma 4.5, and in view of (4.18) we infer Conclusion. If N = 1 it is sufficient to apply only the base case. If instead N > 1 after a finite number h ≤ N of iterations we obtain a collections of mutually disjoint and closed convex sets Since from (2.9) it follows inf we only need to show the converse inequality. Take a pair (σ,ψ) ∈ W; we suitably modify (σ,ψ) into a new pair (σ, ψ) ∈ W conv satisfying and this will conclude the proof. Let E(σ 1 ), . . . , E(σ n ) be the closed sets with mutually disjoint interiors corresponding toσ (as in (ii') of Section 2.3) and let G : Consider the (closure of the) connected components F 1 , . . . , F N of G, N ≤ n . Then by Lemma 4.6 there exist 1 ≤ñ ≤ N and ‹ F 1 , . . . , ‹ Fñ ⊂ Ω mutually disjoint closed and convex satisfying (4.14), (4.15) and (4.16). Therefore, by construction, for every i = 1, . . . , n, q i and p i+1 belong to ‹ F j for a unique j ∈ {1, . . . ,ñ}. For every j = 1, . . . ,ñ we denote by q j 1 , p j 1 +1 , . . . , q jn j , p jn j +1 , the ones that belong to F j . 
Then we conclude by taking (σ, ψ) ∈ W conv with σ := (σ 1 , . . . , σ n ) and for every j = 1, . . . ,ñ and ψ := ψ . Regularity of minimizers In this section we investigate regularity properties of minimizers of F. The main result reads as follows. satisfies the following properties: 1 Each connected component of E(σ) is convex; 2 ψ is positive and real analytic in Ω \ E(σ); Moreover, there is a minimizer (σ, ψ) ∈ W conv such that 5 Ω ∩ ∂E(σ) consists of a finite number of disjoint curves of class C ∞ , and ψ is continuous and null on ∂E(σ) \ ∂ D Ω. The prototypical example is given by the classical catenoid, as explained in the introduction (see also Figure 1) where, if the basis of the rectangle Ω = R 2 is large enough, a solution ψ is identically zero, and ∂ D Ω ⊂ ∂E(σ). This also explains why in point 5 of For the reader convenience we divide the proof in a number of steps. Proof. Item 1 follows by Theorem 4.1. By [15,Theorem 14.13] we also have that ψ is real analytic in Ω \ E(σ). Together with the strong maximum principle [15,Theorem C.4], this implies that, in Ω \ E(σ), either ψ > 0 or ψ ≡ 0. On the other hand, since Ω is convex we can apply [15,Theorem 15.9] and get that ψ is continuous up to ∂ D Ω \ ∂E(σ); in particular Lemma 5.4. Let Γ ⊂ R 3 be a rectifiable, simple, closed and non-planar curve satisfying the following properties: (1) Γ ⊂ ∂(F × R) for some closed bounded convex set F ⊂ R 2 with nonempty interior; (2) Γ is symmetric with respect to the horizontal plane R 2 × {0}; (3) There are an arc Ù pq ⊂ ∂F , with endpoints p and q, and Let S be a solution to the classical Plateau problem for Γ, i.e., a disc-type area-minimizing surface among all disc-type surfaces spanning Γ. Then: (1 ) β p,q := S ∩ (R 2 × {0}) ⊂ F is a simple curve of class C ∞ joining p and q such that β p,q ∩ ∂F = {p, q}; (2 ) S is symmetric with respect to R 2 × {0}; is the open region enclosed between Ù pq and β p,q . Moreover ψ is analytic in U p,q ; (4 ) The curve β p,q is contained in the closed convex hull of Γ, and F \ U p,q is convex. For later convenience we prove Lemma 5.4 under the more general assumption (3). Proof. Even though several arguments are standard, we give the proof for completeness. Step 1: β p,q is a simple curve joining p and q. Let B 1 ⊂ R 2 be the open unit disc centred at the origin and let be a parametrization of S with Φ(∂B 1 ) = Γ, that is harmonic, conformal, and therefore analytic in B 1 , continuous up to ∂B 1 . Further by (1) it follows that Φ is an embedding and hence injective (see [18] and also [10, page 343]). ) is a simple smooth curve joining p and q. Step 2: S is symmetric with respect to the horizontal plane R 2 × {0}. By step 1 the sets {w ∈ B 1 : Φ 3 (w) ≥ 0} and {w ∈ B 1 : Φ 3 (w) ≤ 0} are simply connected and the two surfaces have the topology of the disc. We assume without loss of generality that Then S is symmetric surface of disc-type with ∂ S = Γ and In particular S is a symmetric solution to the Plateau problem for Γ. Further S = S on a relatively open subset of S; hence, since they are real analytic surfaces, they must coincide, S = S. Step 3: S + is the graph of a function ψ ∈ W 1,1 (U p,q ) ∩ C 0 (U p,q ). To show this it is enough to check the validity of the following Claim: Every vertical plane Π is tangent to int(S) at most at one point. In fact by step 2 this readily implies that int(S + ) has no points with vertical tangent plane and hence we can conclude. 
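The proof of the claim, like several later arguments of the same flavour (for instance in the proof of Lemma 6.9), rests on the following elementary observation, recorded here in our notation as a standard fact: if d : R³ → R is affine, say d(x) = a·x + b, and Φ : B₁ → R³ is harmonic, then
\[
\Delta(d\circ\Phi)\;=\;a\cdot\Delta\Phi\;=\;0\qquad\text{in }B_1,
\]
so d∘Φ is harmonic in B₁. In particular, by the maximum principle, d∘Φ vanishes identically in any subdomain of B₁ on whose boundary it vanishes, and it is strictly positive inside any subdomain on whose boundary it is positive.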
We prove the claim arguing by contradiction as in [6, page 97], that is we assume there is a vertical plane Π tangent to int(S) at x and x with x = x . We define the linear map d ν (x) := (x − x ) · ν with ν a unit normal to Π, so that clearly Π = {x ∈ R 3 : d ν (x) = 0}. Since • at most two points and a segment; • two segments; • four points. Case 1: A 1,1 and A 1,2 belong to the same connected component containingw 1 w 2 . Then we can find two simple curves α 1 , α 2 contained in A 1,1 and A 1,2 respectively, that connect w to a point inw 1 w 2 and such that the region enclosed by the curve Since d ν • Φ > 0 on α 1 ∪ α 2 by the maximum principle we have a contradiction. Case 2: A 1,1 and A 2,1 belong to the connected component containingw 1 w 2 while A 1,2 and A 2,2 belong to the connected component containingw 3 w 4 . Then we can find four simple curves α i,j (with i, j = 1, 2) contained respectively in A i,j , such that α 1,1 (respectively α 2,1 ) connects w (respectively w ) to a point inw 1 w 2 and α 1,2 (respectively α 2,2 ) connects w (respectively w ) tow 3 w 4 . Then the region enclosed by the curve which again by the maximum principle gives a contradiction. Thus the claim follows. Step 4: The curve β p,q is contained in the closed convex hull of Γ, and the set F \ U p,q is convex. Let π(Γ) ⊂ ∂F be the projection of Γ onto the plane R 2 × {0}. By [10, Theorem 3, pag. 343] the relative interior of S is strictly contained in the convex hull of Γ, thus in particular the curve β p,q (respectively β p,q \ {p, q}) is contained (respectively strictly contained) in the same half-plane (with respect to the line pq) that contains π(Γ). Now, assume by contradiction that F \ U p,q is not convex. Then there are p , q ∈ β p,q with the following properties: • The open region U enclosed by β p,q and the segment p q is non-empty and contained in U p,q ; • the points p and q and the set U lie on the same side with respect to the line containing p q . Let then d W : R 3 → R be an affine function that vanishes on the vertical plane containing p q and is positive on the half-space W + containing p, q and U . We now observe that Γ ∩ W + is the union of two connected subcurves Γ 1 and Γ 2 , containing p and q respectively. As a consequence Φ −1 (Γ 1 ) =w 1 w 2 and Φ −1 (Γ 2 ) =w 3 w 4 for some w 1 , w 2 , w 3 , w 4 ∈ ∂B 1 (clockwise oriented). On the other hand since d W > 0 on U we can find t ∈ ∂U \ p q such that Once again by the harmonicity of d W • Φ : B 1 → R we deduce the existence of a curve α ⊂ {w ∈ B 1 : d W • Φ(w) > 0} joining Φ −1 (t ) to one ofw 1 w 2 andw 3 w 4 . Hence Φ(α) ⊂ Φ(B 1 ) = ψ(U p,q ) is a curve joining t to one of Γ 1 and Γ 2 , say Γ 1 . This implies that the projection π(Φ(α)) of Φ(α) onto the horizontal plane R 2 × {0} is a curve contained in U p,q that connects t to π(Γ 1 ). So in particular, the curve π(Φ(α)) cannot be included in the half-plane W + . But this contradicts the fact that α ⊂ {w ∈ B 1 : d W • Φ(w) > 0} (this is because the values of d W at a point x and π(x) are the same). We need also the following technical results on the distance function d F from a convex set F . Lemma 5.6. Let F ⊂ R 2 be bounded, closed and convex. Then Proof. By [8, Theorem 3.6.7 pag. 75] it follows that d F ∈ C 1,1 with ν η the outer unit normal to ∂B ∪ ∂(F + η ). By taking the limit as k → ∞ we get and lim where (5.5) follows by using that with C > 0 independent of η. By the arbitrariness of η > 0, the thesis follows. Corollary 5.7. 
Let U ⊂ R 2 be a bounded open set with Lipschitz boundary. Let F ⊂ R 2 be closed and convex such that U ∩F = Ø and let ψ ∈ W 1,1 (U )∩L ∞ (U )∩C 0 (U ). Then the following formula holds: where ν is the outer normal to ∂U and γ denotes the normal trace of ∇ d F on ∂U . Remark 5.8. The normal trace γ of ∇ d F on ∂F equals 1 H 1 -a.e. on ∂F . Indeed, from Corollary 5.7 we have that for all ϕ ∈ C 1 where we have used that ∂(F + η ) being a level set of d F , it results ∇ d F = ν η on it. Letting η → 0 and using that ∆ d F ∈ L 1 (B \ F ) for all balls B, we infer By the arbitrariness of ϕ and again by Corollary 5.7, the claim follows. Lemma 5.9. Let F ⊂ Ω be closed and convex with non-empty interior, and let δ > 0. Let Proof. Let ε ∈ (0, δ) and T ε : By Remark 5.8 we have and lim Moreover, since ∆ d F ∈ L 1 (T ε ) by Lemma 5.6, we deduce also Finally gathering together (5.8)-(5.11) we infer (5.6). Remark 5.10. Let F , δ and ψ be as in Lemma 5.9. Let α be any connected component of Ω ∩ ∂F , and for every 0 < ε < δ let α ε be the corresponding component of Ω ∩ ∂(F + ε ); namely, if π F is the orthogonal projection onto the convex closed set F , setting then one has α ε := α ε ∩ Ω. Arguing as in Lemma 5.9, we can show that Lemma 5.11. Let (σ, ψ) ∈ W conv be a minimizer as in Theorem 4.1. Then there is a minimizer ( σ, ψ) ∈ W conv with the following properties: 2. ψ is continuous and null on Ω ∩ ∂E( σ). The second condition means essentially that ψ vanishes on Ω ∩ ∂E( σ) when considering its trace from the side of Ω \ E( σ). Proof. We know by Lemma 5.3 that (σ, ψ) satisfies the following properties: • Each connected component of E(σ) is convex; • ψ is positive and real analytic in Ω \ E(σ); In what follows we are going to modify (σ, ψ) near each arc of ∂E(σ) using an iterative argument in order to get a new minimizer ( σ, ψ) ∈ W conv that satisfies 1-2. To this aim we denote by F 1 , . . . , F k with 1 ≤ k ≤ n the closed connected components of E(σ); we also set δ 0 := min i =j dist(F i , F j ) > 0. Moreover by the first property we deduce that Ω ∩ ∂E(σ) is the union of an at most countable family of pairwise disjoint arcs with endpoints in ∂Ω, i.e., where α i,j is a connected component of Ω ∩ ∂F i for i ∈ {1, . . . , k}, j ≥ 1 6 . Since S ε is a disc-type surface and ψ is analytic in Tε ε (α) it turns out that Yε is also a disc-type surface satisfying ∂Yε = Γε. Therefore using that Sε and S ε are solutions to the Plateau problems corresponding to Γε and Γ ε respectively, we have Passing to the limit as ε → 0 + , by (5.13) and the fact that H 1 (L ε ) → 0, we obtain We are finally in the position to conclude the proof of Theorem 5.1. Moreover by Lemma 5.11 there is a minimizer ( σ, ψ) ∈ W conv such that and ψ is continuous and null on Ω ∩ ∂E( σ). It remains to show that if ∂ D i Ω is not straight for some i = 1, . . . , n, then If instead ∂ D i Ω is straight for some i = 1, . . . , n we prove that property 4 holds. Eventually we show that there is a minimizer that satisfies property 5. This will be achieved in a number of steps. Step 1: Assuming that there is i ∈ {1, . . . , n} such that ∂ D i Ω is not straight, we show that ∂ D i Ω ∩ E( σ) = Ø. To prove this we proceed by analysing three different cases. Case A: Suppose, to the contrary, that there is a non-straight 8 arc Ù ab (with endpoints a = b) in ∂ D i Ω ∩ ∂E( σ). Thus in particular Ù ab ⊂ ∪ n j=1 σ j ([0, 1]). We may assume without loss of generality that Ù ab ⊂ σ 1 ([0, 1]). 
Then we consider the curves where In this way Γ satisfies the assumptions of Lemma 5.4 and hence a solution S to the Plateau problem spanning Γ is a disc-type surface such that: i. β a,b := S ∩ (R 2 × {0}) is a simple curve of class C ∞ joining a and b; ii. S is symmetric with respect to R 2 × {0}; iii. the surface S + := S ∩ {x 3 ≥ 0} is the graph of a function ψ a,b ∈ W 1,1 (U a,b ) ∩ C 0 (U a,b ), where U a,b ⊂ E( σ 1 ) is the open region enclosed between Ù ab and β a,b ; iv. the curve β a,b is contained in the closed convex hull of Γ and E( σ 1 ) \ U a,b is convex. The inclusion U a,b ⊂ E( σ 1 ) follows since Ù ab ⊂ σ 1 ([0, 1]), E( σ 1 ) is convex, and S is contained in the convex envelope of Γ. Furthermore by the minimality of S one has Here the strict inequality follows since the vertical wall spanning Γ given by } is a disc-type surface but, since Ù ab is not a segment, cannot be a solution to the Plateau problem. We now consider the pair ( σ, ψ) ∈ W conv given by Then noticing that ψ = 0 in U a,b , E( σ) = E( σ) ∪ U a,b , and recalling (5.18), we get where the penultimate equality follows from the fact that ψ is continuous and equal to ϕ on Ù ab while the traces of ψ and ψ coincide on ∂Ω \ Ù ab. This contradicts the minimality of ( σ, ψ). Case B : Suppose by contradiction that the set ∂ D i Ω ∩ ∂E( σ) contains an isolated point c or has a straight segment cc as isolated connected component. Then there are two arcs Ù ab ⊂ ∂ D i Ω andã b ⊂ ∂E( σ) with either a = a or b = b (and with endpoints a = b and a = b ) such that aa ∩ bb = Ø and Ù ab ∩ã b = {c} (respectively Ù ab ∩ã b = cc ). Notice also that, since ∂ D i Ω is not straight, the segment cc does not coincide with ∂ D i Ω and hence the arc Ù ab can be chosen so that it properly contains the segment cc . We consider the curves Notice that Γ ± connect a to b . By applying again Lemma 5.4 to the nonplanar curve Γ and arguing as in case A we obtain the contradiction also in this case. Case C : More generally, assume by contradiction that both the sets ∂ D i Ω∩∂E( σ) and ∂ D i Ω\∂E( σ) are nonempty. Then we can find a not flat arc Ù ab ⊂ ∂ D i Ω such that the following holds 9 : there are pairs of points {c j , d j } j∈N ⊂ ∂ D i Ω ∩ ∂E( σ) such that the arcsãd 0 ,ĉ 0 b, and {c j d j } ∞ j=1 are mutually disjoint and Without loss of generality, we might assume that all the points c j , d j ∈ σ 1 ([0, 1]). For all j ≥ 1 we denote by V j the region enclosed byc j d j and ∂E( σ) 10 . We now argue as in case B and choose a , b ∈ σ 1 ([0, 1]). Additionally, be the region enclosed between ∂E( σ) and aa ∪ãd 0 (∂E( σ) and bb ∪ĉ 0 b, respectively). We finally define Γ correspondingly, as in (5.20). Again by Lemma 5.4 the solution S to the Plateau problem corresponding to Γ satisfies properties i.-iv. with a and b in place of a and b respectively. Moreover by the minimality of S for every N ≥ 1 there holds 11 In particular by taking the limit as N → ∞ in (5.21) we get Let ( σ, ψ) ∈ W conv be defined as in (5.19), then observing that which in turn implies F( σ, ψ) ≤ F( σ, ψ) . (5.23) To conclude we need to show that the inequality in (5.23) is strict. To this aim we choose c ∈ {c j } ∞ j=1 . Consider the curves Γ 1 and Γ 2 defined as follows Let S 1 and S 2 be the solutions to the Plateau problem corresponding to Γ 1 and Γ 2 respectively, so that properties i.-iv. are satisfied with c in place of b and a respectively. By the minimality of S we have A(ψ a ,b , U a ,b ) < A(ψ a ,c , U a ,c ) + A(ψ c,b , U c,b ) . 
(5.24) 10 These regions are simply connected since cj, dj ∈ σ1([0, 1]). 11 The right-hand side is the area of the surface given by the (positive) subgraph of ϕ on Ù ab \ ∪ N j=1cj dj and the graph of ψ on the region ∪ N j=0 Vj, which is of disc-type. To see this we use that the trace of ψ on the subarcs of ∂E( σ) between the points cj and dj is zero (and between a and d0, and d0 and b ). On the other hand by arguing as above 12 we conclude and thus the contradiction. Step 2: Assuming there is i ∈ {1, . . . , n} such that ∂ D i Ω is a straight segment, and we show that i Ω has to be connected, i.e., it is either a single point a or a segment aa = ∂ D i Ω. In both cases we then consider a (small enough) ball B centred at a such that B ∩ E( σ) = B ∩ F (in the second case we also require that the radius of B is smaller than aa ). If ∂F ∩ ∂ D i Ω = {a} we let {p, q} := ∂B ∩ ∂F and {b, c} := ∂B ∩ ∂ D i Ω (with b, p and c, q lying on the same side with respect to a). Then we define the curves where Ù bp, Ù cq denote the arcs in ∂B joining b to p and c to q respectively. If ∂F ∩ ∂ D i Ω = aa we let {p, q} := ∂B ∩ ∂F and {b, c} := ∂B ∩ ∂ D i Ω where we identify q and c. Then we consider the curves By applying again Lemma 5.4 to Γ and arguing as above we get the contradiction. Step 3: We show that there is a minimizer ( σ, ψ) that satisfies property 5. We first notice that ψ is continuous and null on ∂E( σ) \ ∂ D Ω. Moreover by steps 1 and 2 it follows that ∂E( σ) ∩ Ω is the union of a finite number of pairwise disjoint Lipschitz curves each of them joining each p i for i = 1, . . . , n to each of the q j for some j = 1, . . . , n. To conclude it is enough to replace each curve, without increasing the energy, with a smooth one having the same endpoints. More precisely, let γ be any of such curves. Reasoning as in the proof of Lemma 5.11 step 1, we can replace ( σ, ψ) with a new minimizer (σ γ , ψ γ ) ∈ W conv such that ∂E(σ γ ) ∩ ∂Ω = ∂E(σ) ∩ ∂Ω and ψ γ = 0 on γ , where γ ⊂ ∂E(σ γ ) ∩ Ω is a suitable smooth curve that replaces γ and has the same endpoints of γ. In particular ψ γ is continuous and null on ∂E(σ γ ) \ ∂ D Ω. Eventually iterating this procedure for each curve in ∂E( σ) \ ∂Ω we can construct a new minimizer ( σ, ψ) with the required properties. 12 With the arc Ù ac ( Ù cb, respectively) in place of Ù ab. The example of the catenoid containing a segment Consider the setting depicted in Figure 4. Precisely, for ε > 0 and consider an approximating sequence (ϕ) ε of continuous Dirichlet data, with G ϕε Lipschitz, which tends to ϕ uniformly and satisfy ϕ ε = 0 on ∂ 0 Ω, ϕ ε > 0 on ∂ D Ω. Let (σ ε , ψ ε ) be a solution as in Theorem 1.1 corresponding to the boundary datum ϕ ε ; as F(σ ε , ψ ε ) is equibounded 13 , arguing as in the proof of Lemma 3.4, we can see that, up to a subsequence, ((σ ε , ψ ε )) tends to some (σ, ψ) ∈ W conv , which minimizes the functional F with Dirichlet condition ϕ. In this case however we cannot guarantee that σ does not touch ∂ D Ω, even if this is not a straight segment. This is essentially due to the presence of the portion [0, 2l] × {−1} of ∂Ω where ϕ is zero, which does not allow to apply the arguments used in the proof of Theorem 5.1. In particular, it can be seen that if l is large enough, the solution (σ, ψ) splits and becomes degenerate, being ψ ≡ 0 and the functional F pays only the area of two vertical half discs of radius 1. 
Under a certain threshold instead the solution satisfies the regularity properties stated in Theorem 5.1, and in particular ψ = ϕ on ∂ D Ω, and σ is the graph of a smooth convex function passing through p and q. We refer to [6] for details and comprehensive proofs of these facts. 6 Comparison with the parametric Plateau problem: The case n = 1, 2 In this section we compare the solutions provided by Theorems 3.1 and 5.1 with the solutions to the classical Plateau problem in parametric form. Specifically, motivated by the example of the catenoid, we will restrict our analysis to the classical disc-type and annulus-type Plateau problem. These configurations correspond to the cases n = 1 and n = 2 respectively, i.e., the Dirichlet boundary ∂ D Ω is either an open arc or the union of two open arcs of ∂Ω with disjoint closure. Due to the highly involved geometric arguments, we do not discuss the case n > 2, which requires further investigation. Thus, in this section we assume n = 1, 2. We first discuss the case n = 1 which is a consequence of Lemma 5.4, and then the case n = 2. The case n = 1 Let n = 1. Let p 1 , q 1 ∈ ∂Ω, ∂ D Ω = ∂ D 1 Ω, ϕ be as in Section 2.3 and consider the space curve γ 1 := G ϕ ∂ D 1 Ω joining p 1 to q 1 . We define the curve where Sym(γ 1 ) := G −ϕ ∂ D 1 Ω , and consider the classical Plateau problem in parametric form spanning Γ. More precisely we look for a solution to 1) 13 We can indeed always bound it from above by |Ω| + ∂ D Ω |ϕε|dH 1 . where is a weakly monotonic parametrization of Γ . Then the disc-type surface is a solution to the classical Plateau problem associated to Γ, i.e., there is Φ ∈ P 1 (Γ) solution to (6.1) such that Φ(B 1 ) = S. Precisely we set Σ ann ⊂ R 2 to be an open annulus enclosed between two concentric circles C 1 and C 2 , and we look for a solution to where is a weakly monotonic parametrization of Γ j for j = 1, 2 . Here the crucial assumption that we require is that the curves Γ j have the orientation inherited by the orientation 14 of the graph of ϕ on ∂ D j Ω. Due to the specific geometry of Γ we can appeal to Theorem 6.4 below (which is a consequence of [18, Theorem 1 and Theorem 5]) to deduce the existence of a minimizer. This might not be true for a more general Γ. To this purpose for j = 1, 2 we consider the minimization problem defined in (6.1) for the curve Γ j , namely with P 1 (Γ j ) defined as in (6.2). Remark 6.2. By standard arguments one sees that m 2 (Γ) ≤ m 1 (Γ 1 ) + m 1 (Γ 2 ). Indeed, two disctype surfaces can be joined by a very thin tube (with arbitrarily small area) in order to change the topology of the two discs into an annulus-type surface. Toward the proofs of Theorems 6.1 and 6.5: preliminary lemmas In order to prove Theorems 6.1 and 6.5, we collect some technical lemmas. (a) Suppose that Ω \ E(σ) is simply connected. Then there exists an injective map Φ ∈ W 1,1 (Σ ann ; and Φ C j : C j → Γ j is a weakly monotonic parametrization of Γ j for j = 1, 2. (b) Suppose that Ω \ E(σ) consists of two connected components, whose closures F 1 and F 2 are disjoint, with F j ⊇ ∂ D j Ω for j = 1, 2. Then there exist two injective maps Φ 1 , and Φ j ∂B 1 : ∂B 1 → Γ j is a weakly monotonic parametrization of Γ j for j = 1, 2. (b). It is sufficient to argue as in case (a), by replacing Ω \ E(σ) in turn with F 1 and F 2 and Σ ann with B 1 to find Φ 1 and Φ 2 , respectively. Lemma 6.7. Let n = 2, and (σ, ψ) ∈ W conv be a minimizer of F in W satisfying properties 1-5 of Theorem 5.1. 
(b) Suppose that Ω \ E(σ) consists of two connected components whose F 1 and F 2 are disjoint, and F j ⊃ ∂ D j Ω for j = 1, 2, and Let Φ 1 , Φ 2 be the maps given by Lemma 6.6 (b). Then, for j = 1, 2, there is a reparametrization of Φ j belonging to P 1 (Γ j ) and solving (6.5). Hence we conclude which contradicts (6.8). In the last inequality we have used that 2m 1 (λ) ≥ m 2 (Γ); this follows from the fact that a disc-type parametrization of a minimizer for m 1 (λ) can be reparametrized on a half-annulus (as in the proof of Lemma 6.6), and glued with another reparametrization of it on the other half-annulus, so to obtain a parametrization of an annulus-type surface spanning Γ which is admissible for (6.4). Hence claim (6.10) follows. Now, since ψ is Lipschitz continuous on H k , for all k ∈ N there exists a parametrization Ψ k ∈ H 1 (B 1 ; R 3 ) ∩ C 0 (B 1 ; R 3 ) with Ψ k (∂B 1 ) = λ k monotonically which solves the classical disc-type Plateau problem spanning λ k and such that Letting k → +∞ and using that the Dirichlet energy of Ψ k equals the area of G ψ H k , we conclude that (Ψ k ) tends to a map Ψ ∈ H 1 (B 1 ; R 3 ) ∩ C 0 (B 1 ; R 3 ) with Ψ(∂B 1 ) = λ weakly monotonically, and that is a solution of the classical disc-type Plateau problem with Arguing as in the proof of Lemma 6.6 we finally get a parametrization Φ : Σ ann → R 3 which belongs to P 2 (Γ) and parametrizes G ψ (Ω\E(σ)) ∪ G −ψ (Ω\E(σ)) . This concludes the proof of (a). (b). It is sufficient to argue as in case (a), by replacing Ω \ E(σ) in turn with F 1 and F 2 and Σ ann with B 1 to find Φ 1 and Φ 2 , respectively. We can now start the proof of Theorems 6.1 and 6.5. Proof of Theorem 6.5 The proof of Theorem 6.5 is much more involved, so we divide it in a number of steps. We start with a result (which can be seen as the counterpart of Lemma 5.4 for the Plateau problem defined in (6.4)) that will be crucial to prove (i). In what follows we denote by π : R 3 → R 2 × {0} the orthogonal projection. Proof. We recall that Φ : Σ ann → R 3 is an embedding. The fact that π(Φ(Σ ann )) is a subset of Ω and contains ∂ D 1 Ω ∪ ∂ D 2 Ω follows from the fact that the interior of Φ(Σ ann ) is contained in the convex hull of Γ. So it remains to show that π(Φ(Σ ann )) is simply connected. Suppose by contradiction that π(Φ(Σ ann )) is not simply connected. Let H be a hole of it, namely a region in Ω surrounded by a loop contained in π(Φ(Σ ann )) and such that H ∩ π(Φ(Σ ann )) = Ø; choose a point P ∈ H. We will search for a contradiction by exploiting that Σ ann is an annulus and using that the map Φ is analytic and harmonic. Figure 5: The horizontal section of two planes Π θ 1 and Π θ 2 intersecting ∂ 0 1 Ω and ∂ 0 2 Ω, respectively. By hypothesis on P , for all θ ∈ [0, 2π) the intersection between Φ(Σ ann ) and Π θ consists of a family of smooth simple curves, either closed or with endpoints on Γ. Correspondingly, Φ −1 (Φ(Σ ann ) ∩ Π θ ) is a family of closed curves in Σ ann , possibly with endpoints on C 1 ∪ C 2 . In case (1) also Φ −1 (Φ(Σ ann ) ∩ Π θ 1 +π ) consists of closed curves in Σ ann . Take two loops α and α in Φ −1 (Φ(Σ ann ) ∩ Π θ 1 ) and in Φ −1 (Φ(Σ ann ) ∩ Π θ 1 +π ) respectively. Let d 1 be the signed distance function from the plane Π θ 1 ∪ Π θ 1 +π , positive on ∂ D 2 Ω. 
Since d 1 • Φ changes its sign when one crosses transversally α and α , we easily see that both α and α cannot be homotopically trivial in Σ ann (by harmoniticy of d 1 • Φ, if for instance α is homotopically trivial in Σ ann , d 1 • Φ = 0 in the region enclosed by α, i.e. the image of Φ is locally flat, contradicting the analyticity of Φ). Hence, since Φ is an embedding, they run exactly one time around C 1 ; as a consequence, they must be homotopically equivalent to each other in Σ ann . On the other hand, they do not intersect each other (Φ is an embedding), so they bound an annulus-type region in Σ ann , and by harmonicity d 1 • Φ is constantly null in this region. This would imply again that the image by Φ of this annulus is contained in Π θ 1 ∪ Π θ 1 +π , a contradiction. As in case (1), let α ∈ Φ −1 (Φ(Σ ann ) ∩ Π θ 1 ) and β ∈ Φ −1 (Φ(Σ ann ) ∩ Π θ 2 ) be two loops. We know that α and β are closed in Σ ann . Again, we conclude that α and β are homotopically equivalent in Σ ann , and both run one time around C 2 . Assume without loss of generality that α encloses β, which in turn encloses C 2 . Since d 2 • Φ is positive on both α and C 2 , d 2 • Φ must be positive in the region enclosed between them, contradicting the fact that it vanishes on β. If instead we are in case (3) we can argue analogously to case (2) and get a contradiction. In all cases (1), (2), and (3), we reach a contradiction which derives by assuming that π(Φ(Σ ann )) is not simply connected. The proof is achieved. Proof. By Lemma 6.9, π(Φ(Σ ann )) is simply connected in Ω, and contains ∂ D Ω. Therefore Ω \ π(Φ(Σ ann )) consists of two simply connected components, one containing ∂ 0 1 Ω and the other containing ∂ 0 2 Ω. Let E 1 and E 2 be the closures of these two components 19 , so that in particular the boundary of E i is a simple Jordan curve of the form β i ∪ ∂ 0 i Ω for some embedded curve β i ⊂ Ω joining the endpoints of ∂ 0 i Ω. We will prove that E i is convex for i = 1, 2. This will also imply that β i are Lipschitz. Take i = 1, and assume by contradiction that E 1 is not convex. Thus we can find a line l in R 2 and three different points A 1 , A 2 , A 3 on l, with A 2 ∈ A 1 A 3 , so that A 2 is contained in Ω \ E 1 , and A 1 and A 3 belong to the interior of E 1 . Consider the region π(Φ(Σ ann ))\l, which consists in several (open) connected components. There is one of these connected components, say U , which does not intersect ∂ D Ω and whose boundary contains A 2 . In addition, U ∩ ∂ D Ω = Ø. Indeed, ∂U is the union of a segment L (containing A 2 ) and a curve γ (contained in β 1 ⊆ ∂(π(Φ(Σ ann ))) joining its endpoints. Hence, U \ U = γ ∪ L, and L cannot intersect ∂ D Ω by the hypothesis on A 1 , A 2 , and A 3 . Let Π l ⊂ R 3 be the plane containing l and orthogonal to the plane containing Ω; As usual, Π l ∩ Φ(Σ ann ) is a family of closed curves, possibly with endpoints on Γ ∩ Π l . Now, pick a point P on ∂U \ L, and let Q be a point on Φ(Σ ann ) so that π(Q) = P . Let d l : R 3 → R be the signed distance from Π l , with d l (Q) = d l (P ) > 0. We claim that, if D is the connected component of {w ∈ Σ ann : d l • Φ(w) > 0} containing the point Φ −1 (Q), then D ∩ ∂Σ ann = Ø. This would contradict the harmonicity of d l • Φ, since d l • Φ would be zero on D, but d l (Q) > 0. In the next step we show that there exists a set E ⊂ R 3 of finite perimeter such that We first fix some notation. We let 〚E〛 ∈ D 3 (R 3 ) be the 3-current given by integration over E with E ⊂ R 3 being a set of finite perimeter. 
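In the proof below we shall also invoke the constancy lemma for top-dimensional currents, which we recall in the standard form we need (a classical statement, stated here in our notation): if U ⊆ Rⁿ is open and connected and T ∈ D_n(U) satisfies
\[
\partial T\;=\;0\qquad\text{in }U,
\]
then T = m〚U〛 for some constant m, and m ∈ Z whenever T has integer multiplicity.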
To every MY solution Φ ∈ P 2 (Γ) to (6.4) we associate the push-forward 2-current Φ 〚Σ ann 〛 ∈ D 2 (R 3 ) given by integration over the (suitably oriented) surface Φ(Σ ann ) [17,Section 7.4.2]. Finally if T ∈ D k (U ) with U ⊂ R 3 open and k = 2, 3, we denote by |T | the mass of T in U [see [11, p. 358]]. Lemma 6.11 (Region enclosed by Φ(Σ ann )). Suppose m 2 (Γ) < m 1 (Γ 1 ) + m 1 (Γ 2 ) and let Φ ∈ P 2 (Γ) be a MY solution to (6.4). Then there is a closed finite perimeter set E ⊂ Ω × R such that ∂E = Φ(Σ ann ) in Ω × R. Proof. As Φ 〚Σ ann 〛 is a boundaryless integral 2-current in Ω×R, there exists (see, e.g., [17,Theorem 7.9.1]) an integral 3-current E ∈ D 3 (Ω × R) with ∂E = Φ 〚Σ ann 〛, and we might also assume that the support of E is compact in Ω × R. We claim that, up to switching the orientation of Φ 〚Σ ann 〛, E has multiplicity in {0, 1}, and hence is the integration 〚E〛 over a bounded measurable set E. This is a finite perimeter set if we show that the integration over (Ω × R) ∩ ∂ * E coincides with Φ 〚Σ ann 〛. We start by observing that Indeed, fixing k ∈ N, by the second equation in (6.15), we have that ∂ * E k is contained in the support of ∂E, which in turn is Φ(Σ ann ). As a consequence, if P = (P 1 , P 2 , P 3 ) ∈ (Ω × R) ∩ ∂ * E k , then P ∈ Φ(Σ ann ). Around P we can find suitable coordinates and a cube U = (P 1 − ε, P 1 + ε) × (P 2 − ε, P 2 + ε) × (P 3 − ε, P 3 + ε) such that Φ(Σ ann ) ∩ U is the graph G h of a smooth function h : (P 1 − ε, P 1 + ε) × (P 2 − ε, P 2 + ε) → (P 3 − ε, P 3 + ε). Moreover, Φ 〚Σ ann 〛 = 〚G h 〛 in U . We conclude 20 that E U = 〚SG h ∩ U 〛 + m〚U 〛, with SG h the subgraph of h, and m ∈ Z. We claim that Indeed, assume for instance that |E k ∩ SG h ∩ U | > 0 and |(SG h \ E k ) ∩ U | > 0; by the constancy lemma it follows that ∂〚E k 〛 is nonzero in the simply connected open set SG h , contradicting (6.16). As a consequence of the preceding claim, we have that U ∩ ∂ * E k = U ∩ Φ(Σ ann ). Since this argument holds for any choice of P ∈ (Ω × R) ∩ ∂ * E k , we have proved that (Ω × R) ∩ ∂ * E k is relatively open (and relatively closed at the same time) in Φ(Σ ann ), which in turn being a connected open set, implies Φ(Σ ann ) = ∂ * E k ∀k ∈ N. Denote by I ± := {k ∈ N : σ k = ±1}, where σ k appears in (6.14). Going back to the local behaviour around P ∈ Φ(Σ ann ), if U is a neighbourhood as above, we see that for all k ∈ I + either E k ∩ U = SG h or E k = U \ SG h (namely, all the E k 's coincide in U ), since otherwise, there will be cancellations in the series k∈I + ∂〚E k 〛, in contradiction with the second formula in (6.15). Assume without loss of generality that for all k ∈ I + we have E k ∩ U = SG h ; thus, arguing as before, for all k ∈ I − we must have E k ∩ U = U \ SG h . We obtain that E U = m〚SG h 〛 − n〚U \ SG h 〛 for some nonnegative integers n, m. Since (∂E) U = (m + n)〚G h 〛 and also (∂E) U = Φ 〚Σ ann 〛 = 〚G h 〛 in U , we conclude m + n = 1. Hence either m = 1 and n = 0, or m = 0 and n = 1. On the other hand, we know that E U = k∈I + 〚E k ∩ U 〛 − k∈I − 〚E k ∩ U 〛, from which it follows that I + has cardinality m and I − has cardinality n. Namely, one of the sets I ± is empty, and the other contains only one index. We conclude that the sum in (6.14) involves only one index, that is, there is only one compact set E in Ω × R such that (up to switching the orientation) This concludes the proof. Remark 6.12. From the fact that (Ω × R) ∩ ∂E = Φ(Σ ann ) ∪ ∆ 1 ∪ ∆ 2 , we easily see that π(E) = π(Φ(Σ ann )) which, by Lemma 6.9, is simply connected. 
Proof. Since E has finite perimeter, there exists a function ψ ∈ BV (π(E)) such that S ± = G ± ψ [9]. So, we only need to show that ψ is continuous. Take a point P in the interior of π(E); if P = π(Φ(w)) for some w, then w ∈ Σ ann , since π(Φ(C i )) ⊂ ∂Ω for i = 1, 2. If at none of the points of π −1 (P ) ∩ Φ(Σ ann ) the tangent plane to Φ(Σ ann ) is vertical, then ψ is C ∞ in a neighbourhood of P , since it is the linear combination of smooth functions (see the discussion after formula (6.21) below, where details are given). Therefore we only have to check continuity of ψ at those points P for which there is P ∈ π −1 (P ) ∩ Φ(Σ ann ) such that Φ(Σ ann ) has a vertical tangent plane Π at P . Consider a system of Cartesian coordinates centred at P , with the (x, y)-plane coinciding with Π, the x-axis coinciding with the line π −1 (P ), and let z = z(x, y) (defined at least in a neighbourhood of 0) be the analytic function whose graph coincides with Φ(Σ ann ). This map, restricted to the x-axis, is analytic and it vanishes at x = 0; hence it is either constantly zero or it has a discrete set of zeroes (in the neighbourhood where it exists). We now exclude the former case: If z(·, 0) is constantly zero, it means that around P there is a vertical open segment included in π −1 (P ), which is contained in Φ(Σ ann ). Let Q be an extremal point of this segment, and let Π Q be the tangent plane to Φ(Σ ann ) at Q. This plane must contain as tangent vector the above segment, hence Π Q is vertical and contains π −1 (P ). Choosing again a suitable Cartesian coordinate system centred at Q we can express locally the surface Φ(Σ ann ) as the graph of an analytic function defined in a neighbourhood of Q in Π Q , and so the restriction of this map to π −1 (P ) is analytic in a neighbourhood of Q, hence it must be constantly zero since it is zero in a left (or right) neighbourhood of Q. What we found is that we can properly extend the segment P Q on the Q side to a segment P R contained in Φ(Σ ann ). By iterating this argument we conclude that the whole line π −1 (P ) is contained in Φ(Σ ann ), which is impossible since Φ(Σ ann ) is bounded. Hence the zeroes of the function z(·, 0) are isolated, so the next assertion follows: Assertion A: Let P ∈ π −1 (P ) ∩ Φ(Σ ann ). Then in a neighbourhood of P the only intersection between Φ(Σ ann ) and π −1 (P ) is P itself. We can now conclude the proof of the continuity of the function ψ. Let P be in the interior of π(E), and write π −1 (P ) ∩ Φ(Σ ann ) = {Q 1 , Q 2 , . . . , Q m } ⊂ Ω × R. It follows that where (Q j ) 3 is the vertical coordinate of Q j and σ j ∈ {−1, 0, 1} is defined as Let P k ∈ int(π(E)) be such that the sequence (P k ) converges to P , and write π −1 (P k ) ∩ Φ(Σ ann ) = {Q k 1 , Q k 2 , . . . , Q k m k } ⊂ Ω × R. With a similar notation as above, we have 2 ψ(P k ) = H 1 (π −1 (P k ) ∩ E) = m k j=1 σ k j (Q k j ) 3 . (6.20) Now, if P is such that at every point Q j the tangent plane to Φ(Σ ann ) is not vertical, then Φ(Σ ann ) is a smooth Cartesian surface in a neighbourhood of Q j , and so it is clear that, for k large enough, m = m k , Q k j → Q j , σ k j → σ j for all j = 1, . . . , m, (6.21) and the continuity of (6.18) follows. Therefore it remains to check continuity in the case that the tangent plane to some Q j is vertical. Let ‹ Q be one of these points, with associated sign σ. By assertion A there is δ > 0 so that ‹ Q is the unique intersection between π −1 (P ) and Φ(Σ ann ) with vertical coordinate in [ ‹ Q 3 − δ, ‹ Q 3 + δ]. 
This means that the segments π −1 (P ) ∩ { ‹ Q 3 − δ < x 3 < ‹ Q 3 } and π −1 (P ) ∩ { ‹ Q 3 < x 3 < ‹ Q 3 + δ} are either subsets of int(E) or subsets of R 3 \ E. In particular, there is a neighbourhood U ⊂ Ω of P such that the discs U × {x 3 = ‹ Q 3 − δ} and U × {x 3 = ‹ Q 3 + δ} are subsets of int(E) or of R 3 \ E. Suppose without loss of generality that both these discs are inside R 3 \ E (the other cases being similar), so that σ = 0. We infer that, for k large enough so that P k ∈ U , there is a finite subfamily {Q k j : j ∈ J} of {Q k 1 , Q k 2 , . . . , Q k m k } contained in { ‹ Q 3 < x 3 < ‹ Q 3 + δ} and which satisfies the following: The sum in (6.20) restricted to such subfamily reads as: where J = {j 1 , j 2 , . . . , j l } and (Q k j l ) 3 > (Q k j l−1 ) 3 > · · · > (Q k j 2 ) 3 > (Q k j 1 ) 3 (in the case that j l = 1 necessarily σ k j 1 = 0 and the sum is zero). We have to show that this sum tends to σ ‹ Q 3 = 0 as k → +∞, which is true, since each Q k j tends to ‹ Q. Repeating this argument for each point ‹ Q appearing in (6.18) with a vertical tangent plane to Φ(Σ ann ), we conclude the proof of continuity of ψ in the interior of π(E). Let now P ∈ ∂(π(E)). If P ∈ ∂(π(E)) ∩ Ω then every point in π −1 (P ) ∩ Φ(Σ ann ) has vertical tangent plane and we can argue as in the previous case. It remains to show continuity of ψ on ∂π(E) ∩ ∂Ω. In this case we exploit the fact that the interior of Φ(Σ ann ) is contained in Ω × R. We sketch the proof without details since it is very similar to the previous arguments. Let P ∈ ∂ D 1 Ω, thus π −1 (P ) ∩ Γ 1 consists of two points Q 1 and Q 2 . Let (P k ) be a sequence of points in π(E) converging to P . For P k ∈ ∂ D 1 Ω it follows π −1 (P k ) ∩ Γ 1 = {Q k 1 , Q k 2 } and the continuity of ψ follows from the continuity of ϕ on ∂ D 1 Ω, whereas if P k is in the interior of π(E) there holds π −1 (P k ) ∩ Γ 1 = {Q k 1 , Q k 2 , . . . , Q k m k }. Using the continuity of Φ up to C 1 , it is easily seen that all such points must converge, as k → +∞, either to Q 1 or to Q 2 . Hence we can repeat an argument similar to the one used before. Lemma 6.14. Suppose m 2 (Γ) < m 1 (Γ 1 ) + m 1 (Γ 2 ) and let Φ ∈ P 2 (Γ) be a MY solution to (6.4). Let E be the finite perimeter set given in Lemma 6.11 and let S be defined as in (6.17). Then there is an injective map Φ ∈ H 1 (Σ ann ; R 3 ) ∩ C 0 (Σ ann ; R 3 ) which maps ∂Σ ann weakly monotonically to Γ and such that Φ(Σ ann ) = S, and also H 2 (S) = Σann |∂ w 1 Φ ∧ ∂ w 2 Φ|dw = Σann |∂ w 1 Φ ∧ ∂ w 2 Φ|dw = m 2 (Γ). (6.22) In particular, Φ is a solution of (6.4). We now see that the latter case cannot happen. Indeed, first one checks that in this case the intersection cannot be transversal 21 , and that π −1 (p) must be tangent to S at P 1 . Let Π 1 be the vertical tangent plane to S at P 1 . Let Π ⊥ 1 be the vertical plane orthogonal to Π 1 passing through P 1 . In a neighbourhood of P 1 , the unique curve in S ∩ Π ⊥ 1 must be the union of two curves joining at P 1 , and these curves must belong to the same half-plane of Π ⊥ 1 with boundary π −1 (p). As a consequence, if p ∈ Ω ∩ Π ⊥ 1 is in that half-plane, then π −1 (p ) consists of at least two points; if p lies in the opposite half-plane, then π −1 (p ) is empty. This means that necessarily p ∈ ∂π(E). Namely, the previous assertion can be strengthened to: • S + = S ∩ {x 3 ≥ 0} is the graph of ψ ∈ W 1,1 (U ) ∩ C 0 (U ), where U = Ω \ (E 1 ∪ E 2 ) is the open region enclosed between ∂ D Ω and β 1 ∪ β 2 .
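The displays referred to as (6.18) and (6.19) in the continuity argument above did not survive extraction. Purely as a hedged illustration: assuming (6.18) has the same form as the later display (6.20), the alternating sum of vertical coordinates computes the length of the fiber of E over P; the sign convention for σ_j written below is likewise an assumption consistent with that reading.

```latex
% Assumed form of the missing display (6.18), modelled on (6.20):
\[
  2\,\psi(P) \;=\; \mathcal{H}^1\!\bigl(\pi^{-1}(P)\cap E\bigr)
             \;=\; \sum_{j=1}^{m} \sigma_j\,(Q_j)_3 .
\]
% Worked one-segment example: if the fiber over P meets E in a single
% vertical segment \{P\}\times[a,b] with endpoints Q_1=(P,a), Q_2=(P,b)
% lying on \Phi(\Sigma_{\mathrm{ann}}), and (by assumption) \sigma=-1 at
% the lower endpoint, +1 at the upper endpoint, and 0 at a tangential
% touching point that does not bound E, then
\[
  \sum_{j}\sigma_j\,(Q_j)_3 \;=\; b-a \;=\; \mathcal{H}^1\bigl(\pi^{-1}(P)\cap E\bigr) \;=\; 2\,\psi(P).
\]
```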
Different RNA Elements Control Viral Protein Synthesis in Polerovirus Isolates Evolved in Separate Geographical Regions Most plant viruses lack the 5′-cap and 3′-poly(A) structures, which are common in their host mRNAs, and are crucial for translation initiation. Thus, alternative translation initiation mechanisms were identified for viral mRNAs, one of these being controlled by an RNA element in their 3′-ends that is able to enhance mRNA cap-independent translation (3′-CITE). The 3′-CITEs are modular and transferable RNA elements. In the case of poleroviruses, the mechanism of translation initiation of their RNAs in the host cell is still unclear; thus, it was studied for one of its members, cucurbit aphid-borne yellows virus (CABYV). We determined that efficient CABYV RNA translation requires the presence of a 3′-CITE in its 3′-UTR. We showed that this 3′-CITE requires the presence of the 5′-UTR in cis for its eIF4E-independent activity. Efficient virus multiplication depended on 3′-CITE activity. In CABYV isolates belonging to the three phylogenetic groups identified so far, the 3′-CITEs differ, and recombination prediction analyses suggest that these 3′-CITEs have been acquired through recombination with an unknown donor. Since these isolates have evolved in different geographical regions, this may suggest that their respective 3′-CITEs are possibly better adapted to each region. We propose that translation of other polerovirus genomes may also be 3′-CITE-dependent. Introduction Cucurbit aphid-borne yellows virus (CABYV) was first described in the early 1990s in France [1] and later identified as one of the most common viruses found in open field cucurbit crops in many other countries, such as Spain, Iran, Greece, Morocco, Egypt, Tunisia, Taiwan, Korea, and China [1][2][3][4][5][6][7][8]. Recently, it has also been reported in Papua New Guinea [9], Brazil [10], Germany [11], and Indonesia [12]. CABYV is one of the prevalent viruses in cucurbit crops, and is often found in mixed infections [2,13]. Its main host range includes cucurbits, such as melon, cucumber, squash, pumpkin, and watermelon, but it also infects agronomically important non-cucurbit species, such as lettuce (Lactuca sativa) and fodder beet (Beta vulgaris), and several weeds [1]. This virus is phloem-limited, cannot be mechanically inoculated, and is transmitted in nature in a persistent, non-propagative manner by the aphids Aphis gossypii and Myzus persicae [1]. Very recently, a new recombinant isolate of CABYV from Brazil has been shown to be transmitted by whiteflies [14]. Plants infected with CABYV show symptoms, such as yellowing and thickening of basal and older leaves, and often flower abortion. CABYV is a member of the polerovirus genus, which belonged together with the luteovirus and enamovirus genera, to the Luteoviridae family that was recently abolished. While the luteovirus genus was reassigned to the Tombusviridae family, the polero-and enamovirus genera were reassigned to the Solemoviridae family [15]. The CABYV genome is a single-stranded positive-sensed RNA molecule of 5.7 kb comprising six open reading frames (ORF). The 5 -proximal ORFs Conservation of the 3 -UTR Sequences of CABYV Isolates Following phylogenetic analyses of the complete genome of CABYV isolates, these were classified into two groups, Asian and Mediterranean [24]. We performed a phylogenetic analysis of the 3 -UTRs of CABYV sequences available in Genbank ( Figure 1A) and observed that the separation into these two groups was maintained. 
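The grouping above was obtained with a Maximum Likelihood analysis in MEGA X (see the Figure 1 legend below). As a rough illustration only, the sketch below shows how a comparable distance-based tree of the 3′-UTRs could be built programmatically with Biopython; the input file name is a placeholder and this is not the pipeline used in the study.

```python
# Sketch: distance-based tree from an existing 3'-UTR alignment (Biopython).
# Assumes an aligned FASTA file; 'cabyv_3utr_alignment.fasta' is hypothetical.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("cabyv_3utr_alignment.fasta", "fasta")

# Pairwise identity distances between the aligned 3'-UTRs
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

# Neighbor-Joining tree; MEGA X's ML/Tamura-Nei analysis is more involved,
# so NJ on identity distances is only a quick approximation of the grouping.
constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)

Phylo.draw_ascii(tree)  # Asian and Mediterranean isolates should separate into clades
```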
After a more precise analysis of the 3′-UTR sequences following their alignment, it was observed that while, in the first half of the 3′-UTR, the sequences varied considerably (until nucleotide 86 (Mediterranean) and 89 (Asian)), in the second half they were highly similar between both groups (Figure 1B). To simplify this figure, only 8 sequences corresponding to the Asian group, but all of the ones available in Genbank for the Mediterranean group, are shown. For each group separately, the 3′-UTR sequences were highly conserved (Supplementary Figure S1A,B). Interestingly, the sequence inserted into the 3′-UTR of MNSV-N, shown to have 3′-CITE activity, was located in the variable region of Asian CABYV, as it comprised the first 60 nt of its 3′-UTR [33]. These previous results, together with the alignments shown here, suggested that the variable first nucleotides of the CABYV 3′-UTRs (60-90 nts) of Asian and Mediterranean isolates could harbor different 3′-CITEs.

Figure 1. Phylogenetic analysis of the 3′-UTRs of CABYV isolates, inferred using the Maximum Likelihood method and Tamura-Nei model [36,37]. The tree with the highest log likelihood (−1029.01) is shown. The percentage of trees in which the associated taxa clustered together is shown next to the branches. Initial tree(s) for the heuristic search were obtained automatically by applying Neighbor-Join and BioNJ algorithms to a matrix of pairwise distances estimated using the Maximum Composite Likelihood (MCL) approach, and then selecting the topology with the superior log likelihood value. This analysis involved 41 nucleotide sequences. There was a total of 186 positions in the final dataset. Evolutionary analyses were conducted in MEGA X [38].

Identification of 3′-CITEs in CABYV 3′-UTRs

To study the possible role of the CABYV 3′-UTRs in cap-independent translation, the firefly luciferase (luc) gene was flanked by the 5′- and/or 3′-UTRs of CABYV. The 3′-UTRs of the genomes of the Spanish and French CABYV isolates are identical, representing the Mediterranean isolates (CABYV-Sp) here, while the 3′-UTRs of Asian CABYV isolates are highly conserved (Supplementary Figure S1), with CABYV Xinjiang (CABYV-Xin) representing these in our experiments. The 20 nt long 5′-UTR genomic sequence of the Spanish and Xinjiang isolates is identical, and it is nearly invariant in all CABYV isolates (Supplementary Figure S2). The cap-independent translation efficiency of the in vitro transcribed uncapped luc-RNAs was assayed in vivo in melon protoplasts. The luciferase activities measured corresponded to the translation efficiency and are represented as vertical columns in Figure 2 relative to the 5′-luc-3′-UTR construct of CABYV-Spain. Transcripts with only one of the UTRs flanking the luc gene showed low translation efficiency (between 18% and 6% for the 5′- and 3′-UTRs, respectively; second to fourth columns). By contrast, the presence of both CABYV UTRs increased the translation efficiency of these uncapped RNAs, nearly 6-fold for CABYV-Sp (fifth column) and 9-fold for CABYV-Xin (eighth column). This result suggested that both 3′-UTRs contained a 3′-CITE able to control cap-independent translation in vivo. The translation efficiency of these uncapped constructs was 2.5-3.5 times lower than that of a capped construct.

Figure 2. The 3′-UTRs of the CABYV genome contain 3′-CITEs. In vivo cap-independent translation efficiency of different luc-constructs assayed in melon protoplasts. Vertical columns represent measured luciferase activity (corresponding to the translation efficiency) relative to the activity obtained with the construct 5′-luc-3′-UTR of CABYV-Sp (100%). The first column shows the translation efficiency of a capped construct (with plasmid sequence flanking the 5′- and 3′-UTRs). Above the columns, a schematic drawing of the respective constructs assayed is shown (solid grey for CABYV-Sp/Xin sequences flanking luc, lined for chimeric 3′-UTRs). In the penultimate column, the 3′-CITE of CABYV-Sp in its 3′-UTR was interchanged for the 3′-CITE of CABYV-Xin, while in the last column, the 3′-CITE of CABYV-Xin in its 3′-UTR was interchanged for the 3′-CITE of CABYV-Sp. Error bars are +/−SD.
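The percentages in Figure 2 are firefly luciferase activities first normalized to the co-electroporated Renilla control and then expressed relative to the 5′-luc-3′-UTR CABYV-Sp construct (see the Methods). The sketch below spells out that two-step normalization; all readings are invented placeholders, not the measured data.

```python
# Sketch: Firefly/Renilla normalization and expression relative to a reference
# construct. Raw readings below are hypothetical placeholders.
raw = {
    # construct: (firefly_RLU, renilla_RLU), replicate-averaged
    "5'-luc-3'UTR CABYV-Sp":  (5.2e5, 1.0e6),
    "5'-luc-3'UTR CABYV-Xin": (8.1e5, 1.0e6),
    "luc only":               (3.0e4, 9.5e5),
}

def relative_efficiency(data, reference="5'-luc-3'UTR CABYV-Sp"):
    # Step 1: normalize firefly signal to the Renilla co-electroporation control.
    normalized = {k: fly / ren for k, (fly, ren) in data.items()}
    # Step 2: express each construct relative to the reference (set to 100%).
    ref = normalized[reference]
    return {k: 100.0 * v / ref for k, v in normalized.items()}

for construct, pct in relative_efficiency(raw).items():
    print(f"{construct}: {pct:.0f}%")
```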
Deletion of the first 60 nt of the 3′-UTRs in these 5′-luc-3′-UTR constructs reduced the translation efficiency to a level similar to that obtained in the absence of the complete 3′-UTR (sixth and ninth columns). On the other hand, RNAs with only the first 60 nt of the 3′-UTRs added to the 3′-end of the 5′-UTR-luc constructs showed translation efficiencies similar to the RNAs with both complete 3′-UTRs (seventh and tenth columns). These results allowed us to conclude that these 60 nt were sufficient for cap-independent translation control. Thus, these results confirm that the 3′-CITE identified as an active RNA element in MNSV-N (named CXTE, X for Xinjiang) also works as a 3′-CITE in Asian CABYV isolates, while Mediterranean isolates have a 3′-CITE that differs in sequence, now named CMTE (M for Mediterranean). Additionally, Figure 2 shows that the CXTE is more efficient in enhancing translation than the CMTE. The translation efficiencies determined for constructs with chimeric 3′-UTRs (where the 3′-CITEs were interchanged, CXTE for CMTE and vice versa; last two columns in Figure 2) allow us to conclude, first, that these 3′-CITEs can be interchanged, and second, that the efficiency of translation enhancement depends on the 3′-CITE itself. Similar to melon protoplasts, in cucumber and Nicotiana benthamiana protoplasts the translation efficiency of the 5′-luc-3′-UTR construct including the CXTE was higher than that including the CMTE (Supplementary Figure S3; 2.2- and 1.8-fold, respectively). On the contrary, in Arabidopsis thaliana Col1 protoplasts, the translation efficiency of the construct including the CXTE was lower than that including the CMTE (Supplementary Figure S3; 0.5-fold), suggesting that 3′-CITE activities vary in cells of different hosts. Several authors have hypothesized that 3′-CITEs are modular, transferrable RNA elements in nature, and our studies on MNSV-N provided the first direct proof of this [33,39]. Thus, we analyzed the possibility that the different CABYV 3′-CITEs were interchanged through recombination events, using a recombination occurrence analysis with the RDP4 software, which implements several recombination-detecting algorithms [40]. As shown in Figure 3, a recombination event with an unknown donor, involving the first part of the 3′-UTR, was detected by RDP4 with very high statistical significance (p < 0.00001), strongly suggesting that this first part of the 3′-UTR of Mediterranean CABYV, including the 3′-CITE, was acquired through recombination. A nucleotide BLAST search with the recombined new sequence did not result in any matches apart from Mediterranean CABYV.
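RDP4 combines several breakpoint-detection algorithms; a much cruder way to visualize where a putative recombinant stops resembling a given parent is a sliding-window identity scan along the aligned 3′-ends. The sketch below only illustrates that idea and is not a substitute for RDP4; the two aligned sequences are placeholders.

```python
# Sketch: sliding-window percent identity between a query and a candidate
# parent along an alignment (gaps kept as '-'). Sequences are hypothetical.
def window_identity(query: str, parent: str, window: int = 30, step: int = 5):
    assert len(query) == len(parent), "sequences must come from the same alignment"
    points = []
    for start in range(0, len(query) - window + 1, step):
        q, p = query[start:start + window], parent[start:start + window]
        matches = sum(1 for a, b in zip(q, p) if a == b and a != "-")
        points.append((start, 100.0 * matches / window))
    return points

# A drop in identity restricted to the first part of the 3'-UTR would be
# consistent with the recombination event predicted by RDP4.
med_3end  = "ACGU" * 40   # placeholder aligned Mediterranean 3'-end
asia_3end = "ACGA" * 40   # placeholder aligned Asian 3'-end
for pos, ident in window_identity(med_3end, asia_3end):
    print(f"window at {pos:3d}: {ident:5.1f}% identity")
```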
Figure 3. Recombination prediction for the 3′-ends of the CABYV genome. Recombination hypothesis generated by the RDP4 software package (http://web.cbio.uct.ac.za/~darren/rdp.html). This computer program characterizes recombination events in sequence alignments using several different recombination analysis methods and tests for recombination hot-spots. The sequences included in this analysis, performed using RDP, GENECONV, Maximum Chi-square (MaxChi), BootScan, SisterScan (SiScan), Chimaera, and 3Seq, include the last 50 nt of ORF5 plus the following 3′-UTRs of different CABYV isolates belonging to the Mediterranean and Asian groups (Genbank accession number indicated for each isolate). Default settings with a Bonferroni-corrected p-value cut-off of 0.01 were applied. To reduce the possibility of false detection of recombination, only recombination events supported by at least two methods were selected. RDP4 colors similar sequences with similar colors. The arrow on the left marks the position of the start of the 3′-UTR sequence, and the one on the right, the approximate end of the variable region, from which the UTR sequences of Asian and Mediterranean isolates start to be conserved. The statistical significance was very high, with a p-value < 0.00001.

Translation Mediated by the CABYV 3′-CITEs Is eIF4E-Independent

The CXTE in MNSV-N was shown to be active in the absence of eIF4E [33], giving this isolate the capacity to infect resistant melon, a resistance that is mediated by a single amino acid change in eIF4E [41]. Here, we studied whether the CABYV 3′-CITEs were dependent on eIF4E for their translation enhancement activities, by analyzing the translation efficiencies of the different luc-constructs, first in protoplasts of resistant melon. The translation efficiencies of the luc constructs harboring a CABYV 3′-CITE were similar in protoplasts from resistant melon (dark grey) and from susceptible melon (light grey columns) (Figure 4A), suggesting that these 3′-CITEs could function either with the two different eIF4E variants expressed by susceptible or resistant melon, or independently of eIF4E. This was further studied by analyzing their activities in melon protoplasts of an eIF4E knock-down line expressing a hairpin construct that targeted, and thus silenced, melon eIF4E; eIF4E expression had previously been shown to be reduced more than six-fold in these plants [42]. Previously, we had shown that translation of the MNSV genome was controlled by 3′-CITEs, being eIF4E-dependent for isolate MNSV-Mα5 [34], and independent of eIF4E for the resistance-breaking isolates MNSV-N and MNSV-264 [33,42,43]. Thus, the construct of the luc gene flanked by the 5′- and 3′-UTRs of MNSV-Mα5 was used as the eIF4E-dependent translation control, and the constructs with the UTRs of MNSV-N and MNSV-264 as eIF4E-independent translation controls. As observed in Figure 4B, low eIF4E expression (dark grey columns) was associated with reduced translation efficiency of only the MNSV-Mα5 construct when compared to translation under wild-type eIF4E expression (light grey columns). Thus, cap-independent translation controlled by either CABYV 3′-CITE was not affected by the reduced eIF4E expression, suggesting that their activities are eIF4E-independent.

Figure 4. In vivo cap-independent translation efficiencies of different luc-constructs assayed in melon protoplasts. Below the columns, the respective constructs assayed are explained. (A) Protoplasts were prepared from wild-type susceptible and resistant melon (containing a single amino acid change in eIF4E). Vertical columns represent measured luciferase activity relative to the activity obtained with the construct 5′-luc-3′-UTR of CABYV-Sp (100%) (in light grey for susceptible and dark grey for resistant melon). The columns showing the efficiencies obtained with the constructs 5′-UTR-luc-3′-CITE are lined. (B) Protoplasts were prepared from wild-type melon and a transgenic melon line silenced for eIF4E (light grey and dark grey columns, respectively). Vertical columns represent measured luciferase activity relative to the activity obtained with the construct 5′-luc-3′-UTR of MNSV-N (100%), which, just as the one of MNSV-264, serves as the eIF4E-independent control, while that of MNSV-Mα5 is the eIF4E-dependent control. Error bars are +/−SD.
CABYV 3′-CITEs Vary in Sequence and Structure

The alignment in Figure 1B shows that the sequences of the two CABYV 3′-CITEs are different. Here, we analyzed their secondary structures in solution by Selective 2′-Hydroxyl Acylation analyzed by Primer Extension (SHAPE; [44]) using benzoyl cyanide (BzCN; [45]). This chemical quickly modifies flexible, possibly single-stranded nucleotides in a sequence-independent manner, forming 2′-O adducts that block reverse transcriptase. By incorporating the SHAPE reactivity data in the MC-Fold server (http://www.major.iric.ca/MC-Fold), we obtained the mFold secondary structure predictions shown in Figure 5 (http://www.unafold.org/mfold/applications/rna-folding-form.php). The nucleotide variations found in the CXTEs of the other Asian isolates support the structure, as they are either in unpaired regions or, if they involve nucleotides proposed to be paired, they do not affect complementarity. The structure obtained for the CXTE of CABYV-Xin was similar to that of the CXTE identified in MNSV-N [33], folding into two helices protruding from a central hub. Figure 5B shows the structure prediction obtained for the CMTE after analyzing the SHAPE experiments. The predicted secondary structure based on the SHAPE analysis showed that the CMTE folded into two oppositely protruding helices with an additional short helix in the central part. The four nucleotides that are different in the CMTE of the recently sequenced isolate from Australia (Papua New Guinea) support the structure, since they only affect unpaired nucleotides.

Role of 3′-CITEs on Virus Multiplication Activity of CABYV

To study the role in virus multiplication of the 3′-CITEs identified above, we modified two agroinfectious CABYV clones obtained from Spanish isolates, one unpublished cloned in pBIN61 and the other in pJL89 [46], on the one hand by deleting the first 60 nt of the 3′-UTR (∆-CITE), and, on the other hand, by exchanging its CMTE with the CXTE from CABYV-Xinjiang. The multiplication efficiencies of these clones were tested in melon cotyledons and Nicotiana benthamiana leaves. We analyzed the infiltrated leaves by Northern blot to be able to study virus multiplication through the detection of the subgenomic RNA (sgRNA). Figure 6A shows the viral RNAs (g/sgRNA) generated 3 days after agroinfiltration: in N. benthamiana, as well as in melon, the mutant virus without a 3′-CITE multiplied inefficiently, while the chimeric one containing the CXTE replicated with an efficiency higher than the wild-type virus (with its own CMTE). Similar results were obtained with both CABYV clones expressed from the two different binary plasmids.
These results were in line with the ones from the in vivo translation experiments obtained with the luciferase constructs shown above (Figure 2). The size difference of 60 nt in the ∆-CITE clone can be observed in the sgRNA band, which migrates slightly faster. The transcript that will start viral multiplication, synthesized after Agrobacterium introduces its T-DNA into the cell, is capped [47]. Apart from this, poleroviruses have been described to have a viral protein linked to the 5′-end of their genome (VPg) that could be involved in virus protein synthesis [21,48,49]. Since 3′-CITEs control cap-independent translation, we wanted to study its importance in CABYV multiplication in the absence of a 5′-cap or VPg. Thus, we analyzed the capacity of in vitro-transcribed capped and uncapped genomic RNA to multiply in melon protoplasts by quantifying the viral RNA by RT-qPCR. As can be observed in Figure 6B, the results obtained for virus multiplication when infecting protoplasts with capped or uncapped CABYV RNA were similar. Again, less CABYV RNA was detected in protoplasts electroporated with the mutant CABYV lacking the 3′-CITE. On the other hand, the chimeric virus with the CXTE showed a higher multiplication efficiency (1.6×) than the wild-type virus, congruent with the higher translation enhancement capacity observed for the CXTE. We can conclude that CABYV multiplication (including translation) can occur in the absence of a 5′-cap or a VPg linked at the 5′-end of its genome, and additionally, that the 3′-CITEs identified in the CABYV 3′-UTR are required for efficient synthesis of the viral proteins and its multiplication.

Possible 3′-CITE of New Brazilian CABYV Isolates

Recently, the complete sequences of three Brazilian isolates were published [10] and added to Genbank. These CABYV isolates were suggested to be recombinants, with the CABYV-N isolate from France as the major parent (more than half of the 5′-genomic region), plus about 40% of the genome coming from a non-identified minor polerovirus parent (the 3′-genomic region including the genes coding for CP and MP and 2/3 of the 3′-UTR). The end of the recombined region was mapped to the 3′-UTR, such that its last 59 nt again shared a high sequence identity (nearly 90%) with the French isolate [10]. On the other hand, the first 121 nt of the 3′-UTR had less than 50% identity with the French isolate. A phylogenetic analysis of all CABYV 3′-UTRs showed that the 3′-UTRs of these recombinant isolates form a separate group, independent of the Mediterranean or Asian isolates (Figure 7A). Separate alignments of the 3′-UTRs of Brazilian CABYVs with either Mediterranean or Asian isolates revealed that the first part of the Brazilian 3′-UTRs has no similarity with any of them (Supplementary Figure S5). Thus, we wondered if Brazilian CABYVs had another type of 3′-CITE.
To learn if the 5′-end of the 3′-UTR of the Brazilian CABYV genome is indeed able to function as a 3′-CITE, we decided, based on 3′-UTR sequence alignments and structure predictions, to add the first 83 and 97 nucleotides of the 3′-UTR of CABYV-Brazil to the 5′-UTR-luc construct at its 3′-end. The cap-independent translation efficiencies of these constructs in melon protoplasts were measured. As observed in Figure 7B, the presence of the shorter 83 nt fragment at the 3′-end of the luc gene did not result in an increase in translation as compared to the 5′-UTR-luc construct (first and last columns). On the other hand, the translation efficiency increased after the addition of the longer 97 nt fragment, although less efficiently (60%) than with the wild-type construct (CMTE, 100%). Thus, we can conclude that the CABYV isolates newly identified in Brazil seem to have a 3′-CITE that is bigger and varies in sequence and structure (Figure 7C) from the ones of Mediterranean or Asian CABYV isolates. Again, the recombination hypothesis generated by the RDP4 software package predicted a recombination event with an unknown donor, involving the first part of the 3′-UTR, with very high statistical significance, strongly suggesting that the first part of the 3′-UTR of Brazilian CABYV, including the 3′-CITE, was acquired through recombination (Supplementary Figure S4). A nucleotide BLAST search with this acquired sequence did not result in any matches apart from Brazilian CABYV.

Figure 7. (A) Phylogenetic analysis of the 3′-UTRs of CABYV isolates and of the polerovirus Chickpea chlorotic stunt virus (CCSV) as the outgroup, inferred using the Maximum Likelihood method and Tamura-Nei model [36,37]. The tree with the highest log likelihood (−1271.12) is shown. The percentage of trees in which the associated taxa clustered together is shown next to the branches. Initial tree(s) for the heuristic search were obtained automatically by applying Neighbor-Join and BioNJ algorithms to a matrix of pairwise distances estimated using the Maximum Composite Likelihood (MCL) approach, and then selecting the topology with the superior log likelihood value. This analysis involved 44 nucleotide sequences. There were a total of 188 positions in the final dataset. Evolutionary analyses were conducted in MEGA X [38]. (B) In vivo cap-independent translation efficiency of different luc-constructs assayed in melon protoplasts. Vertical columns represent measured luciferase activity (corresponding to the translation efficiency) relative to the activity obtained with the construct 5′-luc-3′-CITE of CABYV-Sp (100%). Below the columns, the respective constructs assayed are explained. Error bars are +/−SD. (C) Secondary structure predictions for the first 83 and 97 nt of the 3′-UTR, invariant in the three Brazilian isolates, using mFold.

Searching for Possible 3′-CITEs in Polerovirus 3′-Ends

Since the polerovirus protein translation mechanisms are still unknown, we tried to identify possible 3′-CITEs in their genomic 3′-ends.
Based on comparisons of sequences and predicted structures, we identified CXTE-like structures in seven of them: melon aphid-borne yellows virus (MABYV), turnip yellows virus (TuYV), beet western yellows virus (BWYV), pepper vein yellows virus (PVYV), faba bean polerovirus (FBPV), beet chlorosis virus (BChV), and beet mild yellowing virus (BMYV) (Figure 8). Apart from the structural similarity, the conservation of three pyrimidines (C/U) followed by three adenosines (A; only two for BMYV) in the loop of the second stem-loop structure was also observed. Thus, CXTE-like structures could be identified in the 3′-ends of 8 out of 20 polerovirus genomes, including CABYV, suggesting that polerovirus genome translation could generally depend on 3′-CITEs.

Discussion

In this report, we study CABYV protein synthesis in host cells. We started this study after the identification of a sequence inserted in the 3′-UTR of a new MNSV isolate (MNSV-N), able to break eIF4E-mediated resistance in melon, that was highly similar to the first 60 nucleotides of the 3′-UTR of CABYV [33]. This sequence was shown to have the capacity to control cap-independent translation of MNSV in the absence of eIF4E. Here, we show that the first 60 nucleotides of the 3′-UTR of CABYV also function as an RNA element able to enhance cap-independent translation of its own genome. Surprisingly, the sequence of the first half of the 3′-UTR varies between CABYV isolates belonging to the different phylogenetic groups described. However, in spite of this, these sequences are functionally similar, having 3′-CITE activity. We show that these 3′-CITEs need the presence of the 5′-UTR in cis but do not need eIF4E for activity. Congruently, as these RNA elements enhance virus protein synthesis, they are required for efficient virus multiplication. We have not yet been able to identify a 5′-3′ interaction that is functionally important for 3′-CITE activity. How polerovirus proteins are translated in the host cell is still unclear [50]. We propose that 3′-CITEs may be involved, as shown here for CABYV, since we identified, based on sequence/structure predictions, CXTE-like 3′-CITEs in 8 out of 20 polerovirus genomic 3′-ends (TuYV, MABYV, BChV, BWYV, PVYV, FBPV, and BMYV).
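Besides the predicted two-helix fold, the CXTE-like candidates share a loop motif of three pyrimidines followed by three (two for BMYV) adenosines. A simple motif scan, as sketched below, can flag such candidates in other polerovirus 3′-ends; the sequences shown are placeholders, and a sequence match alone is only a hint, since the surrounding structure must also fit.

```python
import re

# Sketch: flag the conserved loop motif (three pyrimidines followed by
# two or three adenosines) in polerovirus 3'-end sequences.
MOTIF = re.compile(r"[CU]{3}A{2,3}")

# Hypothetical 3'-end fragments; a real analysis would use GenBank sequences.
candidates = {
    "virus_A_3end": "GGCAUUCUUAAAGGCAUGC",
    "virus_B_3end": "GGGCGCGCGGAUAUAUCCG",
}

for name, seq in candidates.items():
    hits = [(m.start(), m.group()) for m in MOTIF.finditer(seq)]
    status = ", ".join(f"{motif}@{pos}" for pos, motif in hits) if hits else "no motif"
    print(f"{name}: {status}")
```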
For viruses of the genus Luteovirus, until recently belonging to the same family, 3 -CITEs were also identified, all similar to that of barley yellow dwarf virus (BYDV) translational enhancer (BTE) [28,51,52]. Although the BTE lies in an intergenic 3 -end region, not directly in the 3 -UTR, the sequences of several other 3 -CITEs have been shown to overlap with the coding region of the adjacent gene upstream of the 3 -UTR [32,53]. Here, we showed that uncapped CABYV-RNA is able to translate and multiply in melon protoplasts with a similar efficiency as capped CABYV-RNA, suggesting that the 5 -cap is not needed for initiation of infection from this input RNA. Polerovirus genomes are supposed to be uncapped [20]. Otherwise, its cap-independent, efficient translation and multiplication requires the 3 -CITEs identified in its 3 -UTRs. These results agree with the ones obtained with reporter constructs that showed that these sequences are able to enhance the translation efficiency. Since the CABYV mutant without a 3 -CITE is still able to multiply at low levels in melon protoplasts and N. benthamiana leaves, an additional mechanism for cap-independent translation of its proteins must exist. VPgs from different viruses have been shown to interact with several eukaryotic translation initiation factors, suggesting a possible role in translation [49]. On the other hand, a role for potyviral VPgs in viral RNA stability and protection against being sent to the RNA silencing pathway has been proposed [54]. For some members of the polerovirus genus, a VPg has been proposed to be covalently linked to the genome at its viral 5 -end and to interact with different eIFs [21,49]. Early studies on potato leafroll virus (PLRV) showed evidence of the presence of a VPg linked to its genome [55]. Later, some amino acid sequence similarity between this VPg and the possible VPgs of another three poleroviruses, BMYV, BWYV, and CABYV, were described [48]. In spite of these reports, no direct studies on the existence of a VPg linked to the CABYV genome exists to date. Our finding that in vitro transcribed uncapped CABYV-RNA is able to multiply in protoplasts suggests that a VPg linked to the 5 -end is not essential for initiation of infection from this input RNA. However, the VPg may be required in the following steps of CABYV multiplication. Here, we identified different 3 -CITEs in each of the phylogenetically determined CABYV groups, Asian, Mediterranean, and Brazilian. Our recombination prediction analyses suggest that these 3 -CITEs have been acquired through recombination. The modularity and transferability of 3 -CITEs have been proposed before, as different types of 3 -CITEs have been found in a single genus, and the same type of 3 -CITE appeared in different virus genera [56]. Additionally, with our previous analysis on MNSV-N, we provided the first direct evidence that 3 -CITEs can be transferred between viral species through RNA recombination occurring in nature [33]. The recombination prediction analyses presented here suggest that the CABYV 3 -CITEs were also acquired through recombination, in this case with an unknown donor. Members of the polerovirus, luteovirus, and enamovirus genera, the former Luteoviridae family, are recombination prone, and they have been proposed as having emerged from intergeneric recombination events. 
Thus, several recombinants between poleroviruses and luteoviruses have been described, and CABYV itself has been proposed to derive from recombinations between polero-and enamo-like viruses [57][58][59]. Many CABYV recombinants have been identified, with the Brazilian isolates being the most recent ones [6,10,24,46]. 3 -CITEs that differ in sequence and structure coincide in the general mechanistic steps involving recruitment of translation initiation factors at the 3 -CITE, and delivery of these near the translation start site through communication with the 5 -UTR [32]. Thus, when interchanged, 3 -CITEs can be active in very heterologous, but also in nearly identical viral genomes. To maintain an interchanged or newly acquired 3 -CITE in the viral genome, it must provide a selective advantage. Thus, with the acquisition of the CXTE from CABYV, the new MNSV isolate gained the ability to infect otherwise resistant melon [33]. Perhaps each of the three CABYV 3 -CITEs confers the virus a selective advantage in the geographical region where it evolved. For example, while comparing CXTE and CMTE translation enhancement activities, we found that CXTE works better in cucurbit cells, while CMTE does so in Arabidopsis cells. Thus, better multiplication in weeds may be the advantage for Mediterranean CABYV for maintaining CMTE, although this 3 -CITE is less efficient in cucurbits. Indeed, a recent study from our group indicated that weeds play an important role in CABYV diversification [23]. We can conclude that an RNA element located in the first set of nucleotides in the 3 -UTR of the CABYV polerovirus genome enhances its translation in host cells in the absence of a 5 -cap or VPg. Isolates from the three different phylogenetic CABYV groups, which evolved in different geographical regions, have different 3 -CITEs. Translation of other polerovirus genomes may also be 3 -CITE-dependent. Deletion of the first 60 nt of both 3 -UTRs was achieved by amplification of the whole plasmid with complementary primers flanking this sequence on both sides but lacking this 60 nt. Subsequently, DpnI digestion was used to digest the input plasmid and select for the mutant plasmids (in vitro mutagenesis; [60]). The interchange of the first 60 nt of the 3 -UTR from CABYV-Sp for the ones of CABYV-Xin and vice versa, was also achieved by in vitro mutagenesis, using complementary primers including the sequence flanking this sequence on both sides and the 60 nt to be interchanged in between. Analysis of shorter and longer 3 -CITE fragments in in vivo cap-independent translation assays delimited the optimal CABYV 3 -CITE lengths to 60 nt. All new plasmid constructs were sequenced. These luc-inserts were amplified by PCR with the high-fidelity Prime Star HS DNA polymerase (Takara, Kusatsu, Japan) and transcribed in vitro (MEGAscript; Thermo Fisher Scientific, Waltham, MA, USA). Sequences of transcripts shown in Supplementary Table S1. In Vivo Translation in Melon Protoplasts Melon protoplasts were isolated from cotyledons as described in [34]. Brief description of in vivo translation: 5 µg of in vitro transcribed RNA (uncapped) was electroporated into 1 million protoplasts [43]. To minimize variations between samples, 2 µg of capped Renilla luciferase reporter RNA (pRL-null vector; Promega, Madison, WI, USA) were introduced along with the virus RNA. After 4-5 h incubation in the dark at 25 • C, protoplasts were precipitated and lysed with a final concentration of 0.5× PLB (Promega). 
Luciferase activities were measured with the Dual-GloTM Luciferase assay system (Promega) and Firefly activities normalized with respect to Renilla activities. These experiments were carried out at least five times for each construct. RNA Structure Analysis The protocol for the determination of the secondary structure of RNA fragments in solution by selective 2 -Hydroxyl acylation, analyzed by primer extension (SHAPE) was described in detail [61]. Briefly, the 3 -UTRs and 3 -CITEs (first 60 nt of 3 -UTRs) of CABYV-Sp and CABYV-Xin analyzed here, were inserted into the SHAPE cassette [62,63]). RNA, synthesized in vitro using MEGAshortscript TM Kit (Ambion, Austin, TX, USA), after treatment with 60mM of benzoyl cyanide (BzCN; Sigma-Aldrich, St. Louis, MI, USA) was reverse transcribed by primer extension of a radiolabeled primer. The cDNA fragments generated were resolved in an 8% denaturing polyacrylamide gel. Normalized BzCN reactivity values for each nucleotide position were calculated by SAFA Footprinting Software [64]. The RNA secondary structure was obtained using the MC-Fold computer program [65] after adding the SHAPE reactivity data. Construction and Multiplication Analysis of Mutant Viruses The mutant viruses were constructed based on a full-length clone from an unpublished Spanish CABYV isolate infecting melon, cloned in pTOPO. The deletion of the first 60 nt of the 3 -UTR and the interchange of the CMTE in the 3 -UTR of CABYV-Sp for the CXTE from CABYV-Xin was performed by in vitro mutagenesis, using complementary primers including the sequence flanking both sides of the first 60 nt of the 3 -UTR, without this 60 nt, or with the CXTE 60 nt in between. Subsequently, DpnI digestion was used to digest the input plasmid and select for the mutant plasmids [61]. The obtained plasmids were sequenced. The agro-infectious mutant clones based on the binary vector pBIN61, were obtained by digestion of these CABYV-pTOPO mutants and CABYV-pBIN61 with HpaI followed by the exchange of the excised 2460 nt fragments of the 3 -end of the viral genomes (digestion at positions 3125 and 5585), resulting in CABYV-∆CITE-pBIN61 and CABYV-CXTE-pBIN61. The agro-infectious mutant clones based on the binary vector pJL89 were constructed directly by in vitro mutagenesis performed on the agro-infectious CABYV-pJL89 plasmid CABYV-MEC12.1 [46]. All plasmids were sequenced (Supplementary Table S1). CABYV wild-type and mutants were infiltrated into the abaxial side of expanded melon cotyledons or N. benthamiana leaves using the agro-infectious clones. The agroinfiltration and Northern blot protocols described in [66] were followed. TRI Reagent (Sigma-Aldrich) total RNA extractions of infiltrated leaves were performed after 3 days. For Northern blots, 2 µg (N. benthamiana) or 10 µg (melon) of total RNA were loaded on a denaturing gel. Viral genomic and subgenomic RNAs were detected by digoxigeninlabelled RNA probe complementary to a highly conserved CP-gene fragment [2]. For CABYV multiplication detection in melon protoplasts (see above), 5 µg of capped or uncapped RNA (transcribed in vitro, Supplementary Table S1) were electroporated into 1 million protoplasts, followed by 5 washes with 0.5 M Mannitol (pH 5.7) to get rid of RNA remaining outside the protoplasts. Negative controls were protoplasts incubated with RNA but without electroporation. Protoplasts were harvested after 24 h incubation at 25 • C with light. 
Total RNA was extracted with TRI reagent, purified by phenol-chloroform extraction, and followed by DNAseI treatment (Sigma-Aldrich). Quantification of viral RNA was performed as described by Rabadán et al. by one step RT-qPCR (NZYtech, Lisboa, Portugal) using 100 ng total RNA [46]. The results presented are the average of five independent experiments.
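The RT-qPCR quantification follows Rabadán et al. [46], whose exact calculation is not reproduced in this excerpt. As a generic, hedged illustration, the sketch below converts Cq values into viral RNA levels relative to the wild-type sample with the common ΔΔCq approach; the Cq numbers and the use of a host reference gene are assumptions made only for the example.

```python
# Sketch: relative quantification of viral RNA from RT-qPCR Cq values using
# the delta-delta-Cq method (assumes ~100% primer efficiency). All Cq values
# and the host reference gene are hypothetical placeholders.
samples = {
    # sample: (Cq_virus, Cq_reference_gene)
    "wild-type":    (21.0, 18.5),
    "delta-CITE":   (24.3, 18.6),
    "CXTE chimera": (20.2, 18.4),
}

CALIBRATOR = "wild-type"

def delta_cq(sample):
    cq_virus, cq_ref = samples[sample]
    return cq_virus - cq_ref

for name in samples:
    ddcq = delta_cq(name) - delta_cq(CALIBRATOR)
    fold = 2.0 ** (-ddcq)   # viral RNA level relative to the wild-type clone
    print(f"{name:12s}: {fold:5.2f}x wild-type")
```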
T2* magnetic resonance imaging of the liver in thalassemic patients in Iran

AIM: To investigate the accuracy of T2*-weighted magnetic resonance imaging (MRI T2*) in the evaluation of iron overload in beta-thalassemia major patients.

METHODS: In this cross-sectional study, 210 patients with beta-thalassemia major having regular blood transfusions were consecutively enrolled. Serum ferritin levels were measured, and all patients underwent MRI T2* of the liver. Liver biopsy was performed in 53 patients at an interval of no longer than 3 mo after the MRI T2* in each patient. The amount of iron was assessed in both MRI T2* and liver biopsy specimens of each patient.

RESULTS: Patients' ages ranged from 8 to 54 years with a mean of 24.59 ± 8.5 years. Mean serum ferritin level was 1906 ± 1644 ng/mL. Liver biopsy showed a moderate negative correlation with liver MRI T2* (r = -0.573, P = 0.000) and a low positive correlation with ferritin level (r = 0.350, P = 0.001). Serum ferritin levels showed a moderate negative correlation with liver MRI T2* values (r = -0.586, P = 0.000).

CONCLUSION: Our study suggests that MRI T2* is a non-invasive, safe and reliable method for detecting iron load in patients with iron overload.

INTRODUCTION

Conventional treatment of beta-thalassemia major requires regular blood transfusions to maintain the pre-transfusion hemoglobin level above 90 g/L [1]. A major drawback of this treatment is transfusion siderosis, which, in association with the increased intestinal iron absorption, apoptosis of the erythroid precursors and peripheral hemolysis, leads to inexorable iron accumulation in various organs such as the heart, liver and endocrine organs [2]. The assessment of body iron is still dependent upon indirect measurements, such as levels of serum ferritin, as well as direct measurements of the liver iron content [3]. Serum ferritin has been widely used as a surrogate marker but it represents only 1% of the total iron pool, and as an acute phase protein, it is not specific because the levels can be raised in inflammation (e.g. hepatitis) and liver damage [4]. Liver iron concentration measured by needle biopsy is the gold standard for evaluation of siderosis. However, it is an invasive technique which is not easily repeated and its accuracy is greatly affected by hepatic inflammation-fibrosis and uneven iron distribution [4]. More recently, biomagnetic susceptometry and magnetic resonance imaging (MRI) have been validated for measuring iron overload, and these techniques have great merit in being noninvasive [5]. Biomagnetic susceptometry is a non-invasive, well calibrated and validated method as a quantitative measurement technique, but it has limited clinical value because of its high cost and technical demands [6].
MRI has been considered a potential method for assessing tissue iron overload, as iron accumulation in various organs causes a significant reduction in signal intensity stemming from a decrease in the T2 relaxation time [7,8] . The objective of the present study was to report our experience of the MRI technique in assessing hepatic iron overload in thalassemic patients. MATERIALS AND METHODS Between January 2008 and April 2009, 210 patients with beta-thalassemia major (114 females, 96 males) referred to the thalassemia clinic of Firuzgar Hospital were consecutively enrolled in this cross-sectional study. Ages ranged from 8 to 54 years with a mean of 24.59 ± 8.5 years. Patients were treated conventionally with regular blood transfusion, in order to maintain the pre-transfusion hemoglobin concentration above 90 g/L. Regarding chelation therapy, all patients were receiving deferoxamine at a dose of 40 mg/kg, 5-7 times per week, by 8-hourly subcutaneous infusion. The study was approved by the Institutional Review Board Ethics Committee of Iran University of Medical Sciences and written informed consent was obtained from all patients for the procedures studied. Serum ferritin A 5 mL blood sample was obtained from each patient for routine laboratory tests and measurement of ferritin level. Serum ferritin concentrations were assayed in all patients before the MRI scan, using an enzyme-linked radioimmunoassay method (Monobind Kit, USA). Liver biopsy Liver biopsy was performed with a 16-gauge Tru-Cut needle (TSK Laboratory, Japan) in 53 patients who gave written informed consent to undergo the biopsy for this study. Each specimen was at least 2 cm in length. The specimens were kept in 10% formaldehyde solution, and were sent to the Department of Pathology of Firuzgar Hospital. The specimens were stained with hematoxylin & eosin and viewed by an expert pathologist. The amount of stainable iron was graded 0-4 according to the Scheuer et al [9] method. It is notable that the interval between liver biopsy and MRI of the liver and heart was less than 3 mo in all patients. MRI technique MRI scans were performed using a 1.5 Tesla Magnetom Siemens Symphony scanner (Siemens Medical Solution, Erlangen, Germany). Each scan lasted about 10-15 min and included the measurement of hepatic and myocardial T2* quantities. A standard quadrature radiofrequency body coil was used in all measurements for both excitation and signal detection. Respiratory triggering was used to monitor the patients' breathing. Cardiac electrocardiographic gating was used. Spatial presaturation slabs were used to suppress motion-related artifacts. The MRI T2* of the liver was determined using a single 10 mm slice through the center of the liver scanned at 12 different echo times (TE 1.3-23 ms). Each image was acquired during an 11-13 s breathhold using a gradient-echo sequence (repetition time 200 ms, flip angle 20°, base resolution matrix 128 pixels, field of view 39.7 cm × 19.7 cm, sampling bandwidth 125 kHz). Statistical analysis All statistical analyses were performed using the Statistical Package for Social Sciences, version 15 for Windows™ (SPSS ® Inc., Chicago, IL). Continuous variables are presented as mean ± SD and count (percent) for categorical variables. The relationship between continuous variables was evaluated by the Pearson correlation coefficient for normally distributed data and Spearman's Rank correlation coefficient for non-normally distributed data. 
All tests of significance were two-tailed and considered to be significant at P < 0.05. RESULTS All patients underwent MRI and 53 patients had a liver biopsy. The mean serum ferritin level of all patients was 1906 ± 1644 ng/mL. Serum ferritin levels showed a moderate negative correlation with liver T2* MRI values (r = -0.586, P = 0.000) (Figure 1). Of the 53 patients who had a liver biopsy, 5 patients had grade I liver siderosis, 19 had grade Ⅱ, 17 had grade Ⅲ and 12 had grade Ⅳ liver siderosis. The degree of siderosis assessed by liver biopsy showed a moderate negative correlation with liver T2* MRI (r = -0.573, P = 0.000) and a low positive correlation with ferritin level (r = 0.350, P = 0.001). DISCUSSION It is evident that different non-invasive methodologies have been implemented for the detection of organ-specific iron burden in patients with thalassemia major. Among these, MR relaxometry has the potential to become the method of choice for non-invasive, safe and accurate assessment of organ-specific iron load [10] . Until recently serum ferritin levels and liver biopsy have been the most commonly used methods for estimating body iron stores in the thalassemic population. However, ferritin levels are not fully acceptable because there have been significant variations due to inflammation, infections and chronic disorders [3] . Liver iron load vs serum ferritin We found a moderate correlation (r = -0.59, P < 0.001) between serum ferritin levels and hepatic T2* levels. These findings are compatible with other reported studies with highest correlation [3,11,12] . Attempts to correlate serum ferritin levels and hepatic iron concentrations have failed to demonstrate a linear relationship between the two parameters [11] . Histological grade of siderosis vs liver T2* Liver biopsy has been regarded as the most precise method to measure body iron content if direct measurement of the iron concentration was applied. However, this is an invasive procedure which is not available in most clinical settings. Our results revealed no reasonable correlation between histological grade of siderosis (HGS) and serum ferritin. However, a moderate correlation with liver T2* (r = 0.57, P < 0.001) indicated that HGS could still be considered as a method of evaluating thalassemic patients. Liver iron concentration showed significant correlation with hepatic T2* (r > 0.9). These results indicated that MRI T2* measurement is of more value than HGS in thalassemic patients. The most important limitation of our study was the lack of a consideration of intervening factors that may affect the serum ferritin levels such as C-reactive protein, white blood cell count and liver function tests. In conclusion, although hepatic iron content and serum ferritin levels have been considered as the gold standards in evaluating body iron load for several years, iron accumulation in different organs proceeds independently. This emphasizes the importance of direct iron load measurement in each involved organ and direct evaluation of the efficacy of different therapeutic measures. According to our study, the serum ferritin level is not a reliable method for estimating the level of iron overload in thalassemic patients. MRI T2* is a more accurate and non-invasive method which we recommend for measurement of iron load in these patients. Background Iron overload is a common and serious problem in thalassemic major patients. 
As iron accumulation is toxic to the body's tissues, accurate estimation of iron stores is of great importance in these patients so that iron overload can be prevented by appropriate iron-chelating therapy. Research frontiers Liver biopsy is the gold standard for evaluating iron stores, but it is an invasive method which is not easily repeatable in patients. The introduction of other, more widely applicable methods therefore seems necessary.
Eating Smart: Free-ranging dogs follow an optimal foraging strategy while scavenging in groups Foraging and the acquisition of food involve a delicate balance between managing the costs, both energetic and social, and individual preferences. Previous research on the solitary foraging of free-ranging dogs showed that they prioritized the nutritionally highest valued food patch first but did not ignore other less valuable food either, displaying typical scavenger behaviour. The current experiment was carried out on groups of dogs with the same set-up to see the change in foraging strategies, if any, under the influence of social costs such as intra-group competition. We found multiple differences between the strategies of dogs foraging alone versus in groups, with competition playing an implicit role in the decision-making of dogs when foraging in groups. Dogs were able to continually assess and evaluate the available resources in a patch and adjust their behaviour accordingly. Foraging in groups also provided the benefit of reduced individual vigilance. The various decisions and choices made seemed to have a basis in optimal foraging theory, wherein the dogs harvested the nutritionally richest patch possible with the least risk and cost involved but were willing to compromise if that was not possible. This underscores the cognitive, quick decision-making abilities and adaptable behaviour of these dogs. Introduction Animal foraging strategies are a delicate balance between food preference and food selection influenced by various factors like nutritional requirements, availability of food, predation, competition etc 1 . Food preference is defined as the discrimination exerted by animals on different food types resulting in the selection of one food type over the other when no constraints bear on their choice 2 . Multiple experiments have shown that, given the chance, animals do indeed prefer certain food items over others 3,4 . But such idealized situations rarely occur in the natural world, and thus animals may not always be able to acquire food according to their preference. For example, goats and giant anteaters make different dietary choices when foraging in the wild versus under experimental conditions 5,6 . Thus, it is necessary to distinguish between food preference and food selection, defined as preference modified by environmental conditions 7 . The search for the best possible food with the least risk and energy output gives rise to different strategies. Members of a group-living species might forage alone or in groups depending on the food abundance in the area 8 . Foraging in groups provides animals with the advantages of lower predation risk and higher efficiency 9 . But living in groups has its disadvantages too. Intra-group competition is one of the costs of group living 10 . Competition has an effect on the quality and quantity of food acquired by individuals. As a trade-off to competition, individuals may change dietary preferences or forage from lower quality patches 11 , resulting in eating inferior food 12 or eating less 10 . The free-ranging dog (Canis lupus familiaris) is an ideal model organism to study the effect of various factors and challenges of an urban, human-dominated environment on food preference and selection, if any. Despite subsisting on a carbohydrate-rich omnivorous diet, they maintain a strong preference for meat in cafeteria-type trials 13,14 . They find their preferred food using the Rule of Thumb: "If it smells like meat, eat it".
This preference is also exercised while scavenging by the dogs in a more realistic scenario of finding food from garbage bins. Individual dogs were found to preferentially find and eat meat pieces from the noisy background of garbage first, as compared to other types of food using the "Sniff & Snatch" strategy 15 . They show qualities of a periscopic forager by optimizing the order of sampling and eating from resource patches (in this case, the boxes) on simultaneously encountering them 16 . This type of feeding strategy has characteristics of an optimal foraging model where the highest ranked food (in both quality and quantity) is sought out first and then less preferred food are added to the diet 17 . They show different levels of social organisation, from solitary living, to living in pairs, and living in packs 18 and thus may face similar pressures as other group-living animals. We carried out a multiple-choice task with free-ranging dog (FRD) groups to understand whether they preferentially feed on meat using the Rule of Thumb, in the presence of competitors. The objectives of this paper are to examine the feeding patterns and strategies of freeranging dogs in groups and to compare these against those of solitary dogs in the context of the predictions of optimal foraging models. We hypothesize that presence of other group members will cause a relaxation in diet selectivity and have an effect on their foraging strategies. We hypothesized that dogs would try to minimize competition by relaxing resource selectivity, rather than entering into active competition for the most preferred food. Study Area The study was conducted in both urban and semi-urban habitats in and around Kolkata, India (22. In order to avoid repeating the experiment on the same groups, we carried out the experiments in different areas on different days. This prevented any learning bias in the focal dogs. Experimental Protocol The composition of the boxes simulating dustbins and the experimental protocol followed was the same as that of Sarkar et al 15 with one exception. The experiment was carried out on groups of dogs. A group was defined as a cluster of three or more dogs with at least two of them being adults, and that were observed to be either resting together or engaging in affiliative interactions 19 with each other. Each group was allowed to interact with the set-up for a minute, like the previous experiment, after which the boxes were removed from the vicinity of the group. The observation time window started as soon as the first dog from the group sniffed from one of the boxes. Thus, although the entire time window for the experiment was one minute, only the first responder was able to avail of it. Each box was assigned a particular identity/category (Refer to Sarkar et al. 2019, for more details). The experiment was carried out on 136 groups and compared with 68 solitary dogs (both adults and juveniles) from the previous experiment. Behaviours and Analysis Each event of sniffing and eating by an individual dog within a group and time-limit was assigned an order from 1-6. Let us consider an example where the mixed box was sniffed first, followed by protein box and then the dog ate from it and this was followed by the dog eating from the mixed box and then sniffing the carbohydrate box. In this case, the order would be assigned as SM-1; SP-2; EP-3; EM-4; SC-5, where SM stands for sniffing mixed, EP stands for eating protein and so on. 
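To make the order-coding scheme concrete, the short sketch below assigns ranks to a sequence of observed events in the way described above; the event labels follow the worked example in the text, and the function name is ours rather than the authors'.

def code_event_order(events):
    # events: chronological list of labels such as 'SM', 'SP', 'EP', 'EM', 'SC';
    # returns (rank, label) pairs so that repeated events are preserved.
    return list(enumerate(events, start=1))

observed = ["SM", "SP", "EP", "EM", "SC"]
print(code_event_order(observed))
# [(1, 'SM'), (2, 'SP'), (3, 'EP'), (4, 'EM'), (5, 'SC')]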
The sniffing time for each dog started from the moment they began investigating by sniffing at their first box and ended at the moment the dog encountered food and picked it up in its mouth. We also noted down the following things: a) the time taken to sniff from the time the experimental set-up was available for investigation to the dogs, called latency to sniff time b) the time taken to eat from a box after its first instance of sniffing known as latency to eat time, and c) frequency of sampling before and after eating d) the total time and attempts spent on each box and the time spent in each such attempt e) whether dogs showed vigilance behaviour, defined as visual scanning of the surrounding environment 20 . We used Bayesian statistics to analyse our data. We ran multiple generalised linear mixed effects model for parameter estimation using the brms package 21 in R 4.0.2 22 . We used a weakly informative prior specifying a normal distribution (mean = 0, sd = 1) on the fixed effects and an exponential distribution (sd = 1) on the random effects for all the models (Model specific shape parameter priors have been mentioned in relevant sections). Thus, we favoured no specific direction for the effects while keeping their magnitude within reasonable values. We used 95% Bayesian Credible Interval (BCI) to assess the variability in parameter estimates and to interpret our findings 23 . We reported log odds ratios (LOR) for the logistic, multinomial, and beta regressions. Credible intervals that did not overlap zero (unless a different value is stated) indicated difference between parameters. For models, we ran 5000 (8000, for complex models) iterations of the Markov Chain and discarded the first 1000 (or 2000) as a warm-up. We used efficient approximate leave-one-out cross-validation (loo) 24 or k-fold cross validation 25 to select the best fitted candidate model amongst the ones we fitted. There were two exceptions to the usage of this condition: a) When there was only a single candidate model b) When a model with lower loo score made more logical sense, as a better estimation of prediction error does not automatically lead to a better model 26 . Furthermore, we used graphical posterior predictive check to assess if our best fitted model gives valid predictions with respect to our observed data. We carried out time-to-event analysis using the rstanarm package 27 to compare latencies. For each model, we compared the posterior estimate of the standardised survival curve to the Kaplan-Meier survival curve which helped us to assess the model fit to the observed data (estimated survival function check). We also validated our result using restricted mean survival time (RMST), interpreted as the average event-free (as in, no sniffing/eating) time up to a pre-specified important time point, τ. For all models Gelman and Rubin's Rhat statistic was used to assess convergence and it was adequate in all cases with Rhat = 1 28 . Comparison of Preference between boxes (quantified through sniffing and eating) We analysed the activity patterns of the dogs in groups. We quantified the total instances of sniffing and eating separately and considered whether a particular box or boxes had been sniffed/eaten more than the other. The response variable was binary ("sniff"/ "eat"), "Y" when a box was sniffed/eaten and "N" when it wasn't. We ran a logistic regression with Bernoulli distribution. 
Our best fitted model was a hierarchical model with "random effect" of individual dog nested within group nested within place. "box" refers to the identity of the box sniffed/eaten by the dog. "groupid" refers to the identity of the group the dog belongs to and "id" refers to its own identity. Here activity is a placeholder for the sniff and eat activities. Comparison of Preference for sniffing and eating first between boxes We analysed whether the dogs sniffed and ate first from a particular box(es) more than the others. This response variable was trinary ("box"), "M" for mixed box, "P" for protein box, and "C" for carbohydrate box. We ran a multinomial logistic regression. Our model was a hierarchical Intercept only model with random effect of group nested within place. "groupid" refers to the identity of the group the dog belongs to. Comparison of Sniff and Snatch (SnS) strategy We analysed whether dogs in groups displayed SnS behaviour (sniffing a box and immediately eating from it) towards a particular box or not. The response variable was binary -"y", dummy coded as 1, if SnS was displayed and "n", dummy coded as 0, if SnS was not displayed. A dog could show one of four behaviours after sniffing from a box: (a) immediately eat from it (SnS); (b) sniff from a different box; (c) eat from a different box; (d) do nothing. Out of the four only (a) was coded as 1 while the rest were coded as 0. We ran the fitted function in brms to calculate the probabilities of each category (1 or 0) on the response scale and reported probabilities with 95% credible intervals. Credible intervals that don't overlap 0.25 indicate evidence of difference between category levels, rather than the probability of the event being the outcome of random chance. The probability value of 0.25 was used because SnS is one of four actions that the dog can take. Our model was an Intercept only hierarchical model with random effect of group nested within place. "groupid" refers to the identity of the group the dog belongs to. The dependent variable "dfood" ("food" being a placeholder for either one of the three boxes) was a binary variable (1 or 0) indicating if a dog displayed SnS behaviour towards the respective box. The results showed that the probability of dogs in groups following SnS towards the mixed and protein boxes is higher than a random chance event (For mixed -mean: 0.492; 95% CI: 0.381, 0.605; For protein -mean: 0.471; 95% CI: 0.373, 0.588) and lower than a random chance event for carbohydrate box (mean: 0.133; 95% CI: 0.064, 0.230). Sampling before Eating We analysed whether more than one box was sampled either by dogs in groups or individually or both before the first event of eating from a box. Sampling was initiated when the dog sniffed a different box from the one which it was currently sniffing. That was counted as Sampling Event 1, sniffing another box or returning to the previous box was Sampling Event 2, and so on. The end point was when the dog started eating. The response variable was binary ("morethanone"), "no" when only one box was sniffed at before eating or moving away from the set-up, and "yes" when more than one box was sampled. We ran a logistic regression with Bernoulli distribution. Our best fitted model was a hierarchical model with "random effect" of group nested within place. 
"firstbox" refers to the first box sniffed by the dog (dummy coded as "0" for Mixed, "1" for Carbohydrate, "2" for Protein boxes) and "status" refers to whether the dog was solitary or in a group (dummy coded as "0" for group, "1" for solitary). morethanone ~ 1 + firstbox + status + (1 | place/group) The results of the regression showed that compared to a mixed box, sniffing from a carbohydrate box first made it more likely to sample more than one box (LOR = 0.69; 95% CI = 0.13, 1.26). Protein box showed no such difference (95% CI = -0.30, 0.82). Solitary dogs were more likely to sample more than one box, compared to dogs in groups (LOR = 0.82; 95% CI = 0.08, 1.64). The conditional effects plot of the model is given in Figure 1. Sampling after Eating We analysed whether dogs sampled from other boxes after they had initiated the first event of eating and, if yes, the number of such sampling events. We ran a multilevel hurdle model for the zeroes ("hu": whether or not sampling post eating was done) and a Poisson regression for the counts of such non-zero attempts. Our best model was a hierarchical model with group nested within place. "morethanone" refers to whether or not the dog sampled multiple boxes before the first event of eating and "grpsize" refers to the number of group members present on the day of the experiment for a particular group (solitary dogs were a "group of 1", i.e. their group size was 1). The results of the regression showed that dogs who did multiple pre-eating sampling were less likely to sample other boxes post-eating (mean Estimate = 0.98, 95% CI = 0.29, 1.68). Furthermore, larger the group size, more the likelihood of sampling post-eating (mean Estimate = -0.33, 95% CI = -0.59, -0.10). Either pre eating sampling or the group size do not have any effect on the rate of post-eating sampling. Latency in sniffing We defined the latency to sniff as a time-to-event variable. The time interval available for the dog to complete the task of sniffing was 300 seconds (the total duration of a trial before it is deemed unsuccessful and started when the first box was opened). We only considered the first responders in the group data, that is, the dogs who responded first to the set-up due to the fact that they resemble solitary dogs most closely in terms of accessibility and availability of resources and time. In this study we specified a proportional hazards model for the "hazard" of sniffing. For easier interpretation, the term "hazard" is replaced by "occurrence" in subsequent sections with no change in its meaning or mathematical formulae. We considered the effect of condition, a variable that tells us whether dogs are part of a group or solitary, on the occurrence of sniffing. The occurrence ratio (OR; exp(β1/coefficient for condition)) quantifies the relative increase in the occurrence that is associated with a unit-increase in the relevant covariate, whilst holding any other covariates in the model constant. In our model, OR is a time fixed quantity. Our best fitted model was a weibull model. Here, the outcome variable is time taken to sniff after first opening a box, denoted by "secstosniff", the "censored" variable lets the model know if the dog has sniffed within the time interval of 300 seconds (1 for sniffing, 0 for not sniffing) and the predictor "condition" is a variable which can take one of two levels-solitary and group. 
(secstosniff, censored) ~ condition The results of the regression gave us the estimated ORs and showed that individuals in the solitary condition have lower rates of sniffing relative to the ones in groups. The inferred median condition effect is -0.939 with an estimate of 0.156 for the standard deviation of the marginal posterior distribution of the covariate effect. The occurrence of sniffing for solitary individuals is 0.391 times that of individuals in groups. The 95% CI of the posterior is completely below zero (-1.2, -0.6). We also checked the posterior distribution for the absolute difference in RMST between the two conditions at the average time, τ = 33 s (rmst.group - rmst.solitary). The 95% CI of the difference posterior lies entirely below zero (-12.41, -6.73). This provided evidence that the RMST is higher in the solitary condition; thus, sniffing earlier is more likely in groups. Latency in eating The time interval available for the dog to complete the task of eating was 60 seconds (the total duration of the experiment once a dog started sniffing a box). Our best fitted model was a cubic M-spline model with degrees of freedom df = 5 and δ = 3. The outcome variable is the time taken to eat after first sniffing a box, denoted by "secstoeat", the "censored" variable lets the model know whether the dog ate within the time interval of 60 seconds (1 for eating, 0 for not eating), and the predictor "condition" is a variable which can take one of two levels: solitary and group. (secstoeat, censored) ~ condition The results of the regression gave us the estimated ORs and showed that individuals in the solitary condition have lower rates of eating relative to the ones in groups. The inferred median condition effect is -0.449 with an estimate of 0.220 for the standard deviation of the marginal posterior distribution of the covariate effect. The occurrence of eating for solitary individuals is 0.638 times that of individuals in groups. As the 95% CI of the posterior touches zero (-0.9, 0), we checked the posterior distribution for the absolute difference in RMST between the two conditions at the halfway time, τ = 30 s (rmst.group - rmst.solitary). The 95% CI of the difference posterior lies entirely below zero (-5.11, -0.137). This provided evidence that the RMST is higher in the solitary condition; thus, eating earlier is more likely in groups. The posterior probability that RMST at τ = 30 is higher for solitary than group is 0.981, giving further support to our result. We plotted the predicted survival function between 0 and 60 s for a dog in each of the conditions in Figure 2. Vigilance Behaviour The following behaviours were included under vigilance behaviour: a) alert scanning while eating or suspending foraging; b) continuously looking at something while eating or suspending foraging; c) showing wariness while looking at something (including experimenters/humans); d) following the movement of traffic, humans or other moving objects even if the dog has moved away from the boxes; e) alert and scanning while sitting. The following behaviours were not considered: a) looking at insects/walls/ground/boxes; b) looking at humans in a positive gesture (e.g. affiliative); c) eyes not clearly visible and head movement negligible; d) sitting and looking with a relaxed posture. Our best model was a hierarchical logistic model with group nested within place. "scanenvdn" is a binary variable and refers to whether a dog showed vigilant behaviour and "status" refers to whether the dog is solitary or in a group.
scanenvdn ~ 1 + status + (1 | place/groupid) The results of the regression showed that compared to dogs in groups, solitary dogs are more likely to scan their environment (LOR = 1.26; 95% CI = 0.53, 2.02). The conditional effects plot of the model is given in Figure 3. Total handling attempts Total handling attempts (THA) is defined as the number of times a particular box has been engaged with (sniffed/foraged/eaten from) over the entire one minute available to the dog. An attempt is initiated when a dog interacts with a box through one of the activities mentioned above and ends when the dog stops interacting with the box. Disengagement occurred when a dog started sniffing another box, started looking around for potential dangers or opportunities, or was distracted by external factors like car horns or ticks. We analysed whether dogs handled a box and, if yes, the number of such handling attempts. We ran a multilevel hurdle model for the zeroes ("hu": whether or not handling was done) and a Poisson regression for the counts (number: number of handling attempts for the non-zero responses). Our best model was a hierarchical model with individual dog ("id") nested within group, nested within place. "tha" refers to total handling attempts and "box" refers to the identity of the box. tha ~ 1 + box + (1 | place/group), hu ~ 1 + box + (1 | place/group) The results of the regression showed that the identity of the box did have an effect on the rate of THA, with the THA rate for the carbohydrate box lower than for the mixed box (mean Estimate = -0.27, 95% CI = -0.50, -0.03). The rate of THA was not different between the protein and mixed boxes. Furthermore, the identity of the box had no effect on the likelihood of being handled. Total handling time (THT) THT for a particular box is defined as the proportion of time spent by a dog, over the totality of its handling events on that box, out of its total available time to forage. We ran a regression with a zero-one-inflated beta distribution to account for the zeroes and ones in the dataset. We reported log odds ratios with 95% credible intervals. We computed the log odds difference between the three box types to check if one box type was handled more or less than the others. Our candidate model was a hierarchical model with individual dog ("id") nested within group, nested within place. "propht" refers to the proportion of the time a dog spent at a box foraging (handling a resource patch) out of the total available time to forage, "box" refers to the identity of the box the dog spent that time in, and "status" refers to whether they were in a group or solitary. propht ~ 0 + box: status + (1 | place/group/id) phi/zoi/coi ~ 0 + box: type + (1 | place/group/id) The results of the regression showed that dogs in groups spent a smaller proportion of time handling the carbohydrate box compared to the mixed and protein boxes, as shown by the posterior difference (95% CI for M - C = 0.051, 0.240; 95% CI for P - C = 0.086, 0.301). Solitary dogs spent a greater proportion of time handling the protein box than the carbohydrate box (95% CI for P - C = 0.010, 0.321) but no such difference between the protein and mixed boxes (95% CI for P - M = -0.285, 0.014) or between the mixed and carbohydrate boxes (95% CI for M - C = -0.125, 0.178) was observed. No difference across type (group vs solitary) was seen for the same box (e.g., group mixed vs solitary mixed). The conditional effects plot of the model is given in Figure 4.
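To make the zero-one-inflated beta model used for the handling-time proportions concrete, the sketch below evaluates its density for single observations. It assumes the usual zoi/coi/phi parameterization referred to in the formulas above (zoi: probability of a boundary value, coi: conditional probability of a one, mu and phi: mean and precision of the beta component); the parameter values are purely illustrative.

from scipy.stats import beta

def zoib_density(y, mu, phi, zoi, coi):
    # Zero-one-inflated beta density for a proportion y in [0, 1]
    if y == 0.0:
        return zoi * (1.0 - coi)
    if y == 1.0:
        return zoi * coi
    a, b = mu * phi, (1.0 - mu) * phi
    return (1.0 - zoi) * beta.pdf(y, a, b)

for y in (0.0, 0.35, 1.0):
    print(y, round(zoib_density(y, mu=0.4, phi=5.0, zoi=0.2, coi=0.3), 4))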
Unit handling time Unit handling time is defined as the amount of time spent in each handling attempt out of the available proportion of handling time. It is formulated as handling time divided by THA. We carried out a generalized linear regression with a zero-one-inflated beta distribution to account for the zeroes and ones in the dataset. Our best model was a hierarchical model with individual dog ("id") nested within group, nested within place. "prophand" refers to the proportion of the time a dog spent at a box handling it out of the available proportion of handling time, "box" refers to the identity of the box the dog spent that time in and "type" refers to whether they were in a group or solitary. Discussion Resources are often distributed in a heterogeneous and stochastic fashion across time and space in patches. Acquiring such resources depends on foragers' ability to make profitable foraging decisions. Such decisions can only be made by gathering information about patch heterogeneity. Information about patch quality can be gathered while exploiting a patch. This is known as sampling for information 29 . Foragers harvest information from the environment by sampling and investigating through visual, olfactory, and chemical cues and use such information to modify their behaviour accordingly 30,31 . In uncertain environments, animals spend a considerable amount of their time investigating, with a significant proportion of that spent in resource-rich patches 30,32 . As the resources in a patch deplete, foragers may have to frequently reassess patch choice and make decisions whether to stay, move on or re-visit a previous patch 33, 34 . In our experiment, the boxes simulating dustbins acted as heterogeneous resource patches about which the dogs had no prior knowledge. Thus, the only way to collect information and assess the quality of the patches was through sampling them. A major decision node around which we split the dogs' sampling efforts was the first eating event. Previous research had shown that solitary dogs harvested the best resource patch first but showed no such preference during sniffing. Although this hinted at an optimal foraging strategy, it was not immediately clear whether dogs carefully assessed the available patches before making a decision, behaved impulsively, or were able to home in on the best available resource. Our current experiment showed that sampling played an important role during foraging in FRDs. Solitary dogs sampled multiple "patches" (boxes), leaving behind sub-optimal food (as evidenced by the fact that the likelihood of sampling increased if dogs came across the carbohydrate box first), until they encountered the best available resource and then ate from it. This demonstrates that dogs were able to assess and evaluate the quality and quantity of resources available in a patch and take this into account before deciding to eat. On the other hand, dogs in groups were more likely to eat from the first box they encountered but would frequently sample other "patches" while harvesting resources from their current "patch". This post-eating sampling tendency increased with group size, hinting at an implicit effect of competition in the presence of conspecifics.
Additionally, in both cases, solitary and group, no particular box was sniffed first more than the others, demonstrating that the dogs did not have an instantaneous clue of the location of the preferred food and that thorough sampling was the only way for them to gather information about the "patches". Overall, dogs that showed pre-eating sampling behaviour were less likely to sample after eating, indicating that the behaviour was not random but rather had the specific purpose of finding the best available resource patch, and that the application of the behaviour varied according to the level of competition. In keeping with the unpredictable environment that they live in and the fluctuating food resources available therein, FRDs were found to investigate and engage with all the available resource "patches" (boxes), though they concentrated their maximal efforts on the energy-rich boxes (protein and mixed), choosing to handle them more often than the energy-poor boxes (carbohydrate). Furthermore, instead of allocating all their time to the best "patch" once located, the dogs allocated different proportions of their available time to different "patches" depending on their quality, with better "patches" allocated more time but suboptimal patches also allocated a small proportion of time. Although this seems to be a deviation from optimal behaviour, it is hypothesized that the dogs are actually prioritizing a long-term adaptation to a fluctuating environment, akin to titmice foraging 34 . In their natural habitat, food resource clumps such as dustbins and garbage dumps vary in food abundance across time, such that high abundance food sources can become low abundance ones at a later time and vice versa. In such a scenario, it is advantageous to spend some time handling all "patches", especially if the distance between the "patches" is minimal, as is the case here, in order to continually track the status of the environment and change the allocation as and when necessary. Additionally, dogs spent a lower proportion of time in each handling attempt for the carbohydrate box than for the mixed and protein boxes. There are two possible reasons for this. First, apart from handling the carbohydrate box fewer times, the smaller time allocation for each attempt shows that dogs expend less effort on a suboptimal resource. The other reason seems to be that, between bouts of feeding from the protein and mixed boxes, dogs would also sample from the carbohydrate box, quickly assess its value to be lower than that of the other two boxes, and move back to them. Unfortunately, we were unable to observe the amount of food eaten by individual dogs. Thus, we were unable to find out the point of resource depletion at which the dogs shifted to another patch. FRDs are known to forage singly as well as in groups 18 . Group-living animals have to balance access to quality nutrition against conspecific competition. Decreased resource selectivity in the presence of conspecifics reflects an adaptive response to competition. Such decreased selectivity can also be observed in the FRDs in the current experiment. While solitary dogs sought to maximise both the quality and quantity of the available food by preferentially feeding from the protein box 15 , dogs in groups showed a relaxation in their preference by feeding from both the protein and mixed boxes equally. While it might be hypothesized that the dogs moved to the mixed box only after they believed that the protein box had no valuable resource to offer them, we do not think that is the case here.
The dogs ate first from both boxes equally often, showing that instead of the protein box being the top "patch" in the preference hierarchy, the dogs relaxed their choice to include both as equally preferable. As dogs ate chicken pieces in overwhelmingly higher amounts than bread (80.5% of the total food eaten), this implies that even while eating from the mixed box, they selectively ate the chicken pieces over bread. Thus, while there is increased acceptance in terms of feeding from lower valued "patches" (lower quantity of preferred food), there is little change in acceptance of lower valued food as compared to solitary dogs. Solitary dogs follow the SnS strategy exclusively for the protein box, whereas the dogs in groups apply the strategy to both the mixed and protein boxes. Taken together, these results show that even in groups, dogs do follow the Rule of Thumb. These results lend credence to the optimal foraging prediction that if foods of higher value are available, low value items should be rejected, regardless of their abundance 17 . Similarly, FRDs seem to sequester the preferred food, meat, first, followed by bread. This seems to be an adaptation to the scavenging lifestyle that these dogs lead, wherein they try to maximise nutrients in any form they can while maintaining their preference. FRDs are known to be bolder in groups 35 . The reduced time taken by groups to respond to the experimental set-up and to eat, as compared to solitary dogs, implies the presence of such traits during foraging. Indeed, it was found that solitary dogs are more likely to engage in vigilance behaviour than those in groups, perhaps due to the likelihood of facing competition from other dogs and scavengers while foraging. Since this was not the focus of this experiment, we did not attempt to look into this behaviour in great detail. Dogs in groups are also known to spend more time engaging with the boxes in each foraging attempt. Foraging in groups confers multiple benefits to animals, some of which, such as reduced vigilance 36 and more time spent foraging 37 , seem to be behind the dogs' actions here. However, we must specifically investigate other factors that might elicit similar behaviour in future studies. Intra-group competition and social-rank-appropriate behaviour might be two such factors 38 . In this study, we investigated the foraging behaviour of FRDs in groups and compared it against that of solitary dogs. We found that the foraging tactics of dogs fulfil multiple characteristics of an optimal foraging strategy and that these animals are highly plastic in their behaviour and adaptable in their strategies, as befits an organism living in an unpredictable environment. Further research may focus on finding out the energy threshold that makes a "patch" valuable and how animals assess that. Studies may seek to explore the mechanism behind the inverse relationship between group size and vigilance and to disentangle the various factors that come into play when foraging in groups, such as competition, hierarchy and food density. Examining the effect of these confounding factors in tandem and alone will provide greater insight into the foraging behaviour of these animals and the causal mechanisms behind such behaviours as reduced latency. Food eaten We carried out a logistic regression with a zero-inflated binomial distribution to account for the excess zeroes in the dataset. We used k-fold cross-validation (CV) to select the best fitted model amongst the ones we fitted.
We reported log odds ratios with 95% credible intervals. We computed the log odds difference between the two food types to check if one food type was eaten more than the other. Our best fitted model was a hierarchical model with "varying" effects of group nested within place. We used a beta(2, 2) prior for "zi" to regularize it towards 0.5. The outcome variable was "Count", which referred to the number of food pieces eaten per group out of a total of 15 pieces, denoted here by trials (n). "foodType" refers to the type of food available to the dog for eating (dummy coded as "0" for bread, "1" for chicken). "participants" denoted the number of dogs eating in a group. The results of the regression showed that compared to bread, chicken increased the likelihood of food items being eaten (LOR = 2.54, 95% CI = 1.19, 3.65). The number of participants had a positive effect on the likelihood of food items being eaten (LOR = 1.92, 95% CI = 1.61, 2.27). The density plot of the model parameter estimates is shown in Supplementary figure 2 below. The densities of the group intercept "random effect" (sd_place: group_Intercept) and the random slope of foodType (sd_place : group_foodType) on it are both away from zero, validating our choice of including them in the model. Clearly, group has an effect on the likelihood of food being eaten, and the effect of food type varied between groups. This, as the "participants" parameter shows, is probably because the number of participants differed between groups: groups with a larger number of participants eat more food, and when only a single member of a group eats during the experiment, it eats only one type of food. Another possibility is that different members eat different food, depending on availability. The posterior distribution of the place Intercept contained zero, indicating that it had no effect.
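The credible-interval checks reported throughout (for example, the chicken-versus-bread log odds contrast above) amount to summarizing posterior draws of a coefficient or of a difference between coefficients. The sketch below shows that logic with simulated draws standing in for the real MCMC output.

import numpy as np

rng = np.random.default_rng(0)
draws_chicken = rng.normal(loc=2.5, scale=0.6, size=4000)  # placeholder posterior draws
draws_bread = rng.normal(loc=0.0, scale=0.6, size=4000)

contrast = draws_chicken - draws_bread          # log odds difference
lo, hi = np.percentile(contrast, [2.5, 97.5])   # 95% credible interval
print(f"mean LOR = {contrast.mean():.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
print("CI excludes zero:", not (lo <= 0.0 <= hi))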
Implicit Filter-and-sum Network for Multi-channel Speech Separation Various neural network architectures have been proposed in recent years for the task of multi-channel speech separation. Among them, the filter-and-sum network (FaSNet) performs end-to-end time-domain filter-and-sum beamforming and has been shown to be effective in both ad-hoc and fixed microphone array geometries. In this paper, we investigate multiple ways to improve the performance of FaSNet. From the problem formulation perspective, we change the explicit time-domain filter-and-sum operation which involves all the microphones into an implicit filter-and-sum operation in the latent space of only the reference microphone. The filter-and-sum operation is applied on a context around the frame to be separated. This allows the problem formulation to better match the objective of end-to-end separation. From the feature extraction perspective, we modify the calculation of sample-level normalized cross correlation (NCC) features into feature-level NCC (fNCC) features. This makes the model better match the implicit filter-and-sum formulation. Experiment results on both ad-hoc and fixed microphone array geometries show that the proposed modification to the FaSNet, which we refer to as iFaSNet, is able to significantly outperform the benchmark FaSNet across all conditions with on-par model complexity. INTRODUCTION The design of multi-channel speech separation systems has been one of the active topics in the speech separation community in recent years. Despite the advances in time-frequency domain neural beamformers where a neural network is used to assist the conventional beamformers for better robustness and performance [1][2][3][4][5][6][7][8][9][10][11], time-domain architectures have also earned attention from the community due to their ability to perform purely end-to-end optimization towards the target signals. Moreover, as conventional beamformers often require a temporal context for better estimation of spatial features [12,13], time-domain systems have the potential to be operated at frame level with a lower theoretical system latency. Recent time-domain systems can be classified into three categories. The first category reformulates the multi-channel separation problem as a single-channel separation problem on a selected reference microphone, with the help of additional cross-channel features. Various cross-channel features have been proposed and analyzed on various datasets [14][15][16][17][18]. The second category processes the multichannel mixtures with a convolutional encoder where the channels are treated as different feature maps in a convolutional operation. Such systems learn a direct mapping between the mixtures and the target signals [19,20]. The third category performs end-to-end beamforming without solving the optimization problems required in conventional beamformers. The beamforming filters can be compared to the masks in single-channel separation systems as they both operate at frame level [21,22]. One of the systems in the third category is the filter-and-sum network (FaSNet), which performs time-domain end-to-end beamforming [21,22]. FaSNet attempts to directly estimate the beamforming filters via a neural network, and previous results have proven its effectiveness in simulated noisy reverberant datasets compared with single-channel methods and baseline methods in the first category. However, FaSNet has several drawbacks in the problem formulation.
Since FaSNet formulates the separation problem as a time-domain beamforming problem and the end-to-end training target is typically the reverberant clean signals, asking the filters to reconstruct not only the direct-path signal but also all the reverberation components may not be optimal, as it requires the filters to achieve a complex beampattern to preserve the reverberations. Although it is possible to perform joint dereverberation and separation by setting the direct-path signals as the training target, it may significantly increase the task difficulty. Moreover, one reason for the preference for time-frequency domain beamformers over time-domain beamformers is their advantage in both robustness and performance due to the use of the short-time Fourier transform (STFT), hence it is also necessary to explore such a beamforming operation in a latent space. In this paper, we explore multiple aspects to improve the original FaSNet architecture. We investigate four modifications to FaSNet. First, we compare the original multi-input-multi-output (MIMO) formulation with the multi-input-single-output (MISO) formulation, where the filter is only estimated for the reference channel instead of all the channels. Second, we consider estimating the filter in a learnable latent space instead of the original waveform domain. Together with the MISO assumption, this also matches the design of the systems in the first category. Third, we look for better cross-channel features more suitable for the MISO and latent-space filtering design. Fourth, we utilize context-aware processing to further improve the model performance. We refer to the modified FaSNet as the implicit FaSNet (iFaSNet). Ablation experiments show that such modifications allow iFaSNet to outperform the original FaSNet across various data configurations while maintaining the same model size and complexity. The rest of the paper is organized as follows. Section 2 briefly goes over the original FaSNet architecture and introduces the proposed iFaSNet. Section 3 provides the experiment configurations. Section 4 presents the results and discussions. Section 5 concludes the paper. Filter-and-sum Network In FaSNet, a set of time-domain beamforming filters h i , one for each of the M input channels, is estimated by a neural network, and the filters are applied to the input to obtain the separated outputs: ŷ = Σ_{i=1..M} h i ⊛ ŷ i (1), where ⊛ represents the convolution operation and ŷ i is the context frame at microphone i. The estimation of the filters relies on both the channel-wise features and the cross-channel features, and FaSNet applies a linear fully-connected (FC) layer to extract the channel-wise features for each input channel: s i = ŷ i Ŵ (2), where Ŵ ∈ R (L+2W)×N is a learnable parameter matrix. The linear FC layer can also be regarded as a linear 1-D convolutional layer. The cross-channel feature used by FaSNet is the normalized cross correlation (NCC) feature q i ∈ R 1×(1+2W) calculated between the center frame at reference microphone y 1 and the context frame at each microphone, i.e. the cosine similarity between y 1 and every L-sample segment of ŷ i : q i [j] = (y 1 · ŷ i [j+1 : j+L]) / (‖y 1 ‖ ‖ŷ i [j+1 : j+L]‖), j = 0, . . . , 2W (3). Various neural network architectures can be selected to estimate the filters. FaSNet makes use of the dual-path RNN (DPRNN) [23] architecture together with the transform-average-concatenate (TAC) module [22] to perform robust filter estimation and to allow the model to be invariant to the number and permutation (locations) of the microphones. This is rather important in ad-hoc array scenarios. Implicit Filter-and-sum Network To improve the performance of FaSNet, we propose the implicit filter-and-sum network (iFaSNet), which modifies the standard FaSNet in multiple aspects. Figure 1 shows the flowchart of iFaSNet, and the modifications to the original FaSNet are highlighted.
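Before turning to the individual modifications, the following numpy sketch illustrates the sample-level tNCC feature of equation (3) for a single microphone pair; the frame and context sizes follow the L/W notation above, while the signals and variable names are placeholders of ours.

import numpy as np

L, W = 256, 256
rng = np.random.default_rng(0)
center_ref = rng.standard_normal(L)           # center frame y_1 at the reference mic
context_i = rng.standard_normal(L + 2 * W)    # context frame at microphone i

def tncc(center, context, frame_len):
    # Cosine similarity between the center frame and every frame_len-sample
    # segment of the context frame, giving 1 + 2W lags.
    num_lags = context.shape[0] - frame_len + 1
    q = np.empty(num_lags)
    for j in range(num_lags):
        seg = context[j:j + frame_len]
        q[j] = center @ seg / (np.linalg.norm(center) * np.linalg.norm(seg) + 1e-8)
    return q

q_i = tncc(center_ref, context_i, L)
print(q_i.shape)  # (513,), i.e. 1 + 2W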
Multi-input-single-output design FaSNet applies the time-domain filter-and-sum beamforming together with an end-to-end training objective. The training target for FaSNet is typically set to the reverberant clean signals in a selected reference microphone 1 . However, the learned beamforming filters at different channels have their own beampatterns, and using the reverberant clean signals as the training target implies that the filters should not only enhance the signal coming from a certain direction, but also need to reconstruct all the reverberation components. The corresponding beampattern can thus be highly irregular, as the reverberations typically cover a much wider range of angles. Although FaSNet applies frame-level beamforming, where infinitely many optimal frame-level filters may exist since the linear equation in equation 1 is underdetermined (M × (1 + 2W) unknowns and 1 + 2W equations), finding such reverberation-preserving filters for all channels may hurt the generalization of the network and thus affect the separation performance. To maintain the end-to-end training configuration, we change the original multi-input-multi-output (MIMO) filter estimation into multi-input-single-output (MISO) estimation, where only the filter for the reference channel is calculated. This reformulates the multichannel separation problem back to the single-channel separation problem as in the first category discussed in Section 1, while the input to the model still contains the mixtures from all channels. Figure 1 (A) shows the MISO module. Implicit filtering in latent space FaSNet explicitly calculates the beamforming filters in the waveform domain. The advantage is that it follows the standard definition of time-domain beamforming, and it is easy to analyze the filters' behaviors such as beampatterns. However, most of the literature on multi-channel separation focuses on learning a mapping in a latent space where the signal can be better represented. Not only are existing neural beamformers mainly designed in the time-frequency domain, but time-domain systems also utilize a learnable latent space for better signal representations and separation performance. Motivated by this recent progress, we adopt the standard setting in time-domain systems where a pair of learnable encoder and decoder is used for signal representation in the latent space. The filter is thus similar to the "mask" in single-channel separation systems, which is defined as a multiplicative function on the encoder output of the reference channel. According to equation 2, the encoder in the original FaSNet maps the context frame ŷ i to a feature vector s i by the encoder weight Ŵ. s i is only used for the estimation of the beamforming filters and is not involved in the beamforming operation. The encoder in iFaSNet is applied on the center frame y i instead of the context frame ŷ i , with its corresponding encoder weight W ∈ R L×N . The feature is denoted as f i ∈ R 1×N . A decoder with its weight U ∈ R N×L is also applied to transform the latent feature back to waveforms. Figure 1 (B) shows the newly-added decoder module. Feature-level normalized cross correlation The cross-channel feature in the original FaSNet is calculated by time-domain normalized cross correlation (tNCC), defined in equation 3. The rationale behind tNCC is to capture both the delay information across channels and the source-dependent information for different targets.
However, the tNCC feature requires a sample-level convolution operation, which involves L(1 + 2W)M floating-point multiplications. Moreover, when the sample rate of the signals is low (e.g. in certain telecommunication systems), such sample-level correlation may fail to capture the cross-channel delay information. To save the number of operations and accelerate the feature calculation, we modify the tNCC to a feature-level NCC (fNCC). Denote the context feature [f i t−C , . . . , f i t , . . . , f i t+C ] as F i t ∈ R (1+2C)×N , where C denotes the context size. fNCC calculates the cosine similarity between the context features of the reference channel and those of channel i: q̂ i t = F̄ 1 t (F̄ i t )ᵀ (4), where F̄ i t denotes the column-normalized feature of F i t where each column has a unit length, and q̂ i t ∈ R (1+2C)×(1+2C) denotes the fNCC feature at time t for channel i. q̂ i t is then flattened to a vector of length (1 + 2C) 2 . fNCC only requires N(1 + 2C) 2 M floating-point multiplication operations. For the default setting in FaSNet where W = L = 256 with a 50% overlap between frames, we have C = 2 and (1 + 2C) 2 = 25 ≪ 1 + 2W = 513. By properly setting the value of N, fNCC can greatly save the computation needed for the cross-channel feature extraction. Figure 1 (C) shows the fNCC calculation module. Context-aware filtering Existing systems for both single- and multi-channel end-to-end separation only make use of the center feature f i t at time t. On the other hand, utilizing context information to improve the modeling of the local frame is very common in various systems [24,25]. iFaSNet explores a context encoder to squeeze the context feature F i t into a single feature vector f̄ i t , which has the same shape as f 1 t . The filters are then applied to the encoder outputs via the Hadamard product (element-wise multiplication), and mean-pooling is applied across the time dimension. The implicit "filter-and-sum" operation is thus defined on the context. Figure 1 (D) shows the context encoder and decoder. Dataset We evaluate our approach on a simulated ad-hoc multi-channel two-speaker noisy speech dataset. 20000, 5000 and 3000 4-second long utterances are simulated for the training, validation and test sets, respectively. For each utterance, two speech signals and one noise signal are randomly selected from the 100-hour Librispeech subset [26] and the 100 Nonspeech Corpus [27], respectively. The overlap ratio between the two speakers is uniformly sampled between 0% and 100%, and the two speech signals are shifted accordingly and rescaled to a random relative SNR between 0 and 5 dB. The relative SNR between the power of the sum of the two clean speech signals and the noise is randomly sampled between 10 and 20 dB. The transformed signals are then convolved with the room impulse responses simulated by the image method [28] using the gpuRIR toolbox [29]. The length and width of all the rooms are randomly sampled between 3 and 10 meters, and the height is randomly sampled between 2.5 and 4 meters. The reverberation time (T60) is randomly sampled between 0.1 and 0.5 seconds. After convolution, the echoic signals are summed to create the mixture for each microphone. The ad-hoc array dataset contains utterances with 2 to 6 microphones, where the number of utterances for each microphone configuration is set equal. Model configurations The original FaSNet with the transform-average-concatenate (TAC) module [22] is used as the backbone architecture as well as the baseline. The frame size L and the context size W are both set to 16 ms (256 points), and the overlap ratio between frames is set to 50%.
In iFaSNet, we use the same configuration of L and W (with C = 2 as discussed in Section 2). The architecture for the context encoding and decoding modules has two choices: 1. MLP: F i t is flattened to a vector with shape 1 × N(1 + 2C), and a multi-layer perceptron (MLP) is used to map the vector into f̄ i t . Another MLP is used to map the frame-level concatenation of g 1 t , the output of the MISO separation module, and each feature in F i t to estimate the filters for the entire context. 2. RNN: F i t is passed to an RNN layer followed by a mean-pooling operation across time to generate f̄ i t . The output of the MISO separation module g 1 t is concatenated to each feature in F i t and passed to another RNN layer to decode the filters. We use the RNN configuration as empirically it leads to better performance than the MLP configuration with the same model size. Training configurations All models are trained for 100 epochs with the Adam optimizer [30] with an initial learning rate of 0.001. Signal-to-noise ratio (SNR) is used as the training objective for all models. The learning rate is decayed by 0.98 every two epochs. Gradient clipping with a maximum gradient norm of 5 is always applied for proper convergence of DPRNN-based models. Early stopping is applied when no best validation model is found for 10 consecutive epochs. No other training tricks or regularization techniques are used. Auxiliary autoencoding training (A2T) is applied to enhance the robustness on this reverberant separation task [31]. RESULTS AND DISCUSSIONS Table 1 presents the experiment results of the baseline FaSNet and various configurations of iFaSNet. We first note that the results for FaSNet are lower than the results reported in [22], and this is due to the use of A2T during training. We suspect that this is mainly because the filter-and-sum problem formulation makes A2T training harder and affects the separation performance. We do not dive deeper into the role of the filter-and-sum operation in A2T as it is beyond the scope of this paper. For the ablation experiments, we explore the effect of introducing the MISO configuration or replacing the original tNCC with fNCC. We observe that modifying the MIMO configuration into the MISO configuration does not harm the overall performance, while using fNCC instead of tNCC in the MIMO setting leads to worse performance. A possible explanation for this is that fNCC cannot explicitly capture cross-channel delay information, which is important for the MIMO time-domain filter-and-sum operation. The second step of the ablation experiments starts with setting the MISO configuration as the default. Unlike the MIMO+fNCC configuration, the MISO+fNCC configuration is able to improve the performance, especially when the number of available channels is large. Compared with the MISO+tNCC configuration, the improvement is more significant at higher overlap ratios. However, applying implicit filtering together with the tNCC feature leads to a drastic performance degradation. Since implicit filtering without context is equivalent to the "masking" configuration in various existing multi-channel end-to-end systems [14][15][16], the result implies that the cross-channel feature needs to be carefully selected to align with the formulation of separation. The third ablation experiment combines MISO, implicit filtering and the fNCC feature and achieves better performance than any previous configuration.
Note that, compared with the MISO+implicit configuration, this result indicates that the fNCC feature is a suitable cross-channel feature for the implicit filtering approach. The ablation experiments end with applying context-aware filtering to the system, which results in our final design for iFaSNet. Note that the context size is set to C = 2 as mentioned in Section 2.2.4, and the effect of other context sizes and window sizes is left for future work. CONCLUSION In this paper, we explored ways to improve a previously proposed end-to-end multi-channel speech separation system, the filter-and-sum network (FaSNet). We considered four different aspects to modify the original model: the multi-input-single-output (MISO) problem formulation, implicit filtering in the latent space, the feature-level normalized cross correlation feature for cross-channel information, and the context-aware filtering operation. We named our modified FaSNet the implicit FaSNet (iFaSNet). Ablation experiment results showed that iFaSNet, combining these four modifications, can lead to a significant performance improvement across various configurations. ACKNOWLEDGMENTS This work was funded by a grant from the National Institutes of Health, NIDCD, DC014279; a National Science Foundation CAREER Award; and the Pew Charitable Trusts.
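As a companion to the sample-level sketch earlier, the following numpy sketch computes the feature-level NCC (fNCC) for one channel pair; the feature matrices are random placeholders with the (1 + 2C) × N shapes used in the text, and normalization is applied along the feature dimension so that the entries are cosine similarities between context frames (our reading of the cosine-similarity description).

import numpy as np

N, C = 64, 2
rng = np.random.default_rng(0)
F_ref = rng.standard_normal((1 + 2 * C, N))   # context features, reference channel
F_i = rng.standard_normal((1 + 2 * C, N))     # context features, channel i

def fncc(f_ref, f_i):
    # Normalize each context-frame feature to unit length, then take all
    # pairwise inner products and flatten the (1 + 2C) x (1 + 2C) matrix.
    f_ref_n = f_ref / (np.linalg.norm(f_ref, axis=1, keepdims=True) + 1e-8)
    f_i_n = f_i / (np.linalg.norm(f_i, axis=1, keepdims=True) + 1e-8)
    return (f_ref_n @ f_i_n.T).reshape(-1)

q_hat = fncc(F_ref, F_i)
print(q_hat.shape)  # (25,), i.e. (1 + 2C)**2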
Is problematic exercise really problematic? A dimensional approach Objective: Though moderate exercise has numerous health benefits, some individuals may become excessively preoccupied with exercise, endorsing features akin to “addiction.” The aim of this study was to evaluate the relationships between problematic exercise (viewed dimensionally), quality of life, and psychological measures. Methods: Young adults were recruited from an established population-based cohort in the United Kingdom and completed an online survey. The factor structure of the Exercise Addiction Inventory (EAI) was characterized. Relationships between dimensional EAI factor scores and other variables (impulsivity, compulsivity, emotional dysregulation) were elicited. Results: Six hundred and forty-two individuals took part in the study (mean age 23.4 years, 64.7% female). The EAI yielded two factors – a “general factor” and a “relationship conflict factor.” Both EAI factor scores were associated with disordered eating, impulsivity (UPPS), and compulsivity (CHI-T). Only the relationship conflict factor score was significantly associated with impaired quality of life (all domains) and with maladaptive personality traits (emotional dysregulation and obsessive-compulsive personality disorder traits). Few participants met conventional threshold for full exercise addiction (1.1%). Conclusion: Higher problematic exercise scores, in a sample largely free from exercise addiction, were associated with impulsive and compulsive personality features, emotional dysregulation, and disordered eating. Further research is needed to examine whether these results generalize to other populations (such as gym attendees) and are evident using more rigorous in-person clinical assessment rather than online assessment. Longitudinal research is needed to examine both positive and negative impacts of exercise, since moderate exercise may, in fact, be useful for those with impulsive/compulsive tendencies, by dampening negative emotional states or substituting for other more damaging types of repetitive habit. Introduction While exercise has numerous health benefits, excessive exercise for some individuals may have untoward mental and/or physical health consequences. 1 For example, continuing to engage in exercise despite an injury could lead to worsening or permanence of that injury, and excessive time spent on exercise could lead to neglect of other areas of life, for example, interpersonal relationship. 2 Excessive exercise could also have a negative impact in relation to other mental health disorders, such as people with eating disorders exercising in order to maintain unhealthily low body mass indices, 3 or people with muscle dysmorphia who strive to increase muscle mass with exercise and steroids. 4 Other consequences of excessive exercise reported in the literature include immune dysfunction and chronic fatigue arising from excessive exercise. 5 The concept of "negative exercise" or "exercise addiction" was first developed in the 1980s, drawing contrast with the earlier concept of "positive exercise" indicating positive health benefits. [6][7][8] Here, we use the term "problematic exercise" to describe these phenomena, since this term focuses on the behavior rather than assuming it to be contained within a particular theoretical framework of, for example, addiction or compulsivity. Nonetheless, current measures of problematic exercise tend to utilize questions relating to addiction. 
Two concepts that have proven to be useful in understanding other types of repetitive behavior, and which may be relevant for excessive exercise, are impulsivity and compulsivity. [9][10][11][12][13] Impulsivity refers to hasty, premature actions, taken without due forethought, which have untoward longer-term consequences. Compulsivity refers to repetitive habitual behaviors that have lost their relationship with the original intent or goal. While studies have examined impulsivity 14 and compulsivity 15 in problematic exercise, the interplay between these two constructs has seldom been examined in the same setting or using statistical techniques that are resilient to expected issues such as collinearity (correlations) across variables of interest. Another concept important in understanding excessive exercise is emotional dysregulation, which has been found to predict subsequent excessive/compulsive exercise. 15 Furthermore, acute exercise has been found to dampen emotion regulation deficits. 16 Understanding the roles of impulsivity, compulsivity, and emotional dysregulation in problematic exercise may help to identify predisposing factors and implicated mechanisms, as well as help account for overlap with other disorders such as eating disorders. 17 Therefore, the aims of this study were (i) to explore the factor structure of problematic exercise (measured using the Exercise Addiction Inventory, EAI) in young adults; (ii) to assess the impact of identified EAI factor scores on quality of life; and (iii) to evaluate the potential contribution of impulsive and compulsive psychological mechanisms to problematic exercise. We hypothesized that all EAI factor scores would be associated with elevated occurrence of impulsivity, compulsivity, and emotional dysregulation. Participants Participants were recruited from the Neuroscience in Psychiatry Network, an established cohort of young adults being followed over time to explore brain development and mental health. 18 The sample was originally recruited on a stratified basis, in order to maximize representativeness of the normal population in the catchment areas covered (Cambridge and London). We contacted all individuals who were still enrolled in this cohort at the time of data collection (2017-2018) via email and invited them to take part in an online study conducted using SurveyMonkey. Prior to participation, individuals read an information sheet, indicated consent, and had the opportunity to contact the study team to address any queries/concerns they might have. The study was approved by the research ethics committee, and participants received £15 compensation in the form of a gift voucher. Measures The following demographic information was collected: age, gender, ethnicity, and education level. Problematic exercise was assessed using the EAI. 19 This is a six-item screening tool that was previously shown to have good psychometric properties and sound concurrent validity against other established scales for problematic exercise. The scale enquires about: conflicts occurring with one's partner/family due to exercise, using exercise to change one's mood (e.g., obtain a "buzz"), escalating participation in exercise over time, irritability if missing exercise, and failed attempts to cut back. For each question, the individual responds on a scale of 1-5, ranging from "strongly disagree" to "strongly agree." Thus, with six items each scored 1-5, the scale gives a total score out of 30.
19 In addition to the EAI, the following scales were completed, focusing on quality of life, impulsivity, compulsivity, and emotional regulation. Brunnsviken Brief Quality of Life Scale (BBQ). 20 BBQ is a previously validated self-report quality-of-life scale, which covers six life areas (leisure time, view of life, creativity, learning, friends/friendship, and view of self) that are important determinants of the overall quality of life. This scale has been validated in various normative and clinical population settings. 20 The measure of interest was the total quality of life score. SUPPS. 21 SUPPS is a short version of the UPPS, which is a questionnaire capturing different aspects of impulsivity, namely sensation seeking, lack of premeditation, lack of perseverance, negative urgency, and positive urgency. 22 There are 20 items, each responded to on a scale of 0-3. Measures of interest were summed for each domain, yielding five measures of impulsivity, each out of a maximum of 12 points. SCOFF Eating Disorder Questionnaire. 23 This is a previously validated screening tool for detection of disordered eating. The scale is sensitive to the presence of different aspects of disordered eating, such as deliberately making one's self sick, distorted body image, and loss of control over eating. We included the SCOFF as we were interested in a possible relationship between exercise and eating disorder-related symptoms. 3,24 The measure of interest was total items endorsed on the SCOFF. Cambridge-Chicago Compulsivity Trait Scale (CHI-T). 9 This is a recently developed scale designed to capture the comprehensive aspects of compulsivity, viewed transdiagnostically. The scale comprises 15 items, each scored on a Likert scale of 1-4, from "strongly disagree" to "strongly agree." The total score is 60, with higher scores indicating higher compulsivity. The scale is sensitive to compulsivity across a range of pathologies, such as disordered gambling, substance use, and obsessive-compulsive symptoms. 9 The measure of interest was the total score. Padua Obsessive-Compulsive Inventory (Washington State Revision). 25,26 This is a 39-item questionnaire measuring the broad range of obsessive-compulsive symptoms, designed for use in normative and clinical populations. Each item has a Likert scale (0-4), with higher scores indicating more symptoms. The scale gives a total score out of maximum of 156, which was the measure of interest. Difficulties with Emotion Regulation Scale (DERS). 27 This is a 36-item questionnaire encompassing multiple aspects of emotional dysregulation, indicating difficulties controlling one's emotional responses relating to social interactions. 28 Each question is responded to on a scale of 0-5, ranging from "almost never" to "almost always." The measure of interest was the total score out of 180, with higher scores indicating more emotional dysregulation. Data analysis The demographic characteristics of the sample were summarized. The structure of the EAI was explored using exploratory factor analysis, with the optimal number of factors selected using scree plot combined with the inspection of eigenvalues. The potential clinical relevance of EAI factor scores was considered by undertaking Spearman's r correlations against quality-of-life scores on the BBQ, with p < 0.05 uncorrected being defined as statistically significant. 
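As a concrete illustration of the scree-plot/eigenvalue criterion just described, the following minimal sketch shows how the eigenvalues of the item correlation matrix can be computed and inspected. This is not the authors' analysis code; the column names and responses are hypothetical toy data.

import numpy as np
import pandas as pd

def scree_eigenvalues(items: pd.DataFrame) -> np.ndarray:
    # Eigenvalues of the item correlation matrix, sorted in descending order.
    corr = np.corrcoef(items.to_numpy(), rowvar=False)
    return np.linalg.eigvalsh(corr)[::-1]

def variance_explained(eigvals: np.ndarray) -> np.ndarray:
    # Proportion of total variance captured by each candidate factor.
    return eigvals / eigvals.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eai = pd.DataFrame(rng.integers(1, 6, size=(200, 6)),
                       columns=[f"eai_item{i}" for i in range(1, 7)])  # toy responses
    ev = scree_eigenvalues(eai)
    print("eigenvalues:", np.round(ev, 2))
    print("variance explained:", np.round(variance_explained(ev), 3))
    # Retain factors up to where the scree plot levels off (or eigenvalue > 1, by convention).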
To evaluate the links between problematic exercise and impulsivity/compulsivity and emotional dysregulation (plus relevant demographic characteristics), we used the statistical technique of partial least squares (PLS). PLS is a multivariate statistical technique that models the relationship between a number of X (explanatory) variables and one or more outcome (Y) variables through an iterative process. [29][30][31] This statistical approach is highly suited to situations such as cross-sectional analyses, in which there are relatively large numbers of variables, and these variables are correlated, hence breaching the statistical assumptions of many typical statistical approaches. Here, the Y variable of interest was factor score on the EAI, and the X variables of interest were all other measures (demographic characteristics and questionnaire scores). A separate PLS model was run for each EAI factor score type identified. The optimal model was selected using leave-one-out cross-validation and by minimizing the PRESS statistic. We used a two-stage process in order to identify measures significantly associated with problematic exercise, per previously established methodologies. 32 First, X measures with a variable importance parameter (VIP) < 0.8 were removed from the initial PLS model. Second, the distributions of model coefficients were characterized using bootstrap (1000 iterations). Explanatory variables with VIP > 0.8 whose 95% confidence intervals did not cross the null line in the final PLS model were deemed statistically significant. All statistical analyses were conducted using JMP Pro. Results In total, 642 people completed the study. The sample had a mean (standard deviation) age of 23.4 (3.2) years, were mostly female (64.7%), and were mostly of self-defined White Caucasian ethnicity (78.7%). The distributions of EAI item responses and total scores are provided in the Supplementary Material. Factor analysis based on scree plot inspection and eigenvalues (see Supplementary Material) indicated two factors for the EAI, accounting for 57.9% and 14.8% of the variance, respectively (cumulatively 72.3% of the variance). The loadings of each individual scale item onto these two factors are shown in Figure 1. It can be seen that factor 1 related to the majority of scale items (hereafter termed "general factor") and that factor 2 related more specifically to conflicts arising between the individual and their partner/family members due to exercise (hereafter termed "relationship conflict factor"). EAI general scores did not correlate with quality of life on the BBQ (r = 0.013, p = 0.740), whereas EAI relationship conflict scores did (r = −0.120, p < 0.001). The latter was also significant for all domains of functioning on the BBQ considered separately (all p < 0.001 per domain, except for creativity-related quality of life, which was significant but less so, with p = 0.0324). PLS yielded an optimal model capturing 58.6% of variance in the explanatory (X) variables and 13.2% of variance in the problematic exercise general factor. Figure 2 shows variables retained in the final model (i.e., those with VIP > 0.8); those explanatory variables that were statistically significant (p < 0.05, bootstrap) are shown with an asterisk. PLS yielded an optimal model capturing 47.1% of variance in the explanatory (X) variables and 6.2% of variance in the problematic exercise relationship conflict factor.
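The VIP-based screening step described in the Data analysis section can be illustrated with the short sketch below. It uses scikit-learn's PLS regression as a stand-in for the JMP Pro software actually used in the study and computes Variable Importance in Projection scores from the standard formula; the predictors, outcome and data are toy values, with only the 0.8 threshold taken from the text.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls: PLSRegression) -> np.ndarray:
    # VIP_j = sqrt( p * sum_a[ SS_a * (w_ja / ||w_a||)^2 ] / sum_a SS_a )
    t = pls.x_scores_      # latent scores, shape (n_samples, n_components)
    w = pls.x_weights_     # X weights, shape (n_features, n_components)
    q = pls.y_loadings_    # Y loadings, shape (n_targets, n_components)
    p, _ = w.shape
    ss = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0)  # variance explained per component
    w_norm = w / np.linalg.norm(w, axis=0, keepdims=True)
    return np.sqrt(p * (w_norm ** 2 @ ss) / ss.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(642, 12))                      # 12 hypothetical predictors
    y = 0.5 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(size=642)
    pls = PLSRegression(n_components=2).fit(X, y)
    vip = vip_scores(pls)
    print("VIP scores:", np.round(vip, 2))
    print("retained (VIP > 0.8):", np.where(vip > 0.8)[0])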
Figure 3 shows variables retained in the final model (i.e., those with VIP > 0.8); those explanatory variables that were statistically significant (p < 0.05, bootstrap) are shown with an asterisk. Discussion This study used a dimensional approach to investigate problematic exercise in young adults recruited from a general-population cohort. The percentage of the sample with an EAI total score >24, suggestive of overt exercise addiction based on previous validation, was 1.1%. As expected given the epidemiologic nature of the cohort, this is slightly lower than the prevalence rate of 3% found in a sample of habitual exercisers. 19 In young people, prevalence rates based on an equivalent version of this instrument have been estimated at 4% in school athletes, 8.7% in fitness attendees, and 21% in people with eating disorders. 33 Hence it should be borne in mind that the current sample mainly captured exercise that fell short of the full definition of exercise addiction. Nonetheless, this dimensional approach that includes subsyndromal people may provide useful insights into the full range of behaviors and their associations. We focused on the factor structure of a standard assessment tool for problematic exercise and how scale factor scores related to quality of life and psychological mechanisms. Factor analysis of the EAI indicated the existence of a "general" factor and a "relationship conflict" factor. Both factors were associated with disordered eating (SCOFF questionnaire), impulsivity (several SUPPS subscales), and compulsivity (CHI-T scores). The relationship conflict factor score was additionally associated with worse quality of life, emotional dysregulation, and obsessive-compulsive personality traits. Problematic exercise scores were not significantly influenced by the demographic characteristics of age, gender, or ethnic grouping. The significant relationship between disordered eating as measured by the SCOFF instrument and problematic exercise is in keeping with a substantive body of literature. Studies have found elevated rates of excessive exercise in people with eating disorders or disordered eating, for example, ref. [3]. In some studies, the prevalence of eating disorders has been found to be elevated in athletes as compared to the background population, 17 though this relationship may not be universal. For example, disordered eating has been found to be higher in sports such as dancing, wrestling, and body building, but not necessarily equally so across genders. [34][35][36] In a sample of runners, those at risk of an eating disorder, based on a SCOFF cut-off, had significantly higher problematic exercise scores on the EAI. 17 Here, disordered eating on the SCOFF was significantly associated with both problematic exercise factors on the EAI. Impulsivity is not a unitary phenomenon but rather encompasses a number of separable domains. We used the SUPPS, which dissects sensation seeking, lack of premeditation, lack of perseverance, negative urgency, and positive urgency. 21 In a prior study using the UPPS, sensation seeking, positive urgency, and negative urgency all appeared to play a role in problematic exercise. 14 Here, on the SUPPS, the EAI general factor was associated with higher sensation-seeking and with lower lack of perseverance (i.e., with higher perseverative tendencies).
Thus, this propensity toward excessive exercise per se may be more common in people who enjoy the risky element of sports, and in those who have a tendency toward perseveration, that is, repetition of habits, which would be more related to the construct of compulsivity than impulsivity. The relationship conflict EAI factor had different associations with impulsivity; it was significantly related to positive and negative urgency, as well as to emotional dysregulation. Positive urgency refers to the tendency to engage in acts due to a positive mental state (e.g., excitement), whereas negative urgency refers to a tendency to engage in acts due to unpleasant emotional states (e.g., low mood or anxiety). 37 Emotional dysregulation reflects difficulties handling emotionally charged social situations. 28,38 Collectively these data indicate that the relationship conflict component of problematic exercise is associated with impulsive responses to extreme emotions (positive and negative urgency), which indeed may be more commonly experienced in people with emotional dysregulation (short-term emotional lability). In a prior longitudinal study conducted in adolescents, emotional dysregulation predicted problematic exercise 12 months later. 15 This raises the possibility that, in fact, those with problematic exercise are undertaking said exercise in order to deal with emotional dysregulation. Turning to compulsivity, trans-diagnostic compulsivity on the CHI-T was significantly associated with both general and relationship conflict factor scores from the EAI. This tendency toward repetitive, ingrained habits could reflect a propensity (vulnerability) toward the development of problematic exercise. Longitudinal research would be valuable to test the idea that elevated compulsivity may predispose toward a range of later problem behaviors, including excessive exercise. The number of diagnostic items endorsed for obsessive-compulsive personality disorder was significantly associated with higher relationship conflict problematic exercise scores, but not general scores. This highlights that the use of trans-diagnostic markers, rather than scales specifically relating to one disorder, may be more sensitive to the detection of vulnerability issues. Several limitations should be considered. Online surveys have limitations compared to in-person clinical assessments: instruments may be less accurate for measuring psychopathology, and there may be less quality control. Therefore, the findings should be replicated using more rigorous in-person clinical assessments. Online surveys are subject to potential sampling bias (e.g., more extensive users of the Internet may be more likely to participate in online surveys, whereas this may not impact participation in an in-person study). On the other hand, online assessment has the advantage of being extremely convenient for study participants, since it does not involve travel and can be done at a convenient time. We assessed problematic exercise using the EAI. However, in future work it would be valuable to include a broader range of rating scales pertaining to the measurement of different aspects of problematic exercise. While this study identified two factors as the optimal structure to account for the current EAI data, prior work suggested a one-factor solution. 19
Our current data do not contradict this previous finding because (i) a one-factor solution would have been reasonable with the current dataset if one focused solely on eigenvalues as the selection criterion; (ii) the majority of variance here was captured within the first extracted factor; and (iii) in some populations, a one-factor solution may be preferred. 33 Hence, depending on the focus of a given study and its purpose, one- or two-factor solutions would both appear reasonable. Nonetheless, we believe that the two-factor solution here revealed some interesting insights in relation to how distinct aspects of problematic exercise may have different associations with quality of life and underlying psychological processes. Our use of a normative sample can be viewed as a positive feature, since results may be more likely to generalize to the background population as compared, for example, to studies that recruit people in particular clinical settings, or from gyms. The current cohort was largely free from full "exercise addiction" by conventional criteria. Future studies should also use case-control designs to explore the full disorder rather than a dimensional measure of exercise. Another limitation is that we did not evaluate the psychometric properties of the various instruments. Further research is needed to confirm whether these current findings do indeed generalize to other groups, including cohorts with higher degrees of pathology. Finally, we did not examine whether problematic exercise was secondary to body dysmorphic disorder or eating disorders. Conclusions We found that problematic exercise is associated with both impulsivity and compulsivity, even when controlling for the interplay between these and other variables using the statistical technique of PLS. The aspect of problematic exercise relating to relationship conflict was additionally associated with worse quality of life and with more maladaptive personality traits (obsessive-compulsive personality disorder and emotional dysregulation). An interesting possibility suggested by these data is that people with emotional dysregulation, and perhaps impulsive/compulsive tendencies, may engage in exercise as a compensatory mechanism or coping strategy, and that exercise, even when scoring on the EAI, need not necessarily be viewed as pathological. In a laboratory-based study, acute exercise (30 min of stretching or running) dampened emotional dysregulation in response to a later mood challenge. 16 The definition of "problematic exercise" and its potential clinical utility, therefore, require more research, and it is for this reason that we used a dimensional approach in this study. As with many repetitive behaviors, drawing guidelines as to what is healthy or moderate versus pathological remains challenging.
2019-03-28T13:02:32.124Z
2019-03-27T00:00:00.000
{ "year": 2020, "sha1": "68a3ce133d097510aa0512b0be0602325057d559", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/22704A139BB31C067D165BEEF3B5E297/S1092852919000762a.pdf/div-class-title-is-problematic-exercise-really-problematic-a-dimensional-approach-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "35e92f60e5a68915ec0452ea3da2e3dd4924f13c", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
229019569
pes2o/s2orc
v3-fos-license
Research on the Reform of Computer Education and Teaching Mode Based on Data Mining Data mining is a cross-disciplinary field that moves the use of data beyond low-level, simple queries toward extracting knowledge from data and providing decision support. Reforming the computer education teaching model on the basis of data mining can give teachers effective guidance and decision-making information and provide strong data support for students' development. This paper analyzes in detail the existing computer education and teaching modes and the application of data mining to them. Introduction With the rapid development of computer science and technology, computer education has also developed rapidly. To keep pace with the internet and big data era, the reform of the computer education and teaching model has received increasing attention. The advent of the big data era provides abundant data sources. As the core technology of big data, data mining aims to discover hidden, previously unknown information in large amounts of data that may interest users and have potential value for decisions [1]. Using data mining to uncover useful information and hidden patterns in large amounts of data can therefore help guide computer education. Applying data mining to the reform of the computer education model, and using new methods to solve the problems in the existing education and teaching model, meets the requirements of the times. The research on a data mining-based computer education model therefore explores new teaching methods and establishes corresponding teaching ideas and teaching models. The data resource processing platform for data mining of basic computer courses is shown in Figure 1 below. Lecture-Based Mode. In computer education and teaching, the lecture-based teaching mode is very common. At present it is the most widely used mode in computer teaching: the teaching platform is classroom teaching, and most computer teachers adopt this mode. However, judging from its effect, lecture-based teaching exerts little binding force on students, which leads to low participation awareness; it also fails to create a realistic teaching scenario, so it is difficult to achieve good results with this mode. Collaborative Mode. The collaborative computer education and teaching mode requires students to interact fully with their classmates or teachers on the teaching platform, so that students can realize their self-worth while receiving computer education, thereby improving the quality of learning [2]. From the teaching point of view, for the collaborative mode to play its role fully, a good teaching environment is needed: hardware that meets teaching needs, a good resource environment, and training that starts from each student's personality, guiding students to use a variety of resources and to solve problems collaboratively. As student collaboration strengthens, the effectiveness of computer education and teaching also increases. Discussion Mode.
The implementation of a discussion-type computer education and teaching model means that the teaching content revolves around problems: teachers ask questions and students solve them through discussion. During the discussion of a problem, students need to put forward their own views and work with the other students in the discussion group to solve it, which requires good language skills and the ability to cooperate. Because the number of students in a discussion group is limited, every student is required to participate actively in the group discussion, which inevitably exerts some restraint and control over student behavior. In the process of discussion, students' thinking ability, cooperative skills and language skills can be cultivated. Throughout the discussion, teachers should provide organizational guidance, supervisory management and behavior control so that teaching achieves the desired goals. The Principle of Data Mining Technology in Computer Education and Teaching The data mining process finds implicit, useful knowledge in large amounts of aggregated data. The basic steps are shown in Figure 2. Data mining is a new information processing technology that has been widely applied in teaching management. Applying data mining to the teaching model of computer education can, on the one hand, promote the improvement, development and necessary reform of the educational system. On the other hand, it can objectively reveal problems in the teaching management of colleges and universities, provide an important basis for formulating school policies, and further stimulate students' enthusiasm for learning, ultimately improving teaching quality and effectiveness. With the advance of education informatization, applying data mining to education and discovering hidden, useful knowledge in large volumes of educational data to guide and develop education has become an important and pressing research subject. The teaching goal of the computer course is to enable students to apply computer and information technology in their fields of work and to become composite talents who are familiar with both their professional business and computer application technology. Data mining draws on artificial intelligence, databases and statistics; classification algorithms assign new data to known classes, which gives the results application value as guidance for business systems. The decision tree is a widely used classification algorithm whose principle is as follows. Let A be the training data set; the entropy of A, which is an expected value, can be expressed as Info(A) = -∑_{i=1}^{m} p_i log2(p_i), where m is the number of classes in the training data set, C_i (i = 1, 2, ..., m) are the categories in the training data set, |C_i| is the number of samples in class C_i, and p_i = |C_i| / |A| is the probability that any sample belongs to category C_i. The algorithm is simple and easy to implement, so it has strong guiding significance. Application in Teaching Knowledge Content Knowledge points in computer teaching are scattered, yet closely interdependent. Since learning is a gradual process, there are correlations and sequential relationships between knowledge points.
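To make the entropy expression above concrete, here is a minimal, self-contained sketch (illustrative only, not code from the cited work) that computes Info(A) and the information gain of a candidate split for a toy set of pass/fail labels; the attribute used for the split is hypothetical.

import math
from collections import Counter

def entropy(labels):
    # Info(A) = -sum_i p_i * log2(p_i), with p_i the fraction of samples in class C_i.
    total = len(labels)
    return -sum((n / total) * math.log2(n / total) for n in Counter(labels).values())

def information_gain(labels, groups):
    # Entropy before the split minus the weighted entropy of the resulting subgroups.
    total = len(labels)
    remainder = sum(len(g) / total * entropy(g) for g in groups)
    return entropy(labels) - remainder

if __name__ == "__main__":
    # Hypothetical example: whether students mastered a knowledge unit (pass/fail).
    labels = ["pass"] * 9 + ["fail"] * 5
    print(round(entropy(labels), 3))          # ~0.940 for a 9/5 class split
    # Candidate split on a hypothetical attribute, e.g. "attended lab sessions".
    attended = ["pass"] * 7 + ["fail"] * 1
    missed = ["pass"] * 2 + ["fail"] * 4
    print(round(information_gain(labels, [attended, missed]), 3))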
In the teaching of knowledge content, failure to master a particular knowledge point will affect the learning of subsequent knowledge points. As information technology has spread through colleges and universities, many of them have begun to use online teaching systems, and the research, development and application of various supplementary educational information platforms have greatly improved work efficiency. Application in Performance Analysis Data mining can be used to mine the wrong-answer information in an online answering system and find useful association rules, which guide teachers to locate gaps in their teaching and improve teaching quality [3]. Experiments show that this approach can effectively find correlations between the various errors. For test paper analysis, association rules can be applied to the test paper database to obtain the validity, reliability and score distribution of a specific test. Data mining can also be used to analyze students' scores on previous examinations and their scores on each part of the papers, in order to assess student progress, learning obstacles, and the mastery of knowledge points and knowledge units [4]. This includes using z-score charts for horizontal comparisons of a student's test scores across subjects at the same point in time, using z-score curves and correlation coefficient analysis of two score columns for longitudinal comparison of a student's scores across successive tests, and using the score distribution curve to draw a frequency distribution map of student performance. In teaching management, commonly used data mining techniques are applied to teacher information, student information and elective information to determine how well students have mastered different knowledge points, so that student learning can be understood quickly and accurately. In addition, appropriate mining methods are used to explore hidden relationships in the data and to improve the level of teaching management, so that teachers and school decision makers can identify problems in the teaching process. Application in Teaching Evaluation Data mining is a typical representative of computer technology, and its ability to process different kinds of data has been recognized by many users [5]. With the application of computer technology in the modern education system, data mining has begun to be integrated into the testing and evaluation of computer capabilities, which has brought great convenience to evaluation work. Teaching evaluation based on students' assessment of teachers has greatly promoted teaching reform and the improvement of teaching quality. Through data mining, teachers' personal information, quality and performance can be assessed. Mining the teaching evaluation data makes it possible to query the correlations between teaching effect and teachers' work attitude and work skills, and thereby to characterize the teaching effect of individual teachers.
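As a small illustration of the teaching-evaluation mining just described, the sketch below correlates teacher-related factors with class mean scores. All column names and values are invented for the example, and a simple correlation stands in for the association-rule analysis used in the cited work.

import pandas as pd

def teaching_effect_correlations(records: pd.DataFrame) -> pd.Series:
    # Correlate each teacher factor with the class mean score (Pearson r).
    factors = ["years_experience", "responsibility_rating", "skill_rating"]
    return records[factors].corrwith(records["class_mean_score"])

if __name__ == "__main__":
    records = pd.DataFrame({
        "years_experience":      [2, 5, 8, 12, 15, 20],
        "responsibility_rating": [3.1, 3.8, 4.0, 4.5, 4.6, 4.8],
        "skill_rating":          [3.5, 3.6, 4.2, 4.1, 4.7, 4.9],
        "class_mean_score":      [61, 66, 72, 75, 78, 83],
    })
    print(teaching_effect_correlations(records).round(2))
    # Strong positive correlations would support allocating experienced, responsible
    # teachers where they are most needed, as discussed in the text.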
Based on the relationship between teaching evaluation results and teacher performance, a class teacher can be assigned at a reasonable level so that students can maintain a good learning state; this provides decision-support information for the teaching department, promotes better teaching and improves teaching quality. Based on research into association rule mining algorithms, the author analyzes in detail the influence of teacher factors on students' test scores and on students' evaluations of teaching. The results show that teachers with rich teaching experience and a good sense of responsibility help students achieve good academic performance. Application in Cultivating Students' Interest in Learning. Clustering analysis divides a set of data into several categories based on similarity and difference; the aim is that data belonging to the same category are as similar as possible, while data belonging to different categories are as dissimilar as possible [6]. Students can be classified into different types by clustering techniques. The first type is students who do not often go online and only do so occasionally; such students rarely use information technology to assist their computer-based learning and are not very interested in computer fundamentals. The second type is students who also do not access the internet frequently, but who believe that computer fundamentals are very useful for their learning and life. Such students ask for improvements to the traditional methods of computer-based teaching; they believe it is necessary to establish collaborative learning groups, and they find task-driven teaching very helpful for their learning. Summary The reform of the computer education teaching model based on data mining helps to solve the problems faced by basic computer education, and can recommend personalized learning programs to
2019-08-11T02:45:16.343Z
2018-06-01T00:00:00.000
{ "year": 2020, "sha1": "8884a17f955d5be4b9a1d37ad582439a08a77cfe", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1742-6596/1648/2/022196", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8f0bddbc8720acbf178c613b193860b11ae4fabc", "s2fieldsofstudy": [ "Computer Science", "Education" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
52079734
pes2o/s2orc
v3-fos-license
Networking chemical robots for reaction multitasking The development of the internet of things has led to an explosion in the number of networked devices capable of control and computing. However, whilst commonplace in remote sensing, these approaches have not impacted chemistry due to the difficulty of developing systems flexible enough for experimental data collection. Herein we present a simple and affordable (<$500) chemistry-capable robot built with a standard set of hardware and software protocols that can be networked to coordinate many chemical experiments in real time. We demonstrate how multiple processes can be done collaboratively with two internet-connected robots, exploring a set of azo-coupling reactions in a fraction of the time needed for a single robot, as well as encoding and decoding information into a network of oscillating reactions. The system can also be used to assess the reproducibility of chemical reactions and discover new reaction outcomes using game playing to explore a chemical space. Robot design and concept: The robot's computational core is a pcDuino3 running the Linux Ubuntu operating system, which executes home-built Python code to control a number of pumps and a webcam. Access to the internet is achieved via a wired ethernet or WiFi connection. For everyday use the board was also connected to a monitor, mouse and keyboard. Liquid handling is performed by a set of peristaltic pumps; each pump is turned on for the duration of the required addition time. The pumps are connected to the reagents and the reaction flask through Tygon® tubing. Generally five pumps are dedicated to adding the reagents, one adds water for washing and the last one is used to empty the reaction flask. Data are acquired with a USB webcam able to record images and video of the reactions. The reactions are performed in a standard 14 ml glass vial, magnetically stirred with a home-built stirrer using a small fan. The robot has been designed to be as simple and affordable as possible and can therefore be assembled in just a few hours. Peristaltic pumps Control over the solutions was performed using a set of peristaltic pumps. Each pump is driven at 12 V DC and is connected to the driver board mounted on the pcDuino. In this work we used the model KFS-HB2B06M, where M is either R, B, G or P, referring to the pump colour (Red, Blue, Green, Purple). The pumps are designed to have a flow rate of 4 ml/min in a single direction. Since a loss in precision over time was observed, the pumps were recalibrated every week and after any maintenance operations. pcDuino board The robot runs on a pcDuino3, powered by a 5 V (2 A) power supply fed through a micro USB cable. This board features the following: • CPU: AllWinner A20 SoC, 1 GHz ARM Cortex-A7 dual core. Power supply unit The robot is powered by a 5 V (2 A) DC power source. However, the peristaltic pumps are driven by a 12 V (1 A) DC source. In this work, a 500 W ATX power supply unit was used. Software The pcDuino3 runs the Ubuntu operating system. The platform is controlled by a dedicated program written in Python. Because each project part involves specific experiments, each was completed using a dedicated program. However, the low-level software is the same and is composed of three main parts with their respective external libraries: Pump control: This is based on gpio, a common library used to control the pins of the pcDuino and therefore operate the pumps.
Since there is no feedback from the pumps, the code converts the amount of solution required into a time interval used to run the pump. This time interval is derived from a calibration process in which the flow rate of each pump is tested, verified and saved as a variable. Webcam control: This is based on the OpenCV library. The webcam is accessed by the computer and provides images and videos of the reaction. Further image/video analysis is discussed in the respective project sections. Network management: This uses the Twython library and controls the networked part of the platforms. It allows the platform to update its state by sending a tweet on its account and to scan other accounts for synchronization and collaboration. Coordinator: The software core of each project section is a "coordinator" program. It manages all the experiment components: physical reactions, image analysis, network synchronization and the search algorithm. Figure 5: Description of the general operational software. At the top the common, lower-layer code is reported. Based on that, we developed specific programs for each project; they are discussed in the relevant sections. Colour detection For colour determination, the recorded image frames are converted into the HSV colour domain. To allow the calibration of each colour, each colour is associated with a specific HSV range. In each experiment a region of interest is analyzed and the pixel values are compared with the colour ranges. The colour with the highest pixel count is considered the solution colour (red, orange, yellow, blue, colorless, black). Individual colour counts are saved and stored in a csv file for post-processing. Collaborative algorithm Two identical and physically separated platforms were used to explore the 117-reaction grid. They run the same algorithm, and the aim was to find a blue reaction using a random search, sharing the results in real time using Twitter to reduce the total time. The algorithm starts by selecting a random reaction and sending a Tweet with the reaction parameters. The system then performs the selected reaction and saves 4 frames. These are analyzed on board, the database is updated and an "end" Tweet with the results is sent. If a blue reaction is not yet present in the database the board restarts with a new random reaction; otherwise it sends a "stop" Tweet and stops. A separate thread in the background checks the other board's Tweets every 5 minutes and updates the database with those reaction results. In this way both boards avoid performing the same reaction twice. This script was used to look for a blue reaction out of 117 total combinations. Over 14 sequences, blue was found after 15.1 reactions on average. The theoretical number of reactions necessary for two platforms sharing results and looking for 3 blue reactions out of 117 is 19.5. Plot the oscillations In order to observe the behaviour of the oscillation period over time, a script for data processing was created; its output is shown in Supplementary Figure 11. Predicting the chemical influence on oscillation period In order to predict the behaviour of the oscillation period when small amounts of water and potassium bromate are added, we monitored several reactions while constant, regular additions were made. By processing the results, it was possible to obtain two functions that correlate the amount of material added with the change in oscillation period, within a reasonable time window and error.
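Two of the low-level routines described above, time-based dosing with calibrated pumps and HSV-based colour classification with OpenCV, can be sketched as follows. This is a minimal illustration rather than the authors' platform code: the flow rates, the GPIO helper and the HSV ranges are placeholders, not values from the paper.

import time
import cv2
import numpy as np

CALIBRATED_FLOW_ML_PER_MIN = {"red": 4.1, "blue": 3.9, "green": 4.0, "purple": 4.2}  # hypothetical calibration

def set_pump_pin(pump: str, on: bool) -> None:
    # Placeholder for the GPIO call that switches a pump driver on or off.
    print(f"pump {pump} {'ON' if on else 'OFF'}")

def dispense(pump: str, volume_ml: float) -> None:
    # With no flow feedback, convert the requested volume into a run time and run the pump.
    run_seconds = volume_ml / CALIBRATED_FLOW_ML_PER_MIN[pump] * 60.0
    set_pump_pin(pump, True)
    time.sleep(run_seconds)
    set_pump_pin(pump, False)

# Illustrative (lower, upper) HSV ranges per colour; the real platform uses calibrated values.
HSV_RANGES = {
    "red":    ((0, 120, 70), (10, 255, 255)),
    "yellow": ((20, 120, 70), (35, 255, 255)),
    "blue":   ((100, 120, 70), (130, 255, 255)),
}

def classify_colour(frame_bgr: np.ndarray) -> str:
    # Return the colour whose HSV mask covers the most pixels in the region of interest.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    counts = {name: cv2.countNonZero(cv2.inRange(hsv, np.array(lo), np.array(hi)))
              for name, (lo, hi) in HSV_RANGES.items()}
    return max(counts, key=counts.get)

if __name__ == "__main__":
    dispense("red", 2.0)                          # roughly 30 s at about 4 ml/min
    frame = np.zeros((100, 100, 3), dtype=np.uint8)
    frame[:] = (200, 50, 50)                      # BGR: mostly blue pixels
    print(classify_colour(frame))                 # expected: "blue"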
The first function gives an estimate of the amount of water to add in order to reach a specific period. Since it is referenced to the start of the reaction, for real-time additions the current period must also be taken into account. It is easy to see that the empirical constant k is irrelevant, leaving the function used to predict water additions, which is based on the difference between the current period and the goal period. When bromate is added the period becomes shorter (see Figures 18 and 19). Using the data obtained in multiple addition tests, we obtained the corresponding empirical function for bromate. 4. Inorganic Stage one - Collaboratively explore a chemical space At each stage of synthesis and analysis both platforms update shared network files for the other to read and proceed accordingly. When one platform selects a reaction volume at random to explore, the other acknowledges this and removes it from its own series before continuing with its own choice. Conditions that have produced crystals are stored by both platforms for repetition later. The flow diagram (Supplementary Figure 17) describes the collaborative process between the two platforms, the leader and the follower: different parameters are tested by the two systems sharing the workload. Stage two - Repetition of successful conditions The successful reaction conditions from the collaborative stage are compiled and repeated in order to establish the reproducibility of the chemistry/crystallization. One set of conditions is chosen and both platforms perform repeat reactions. Once enough data have been collected to establish an average percentage reproducibility of obtaining crystals, that set is complete and the next set of reaction conditions is begun; see Supplementary Figures 18 and 19, which outline stage 2, the assessment of the reproducibility of crystal-producing reaction conditions. Grid search of reaction conditions Eight reaction series, each with a varying Mn:W ratio, were performed collaboratively by the two platforms using the methods detailed above (Supplementary Table 1). Each reaction series varied in acid volume from 1.4-1.54 mL HCl (approximately pH 3-6.5), and each reaction was monitored by webcam for crystal formation within 2 hours of reaction completion. The full grid was repeated 3 times to explore the space more thoroughly. The results of each grid can be seen below as 2D colour maps. Each of the conditions marked in red produced crystals at least once during these automated runs, and all were repeated to assess the likelihood of growing crystals again. If no crystals had been produced after 15 repeat reactions, the condition was abandoned and the next set of conditions was started. A significant number of these crystallizing conditions never again produced crystals. Others varied from 10-50% in frequency of crystal formation across up to 48 reactions (Supplementary Table 2). Agent based simulation To demonstrate the importance of information sharing between chemical robots, we first developed an agent-based simulation comparing search strategies; strategies without information sharing proved less advantageous. The individual strategy is useful with a small number of robots but has diminishing returns with increasing numbers of agents: an increase from one robot to two yields a 50% improvement, while an increase from two robots to three yields a lower improvement of 33.3%, and so on. The reason the performance keeps rising in Supplementary Figure 23 (right) lies in the parallel search: when performing the searches in parallel with multiple robots, although only one robot (most likely) reaches the goal, the rest of the robots still perform their actions, and these excess actions are therefore wasted.
The amount of excess increases with the number of robots in use. It is important to note, however, that this is only an excess of actions, so the waste is one of resources: the time it takes to find the goal is always better with more robots and is unaffected by this issue. The simulations show that the collaboration strategy is by far the most efficient and that, as the number of available robots increases, the benefit of using collaboration increases as well. Supplementary Figure 23 (right) shows that for all strategies the total number of searches that had to be conducted decreases. With the y axis logarithmic, the constant slopes show that the improvement in searching is exponential. The individual strategy will always be better than the random one, no matter the number of robots. 6. Game General Overview Two automated platforms were tasked with playing a game of Hex. New/rare reaction results allowed the player to use the optimum movement (determined algorithmically, as described later), with uncommon/common results allowing only sub-optimal/random movements. Losing a game triggers a change in strategy for the losing platform, in this case an expansion of the reaction grid the player was allowed to explore. The idea was to show that a game outcome could drive a player either to change or to maintain its current strategy in the hope of making more chemical discoveries in future. The chemistry chosen for this project was the same as in the Organic section (Methods part I, Organic), and results were gathered and analyzed via webcam. Decision Making The goal for players in a Hex game is to connect one side of the board with the opposite side using a continuous line of that player's color. The game cannot end in a draw. From a randomly assigned first board position, or from the current state of the board, the optimal movement was calculated using Monte-Carlo simulations with the goal of completing the game. Once an optimal movement has been calculated, the results of the chemistry determine whether the player may use it. Color rarity versus move selection allowance is determined by the following:
- Unique/rare: colors observed up to 4 times allow the optimal movement.
- Uncommon: colors observed between 5 and 7 times allow a sub-optimal movement.
- Common: colors observed more than 7 times allow only a random movement.
Sub-optimal movements were defined as a position beside, above or below the optimal one and were selected based on availability. Communication Between Platforms In order to keep both players in sync with one another, a remote server was developed to handle all communications between the platforms. Each platform selects a reaction and processes the information through image analysis and the decision-making algorithm as described previously. The selected move is then sent from the platform to the remote server for processing. All logic for the game, such as updating board movements, is handled by the server. Once an iteration of the game has been completed, the server broadcasts a message to all connected clients detailing who has won the game. The players then adjust their strategies accordingly. The reasoning behind developing a remote server system for this task was a separation of concerns. By separating the game logic from the platforms, as opposed to each platform having its own representation of the game, we minimize the risk of the platforms falling out of sync with one another and producing inaccurate results. We also prevent potential race conditions caused by platforms attempting to access a single file at the same time.
A single server with file access eliminates this risk. The design of the server allows for multiple concurrent connections and data processing, which opens the possibility of increasing the number of networked platforms working towards a common goal. Strategy Both players begin the first game in the sequence by selecting reactions from an identical chemical space (Supplementary Figure 24, center). Given that the game sequence proceeds one platform after another, the original reaction space restricts the total reaction number to 81 for each player (9 grids of 3x3 reagent volumes). A typical game sequence can consist of between 2 and 5 completed games. Shown in Supplementary Figure 25 is a game sequence of 4 complete games (the 5th game was incomplete) in which the losing strategy was adopted by player 2 after game 1. Against the logical expectation, the losing strategy, whilst allowing player 2 to win game 2, did not result in many new unique discoveries. However, when player 1 adopted the losing strategy after game 2, its unique discovery count increased significantly, but this did not result in a victory for the remainder of the game sequence. This can be explained simply by the fact that a game is still based on probability, and a new advantageous strategy will work most, but not all, of the time. Supplementary Figure 25: Example game series. A 4-game sequence showing that adoption of a new strategy allows, over time, an increased number of chemical discoveries.
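The strategy comparison from the agent-based simulation section can be reproduced in miniature with the toy simulation below (not the authors' simulation code). Two robots search the 117-reaction space for 3 target ("blue") reactions either independently, so they may repeat each other's work, or collaboratively, sharing every result so that no reaction is ever run twice.

import random

N_REACTIONS, N_TARGETS = 117, 3

def run_search(n_robots: int, share_results: bool, rng: random.Random) -> int:
    # Return the total number of reactions performed before all targets are found.
    targets = set(rng.sample(range(N_REACTIONS), N_TARGETS))
    tried_by = [set() for _ in range(n_robots)]   # each robot's own history
    shared = set()                                # results broadcast between robots
    found, total = set(), 0
    while len(found) < N_TARGETS:
        for robot in range(n_robots):
            known = shared if share_results else tried_by[robot]
            remaining = [r for r in range(N_REACTIONS) if r not in known]
            if not remaining:
                continue
            reaction = rng.choice(remaining)
            tried_by[robot].add(reaction)
            shared.add(reaction)
            total += 1
            if reaction in targets:
                found.add(reaction)
            if len(found) == N_TARGETS:
                break
    return total

if __name__ == "__main__":
    rng = random.Random(0)
    for label, share in (("individual", False), ("collaborative", True)):
        runs = [run_search(2, share, rng) for _ in range(2000)]
        print(label, round(sum(runs) / len(runs), 1), "reactions on average")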
2018-08-25T13:55:47.529Z
2018-08-24T00:00:00.000
{ "year": 2018, "sha1": "f058918710f118d5fe7a2dae7f7b04071f2b7639", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-018-05828-8.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6e2c5255f97db46cc440069a668360abe8540604", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
236253155
pes2o/s2orc
v3-fos-license
Physical Education for Manual Wheelchair Users in Quebec: A Description of Teacher Training and Child Integration in Elementary and High Schools Pediatric manual wheelchair users (PMWUs) do not meet the Canadian 24-hour movement guidelines. Consequences are significant in terms of physical, emotional and cognitive health. Physical education (PE) classes are an important facilitator to increased physical activity among children and youth with disability. However, many teachers do not feel adequately prepared to provide education to children with special needs. Indeed, many Quebec (Canada) schools do not meet the recommended minutes of PE per week and the provincially mandated education program targets typically developing children only. Study objectives were to describe (1) the PE teachers’ training in adapted physical activity and (2) the integration of PMWUs in PE classes within Quebec elementary and high schools. An online survey asked 47 questions about: (1) demographic and descriptive information, (2) integration of PMWUs in PE classes, (3) teaching strategies (4) evaluation methods, (5) use of reference tools and (6) interests and opinions. Complete responses were received from 136 PE teachers. Forty-nine percent of PE teachers received adapted physical activity training through their university curriculum and 14.9% took additional training after university. Eighty-six percent of PE teachers were interested in receiving education about manual wheelchair skills training. While 97% of PMWUs participate with or without assistance in PE classes, some PE teachers were not satisfied with how they adapted their PE courses for PMWUs, reporting that adaptations to classes and evaluations are primarily based on professional judgment. Most PE teachers who did not currently teach PMWUs would not feel equipped to adapt their classes to do so. More work is needed to develop programs to facilitate PE teacher training. Improved PE courses may in turn increase physical activity among PMWUs. Introduction Despite global (i.e., World Health Organization Global Recommendations on Physical Activity for Health [1]), national (i.e., Canadian 24-Hour Movement Guidelines [2]) and provincial (i.e., Quebecers on the Move [3]) physical activity recommendations, physical inactivity is the 4th leading risk factor for global mortality [1]. The Canadian 24-hour movement guidelines for children and youth recommends a minimum of 60 minutes per day of moderate to vigorous physical activity [2]. Regrettably, only 39% of Canadian children and youth are meeting this recommendation [4]. Among those with disabilities, the percentage decreases to 11% [5], confirming previous findings that children with disabilities are less physically active than their typically developing peers [6]. According to the Régie de l'assurance maladie du Québec (RAMQ), there were 1612 children and youth (ages 4 to 21) who used a manual wheelchair for mobility in 2020 [7]. Using the 11% estimate, approximately 1435 pediatric manual wheelchair users (PMWUs) in Quebec are not meeting the Canadian movement guidelines. The consequences of physical inactivity for the general population are significant and indeed amplified for children with disabilities, not only physically (e.g., increased risk for cardiovascular disease, diabetes, obesity), but also emotionally and psychologically (e.g., behavior problems, depression) and cognitively (e.g., decreased attention and sensory processing) [8]. 
As part of the Quebecers on the Move [3] policy, the province of Quebec has an objective to increase by 20% the proportion of young people between the ages of 6 and 17 who meet the minimum recommended amount of physical activity by 2027. Studies from the Netherlands have identified participation in physical education (PE) classes in school as an important facilitator to increased physical activity among children and youth with disability [9,10]. It is thus one solution to improving physical activity among PMWUs. However, in a recent survey [11], Quebec teachers reported that they do not feel that their original education (Bachelor of Education) adequately prepared them for educating students with special needs, that there are insufficient funds for professional development and that there is a lack of support for educators. This survey also highlighted that there is a poor or even lack of evaluation of the needs and abilities of students with special needs resulting in inadequate or inappropriate services, thus jeopardizing educational success. From the PE perspective, it is not surprising that there would be a lack of evaluation of the needs of PMWUs as the existing national and provincial evaluations do not accommodate the needs of children and youth who use a manual wheelchair. For example, at the national level, the Canadian Assessment of Physical Literacy (CAPL) [12] is being used to monitor the physical literacy of children (i.e., 'the motivation, confidence, physical competence, knowledge and understanding to value and take responsibility for engagement in physical activities for life') [13] and to develop personal, individualized PE programs. However, the CAPL domain that measures the fundamental motor skills required for physical competence is not adapted for PMWUs. Specifically, 5 of 7 items (i.e., two-foot jumping, sliding, skipping, one-foot hopping and kicking) cannot be completed by PMWUs. Similarly, at the Quebec provincial level, the Physical Education and Health program [14] doesn't take into consideration the physical limitations of PMWUs. The program consists of three interrelated competencies: 1. performs movement skills in different physical activity settings; 2. interacts with others in different physical activity settings; and 3. adopts a healthy, active lifestyle. The movement skills that are assessed under the locomotor category include walking, running and jumping and no recommendations are provided to adapt these essential motor skills to assess PMWUs. At present there is a paucity of data describing PMWUs' integration in PE classes in elementary and high schools in Quebec. Given the lack of guidance for the adaptation of PE for PMWUs in existing programs, it is likely that there are differences in how well PMWUs are integrated into PE, leaving opportunities to increase participation in physical activity. Thus, the objectives of this study were to describe (1) the PE teachers' training in adapted physical activity and (2) the integration of PMWUs in PE classes within Quebec elementary and high schools. Design A descriptive cross-sectional study was conducted using an online survey. Ethical approval for this study was obtained from the Sainte-Justine University Hospital research ethics committee (#MP-21-2020-2422). 
Participants and Recruitment A convenience sample of PE teachers in elementary and high schools (regular and specialized) in Quebec, Canada, was recruited using advertisements shared via websites, online newsletters and social networks frequented by Quebec PE teachers (e.g., Fédération des commissions scolaires du Québec, 100 degrés organization, PE Facebook pages, Quebec's Ministry of Education and Higher Education). The advertisement was shared in both English and French languages according to the language of the site, newsletter and network. The survey was launched in June 2020 and remained open until September 2020. Participants provided informed consent by completing and submitting the survey. Measurement The survey was co-constructed by our research team and a team of school-based collaborators that included two occupational therapists, two physical therapists, two PE teachers and one school-based rehabilitation program manager. All team members were directly involved in the rehabilitation and inclusion of PMWUs in physical activity through their daily job. The final version of the survey was based on two, 3-hour survey development meetings plus iterative feedback from our collaborative team members and pilot testing of the online version by five professionals in the PE domain. The 6-section survey was formatted using Survey Monkey and contained a total of 47 questions including close-ended questions, (i.e., multiple choice, yes/no questions and Likert scale questions), open-ended, short-answer questions and comment boxes to obtain additional qualitative information. The survey logic guided the participants into different pathways based on their responses. Specifically, participants were separated into 3 distinct groups according to their responses: Group A included those participants who at the time of the survey had or previously had PMWUs in their class; Group B included those who at the time of the survey had or previously had PWMUs in their school but not in their class; and Group C included those participants who did not have PMWUs in their school ( Figure 1). Most of the questions in sections 2, 4 and 5 were not pertinent for Groups B and C. However, the questions regarding if they would feel equipped to adapt their courses if a PMWU started to attend their class (Section 2) and if they were familiar with the Wheelchair Skills Program (WSP) (Section 5) were applicable to all participants. The survey was developed in French and translated into English by bilingual members of our team. Section 1, Demographic and descriptive information, gathered demographic information to describe the sample of participants (e.g., gender, age, language, level of education, years of experience) and the school in which they worked (e.g., type of school, the amount of PE provided, accessibility of school, age of their students, number of PMWU in their PE classes). Section 2, Integration of PMWUs in PE classes, included questions about if and how PMWUs are integrated into PE classes, whether or not the PE provided is adapted for PMWUs to meet provincial PE requirements and whether PE classes included manual wheelchair skills education. Section 3, Use of Motor Learning Principles, asked about the use of a variety of motor learning principles in PE classes. Section 4, Evaluation methods, asked participants how they adapted their evaluation methods for PMWUs and how confident and equipped they felt to do so. 
Section 5, Use of reference tools, asked participants about their familiarity with the WSP [15], as well as about other books or resources used to adapt courses/evaluations for PMWUs. In Section 6, Interests and opinions, participants were asked about including manual wheelchair skills into their PE programs/classes and which manual wheelchair skills would be considered fundamental to improve the participation of PMWUs in PE classes. The wheelchair skills included in the survey were taken from the WSP [15] an evidence-based training program that can be used to test and train a range of manual wheelchair skills from 3 different classifications: indoor skills, community-based skills and advanced skills. Analysis Descriptive statistics (means, standard deviations, frequencies and proportions) were used to describe the population and to analyze the quantitative data. The data from the comment boxes, open-ended questions and the short answer questions was synthesized narratively. Data were categorized according to the three groups (A, B and C). Survey responses were collected online and raw data were exported into Microsoft Excel 2016 for analysis. Demographic and Descriptive Information A total of 158 PE teachers initiated the survey between June and September. Among them, 22 completed less than the first quarter (i.e., sociodemographic questions only), thus these data were removed from analyses. Of the remaining 136 participants, 115 completed all survey questions (84.6%), while 21 (15.4%) skipped some. The data of all 136 participants were retained for analyses. Group A (PE teachers who at the time of the survey had or previously had PMWUs in their class) was composed of 35 participants who initiated the survey and 28 of whom completed the survey. From Group A, 18 of 28 (64.3%) participants were from specialized schools or regular schools with specialized class. Group B (PE teachers who at the time of the survey had or previously had PWMUs in their school, but not in their class) included 23 participants of whom 17 completed the survey. Group C (PE teachers who at the time of the survey did not have PMWUs in their school) contained 78 participants, of whom 70 completed the survey. We collected data from 39/72 (54%) school service centers and school boards representing 16/17 administrative regions of Quebec ( Figure 2). Table 1 presents the sociodemographic characteristics of participants and Table 2 presents the sociodemographic characteristics of the schools in which they worked. Most of the schools represented by our sample were public regular or public regular with specialized classes (81.6%) at the elementary education level (64%). Almost 40% of the schools did not provide the minimum of 2 hours of PE classes per week that is mandated by the provincial government. Less than half of the schools currently had PMWUs enrolled and for those that did, there were a variety of diagnoses represented. Most schools (73.3%) provided extra physical activity opportunities, but most PMWUs (71.1%) did not participate in these extracurricular activities. According to participant comments, the extracurricular activities offered during lunch time or after school, were often taught by non-PE teachers, and included volleyball, ball hockey, yoga, and strength training. Description of PE Training The sample consisted of 47.8% males and 51.5% females PE teachers who were 41.5 years of age on average, with 65.4 % reporting 10 years or more experience as a PE teacher. 
Close to half of the participants (48.5%) received education about adapted physical activity during their university training. According to qualitative data from the comment boxes, the majority of these participants received training on general adaptation of PE lessons during a 45-hour course of 3 credits. The courses covered several disabilities (e.g., visual, motor, intellectual) and the students had the opportunity to try different adapted sports (e.g., soccer, basketball, goalball for students with visual impairment). Training was not specific to the adaptation of PE classes for PMWUs. Integration of PMWUs in PE Classes For participants of Group A, the majority (67.6%) reported that the PMWUs in their class participated actively with or without assistance, while 2.9% were only observers. The PMWUs mostly participated in the same activity (58.8%), but in some cases they had a specific role (23.5%). In a minority of situations, PMWUs participated in a different activity (17.6%). Most of the participants (70%) had access to the medical file of the PMWUs and used that information to adapt their courses according to the functional abilities of the students and to respect the medical contraindications. As represented in Table 3, a vast majority (71.0%) of the participants adapted their classes for PMWUs to meet the requirements of the Competency 1, performs movement skills in different physical activity settings, (80.6%) and Competency 2, interacts with others in different physical activity settings, of the Quebec Education Program. Most of the participants were satisfied by the adaptation of their PE classes for Competency 1 (84%) and Competency 2 (86.4%). In the comments, participants indicated that the adaptation was case by case for which they relied on student's capacities, and that they had to be flexible and find alternatives. Participants reported that there is an absence of a specific guide on how to adapt PE classes for PMWUs, therefore, they primarily rely on their professional judgment. Some participants mentioned that they used the Competency-Based Approach to Social Participation (CASP-I) Education Program, a program aimed for students with moderate to severe intellectual disabilities [16]. Only 1 participant included manual wheelchair skills training in their classes. This participant indicated that to adapt the courses, she/he trains wheelchair skills that can be used in daily life to become more independent and less dependent on adults, including moving forward and backward, changes of direction, turns and narrow passages. This PE teacher taught in a specialized school for students who use a wheelchair. Finally, among all of the participants (Groups A, B and C), 62.9% wouldn't feel equipped to adapt their courses for this population. Figure 3 presents the motor learning principles used in PE classes. Our findings indicate that the motor learning principles of demonstration, concise and precise verbal instruction, physical rehearsal: simplification and segmentation of the task, variability of learning contexts and extrinsic and intrinsic proprioceptive feedback were used by the majority of all Groups in their PE classes. PE teachers from Groups B and C used the set training goals principle more frequently than those from Group A. Mental imagery was used least frequently by all Groups. 
Evaluation Methods The majority of the participants adapted their methods and criteria for evaluating Competency 1 (23/29; 79.3%) and Competency 2 (23/29; 79.3%). Most of the PE teachers (65.2%; 15/23) were confident with the adaptation of their assessment methods for Competency 2. The level of confidence in the adaptation of their evaluation methods for Competency 1 is not available due to an error in the survey logic. In the comments, participants reported that they make personalized and specific assessment profiles for PMWUs. Use of Reference Tools Fifty-seven percent of the sample did not use a specific reference or resource to adapt their PE classes or evaluations. Only one participant had taken an additional course, a workshop, specific to manual wheelchair skills training after the university program (i.e., the same participant who trains manual wheelchair skills in his/her classes). This participant's training consisted of a conference or workshop and self-study. As described in Table 4, the majority of participants (94.1%) were not familiar with the WSP. However, most (86.3%) were interested in receiving training about teaching basic manual wheelchair skills, and most believed it would be possible to implement the training of basic manual wheelchair skills in their PE classes. All participants strongly agreed or agreed that it was important for the independence of PMWUs to learn the basic manual wheelchair skills for getting around. As presented in Figure 4, manual wheelchair skills are integrated into PE class activities less often than they are deemed essential by participants for improving PMWUs' participation. The skill with the greatest difference (45.3%) between integration (33.3%) and being deemed essential (78.6%) is "getting over obstacles". The skill with the smallest difference (4.3%) between integration (60%) and being deemed essential (64.3%) is "rolling longer distance". Participants from Group A (78.6%) deemed the skill "getting over an obstacle" more essential than participants from Groups B and C (60%); this is the largest difference (18.6%) concerning whether or not a skill is deemed essential for PMWUs to improve their participation. The skill with the smallest difference (0.8%) between the Groups (A: 89.3% and B: 88.5%) is "rolling backward a short distance". Discussion We accomplished the objectives to survey PE teachers across the province of Quebec regarding their training in adapted physical activity and the integration of PMWUs into PE classes. This was the first study to address PMWUs' participation in physical education classes in Quebec. Of the 4797 PE teachers in Quebec [17], 136 responded to the survey. Although the response rate was low, participants represented over half of the school boards and 94% of the administrative regions. All participants answered general questions about the training received concerning adapted PE and expressed their opinion/interest about manual wheelchair skills training. However, only 35 PE teachers have or have ever had PMWUs in their class and therefore answered questions about adapting courses and assessments in PE. This study confirmed previous survey findings [11] regarding a lack of adapted physical activity training for PE teachers. Moreover, the adapted physical activity training was not specific to the adaptation of PE classes for PMWUs. 
Indeed, the present study confirms that PE teachers do not feel equipped to adapt their PE courses for PMWUs. Over 50% of participants did not receive adapted physical activity training as part of their university curricula. This is surprising given that there is currently a mandatory adapted physical activity course in the Bachelor of Education university programs of PE teachers in the majority of Quebec universities [18-23]. However, participants who did take an adapted physical activity course described that it was very general and not specific to PMWUs. Interestingly, although a short 15-credit (approximately 225 hours) adapted physical activity Graduate Program (post-Bachelor of Education) is available at the University of Quebec in Montreal (UQAM) [24], it has only been offered 2 times in 10 years due to an insufficient number of registrants (personal communication, Martin Lemay). Several factors may explain the lack of registrants, including those identified in a previous survey (i.e., insufficient funds for professional development) and potentially the need for a more targeted program (e.g., one targeting PMWUs). That 86.3% of PE teachers have an interest in receiving specific training on how to teach basic manual wheelchair skills could provide an interesting avenue for exploring the development of an adapted physical activity course or program that targets the provision of PE to PMWUs. A Massive Open Online Course (MOOC) may provide a flexible, accessible option for PE teachers. The present study also confirmed the lack of guidelines on the evaluation of the needs and abilities of students with special needs [11], such as PMWUs. Indeed, although the majority of PE teachers who currently have PMWUs in their classes reported that they adapt their PE classes according to both Competency 1 and 2 of the Physical Education and Health program [14], 15% are not satisfied with their adaptations. That courses are not adapted to the needs of 20-30% of PMWUs (potentially 15% more) is concerning. That 62.9% from the entire sample reported that they would not feel equipped to adapt their course or evaluation is even more concerning. Extrapolating our data to the entire population of 4797 PE teachers in Quebec, it is possible that over 3000 PE teachers do not feel equipped to adapt their courses and evaluations to the population of PMWUs. All that being said, it should be noted that the participation rate of PMWUs in PE classes was surprisingly high despite the lack of a PE program tailored to their needs. Overall, teachers are relatively satisfied with their course adaptation and assessment, with over half of teachers reporting that they do not use resources to adapt their courses for PMWUs. It would be interesting to understand the reasoning of PE teachers in order to explore what their level of satisfaction is based on. Regarding the motor learning principles used by PE teachers, some differences were reported between Group A and Groups B and C. Group A teachers used the principle "set training goals" much less than Groups B and C teachers. It is possible that it is more difficult to establish precise training objectives with PMWUs due to individual variations in physical and intellectual capacities among students, such as in a specialized class with many PMWUs. 
Further, it is interesting to note that several motor learning principles such as "simplification of the task", "segmentation of the task", "demonstration" or "concise and precise verbal instruction" are not used by 100% of Group A participants. Indeed, these motor learning principles are particularly important to use with people presenting disabilities, such as PMWUs [15]. It may be due, in part, to the lack of teacher training about manual wheelchair skills and adapted PE, as reported in this survey. It is possible that there is a lack of awareness of these motor learning principles or that a lack of knowledge regarding how to integrate them into their courses. Finally, the strong use of the motor learning principle "customize the training process" is consistent with the fact that participants rely on their professional judgment to personalize courses and assessments for PMWUs. There are similarities and differences among the groups of participants regarding wheelchair skills considered essential to improve the participation of PMWUs in PE classes. First, all groups considered basic manual wheelchair skills as essential in a large majority compared to community-based skills. This perspective can be explained by the fact that the main physical environment in which PE classes take place in Quebec (gymnasiums) do not contain the elements requiring the use of community-based skills, in particular going up / down an inclined slope, going up / down level changes and over an obstacle. Also, team sports are commonly taught in Quebec high school PE classes [25], which minimally solicits community skills. Those skills are probably more solicited in physical activity like obstacle courses, outdoor activities and other individual activities. In fact, Figure 4 showed that a higher proportion of participants from Group A identified community-based skills as essential compared to Groups B and C. For example, a significant difference between those groups is noted for the skill "getting over obstacles". Other differences such as for the skill "picking an object from the floor" may be explained by the fact that teachers do not know how to train the skill and thus adapt their courses so the PMWUs do not have to perform this skill or the PE teacher or other students compensate for the lack of abilities of the PMWUs (e.g., a PMWU does not need to pick the basketball up off of the floor because other students do). Since the majority of teachers in Group A teach in a specialized school, it is possible that their courses are already adapted to be accessible for all PMWUs, regardless of their abilities. In addition, it is curious to see that not all of the participants answered that indoor wheelchair skills were essential to improve the participation of PMWUs in PE classes. That some teachers identified specific skills as essential to improve PMWUs participation in PE classes without actually integrating them into their course could be explained by the lack of knowledge about teaching and training manual wheelchair skills. Also, it is possible that by answering this question during the survey, the teachers identified skills that constitute an ideal to be achieved. These assumptions would have to be clarified in a later study. Preliminary evidence suggests that learning manual wheelchair skills using an evidence-based program such as the WSP [15] can improve PMWUs' mobility. Therefore, integrating wheelchair skills training into physical education may improve how PMWUs participate in PE classes. 
Given the association between fundamental movement skills and physical literacy, improved wheelchair skills may increase overall physical activity among PMWUs and provide health benefits [26]. Provision of wheelchair skills training to PE teachers may facilitate implementation of wheelchair skills training in PE courses. Indeed, this survey revealed that the majority of PE teachers (88.9%) believe it would be possible to implement basic manual wheelchair skills training (e.g., moving forward, backward, turning in place, stopping, picking up an object on the ground) in their courses. It would be interesting to explore whether they also believe it would be possible to implement community-based skills training (e.g., navigating ramps, ascending and descending small curbs) in their courses. Indeed, the training of community-based and advanced skills would require more preparation, time, knowledge and equipment [15]. Limitations Given the total population of physical education teachers in elementary and secondary schools in Quebec is 4,797 [17], it was estimated that a sample of 357 participants was needed to be representative of the population [27]. However, the results (n = 136) are representative of half of the school boards and 94% of the administrative regions in the province. The results of this study cannot be generalized to the rest of Canada due to the differences in the school systems from province to province. Moreover, there was unequal representation of the number of participants for each study objective (i.e., the responses of 136 individuals comprised the results for objective 1, while the responses of 28 individuals comprised the results for objective 2). Therefore, the results regarding the integration of PMWUs in PE classes are a preliminary first step to advancing knowledge about the adaptation of PE classes for PMWUs. Indeed, semi-structured interviews planned for the next phase of this study will provide a deeper understanding of current practices and the perceived needs. Due to the nature of the convenience sample recruited in this study, we did not have control over how the survey was completed. This may explain why several participants abandoned the survey before completion, thus limiting the response rate for all questions. It should be noted that the survey was launched during a period of instability for elementary and high schools in Quebec (i.e., during the first and second waves of the COVID-19 pandemic). Due to the circumstances, teachers had to adapt their teaching procedure to the situation and the survey was likely not a priority. Finally, due to the complexity of the survey logic, an error in the programming resulted in the elimination of one question (i.e., level of confidence of PE teachers in the adaptation of their evaluation according to Competency 1). This information would have been complementary to the analysis of PMWUs' integration in PE classes. The logic in future iterations of this survey should be simplified to avoid any errors and to facilitate the analysis of the data. Conclusion The findings from our study suggest that there is a lack of training of PE teachers about adapted PE for PMWUs. However, PE educators support the implementation of basic wheelchair skills into the PE curriculum to optimize the participation of PMWUs in their PE classes. Findings reveal that the majority of PMWUs participate in PE classes and that PE teachers adapt their courses accordingly to the best of their abilities based on their professional judgment. 
There is a need for resources and training to help PE teachers adapt their courses.
2021-07-26T00:06:26.975Z
2021-06-04T00:00:00.000
{ "year": 2021, "sha1": "afc2d3182047bb9e5cb1e95722a4ba6b02bd03dc", "oa_license": null, "oa_url": "https://doi.org/10.12691/jpar-6-1-7", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "1377fbe98b3076019dea0ff1dbbaf63ac2c2ae08", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
244117662
pes2o/s2orc
v3-fos-license
Study of keyword extraction techniques for Electric Double Layer Capacitor domain using text similarity indexes: An experimental analysis Keywords perform a significant role in selecting various topic-related documents quite easily. Topics or keywords assigned by humans or experts provide accurate information. However, this practice is quite expensive in terms of resources and time management. Hence, it is more satisfying to utilize automated keyword extraction techniques. Nevertheless, before beginning the automated process, it is necessary to check and confirm how similar expert-provided and algorithm-generated keywords are. This paper presents an experimental analysis of similarity scores of keywords generated by different supervised and unsupervised automated keyword extraction algorithms with expert provided keywords from the Electric Double Layer Capacitor (EDLC) domain. The paper also analyses which texts provide better keywords like positive sentences or all sentences of the document. From the unsupervised algorithms, YAKE, TopicRank, MultipartiteRank, and KPMiner are employed for keyword extraction. From the supervised algorithms, KEA and WINGNUS are employed for keyword extraction. To assess the similarity of the extracted keywords with expert-provided keywords, Jaccard, Cosine, and Cosine with word vector similarity indexes are employed in this study. The experiment shows that the MultipartiteRank keyword extraction technique measured with cosine with word vector similarity index produces the best result with 92% similarity with expert provided keywords. This study can help the NLP researchers working with the EDLC domain or recommender systems to select more suitable keyword extraction and similarity index calculation techniques. Introduction Keywords are significant for automated document processing. Keywords are the concise representation of the contents of a document [1]. From keywords, the context of the documents can be easily understood. When there is a need to process lots of documents or classify any document for any purpose, it is tedious to go through the whole document one by one and classify them. Instead, going through the keywords makes this process faster, even for a human. However, it is also a time-consuming process to go through the keywords for many documents by a human. This task can be automated by employing machines to look for the keywords and classify the documents. Since the process of keyword extraction is being automated, it should also be assured that extracted keywords represent the actual context of the document; else automated extraction will be a complete loss of time and resources. This assurance can be done by comparing the extracted keywords with human or expert assigned keywords. Therefore, this paper introduces an experimental study to measure the similarity score between expert-provided keywords and keyword extraction algorithms generated keywords to observe how similar the machinegenerated keywords' values are to the expert provided keywords. In other words, this experiment can guide if the machine-generated keywords are feasible to utilize instead of expert-provided keywords for any specific domain. There are several different keyword extraction algorithms available at present [2,3]. These algorithms are employed in different scenarios, such as recommender systems, trend analysis, similar document identification, relevant document selection [4,5,6]. 
All these algorithms are divided into three primary categories based on their extraction technique: supervised, unsupervised, and semi-supervised technique [7]. This study compares the similarity scores for supervised and unsupervised techniques with three prominent similarity indexes, namely, Jaccard similarity index [8], Cosine similarity index [9,10] and Cosine with Word vector similarity [11]. The key contributions of this work are, • Recommending a keyword extraction technique that provides more similar machinegenerated keywords to the expert or human provided keywords. • Recommending type of texts (positive texts only or whole text of a document) that provides more similar keywords. • Recommending a better similarity index for measuring similarity score between documents. • Finding the feasibility of utilizing machine-generated keywords instead of expertcurated keywords. The rest of the paper is organized as follows. Employed keyword extraction techniques and relevant works are presented in Section 2 with their known shortcomings and strengths. Employed methodologies for the experiment are mentioned in Section 3. Then, the result analysis of the experiment is discussed in Section 4, and concluding remarks in Section 5. Background Study In this paper, some notable and well-known similarity index calculation algorithms and keyword extraction algorithms are employed. All the text-similarity and keyword extraction algorithms with shortcomings and strengths are discussed in this section. Keyword Extraction Keyword extraction from text is an analysis technique that automatically extracts the most used and most important words or phrases from text based on different parameters [12]. In some techniques, these parameters can be defined externally, and some techniques do not support external definition [7]. Mainly there are three classes of keyword extraction techniques. Among them, supervised and unsupervised techniques are employed in this study. Unsupervised Keyword Extraction Four unsupervised keyword extraction techniques are employed in this paper. Unsupervised techniques are prone to poor accuracy and require a larger corpus input, and do not extrapolate well [13]. However, unsupervised techniques are utilized widely compared to supervised techniques, as all sorts of domain-specific training labeled data are not always available for all the domains. YAKE YAKE was proposed by Campos et al., [14]. It is a lightweight unsupervised keyword extraction technique based on TF-IDF. YAKE extracts keywords by calculating five features, namely, Word Casing (WC), Word position (WP), Word Frequency (WF), Word Relatedness to Context (WRC), and Word DifSentence (WF). The relation between five features can be expressed through the following Equation 1, where S(w) is the measure for each word. After calculating the measure for each word, the final keyword is calculated utilizing a 3-gram model [15]. TopicRank Bougouin et al. proposed topicRank [16] in 2013, which is a clusteringbased model. It divides the document into multiple topics employing the hierarchical agglomerative clustering [17]. Then utilizing the PageRank [18], it scores each topic and selects each top-ranked candidate keyword from each topic. After that, it selects all the top candidate words as final keywords. MultipartiteRank MultipartiteRank is a topic-based keyword extraction model. It encodes topical information of a document in a multipartite graph structure. 
This technique represents candidate keywords and topics of a document in a single graph, and utilizing the mutually reinforcing relationship of the candidate keywords and topics improves candidate ranking. This method has two steps of selecting candidate words as keywords, i) Representing the whole document in a graph and ii) Assigning relevance score to each word. Between these two steps, position information is captured utilizing edge weights adjustment. As a result, most of the time, it outperforms different other key-phrase extraction techniques [19]. KPMiner El-Beltagy et al. proposed the KP-miner [20] in 2009. This method also utilizes TF-IDF to calculate words as keywords. This calculation is done in three steps, i) Selecting candidate words from the document utilizing least allowable seen frequency (lasf) factor and CutOff factor, ii) Calculating candidate word's score, and iii) Selecting the candidate word with the highest score utilizing the candidate word position and TF-IDF score as the final keyword. Supervised Keyword Extraction While unsupervised algorithms do not need a large amount of labeled training data, supervised algorithms need a large amount of that data and perform poorly except in the training domain. However, for any specific domain, supervised techniques are preferred over unsupervised techniques [15]. In this paper, two supervised techniques are employed, KEA and WINGNUS. KEA KEA is a supervised keyword extraction algorithm proposed by Witten et al. in 1999 [21]. KEA classifies a candidate keyword utilizing word frequency and position of the word in the document. After that, it predicts which candidate words are qualified as keywords utilizing the Naive Bayes machine learning algorithm. The machine learning model builds a predictive model initially. Then, keywords are extracted utilizing this predictive model [22]. WINGNUS This supervised keyword extraction technique is developed focusing on keyword extraction from scientific documents [23]. It utilizes inferred document logical structure [24] in the candidate word identification process to limit the phrase number in the candidate word list. This method utilizes regular expression rules to extract candidate words, and instead of whole document text, it utilizes input text in different levels like title and headers or abstract and introduction. Like KEA, it also utilizes the Naive Bayes machine learning algorithm to select candidate words. Text Similarity Index Determining how similar two pieces of text are to each other is the simple idea of text similarity index or text similarity calculation. In this study, keywords from different documents extracted by keyword extraction algorithms and expert-provided keywords' similarity are measured. In two ways, this similarity can be measured, one is lexical similarity, and another is semantic similarity [25,26,27,28,29,30]. This paper implemented both the similarity measures utilizing Jaccard, Cosine, and Cosine with word vector similarity indexes and presented the outcome for EDLC based scientific articles. Jaccard Similarity Jaccard similarity index is a lexical similarity index method, which calculates the similarity index at the word level. As lexical similarity is unaware of the word's actual meaning or the entire phrase, Jaccard similarity takes two sets of text and calculates the similarity between all pairs of sets. Jaccard provides a similarity score with a range of 0% to 100%. 
This algorithm is very sensitive to sample size and may provide unexpected results for a small sample size. Conversely, for larger sample sizes, it is computationally costly [31,32]. The Jaccard similarity index is calculated utilizing Equation 2, where A and B are two different sets of text or documents. Cosine Similarity The cosine similarity index measures the similarity between two documents utilizing the cosine angle between two multi-dimensional vectors in a multi-dimensional space regardless of their size. In this technique, sentences are converted into vectors utilizing the bag of words method. The similarity is then computed employing Equation 3, where A and B are the two documents converted into vectors. This algorithm is computationally expensive for larger data samples [9,10]. Word Vector Word vectors are a type of word embedding, where similar meaningful words are arranged in a similar representation, mostly with vectors. Each word is mapped to a vector in a predefined vector space [33]. This differs from the Jaccard similarity in that Jaccard measures lexical similarity, whereas word vectors capture semantic similarity. Utilizing word vectors, similar meaningful words can be matched rather than only the exact word, enabling better scores for similarity measures. In this study, Word2vec [11], proposed by Mikolov et al., is utilized as the word vector model. Word2vec is different from the traditional tf-idf measure, where tf-idf sets one number per word, but Word2vec sets one vector per word. Methodology This study is divided into three major components, i) Data collection, ii) Data processing, and iii) Similarity score calculation. In the data collection component, ground truth data and test data are collected from respective sources. In the data processing component, the collected data is cleaned and processed for the similarity score calculation component. In the similarity score calculation component, similarity scores for the collected data are calculated with different similarity indexes employing different keyword extraction techniques. The conceptual overview of the employed methodology can be found in Figure 1. Data Collection In this study, the Electric Double Layer Capacitor (EDLC) domain is considered as the experiment's use case. Hence, from the domain experts, a set of 32 keywords of the EDLC domain has been collected as ground truth keywords, and ten scientific documents are collected from the same domain, which satisfy the keywords and are suggested as relevant documents to the domain. The experiment is based on the premise that, from these ten documents, keywords are extracted through different keyword extraction techniques, and the extracted keywords are then compared for their similarity score with the domain expert provided keywords. The first column from the left of Table 1 contains the domain expert provided keywords for the EDLC domain. All the scientific documents are collected in portable document format (pdf), and keywords are collected in plain text. Data Processing In the data processing stage, collected pdf files are initially converted to plain text format. To convert the files, the grobid [34] tool is utilized, which first converts the pdf files to tei xml format; the xml contents are then converted to a plain text file with a custom tei xml parser. The custom xml parser is developed by the authors utilizing the Python programming language. 
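Equations 2 and 3 are referenced above but are not reproduced in this text; their standard textbook forms are J(A, B) = |A ∩ B| / |A ∪ B| and cos(A, B) = (A · B) / (||A|| ||B||). The short sketch below implements both on keyword lists, purely as an illustration of the lexical measures described above and not as the authors' own implementation.

```python
import math
from collections import Counter

def jaccard_similarity(keywords_a, keywords_b):
    """Equation 2: |A intersection B| / |A union B| on two sets of keywords (lexical overlap)."""
    a, b = set(keywords_a), set(keywords_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cosine_similarity(text_a, text_b):
    """Equation 3: cosine of the angle between bag-of-words vectors of two texts."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Toy comparison of expert keywords with machine-generated keywords
expert = ["electric double layer capacitor", "supercapacitor", "electrode"]
extracted = ["supercapacitor", "electrode material", "electrolyte"]
print(jaccard_similarity(expert, extracted))
print(cosine_similarity(" ".join(expert), " ".join(extracted)))
```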
After the conversion, text contents are cleaned to remove extra spaces, special characters, extra line breaks, parentheses, references, figures, and tables employing a custom data cleaning method also developed by the authors. Text cleaning methods are dependent on the dataset and desired output. However, apart from the dataset and output, several steps are commonly performed to clean text data, namely removing punctuation, filtering out stop words, stemming and lemmatisation, and converting text to upper and lower case. For the dataset used in this study, some of the common cleaning tasks are implemented, and some of them are avoided. In addition to these tasks, some dataset-specific cleanup tasks are also performed. Based on the cleanup activities performed in the dataset, the cleaning process is described as a custom text cleaning process. For example, normalization of non-standard words (NSW) is not performed in the text cleaning process. NSW are words that are not available in a dictionary, such as numbers, dates, abbreviations, chemical symbols of materials, currency amounts, and acronyms [35]. Most scientific papers contain these NSWs, and they refer to specific processes or operations of any domain which are not available on a dictionary, e.g., "MnO2", a chemical symbol for a material called Manganese dioxide. Stemming and lemmatisation operations on the words are also discarded since most keywords are a combination of several words, e.g., "helmholtz double layer", which gives the same result when lemmatised and a meaningless result when stemmed. Table 1 represents the original keywords with the lemmatised and stemmed version of the keywords. From Table 1, it can be observed that the output of the lemmatised keywords is almost similar to the original keywords, and the stemmed version of the keywords produces unintelligible words. In the dataset-specific cleaning process, all tabular data, references, and images are removed from the articles. Then, the text contents are decoded from the UTF8 encoding format. In addition to normalizing these decoded text contents, some special character substitution operations are performed. Then, from the cleaned text of each document, texts are separated into positive sentences only and all text of the document. For each document, these two types of texts are stored for the similarity calculation component. Positive sentences are identified utilizing negatives and negation-grammar Rules [36,37,38]. There are 2840 sentences in the dataset utilized in this study. Among 2840 sentences, 2240 sentences are positive sentences. Figure 2 represents the overview of the dataset stating the number of total, positive and negative sentences. The dataset can be requested through the github repository, https://github.com/ping543f/kwd-extraction-study Similarity Calculation With two sets of text obtained from the data processing component, all keyword extraction algorithms are employed to extract keywords from each set of each document. Firstly texts are passed into all the keyword extraction techniques, namely YAKE, TopicRank, MultipartiteRank, KPMiner, KEA, and WINGNUS. All techniques return the extracted keywords of the provided texts of a document. Then those keywords and expert-provided keywords are passed to the similarity index calculator to calculate the similarity score between them. Three similarity indexes are utilized to calculate the similarity score, namely, Jaccard, Cosine, and Cosine with word vector similarity index. 
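The negation and negation-grammar rules of [36,37,38] used to separate positive sentences are not spelled out here, so the snippet below is only a simplified stand-in for that step: sentences containing common negation cues are treated as negative and filtered out. The cue list and the naive sentence splitter are assumptions made for illustration, not the rule set actually used in the study.

```python
import re

# Hypothetical, simplified negation-cue list; the study relies on richer
# negation-grammar rules than this flat word list.
NEGATION_CUES = {"no", "not", "none", "never", "neither", "nor", "cannot", "without"}

def split_sentences(text):
    # Naive sentence splitter, sufficient for illustration only
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def positive_sentences(text):
    """Keep only sentences that contain no negation cue."""
    kept = []
    for sentence in split_sentences(text):
        lowered = sentence.lower()
        tokens = set(re.findall(r"[a-z]+", lowered))
        negated = "n't" in lowered or not tokens.isdisjoint(NEGATION_CUES)
        if not negated:
            kept.append(sentence)
    return kept

doc = "The electrode shows high capacitance. It does not degrade quickly."
print(positive_sentences(doc))  # only the first sentence survives the filter
```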
This whole process is executed for all the documents with positive and all texts of each document. After processing each document, scores are stored with appropriate labels to analyze the result. The similarity calculation component for the scenario described above can be expressed through the Algorithm 1 provided below. Experimental Setup All experiment-related codes are developed utilizing Python programming language version 3.7.3 [39] for this study. Jaccard and Cosine similarity algorithms are developed following the equation described in their original papers [8,40]. Cosine similarity with word vector algorithm is implemented utilizing Spacy Python library [41]. All keyword extraction algorithms are implemented utilizing pke [42] Python package. The experiment is done in a MacBook with macOS Big Sur operating system version 11.5 with a 1.2 GHz dual-core Intel Core m5 processor and 8 gigabytes of RAM. Results and Discussion To begin with the result analysis, Table 2 and Table 3 are generated from the experiment. Both tables contain the similarity scores of ten standard documents generated by different keyword extraction techniques and similarity index algorithms. Table 2 contains the results obtained from the unsupervised keyword extraction techniques, and Table 3 contains the results generated by the supervised keyword extraction techniques. For unsupervised techniques, the MultipartiteRank algorithm performs better in all three similarity indexes than other implemented keyword extraction techniques. Furthermore, it gives the best result of 92% similarity score for positive sentences and 91% for all sentences of the documents while employed with the Cosine with word vector similarity index. The lowest performing similarity index algorithm is the Jaccard similarity index for the same keyword extraction technique with a score of 14% similarity score for both positive and all sentences of the documents. It is also observed from the experimental result that, Cosine with word vector similarity index is consistently performing better than Jaccard and cosine similarity index for all the unsupervised keyword extraction techniques. This analysis can easily be understood from Figure 3a. This figure presents the distribution of all the similarity scores of all the unsupervised techniques employed in this study for Jaccard, Cosine, and Cosine with word vector similarity indexes. On the other hand, for the supervised techniques, the KEA keyword extraction algorithm performs the best with 91% of similarity score while calculating with the Cosine with word vector similarity index for both positive and all sentences of the documents. However, the WINGNUS supervised keyword extraction technique provides better similarity scores for Cosine and Jaccard similarity indexes only for positive sentences, which are 22% and 12% similarity scores. Nevertheless, KEA is performing better for all sentences while measured with Jaccard and Cosine similarity indexes. However, KEA holds the best similarity score utilizing the Cosine with word vector similarity index, which is around 70% more than those measured with Jaccard and Cosine similarity index. This analysis can be more clear with a visual representation. Figure 3b represents the distribution of all the similarity scores for all the supervised keyword extraction techniques with all three similarity indexes. 
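For readers who want to reproduce the workflow described in the Similarity Calculation and Experimental Setup subsections above, a compact sketch is given below. It chains pke's MultipartiteRank implementation with spaCy's word-vector similarity; it is an illustration of the described pipeline, not the authors' Algorithm 1, and the spaCy model name, the input file name, and the number of extracted keyphrases are assumptions.

```python
# Sketch assuming pke's standard interface and a spaCy model shipped with word
# vectors (e.g., en_core_web_md); n=15 keyphrases is an arbitrary choice.
import pke
import spacy

nlp = spacy.load("en_core_web_md")

def extract_keywords(text, n=15):
    """Unsupervised keyword extraction with MultipartiteRank via pke."""
    extractor = pke.unsupervised.MultipartiteRank()
    extractor.load_document(input=text, language="en")
    extractor.candidate_selection()
    extractor.candidate_weighting()
    return [phrase for phrase, _score in extractor.get_n_best(n=n)]

def word_vector_similarity(keywords_a, keywords_b):
    """Cosine similarity between averaged word vectors of two keyword lists."""
    return nlp(" ".join(keywords_a)).similarity(nlp(" ".join(keywords_b)))

expert_keywords = ["electric double layer capacitor", "supercapacitor", "electrolyte"]
document_text = open("edlc_article.txt").read()  # hypothetical cleaned article text
machine_keywords = extract_keywords(document_text)
print(word_vector_similarity(expert_keywords, machine_keywords))
```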
Among supervised and unsupervised keyword extraction techniques, the unsupervised technique, namely, MultipartiteRank, exhibits better performance in achieving a higher similarity score for positive sentences while measured with Cosine with word vector similarity index. Furthermore, for all sentences, unsupervised technique, MultipartiteRank, and supervised technique, KEA produces the same score of 91% in Cosine with word vector similarity index. Similarity score comparisons for both supervised and unsupervised methods are projected in Figure 4. Since there are two sets of textual data: Data with positive sentences and Data with all sentences, they have implications for the experimental results seen in Table 2 and Table 3. The initial hypothesis of having two separate text datasets from the same articles is to observe how positive and negative sentences affect the similarity score of the extracted keywords with the keywords provided by the experts for the specific domain, and based on this impact, recommend the relevant text data to be used. From the experimental results, the positive sentences have a minimal impact on the similarity scores for all three similarity indices compared to the scores for all sentences. This is because the negative sentences contain very few to no keywords that could match the keywords given by the experts. Therefore, there is no or minimal effect of the similarity indices between the positive sentences and the dataset with all sentences, as shown in the experimental result. The similarity values between the positive sentences and all sentences vary from 1% to 4%. For example, in the MultipartiteRank algorithm, the Jaccard and Cosine similarity values are the same for both texts, 14% and 25%, respectively. However, for the cosine with word vector similarity index, the text of the positive sentence achieves 92% similarity, and the text of all sentences achieves 91% similarity, which is a minimal difference of 1%. On the other hand, in the algorithm KEA, the similarity value of cosine with word vector is the same for both text data, i.e., 91% of similarity value. The maximum difference of 4% in similarity score is observed for the YAKE algorithm in similarity index Cosine with Word Vector. Hence, it can be said that positive sentences and all sentences have a similar effect on the similarity index with very little difference from 1% to 4%. Although the positive sentences have a negligible effect on the similarity computation, they have a more significant impact on the running time of the similarity computation process. From the experiment results, the unsupervised algorithms MultipartiteRank and the supervised algorithms KEA perform better than the other algorithms used in terms of similarity index. Therefore, a runtime comparison is performed for both algorithms to study the runtime for both positive and all text sets for computing all similarity indices. Table 4 presents the runtime comparison result for the two better-performing keyword extraction techniques MultipartiteRank and KEA for Jaccard, Cosine, and Cosine similarity with word vector indices. The runtimes reported in the table 4 are the average of 5 runtimes of the experiment, which includes only the similarity computation. From the runtime table, it can be seen that positive texts have a great impact on the duration of the similarity calculation. 
When computing the similarity of the texts with the keywords given by the experts, the positive sentences take significantly less time than computing the similarity of all sentences. For example, in the unsupervised MultipartiteRank algorithm, the computation of all sentences takes 232.4, 225.1, and 230.2 seconds for the Jaccard, Cosine, and Cosine with word vector similarity indices, respectively. On the other hand, the computation of positive sentences takes only 143.6, 140.86, and 142.7 seconds for Jaccard, Cosine, and Cosine with word vector similarity indices, respectively, which is 88.8, 84.24, and 87.5 seconds less for the aforementioned similarity indices. A similar pattern is also observed for the supervised KEA algorithm, i.e., computing the similarity of positive sentences takes less time than computing all sentences. Figure 5 shows the comparison results in a more understandable form. Table 5 provides the set of keywords extracted by the top-performing keyword extraction techniques employing the Cosine with Word Vector similarity index and expert provided keywords. This table also provides a visual comparison of the similarity between all the keywords. Word cloud representation is also provided in Figure 6. Word cloud is utilized to represent the words emphasized according to their frequency, rank, or similarity. This word cloud is generated based on the frequency scores of keywords among all the documents. From the word clouds of top-performing two methods, it is also visible that there are similar keywords of the same scores among all machine-generated and expert provided keywords. The study of the experimental results suggests that for extracting keywords and checking the similarity of the extracted keywords from scientific documents, especially for the EDLC related documents, the unsupervised keyword extraction technique MultipartiteRank algorithm can be considered in addition to the expert curated keywords. Although this algorithm requires slightly more computation time than the supervised keyword extraction technique KEA, it gives better results than KEA. If computation time is considered or required over better similarity score, then it is recommended to employ the supervised keyword extraction technique KEA for 1% of similarity score drop over MultipartiteRank algorithm. When choosing between the positive and the whole article text content, it is recommended to choose the positive text as it has a very small impact on the similarity score but a larger impact on the computation time. Positive texts have no or very little impact on the similarity scores, but require less computation time than all the texts of the scientific articles. Conclusion The aim of this study is to find out which keyword extraction technique provides more similar keywords to the expert provided keywords, which text types have more similarity, which similarity index provides more similarity scores and whether the use of machine generated keywords is feasible with respect to the expert provided keywords. The experiment shows that the unsupervised keyword extraction technique MultipartiteRank provides 92% similarity with the expert provided keywords in cosine with Word Vector similarity index for positive sentences of the documents from EDLC domain. This study can be further extended with keywords for other domains with a larger dataset in other environments, including author-supplied keywords. 
Data Availability Dataset used in this study is available upon request and the request repository is mentioned in Section 3.2 Conflicts of Interest The authors declare no conflicts of interest.
2021-11-16T02:16:21.204Z
2021-11-13T00:00:00.000
{ "year": 2021, "sha1": "7dcfb2e8d29d482593d450c7cb924cbb39b41345", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/complexity/2021/8192320.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "7dcfb2e8d29d482593d450c7cb924cbb39b41345", "s2fieldsofstudy": [ "Economics", "Education", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
54945436
pes2o/s2orc
v3-fos-license
Binary black holes in nuclei of extragalactic radio sources If we assume that nuclei of extragalactic radio sources contain binary black hole systems, the two black holes can eject VLBI components in which case two families of different VLBI trajectories will be observed. Another important consequence of a binary black hole system is that the VLBI core is associated with one black hole, and if a VLBI component is ejected by the second black hole, one expects to be able to detect the offset of the origin of the VLBI component ejected by the black hole that is not associated with the VLBI core. The ejection of VLBI components is perturbed by the precession of the accretion disk and the motion of the black holes around the center of gravity of the binary black hole system. We modeled the ejection of the component taking into account the two pertubations and present a method to fit the coordinates of a VLBI component and to deduce the characteristics of the binary black hole system. Specifically, this is the ratio Tp/Tb where Tp is the precession period of the accretion disk and Tb is the orbital period of the binary black hole system, the mass ratio M1/M2, and the radius of the Binary Black Hole system Rbin. From the variations of the coordinates as a function of time of the ejected VLBI component, we estimated the inclination angle io and the bulk Lorentz factor gamma of the modeled component. We applied the method to component S1 of 1823+568 and to component C5 of 3C 279. Introduction VLBI observations of compact radio sources show that the ejection of VLBI components does not follow a straight line, but undulates. These observations suggest a precession of the accretion disk. To explain the precession of the accretion disk, we assumed that the nuclei of radio sources contain binary black hole systems (BBH system, see Figure 1) . A BBH system produces three pertubations of the VLBI ejection due to 1. the precession of the accretion disk, 2. the motion of the two black holes around the center of gravity of the BBH system, and 3. the motion of the BBH system in the galaxy. In this article, we do not take into account the possible third pertubation due to the motion of the BBH system in the galaxy. A BBH system induces several consequences, which are that 1. even if the angle between the accretion disk and the plane of rotation of the BBH system is zero, the ejection does not follow a straight line (due to the rotation of the black holes around the center of gravity of the BBH system), 2. the two black holes can have accretion disks with different angles with the plane of rotation of the BBH system and can eject VLBI components; in that case we observe two different families of trajectories; a good example of a source with Send offprint requests to: J. Roland, e-mail: roland@iap.fr Fig. 1. BBH system model. The two black holes can have an accretion disk and can eject VLBI components. If it is the case, we observe two different families of trajectories and an offset between the VLBI core and the origin of the VLBI component if it is ejected by the black hole that is not associated with the VLBI core. The angles Ω 1 and Ω 2 between the accretion disks and the rotation plane of the BBH system can be different. two families of trajectories is 3C 273 whose components C5 and C9 follow two different types of trajectories (see Figure 2), and 3. 
if the VLBI core is associated with one black hole, and if the VLBI component is ejected by the second black hole, there will be an offset between the VLBI core and the origin of the ejection of the VLBI component; this offset will correspond to the radius of the BBH system. The precession of the accretion disk can be explained using a single rotating black hole (Lense-Thirring effect) or by the magnetically driven precession (Caproni et al. 2006). However, a single black hole and a BBH system have completely different Fig. 2. Trajectories of the VLBI components C5 and C9 of 3C 273 using MOJAVE data (Lister et al. 2009b). We observe two different types of trajectories, suggesting that they are ejected from two different black holes. consequences. In the case of a BBH system, one has an extra perturbation of the ejected component due to the motions of the black holes around the center of gravity of the BBH system. One can expect to observe two different families of trajectories (if the two black holes eject VLBI components) and an offset of the origin of the ejected component if it is ejected by the black hole that is not associated with the VLBI core. We modeled the ejection of the VLBI component using a geometrical model that takes into account the two main perturbations due to the BBH system, i.e. 1. the precession of the accretion disk and 2. the motion of the two black holes around the center of gravity of the BBH system. In section 2 we recall the main lines of the model. The details of the model can be found in Roland et al. (2008). We determined the free parameters of the model by comparing the observed coordinates of the VLBI component with the calculated coordinates of the model. This method requires knowing of the variations of the two coordinates of the VLBI component as a function of time. Because these observations contain the kinematical information, we will be able to estimate the inclination angle of the source and the bulk Lorentz factor of the ejected component. In this article we present a method to solve this problem, either for a precession model or for a BBH system model, based on understanding the space of the solutions. Practically, two different cases can occur when we try to solve this problem. 1. Either the VLBI component is ejected from the VLBI core, or the offset is smaller than or on the order of the smallest error bars of the VLBI positions of the ejected component (case I), 2. or the VLBI component is ejected with an offset larger than the smallest error bars of the VLBI positions of the ejected component (case II). Case II is much more complicated to solve than case I, because the observed coordinates contain an unknown offset that is larger than the error bars. Therefore, we first have to find the offset, then correct the VLBI data from the offset, and finally find the solution corresponding to the corrected data. We present the method for solving the problem in section 3. To illustrate case I, we solve the fit of component S1 of 1823+568 using MOJAVE data in section 4. To illustrate case II, we solve the fit of component C5 of 3C 279 using MOJAVE data in section 5. Introduction: Two-fluid model We describe the ejection of a VLBI component in the framework of the two-fluid model (Sol et al. 1989;Pelletier & Roland 1989Pelletier & Sol 1992). The two-fluid description of the outflow is adopted with the following assumptions: 1. 
The outflow consists of an e − − e + plasma (hereafter the beam) moving at a highly relativistic speed (with corresponding Lorentz factor γ b ≤ 30) surrounded by an e − − p plasma (hereafter the jet) moving at a mildly relativistic speed of v j ≤ 0.4 × c. 2. The magnetic field lines are parallel to the flow in the beam and the mixing layer, and are toroidal in the jet (see Figure 3). Fig. 3. Two-fluid model. The outflow consists of an e − − e + plasma, the beam, moving at a highly relativistic speed, surrounded by an e − − p plasma, the jet, moving at a mildly relativistic speed. The magnetic field lines are parallel to the flow in the beam and the mixing layer, and are toroidal in the jet. Muxlow et al. (1988) and Roland et al. (1988) found that the Cygnus A hot spots could be explained by an e − − p plasma moving at a mildly relativistic speed, i.e. v j ≤ 0.4 × c. Consequently, the two-fluid model was introduced to explain superluminal radio sources observed in the nuclei of radio sources. The e − − p jet carries most of the mass and the kinetic energy ejected by the nucleus. It is responsible for the formation of kpc-jets, hot spots, and extended lobes (Roland & Hetem 1996). The relativistic e ± beam moves in a channel through the mildly relativistic jet and is responsible for the formation of superluminal sources and their γ-ray emission (Roland et al. 1994). The relativistic beam can propagate when the magnetic field B is parallel to the flow in the beam and in the mixing layer between the beam and the jet, and when it is greater than a critical value (Achatz & Schlickeiser 1993). The magnetic field in the jet becomes rapidly toroidal as a function of distance from the core (Pelletier & Roland 1990). The observational evidence for the two-fluid model has been discussed by e.g. Roland & Hetem (1996). Observational evidence for relativistic ejection of an e ± beam comes from the γ-ray observations of MeV sources (Roland & Hermsen 1995; Skibo et al. 1997) and from VLBI polarization observations (Attridge et al. 1999). The formation of X-ray and γ-ray spectra, assuming relativistic ejection of e ± beams, has been investigated by Marcowith et al. (1995, 1998) for Centaurus A. The possible existence of VLBI components with two different apparent speeds has been pointed out for the radio galaxies Centaurus A (Tingay et al. 1998), Virgo A (Biretta et al. 1999) and 3C 120 (Gómez et al. 2001). If the relativistic beam transfers some energy and/or relativistic particles to the jet, the relativistic particles in the jet will radiate and a new VLBI component with a mildly relativistic speed will be observed (3C 120 is a good example of a source showing this effect). Geometry of the model We call Ω the angle between the accretion disk and the orbital plane (XOY) of the BBH system. The component is ejected on a cone (the precession cone) with its axis in the Z OZ plane and of opening angle Ω. We assumed that the line of sight is in the plane (YOZ) and forms an angle i o with the axis Z OZ (see Figure 4). The axis η corresponds to the mean ejection direction of the VLBI component projected in a plane perpendicular to the line of sight, so the plane perpendicular to the line of sight is the plane (ηOX). We call ∆Ξ the rotation angle in the plane perpendicular to the line of sight to transform the coordinates η and X into coordinates N (north) and W (west), which are directly comparable with the VLBI observations. 
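To make this geometry concrete, a small numerical sketch of the projection onto the plane of the sky is given below. The sign conventions (in particular for η and for the sense of the ∆Ξ rotation) are one consistent choice made for illustration only; the paper's own equations (1) and (2) define the transformation actually used, with its specific sign convention.

```python
import numpy as np

def sky_projection(x, y, z, i_o, delta_xi):
    """Project a point (x, y, z) of the model frame onto the plane of the sky.
    The line of sight lies in the (YOZ) plane at angle i_o from the Z axis, as
    described above. Angles are in radians; the signs chosen here are one
    consistent convention and may differ from Roland et al. (2008)."""
    eta = z * np.sin(i_o) - y * np.cos(i_o)   # along the mean projected ejection direction
    X = x                                     # the X axis already lies in the sky plane
    # Rotate (X, eta) by delta_xi into (west, north), cf. the definition of Delta Xi
    west = X * np.cos(delta_xi) + eta * np.sin(delta_xi)
    north = -X * np.sin(delta_xi) + eta * np.cos(delta_xi)
    return west, north
```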
The coordinates W (west) and N (north) are obtained from X and η through a rotation by the angle ∆Ξ in the plane perpendicular to the line of sight (equations (1) and (2)). The sign of the coordinate W was changed from Roland et al. (2008) to use the same definition as the VLBI observations.

General perturbation of the VLBI ejection

For VLBI observations, the origin of the coordinates is black hole 1, i.e. the black hole ejecting the VLBI components. For the sake of simplicity, we assumed that the two black holes have circular orbits, i.e. e = 0. Therefore, the coordinates of the moving component in the frame of reference in which black hole 1 is the origin are given by equations (3) and (4) (Roland et al. 2008), where
- R o (z) is the amplitude of the precession perturbation, given by R o (z) = R o z c (t)/(a + z c (t)), with a = R o /(2 tan Ω),
- ω p is ω p = 2π/T p , where T p is the precession period, and k p is defined by k p = 2π/(T p V a ), where V a is the speed of the propagation of the perturbations,
- T d is the characteristic time of the damping of the perturbation,
- x 1 and y 1 are given in Roland et al. (2008).

Fig. 4. Geometry of the problem. The planes X - η and west - north are perpendicular to the line of sight. In the west - north plane, the axis η corresponds to the mean ejection direction of the VLBI component. Ω is the opening angle of the precession cone.

We define the size of the BBH system as R bin , the distance between the two black holes; it is given by equation (8). In mas units (milliarcsecond units), it is R bin [mas] ≈ 2.06 × 10 8 × R bin /D a , where R bin and D a are expressed in the same physical units, D a = D l /(1 + z) 2 is the angular distance, D l is the luminosity distance, and z is the redshift of the source. The differential equation governing the evolution of z c (t) can be obtained by defining the speed of the component (equation (9)), where v c is related to the bulk Lorentz factor by v c /c = (1 − 1/γ c 2 ) 1/2 . Using (3), (4) and (5), we find from (9) that dz c /dt is the solution of equation (10); the calculation of its coefficients A, B and C can be found in Appendix A of Roland et al. (2008). Equation (10) admits two solutions, corresponding to the jet and the counter-jet. Following Camenzind & Krockenberger (1992), if we call θ the angle between the velocity of the component and the line of sight, cos θ is given by equation (11). The Doppler beaming factor δ, characterizing the anisotropic emission of the moving component, is δ c = 1/[γ c (1 − (v c /c) cos θ)] (equation (12)).

2.4. Coordinates of the VLBI component

Solving (10), we determine the coordinate z c (t) of a point-source component ejected relativistically in the perturbed beam. Then, using (3) and (4), we can find the coordinates x c (t) and y c (t) of the component. In addition, for each point of the trajectory, we can calculate the derivatives dx c /dt, dy c /dt, dz c /dt and then deduce cos θ from (11), δ c from (12), S ν from (13) and t obs from (14). After calculating the coordinates x c (t), y c (t) and z c (t), they can be transformed to w c (t) (west) and n c (t) (north) coordinates using (1) and (2). As explained in Britzen et al. (2001), Lobanov & Roland (2005), and Roland et al. (2008), the radio VLBI component has to be described as an extended component along the beam. We call n rad the number of points (or integration steps along the beam) over which we integrate to model the component. The coordinates W c (t), N c (t) of the VLBI component are then obtained from these n rad points and can be compared with the observed coordinates of the VLBI component, which correspond to the radio peak intensity coordinates provided by model-fitting during the VLBI data reduction process. When, in addition to the radio, optical observations are available that show a peak in the light curve, this optical emission can be modeled as the synchrotron emission of a point source ejected in the perturbed beam (see Britzen et al. 2001 and Lobanov & Roland 2005).
This short burst of very energetic relativistic e ± is followed immediately by a very long burst of less energetic relativistic e ± . This long burst is modeled as an extended structure along the beam and is responsible for the VLBI radio emission. In that case the origin t o of the VLBI component is the beginning of the first peak of the optical light curve and is not a free parameter of the model.

Parameters of the model

In this section, we list the possible free parameters of the model. They are
- i o , the inclination angle,
- φ o , the phase of the precession at t = 0,
- ∆Ξ, the rotation angle in the plane perpendicular to the line of sight (see (1) and (2)),
- Ω, the opening angle of the precession cone,
- R o , the maximum amplitude of the perturbation,
- T p , the precession period of the accretion disk,
- T d , the characteristic time for the damping of the beam perturbation,
- M 1 , the mass of the black hole ejecting the radio jet,
- M 2 , the mass of the secondary black hole,
- γ c , the bulk Lorentz factor of the VLBI component,
- ψ o , the phase of the BBH system at t = 0,
- T b , the period of the BBH system,
- t o , the time of the origin of the ejection of the VLBI component,
- V a , the propagation speed of the perturbations,
- n rad , the number of steps to describe the extension of the VLBI component along the beam,
- ∆W and ∆N, the possible offsets of the origin of the VLBI component.
We will see that the parameter V a can be used to study the degeneracy of the solutions, so we can keep it constant to find the solution. The range of values that we study for the parameter V a is 0.01 × c ≤ V a ≤ 0.45 × c. The parameter n rad is known when the size of the VLBI component is known. This means that, practically, the problem we have to solve is a 15 free parameter problem. We have to investigate the different possible scenarios with regard to the sense of the rotation of the accretion disk and the sense of the orbital rotation of the BBH system. These possibilities correspond to ± ω p (t − z/V a ) and ± ω b (t − z/V a ). Because the sense of the precession is always opposite to the sense of the orbital motion (Katz 1997), we study the two cases denoted by +− and −+, corresponding to the signs of ω p (t − z/V a ) and ω b (t − z/V a ), respectively.

Introduction

In this section, we explain the method for fitting VLBI observations using either a precession model or a BBH system model. The software is freely available on request to J. Roland (roland@iap.fr). This method is a practical one that provides solutions, but the method is not unique and does not guarantee that all possible solutions are found. We calculate the projected trajectory on the plane of the sky of an ejected component and determine the parameters of the model to simultaneously produce the best fit with the observed west and north coordinates. The parameters found minimize χ 2 t = χ 2 (W c (t)) + χ 2 (N c (t)), where χ 2 (W c (t)) and χ 2 (N c (t)) are the χ 2 calculated by comparing the VLBI observations with the calculated coordinates W c (t) and N c (t) of the component. For instance, to find the inclination angle that provides the best fit, we minimize χ 2 t (i o ). A good determination of the 1 σ (standard deviation) error bar can be obtained using the definition χ 2 t = (χ 2 t ) min + 1, which provides two values (∆i o ) 1σ+ and (∆i o ) 1σ− (see Lampton et al. (1976) and Hébrard et al. (2002)). The concave parts of the surface χ 2 (i o ) contain a minimum. We can find solutions without a minimum; they correspond to the convex parts of the surface χ 2 (i o ) and are called mirage solutions.
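To make the 1 σ determination concrete, the following minimal Python sketch extracts the best value of a single parameter and its 1 σ error bars from a χ 2 profile, assuming the standard ∆χ 2 = 1 criterion of Lampton et al. (1976). The profile and function names are ours and only illustrative (they are not part of the published software); in the real problem, each point of the profile is assumed to be χ 2 t already minimized over all other free parameters.

```python
import numpy as np

def one_sigma_interval(param_grid, chi2_profile):
    """1-sigma error bars on a single parameter from a chi^2 profile,
    using the Delta chi^2 = 1 criterion (Lampton et al. 1976).
    Returns (best value, lower 1-sigma bound, upper 1-sigma bound)."""
    chi2_profile = np.asarray(chi2_profile)
    i_min = np.argmin(chi2_profile)
    best = param_grid[i_min]
    target = chi2_profile[i_min] + 1.0
    inside = param_grid[chi2_profile <= target]   # values within Delta chi^2 <= 1
    return best, inside.min(), inside.max()

# Illustrative synthetic profile chi^2(i_o): a parabola with a minimum near 4 degrees
i_o = np.linspace(2.0, 7.0, 201)
chi2 = 51.0 + 12.0 * (i_o - 4.0) ** 2
best, lo, hi = one_sigma_interval(i_o, chi2)
print(f"i_o = {best:.2f} deg, 1-sigma range [{lo:.2f}, {hi:.2f}] deg")
```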
To illustrate the properties of the surface χ 2 (i o ) we plot in Figure 5 a possible example of a profile of the solution χ 2 (i o ). In Figure 5, there are two possible solutions for which χ 2 (S ol1) ≈ χ 2 (S ol2), solution 2 is more robust than solution 1, i.e. it is the deepest one, and it will be the solution we will keep. We define the robustness of the solution as the square root of the difference between the smallest maximum close to the minimum and the minimum of the function χ 2 . A solution of robustness 3 is a 3 σ solution, i.e. 3 σ ⇔ ∆χ 2 = 9. The main difficulties we have to solve are the following: 1. find all possible solutions, 2. eliminate the mirage solutions, 3. find the most robust solutions. There are two possible solutions for which χ 2 (S ol1) ≈ χ 2 (S ol2). They correspond to the concave parts of the surface χ 2 (i o ). However, solution 2 is more robust than solution 1, i.e. it is the deepest one, and it will be the solution we adopt. For a given inclination angle of the BBH system problem, there exists a parameter that allows us to find the possible solutions. This fundamental parameter is the ratio T p /T b , where T p and T b are the precession period of the accretion disk and the binary period of the BBH system respectively (see details in paragraph 2 of section 3.3, section A.4, and section B.5). Any minimum of the χ 2 function can be a local minimum and not a global minimum. However, because we investigate a wide range of the parameter T p /T b , namely 1 ≤ T p /T b ≤ 1000, we expect to be able to find all possible solutions (the limit T p /T b ≤ 1000 is given as an indication, in practice a limit of T p /T b ≤ 300 is enough). Note that when the solution is found, it is not unique, but there exists a family of solutions. The solution shows a degeneracy and we will see that the parameter to fix the degeneragy or to find the range of parameters that provide the family of solutions is V a , the propagation speed of the perturbation along the beam. Generally, for any value of the parameters, the surface χ 2 (λ) is convex and does not present a minimum. Moreover, when we are on the convex part of the surface χ 2 (λ), one of the important parameters of the problem can diverge. The two important parameters of the problem that can diverge are 1. the bulk Lorentz factor of the e ± beam, which has to be γ b ≤ 30. This limit is imposed by the stability criterion for the propagation of the relativistic beam in the subrelativistic e − − p jet, 2. the total mass of the BBH system. The most frequent case of divergence we can find corresponds to γ b → ∞. These mirage solutions are catastrophic and must be rejected. As we will see, generally, we have to study the robustness of the solution in relation to the parameters T p /T b , M 1 /M 2 , γ and i o . Solution of the precession model In a first step, we fit a simple precession model without a BBH system. This corresponds to the precession induced by a spinning BH (Lense-Thirring effect) or by the magnetically driven precession (Caproni et al. 2006). This has the advantage of determining whether the solution corresponds to case I or to case II and of preliminarily determining the inclination angle and the bulk Lorentz factor of the ejected component. We have to investigate the different possible scenarios with regard to the sense of the rotation of the accretion disk. These possibilities correspond to ±ω p (t −z/V a ). Accordingly, we study the two cases. 
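The robustness criterion defined above can likewise be sketched in a few lines of Python; the double-well χ 2 profile below is synthetic and serves only to illustrate how the barrier adjacent to the deepest minimum sets the robustness in σ.

```python
import numpy as np

def robustness(chi2_profile):
    """Robustness (in sigma) of the global minimum of a 1-D chi^2 profile,
    following the definition in the text: the square root of the difference
    between the smallest maximum adjacent to the minimum and the minimum
    itself (so Delta chi^2 = 9 corresponds to a 3 sigma solution)."""
    chi2 = np.asarray(chi2_profile, dtype=float)
    i_min = int(np.argmin(chi2))
    barriers = []
    for step in (-1, +1):
        i = i_min
        # walk away from the minimum while chi^2 keeps increasing;
        # stop at the first turnover (local maximum) or at the profile edge
        while 0 <= i + step < chi2.size and chi2[i + step] >= chi2[i]:
            i += step
        barriers.append(chi2[i])
    return np.sqrt(max(min(barriers) - chi2[i_min], 0.0))

# Synthetic profile with two minima of different depth: the deeper (more robust)
# minimum is the solution that would be kept
x = np.linspace(0.0, 10.0, 1001)
chi2 = 70.0 - 10.0 * np.exp(-((x - 3.0) / 0.8) ** 2) - 18.0 * np.exp(-((x - 7.0) / 0.8) ** 2)
print(f"robustness of the deepest minimum: {robustness(chi2):.1f} sigma")
```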
Assuming a simple precession model, these are the steps to fit the coordinates X(t) and Y(t) of a VLBI component: 1. Determining the solution χ 2 (i o ) and the time origin of the component ejection. In this section we assume that V a = 0.1 c (as χ 2 (V a ) remains constant when V a varies, any value of V a can be used, see details in the next paragraph). We calculate χ 2 (i o ), i.e., we minimize χ 2 t (i o ) when the inclination angle varies gradually between two values. At each step of i o , we determine each free parameter λ such that ∂χ 2 t /∂λ = 0. Firstly, the important parameter to determine is the time origin of the ejection of the VLBI component. We compare the times of the observed peak flux with the modeled peak flux. The time origin is obtained when the two peak fluxes occur at the same time. The solutions corresponding to case II show a significant difference between the time origin of the ejection of the VLBI component deduced from the fit of the peak flux and the time origin obtained from the interpolation of the core separation. Second, we can make a first determination of the inclination angle and of the bulk Lorentz factor. 2. Determining the family of solutions. The solution previously found is not unique and shows a degeneracy. The parameter V a can be used to study the degeneracy of the solution. Indeed, if we calculate χ 2 (V a ) when V a varies, we find that χ 2 (V a ) remains constant. For the inclination angle found in the previous section and the parameters of the corresponding solution, we calculate χ 2 (V a ) when V a varies between 0.01 c ≤ V a ≤ 0.45 c and deduce the range of the precession period. 3. Determining the possible offset of the origin of the VLBI component. In this section, we keep V a = 0.1 c and using the inclination angle previously found and the corresponding solution, we calculate χ 2 (∆x, ∆y) when ∆x and ∆y vary (∆x and ∆y are the possible offsets of the VLBI origin). Solutions corresponding to case II show a significant offset of the space origin. Note that determining the offsets of the VLBI coordinates does not depend on the value of the inclination angle. Solution of the BBH model We have to investigate the different possible scenarios with regard to the sense of the rotation of the accretion disk and the sense of the orbital rotation of the BBH system. Because the sense of the precession is always opposite to the sense of the orbital motion, we study the two cases where we have ω Assuming a BBH model, this is the method for fitting the coordinates X(t) and Y(t) of a VLBI component: 1. Determining the BBH system parameters for various values of T p /T b . In this section, we keep the inclination angle previously found and V a = 0.1 c. We determine the BBH system parameters for different values of T p /T b , namely T p /T b = 1.01, 2.2, 4.6, 10, 22, 46, 100, and 220 for a BBH system with M 1 = M 2 (these values of T p /T b are chosen because they are equally spaced on a logarithmic scale). Generally, the BBH systems obtained with a low value of T p /T b , namely T p /T b = 1.01, 2.2, or 4.6 are systems with a large radius and the BBH systems obtained with a high value of T p /T b , namely T p /T b = 10, 22, 46, 100, or 220 are systems with a small radius. 2. Determining the possible solutions: the χ 2 (T p /T b ) -diagram. In this section, we keep the inclination angle previously found, V a = 0.1 c and M 1 = M 2 . 
The crucial parameter for finding the possible solutions is T p /T b , i.e., the ratio of the precession period and the binary period. Starting from the solutions found in the previous section, we calculate χ 2 (T p /T b ) when T p /T b varies between 1 and 300. We find that the possible solutions characterized by a specific value of the ratio T p /T b . We note that some of the solutions can be mirage solutions, which have to be detected and excluded. 3. Determining the possible offset of the space origin. In this section, we keep the inclination angle previously found, V a = 0.1 c and M 1 = M 2 . Starting with the solution found in the previous section, we calculate χ 2 (∆x, ∆y) when ∆x and ∆y vary (∆x and ∆y are the possible offsets of the VLBI origin). If we find that an offset of the origin is needed, we correct the VLBI coordinates by the offset to continue. Note that determining the offsets of the VLBI coordinates does not depend on the value of the inclination angle. 4. Determining the range of possible values of T p /T b . In this section, we keep V a = 0.1 c, M 1 = M 2 . Previously, we found a solution characterized by a value of T p /T b for a given inclination. Therefore we calculate χ 2 (i o ) when i o varies with a variable ratio T p /T b . We obtain the range of possible values of T p /T b and the range of possible values of i o . 5. Preliminary determination of i o , T p /T b and M 1 /M 2 . In this section, we keep V a = 0.1 c. This section is the most complicated one and differs for solutions corresponding to case I and case II. We indicate the main method and the main results (the details are provided in section A.7 for the fit of component S1 of 1823+568 solutions and in section B.7 for the fit of component C5 of 3C 279). We calculate χ 2 (i o ) for various values of T p /T b and M 1 /M 2 . Generally, we find that there exist critical values of the parameters T p /T b and M 1 /M 2 , which separate the domains for which the solutions exist or become mirage solutions. The curves χ 2 (i o ) show a minimum for given values (i o ) min and if necessary, we study the robustness of the solution in relation to the parameter γ, therefore we calculate χ 2 (γ) at i o = (i o ) min for the corresponding values of T p /T b and M 1 /M 2 . When these critical values are obtained, we find the domains of T p /T b and M 1 /M 2 , which produce the solutions whose robustness is greater than 1.7 σ and the corresponding inclination angle i o . 6. Determining a possible new offset correction. Using the solution found in the previous section, we calculate again χ 2 (∆x, ∆y) when ∆x and ∆y vary. When a new offset of the origin is needed, we correct the VLBI coordinates by the new offset to continue. Note that this new offset correction is smaller than the first one found previously. 7. Characteristics of the final solution to the fit of the VLBI component. We are now able to find the BBH system parameters that produce the best solution for the fit with the same method as described in point 5, Preliminary determination of i o , T p /T b and M 1 /M 2 . 8. Determining the family of solutions. The solution previously found is not unique and shows a degeneracy. The parameter V a can be used to study the degeneracy of the solution. Indeed, when we calculate χ 2 (V a ) for varying V a , we find that χ 2 (V a ) remains constant. 
Using the solution found in the previous section and the parameters of the corresponding solution, we calculate χ 2 (V a ) when V a varies between 0.01 c ≤ V a ≤ 0.45 c and deduce the range of the precession period, the binary period, and the total mass of the BBH system. 9. Determining the size of the accretion disk. Because we know the parameters of the BBH system, we can deduce the rotation period of the accretion disk and its size.

Method - Case I

4.1. Introduction: Fitting the component S1 of 1823+568

Case I corresponds to a VLBI component ejected either from the VLBI core or from an origin whose offset is smaller than or on the order of the smallest error bars of the VLBI component coordinates. It is the simplest case to solve. To illustrate the method of solving the problem corresponding to case I, we fit the component S1 of the source 1823+568 (Figures 6 and 7).

VLBI data of 1823+568

1823+568 is a quasar at a redshift of 0.664 ± 0.001 (Lawrence et al. 1986). The host galaxy is elliptical according to HST observations (Falomo et al. 1997). The jet morphology on kpc-scales is complex - a mirrored S in observations with the MTRLI at 1666 MHz and with the VLA at 2 and 6 cm (O'Dea et al. 1988). The largest extension of 1823+568 is 15", corresponding to 93 kpc. On pc-scales the jet is elongated and points in a southern direction from the core (Pearson & Readhead 1988) - in accordance with the kpc-structure. Several components could be identified in the jet, e.g., Gabuzda et al. (1989), Gabuzda et al. (1994), Gabuzda & Cawthorne (1996), and Jorstad et al. (2005). A VSOP Space VLBI image of 1823+568 has been obtained by Lister et al. (2009a). All identified components show strong polarization. The linear polarization is parallel to the jet ridge direction. Most of the components show slow apparent superluminal motion. The fast component S1 moved with an apparent velocity of about 20 c ± 2 c until 2005 and subsequently decelerated (Glück 2010). Twenty-two VLBA observations obtained at 15 GHz within the 2-cm MOJAVE survey between 1994.67 and 2010.12 have been re-analyzed and model-fitted to determine the kinematics of the individual components. For details of the data reduction and analysis see Glück (2010). The radio map of 1823+568, observed 9 May 2003, is shown in Figure 6. The data are taken from Glück (2010).

Preliminary remarks

The redshift of the source is z s ≈ 0.664, and using for the Hubble constant H o ≈ 72 km/s/Mpc, the luminosity distance of the source is D l ≈ 3882 Mpc and the angular distance is D a = D l /(1 + z) 2 . For details of the values of the data and of their error bars see Glück (2010). At 15 GHz, calling the beam size Beam, we adopted for the minimum values ∆ min of the error bars of the observed VLBI coordinates values in the range Beam/15 ≤ ∆ min ≤ Beam/12; see Section C for details concerning this choice. The core separation of the individual components as a function of time is taken from MOJAVE data (Lister et al. 2009b); for details concerning the plot and the line fits see Lister et al. (2009b). We fit component S1, which corresponds to component 4 of the MOJAVE survey. Component S1 moves fast, which may indicate that two families of VLBI components exist in the case of 1823+568. If this is the case, the nucleus of 1823+568 could contain a BBH system. For 1823+568, observations were performed at 15 GHz and the beam size is mostly circular and equal to Beam ≈ 0.5 mas.
We adopted as minimum values of the error bars the values (∆W) min ≈ Beam/12 ≈ 40 µas and (∆N) min ≈ Beam/12 ≈ 40 µas for the west and north coordinates of component S1, i.e., when the error bars obtained from the VLBI data reduction were smaller than (∆W) min or (∆N) min , they were enlarged to the minimum values. The minimum values were chosen empirically, but the adopted values were justified a posteriori by comparing the χ 2 value of the final solution with the number of constraints used to make the fit, requiring a reduced χ 2 close to 1. For the component S1, we have (χ 2 ) final ≈ 51 for 56 constraints, so the reduced χ 2 is χ 2 r = (χ 2 ) final /56 ≈ 0.91. Lister & Homan (2005) suggested that the positional error bars should be about 1/5 of the beam size. However, if we had chosen (∆W) min = (∆N) min ≈ Beam/5 ≈ 100 µas, we would have obtained (χ 2 ) final ≪ 56, indicating that the minimum error bars would be overestimated (see details in Section C). To obtain a constant projected trajectory of the VLBI component in the plane perpendicular to the line of sight, the integration step used to solve equation (10) changes when the inclination angle varies; the integration step was ∆t = 0.8 yr at i o ≈ 5 • . The trajectory of component S1 is not long enough to constrain the parameter T d , i.e., the characteristic time for the damping of the beam perturbation. We fit assuming that T d ≤ 2500 yr; this value produced a good trajectory shape. The time origin of the ejection of the component S1, deduced from the interpolation of VLBI data, is t o ≈ 1995.6 (Figure 7). Close to the core, the size of S1 is ≈ 0.24 mas, therefore we assumed that n rad = 75, where n rad is the number of steps to describe the extension of the VLBI component along the beam. At i o = 5 • with an integration step ∆t = 0.8 yr, we calculated the length of the trajectory corresponding to each integration step. The size of the component is the sum of the first n rad = 75 lengths.

Final fit of component S1 of 1823+568

Here we present the solution to the fit of S1; the details of the fit can be found in Section A. We studied the two cases ±ω p (t − z/V a ). The final solution of the fit of component S1 using a BBH system corresponds to +ω p (t − z/V a ) and −ω b (t − z/V a ). The main characteristics of the solution of the BBH system associated with 1823+568 are that the radius of the BBH system is R bin ≈ 60 µas ≈ 0.42 pc, the VLBI component S1 is not ejected by the VLBI core, and the offsets of the observed coordinates are ∆W ≈ +5 µas and ∆N ≈ 60 µas. The results of the fits obtained for T p /T b = 8.88 and T p /T b = 9.88 are given in section A.9. The solutions found with T p /T b ≈ 8.88 are slightly more robust, but both solutions can be used. To continue, we arbitrarily adopted the solution with T p /T b ≈ 8.88 and M 1 /M 2 ≈ 0.17. We deduced the main parameters of the model, which are that the inclination angle is i o ≈ 3.98 • , the angle between the accretion disk and the rotation plane of the BBH system is Ω ≈ 0.28 • (this is also the opening angle of the precession cone), the bulk Lorentz factor of the VLBI component is γ c ≈ 17.7, and the origin of the ejection of the VLBI component is t o ≈ 1995.7. The variations of the apparent speed of component S1 are shown in Figure 8.

Fig. 8. The apparent speed of component S1 increases at the beginning, is ≈ 17.5 c until 2005, and finally decreases slowly, assuming a constant bulk Lorentz factor γ c ≈ 17.7.

We can determine the Doppler factor (equation 12), and consequently, we can estimate the observed flux density (equation 13).
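For reference, the kinematic quantities quoted here can be checked with the standard relativistic beaming relations (consistent with equations (11) and (12)). The short Python sketch below uses illustrative values close to the S1 solution (γ c ≈ 17.7, i o ≈ 4 • , D l ≈ 3882 Mpc at z = 0.664, all taken from the text) and the angular conversion factor 2.06 × 10 8 mas per radian used above.

```python
import numpy as np

def beam_kinematics(gamma, theta_deg):
    """Standard relativistic beaming relations for a component of bulk Lorentz
    factor `gamma` seen at angle `theta_deg` to the line of sight: bulk speed
    (in units of c), Doppler factor, and apparent transverse speed (in c)."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    theta = np.radians(theta_deg)
    doppler = 1.0 / (gamma * (1.0 - beta * np.cos(theta)))
    beta_app = beta * np.sin(theta) / (1.0 - beta * np.cos(theta))
    return beta, doppler, beta_app

# Illustrative values close to the S1 solution quoted in the text
beta, doppler, beta_app = beam_kinematics(gamma=17.7, theta_deg=4.0)
print(f"beta = {beta:.4f}, Doppler factor = {doppler:.1f}, apparent speed = {beta_app:.1f} c")

# Angular scale: D_a = D_l/(1+z)^2 with D_l ~ 3882 Mpc at z = 0.664 (text values),
# and 1 rad = 2.06265e8 mas, so R_bin ~ 0.42 pc corresponds to ~60 microarcsec
d_a_pc = 3882.0e6 / (1.0 + 0.664)**2
print(f"0.42 pc -> {2.06265e8 * 0.42 / d_a_pc * 1e3:.0f} microarcsec")
```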
This flux density estimate was used to fit the temporal position of the peak flux and to determine the temporal origin of the ejection of the VLBI component (see Section A.1 for the details). The fit of the two coordinates W(t) and N(t) of the component S1 of 1823+568 is shown in Figure 9.

Fig. 9. Fit of the two coordinates W(t) and N(t) of component S1 of 1823+568. They correspond to the solution with T p /T b ≈ 8.88, M 1 /M 2 ≈ 0.17, and i o ≈ 3.98 • . The points are the observed coordinates of component S1 that were corrected by the offsets ∆W ≈ +5 µas and ∆N ≈ 60 µas (the VLBI coordinates and their error bars are taken from Glück (2010)). The red lines are the coordinates of the component trajectory calculated using the BBH model.

The points are the observed coordinates of component S1 that were corrected by the offsets ∆W ≈ +5 µas and ∆N ≈ 60 µas, and the red lines are the coordinates of the component trajectory calculated using the BBH model with the solution parameters given above (T p /T b ≈ 8.88, M 1 /M 2 ≈ 0.17, i o ≈ 3.98 • ). Finally, we compared this solution with the solution obtained using the precession model. The χ 2 min (i o ) is about 51 for the fit using the BBH system and about 67 for the precession model (see section A.1), i.e., the BBH system solution is a 4 σ better solution. To fit the ejection of component S1 we used 56 observations (the west and north coordinates corresponding to the 28 epochs of observation), so the reduced χ 2 is χ 2 r = 51/56 ≈ 0.91, indicating that the minimum values used for the error bars are correct.

Determining the family of solutions

For the inclination angle previously found, i.e., i o ≈ 3.98 • , T p /T b ≈ 8.88, M 1 /M 2 ≈ 0.17, and R bin ≈ 60 µas, we gradually varied V a between 0.01 c and 0.45 c. The function χ 2 (V a ) remained constant, indicating a degeneracy of the solution. We deduced the range of variation of the BBH system parameters. They are given in Table 1. The period of the BBH system is not obviously related to a possible periodicity of the radio or the optical light curve.

Determining the size of the accretion disk

From the knowledge of the mass ratio M 1 /M 2 ≈ 0.17 and the ratio T p /T b ≈ 8.88, we calculated in the previous section the mass of the ejecting black hole M 1 , the orbital period T b , and the precession period T p for each value of V a . The rotation period of the accretion disk, T disk , is given by equation (20) (Britzen et al. 2001). Thus we calculated the rotation period of the accretion disk, and assuming that the mass of the accretion disk is M disk ≪ M 1 , the size of the accretion disk R disk is given by equation (21). We found that the size of the accretion disk does not depend on V a and is R disk ≈ 0.090 pc ≈ 0.013 mas.

Introduction: Application to component C5 of 3C 279

Case II corresponds to an ejection of the VLBI component with an offset of the origin of the component larger than the smallest error bars of the VLBI component coordinates. This is the most difficult case to solve because the data have to be corrected by an unknown offset that is larger than the smallest error bars. When we apply the precession model, there are two signatures of case II, which are 1. the problem of the time origin of the VLBI component, and 2. the shape of the curve χ 2 t (i o ). Using the precession model, we modeled the flux and compared the time position of the first peak flux with the time position of the observed peak flux.
If the origin time deduced from interpolating the VLBI data was very different from the origin time deduced from the precession model, we concluded that there is a time origin problem (see Section B.1). We show that this origin-time problem is related to the offset of the space origin of the VLBI component, i.e., the VLBI component is not ejected by the VLBI core and this offset is larger than the smallest error bars (see Section B.3). When the offset of the space origin is larger than the smallest error bars of the component positions and the VLBI coordinates are not corrected by this offset, the curve χ 2 t (i o ) can have a very characteristic shape: 1. the inclination angle is limited to a specific interval, and 2. the function χ 2 t (i o ) does not show a minimum inside this interval (see Section B.1).

MOJAVE data of 3C 279

The radio quasar 3C 279 (z = 0.536; Marziani et al. 1996) is one of the brightest extragalactic radio sources and has been observed and studied in detail for decades. Superluminal motion in the outflow of the quasar was found by Whitney et al. (1971) and Cohen et al. (1971). Thanks to the increasing resolution and sensitivity of modern observation techniques, a more complex picture of 3C 279 appeared, including multiple superluminal features moving along different trajectories downstream in the jet (Unwin et al. 1989). The apparent speeds of these components span an interval between 4 c and 16 c (Cotton et al. 1979; Wehrle et al. 2001). We used the MOJAVE observations of 3C 279 (Lister et al. 2001). Seventy-six VLBA observations obtained at 15 GHz within the 2-cm MOJAVE survey between 1999.25 and 2007.64 were re-analyzed and model-fitted to determine the coordinates of the VLBI components. We used the NRAO Astronomical Image Processing System (AIPS) to calibrate the data. We performed an amplitude calibration and applied a correction for the atmospheric opacity for the high-frequency data (ν > 15 GHz). The parallactic angle correction was taken into account before we calibrated the phases using the pulse calibration signal and a final fringe fit. The time- and frequency-averaged data were imported to DIFMAP (Shepherd 1997), where we used the CLEAN and MODELFIT algorithms for imaging and model fitting, respectively. The fully calibrated visibilities were fitted in DIFMAP using the algorithm MODELFIT and 2D circular Gaussian components. These components were characterized by their flux density, S mod , position r mod , position angle (P.A.), θ mod (measured from north through east), and their full-width at half-maximum (FWHM). The number of fitted Gaussians was not limited a priori; we only added a new component when it decreased the χ 2 value significantly. This approach led to the minimum number of Gaussians that can be regarded as a reliable representation of the source. We modeled each epoch separately to avoid biasing effects. The kinematics of the source could thus be analyzed by tracking the fitted components. The average beam for the 15 GHz observations is 0.51 mas × 1.34 mas. The radio map of 3C 279, observed 15 June 2003, is shown in Figure 10. The data are taken from Lister et al. (2009a). The separation of the VLBI components from the core as a function of time is shown in Figure 11.

Preliminary remarks

The redshift of 3C 279 is z ≈ 0.536, and using for the Hubble constant H o ≈ 72 km/s/Mpc, the luminosity distance of the source is D l ≈ 3070 Mpc and the angular distance is D a = D l /(1 + z) 2 . For details of the values of the data see Lister et al. (2009a).
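As a quick consistency check of the angular scale used in the following sections, the short sketch below converts the quoted luminosity distance of 3C 279 into an angular distance and a mas-per-pc scale, using only numbers given in the text and the 2.06 × 10 8 mas per radian conversion; the BBH radius of ≈ 2.7 pc quoted later then corresponds to roughly 420 µas.

```python
# Consistency check of the angular scale for 3C 279, using only values quoted
# in the text (z = 0.536, D_l ~ 3070 Mpc) and 1 rad = 2.06265e8 mas.
z = 0.536
d_l_mpc = 3070.0
d_a_mpc = d_l_mpc / (1.0 + z) ** 2            # angular distance, D_a = D_l/(1+z)^2
mas_per_pc = 2.06265e8 / (d_a_mpc * 1.0e6)    # 1 Mpc = 1e6 pc
print(f"D_a = {d_a_mpc:.0f} Mpc, scale = {mas_per_pc:.3f} mas/pc")
print(f"R_bin = 2.7 pc -> {2.7 * mas_per_pc * 1e3:.0f} microarcsec")
```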
Because the observations were performed at 15 GHz and the beam size was 0.51 mas × 1.34 mas, we adopted for the minimum values of the error bars the values (∆W) min ≈ Beam/15 ≈ 34 µas and (∆N) min ≈ Beam/15 ≈ 89 µas for the west and north coordinates of component C5. The adopted values were justified a posteriori by comparing the χ 2 value of the final solution with the number of constraints used to make the fit, requiring a reduced χ 2 close to 1. For the component C5, we have (χ 2 ) final ≈ 150 for 152 constraints, thus the reduced χ 2 is (χ 2 ) r ≈ 0.99. It has been suggested by Lister & Homan (2005) that the positional error should be within 20% of the convolving beam size, i.e., ≈ Beam/5. See Section C for details concerning the choice adopted in this article, the determination of the χ 2 , and the characteristics of the solution obtained when the minimum error bars are as large as ≈ Beam/5.

Fig. 11. Separation from the core for the different VLBI components of the source 3C 279 from MOJAVE data (Lister et al. 2009b). For details concerning the plotted line fits see Lister et al. (2009b). We fit component C5, which is ejected from an origin with a large offset from the VLBI core.

The trajectory of component C5 is not long enough to constrain the parameter T d , i.e., the characteristic time for the damping of the beam perturbation. We fit assuming that T d ≤ 2000 yr. The time origin of the ejection of the component C5 cannot be deduced easily from the interpolation of VLBI data (Lister et al. 2009b). However, we show in Section B.1 how, using the precession model, it is possible to obtain the minimum time origin of the VLBI component by comparing the time position of the calculated first peak flux with the observed time position of the first peak flux. Close to the core, the size of C5 is ≈ 0.25 mas, therefore we assumed that n rad = 75, where n rad is the number of steps to describe the extension of the VLBI component along the beam.

Final fit of component C5 of 3C 279

Here we present the solution to the fit of C5; the details of the fit can be found in Section B. The fit of component C5 using a BBH system corresponds to −ω p (t − z/V a ) and +ω b (t − z/V a ). The main characteristics of the solution of the BBH system associated with 3C 279 are that the radius of the BBH system is R bin ≈ 420 µas ≈ 2.7 pc, the VLBI component C5 is not ejected by the VLBI core and the offsets of the observed coordinates are ∆W ≈ +405 µas and ∆N ≈ +110 µas, the ratio T p /T b is T p /T b ≈ 140, and the ratio M 1 /M 2 is M 1 /M 2 ≈ 2.75. The results of the fits obtained for T p /T b ≈ 140 and M 1 /M 2 ≈ 2.75 are given in Appendix B.9. Adopting the solution with T p /T b ≈ 140 and M 1 /M 2 ≈ 2.75, we deduced the main parameters of the model. We can determine the Doppler factor (equation 12), and consequently we can estimate the observed flux density (equation 13). Using the precession model, we fitted the temporal position of the peak flux and determined the temporal origin of the ejection of the VLBI component (see Section B.1 for the details). Using the BBH model, we calculated and plotted in Figure 13 the flux variations of C5 using equation (A.1). We found that the time origin of the ejection of component C5 is t o ≈ 1999.03.
Although equation (A.1) is a rough estimate of the flux density variations, it allows us to check the time origin of the ejection of the VLBI component found using the BBH model, to compare the time position of the modeled first peak flux with the observed first peak flux, to obtain a good shape of the variation of the flux density during the first few years, and to explain the difference between the radio and the optical light curves. In some cases, in addition to the radio, optical observations show a light curve with peaks separated by about one year, see for instance the cases of 0420-016 (Britzen et al. 2001) and 3C 345 (Lobanov & Roland 2005). Using equation (A.1), the optical emission can be modeled as the synchrotron emission of a point source ejected in the perturbed beam (Britzen et al. 2001; Lobanov & Roland 2005). This short burst of very energetic relativistic e ± is followed immediately by a very long burst of less energetic relativistic e ± . This long burst is modeled as an extended structure along the beam and is responsible for the VLBI radio emission. Finally, we compared this solution with the solution obtained using the precession model. The χ 2 min (i o ) is about 151.4 for the fit using the BBH system and > 1000 for the precession model (see section B.1). To fit the ejection of component C5 we used 152 observations (76 epochs), so the reduced χ 2 is χ 2 r = χ 2 min /152 ≈ 0.996.

Determining the family of solutions

The solution is not unique, but there exists a family of solutions. For the inclination angle previously found, i.e., i o ≈ 10.4 • , and using the parameters of the corresponding solution, i.e., T p /T b ≈ 140, M 1 /M 2 ≈ 2.75 and R bin ≈ 420 µas, we gradually varied V a between 0.01 c and 0.45 c. The function χ 2 (V a ) remained constant, indicating a degeneracy of the solution, and we deduced the range of variation of the BBH system parameters. They are given in Table 2.

Determining the size of the accretion disk

From the knowledge of the mass ratio M 1 /M 2 ≈ 2.75 and the ratio T p /T b ≈ 140, we calculated in the previous section the mass of the ejecting black hole M 1 , the orbital period T b , and the precession period T p for each value of V a . We calculated the rotation period of the accretion disk, T disk , using (20). Assuming that the mass of the accretion disk is M disk ≪ M 1 , the size of the accretion disk R disk is calculated using (21). We found that the size of the accretion disk does not depend on V a and is R disk ≈ 0.26 pc ≈ 0.041 mas.

Comparison of the trajectories of C5 and C10

We see from Figure 11 that components C5 and C6 probably follow the same trajectory, whereas component C10 follows a different trajectory from C5 and C6. Thus, using the MOJAVE data (Lister et al. 2009b), we plot in Figure 15 the trajectories of C5 and C10. We found that component C10 is probably ejected by the VLBI core, that component C5 is ejected with a large offset from the VLBI core, and that components C5 and C10 follow two different trajectories and are not ejected from the same origin, indicating that the nucleus of 3C 279 contains a BBH system.
Discussion and conclusion

We showed how, from the knowledge of the coordinates West(t) and North(t) of an ejected VLBI component, one can determine whether the nucleus contains a BBH system and derive the characteristics of the BBH system and of the ejection.

From the fit of the coordinates of component S1 of 1823+568, the main characteristics of the final solution of the BBH system associated with 1823+568 are that the radius of the BBH system is R bin ≈ 60 µas ≈ 0.42 pc, the VLBI component S1 is not ejected by the VLBI core, and the offsets of the observed coordinates are ∆W ≈ +5 µas and ∆N ≈ 60 µas, the ratio T p /T b is 8.88 ≤ T p /T b ≤ 9.88, the ratio M 1 /M 2 is 0.095 ≤ M 1 /M 2 ≤ 0.25, the inclination angle is i o ≈ 4.0 • , the bulk Lorentz factor of the VLBI component is γ c ≈ 17.7, and the origin of the ejection of the VLBI component is t o ≈ 1995.7.

From the fit of the coordinates of component C5 of 3C 279, the main characteristics of the final solution of the BBH system associated with 3C 279 are that the radius of the BBH system is R bin ≈ 420 µas ≈ 2.7 pc, the VLBI component C5 is not ejected by the VLBI core, the offsets of the observed coordinates are ∆W ≈ +405 µas and ∆N ≈ +110 µas, the ratio T p /T b is T p /T b ≈ 140, the ratio M 1 /M 2 is M 1 /M 2 ≈ 2.75, the inclination angle is i o ≈ 10.4 • , and the origin of the ejection of the VLBI component is t o ≈ 1999.03.

If, in addition to the radio observations, one can obtain optical, X-ray, or γ-ray observations that show a light curve with peaks, the simultaneous fit of the VLBI coordinates and this light curve puts stronger constraints on the characteristics of the BBH system. The high-frequency emission can be modeled as the synchrotron emission or the inverse Compton emission of a point source ejected in the perturbed beam, see Britzen et al. (2001) for PKS 0420-014 and Lobanov & Roland (2005) for 3C 345. This short burst of very energetic relativistic e ± is followed immediately by a very long burst of less energetic relativistic e ± . This long burst is modeled as an extended structure along the beam and is responsible for the VLBI radio emission. The simultaneous fit of the VLBI coordinates and the optical light curve, using the same method as the one developed in this article, remains to be carried out. Observations of compact radio sources in the first mas show that the VLBI ejections do not follow a straight line, and modeling the ejection shows in each case studied that the nucleus contains a BBH system. Accordingly, Britzen et al. (2001) assumed that all radio sources contain a BBH system. If extragalactic radio sources are associated with galaxies formed after the merging of galaxies, and if the formation of extragalactic radio sources is related to the presence of binary black hole systems in their nuclei, we can explain why extragalactic radio sources are associated with elliptical galaxies and why more than 90% of quasars are radio-quiet quasars (e.g., Kellermann et al. 1989; Miller et al. 1990). Radio-quiet quasars are active nuclei that contain a single black hole and can be associated with spiral galaxies (Peacock et al. 1986). Although it has not been proven yet that radio-quiet quasars contain only a single black hole, the hypothesis of distinguishing between radio-loud and radio-quiet quasars on the basis of the binarity of the central engine is supported by comparing the optical properties of the two classes (Goldschmidt et al. 1999). Recent observations of the central parts of radio galaxies and radio-quiet galaxies show a systematic difference between the two classes (Kharb et al. 2012).
Because GAIA will provide positions of extragalactic radio sources within ≈ 25 µas, the link between the GAIA reference frame from optical observations of extragalactic radio sources and the reference frame obtained from VLBI observations will have to take into account the complex structure of the nuclei of extragalactic radio sources, because with a resolution of ≈ 25 µas, probably all these sources will appear as double sources, and the radio core, obtained from VLBI observations and the optical core obtained by GAIA will not necessarily be the same. We conclude, remarking that if the inner parts of the accretion disk contain a warp or precess faster than the precession of the outer part, this will produce a very small perturbation that will produce a day-to-month variability of the core flux (Roland et al. 2009). Because the function χ 2 t (i o ) is mostly flat between 4 and 10 degrees, to continue we abitrarily adopted the inclination angle i o ≈ 6 • . The main results of the fit for the precession model are that 1. the opening angle fo the precession cone is Ω ≈ 0.46 • , 2. the bulk Lorentz factor of S1 is γ c ≈ 20, 3. the origin of S1 is t o ≈ 1995.7, and 4. χ 2 (i o ≈ 6 • ) ≈ 67.4. A.2. Determining the family of solutions The solution is not unique. For the inclination angle previously found, i.e., i o ≈ 6 • and using the parameters of the corresponding solution, we gradually varied V a between 0.01 c and 0.45 c. At each step of V a , we minimized the function χ 2 t (λ), where λ are the free parameters. The function χ 2 (V a ) remained constant, indicating a degeneracy of the solution, and we obtained the range of possible values for the precession period given in Table 3. A.3. Determining the BBH system parameters Because the precession is defined by +ω p (t − z/V a ), the BBH system rotation is defined by −ω b (t − z/V a ). In this section, we kept the inclination angle previously found, i.e., i o ≈ 6 • and V a = 0.1 c. To determine the BBH system parameters corresponding to a value of T p /T b , we minimized χ 2 t (M 1 ) when the mass of the ejecting black hole M 1 varied gradually between 1 M to a value corresponding to M 1 /M 2 = 2 with a starting value of M 2 , such that 10 6 ≤ M 2 ≤ 10 9 . During the minimization M 2 is a free parameter, and at each step of M 1 , we minimized the function χ 2 t (λ), where λ are the free parameters. Thus we constrained the parameters of the BBH system when the two black holes have the same masses, i.e., M 1 = M 2 . We determined the parameters of the BBH system model for different values of the parameter T p /T b , namely T p /T b = 4.6, 10, 22, 46, 100, and 220. For a given value of T p /T b , we found the radius of the BBH system defined by Equation (8). Note that the radius of the BBH system does not depend on the starting value of M 2 . In this section, we kept the inclination angle previously found, i.e., i o ≈ 6 • , V a = 0.1 c and assumed M 1 = M 2 . The diagram χ 2 (T p /T b ) provides the possible solutions at a given inclination angle. Some of the solutions can be mirage solutions when i o varies. We calculated χ 2 (T p /T b ) for 1 ≤ T p /T b ≤ 300. We started for each value of the BBH system parameters found in the previous section, i.e., corresponding to the values of T p /T b = 4.6, 10, 22, 46, 100, and 220, and covered the complete interval 1 ≤ T p /T b ≤ 300. For instance, if we started at T p /T b = 22, we covered the ranges varying T p /T b from 22 to 1 and from 22 to 300. 
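The scan of the χ 2 (T p /T b ) diagram just described can be sketched schematically as follows. The function fit_at_fixed_ratio is a placeholder for the actual minimization of χ 2 t over the remaining free parameters at a fixed T p /T b (it is not implemented here), and the toy curve only illustrates how candidate solutions are read off as local minima of the diagram; each candidate must still be checked against the divergence criteria (e.g., γ b ≤ 30) before being accepted.

```python
import numpy as np

def scan_tp_over_tb(fit_at_fixed_ratio, ratios=None):
    """Build a chi^2(T_p/T_b) diagram: for each trial ratio, the supplied
    function is assumed to minimize chi^2 over all remaining free parameters
    and to return that minimum.  Local minima of the diagram are the candidate
    solutions of the BBH system."""
    if ratios is None:
        ratios = np.logspace(0.0, np.log10(300.0), 60)   # 1 <= T_p/T_b <= 300
    chi2 = np.array([fit_at_fixed_ratio(r) for r in ratios])
    # indices of local minima of the diagram
    is_min = np.r_[True, chi2[1:] < chi2[:-1]] & np.r_[chi2[:-1] < chi2[1:], True]
    return ratios, chi2, ratios[is_min]

# Toy stand-in for the real fit: a smooth curve with a minimum near T_p/T_b ~ 10
toy_fit = lambda r: 60.0 + 20.0 * (np.log10(r) - 1.0) ** 2
ratios, chi2, candidates = scan_tp_over_tb(toy_fit)
print("candidate solutions at T_p/T_b ~", np.round(candidates, 2))
```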
We found the possible solutions of the BBH system, i.e,. the solutions that correspond to the minima of χ 2 (T p /T b ). They are given in Table 4. Solutions 1 and 5 are excluded because they have γ c > 30 . There are three possible solutions, the best one is Solution 2, which corresponds to a BBH system whose radius is R bin ≈ 60 µas . In the following, we continue with Solution 2. A.5. Possible offset of the origin of the ejection In this section, we kept the inclination angle previously adopted, i.e., i o ≈ 6 • . We assumed that V a = 0.1 c, M 1 = M 2 , T p /T b = 11.45 and used the parameters of Solution 2 previously found. To test whether the VLBI component is ejected from the VLBI core or from the second black hole, we calculated χ 2 (∆W, ∆N), where ∆W and ∆N are offsets in the west and north directions. The step used in the west and north directions is 5 µas. At each step of ∆W and ∆N, we minimized the function χ 2 t (λ), where λ are the free parameters ( Figure A.2). The radius of the BBH system was left free to vary during the minimization. Fig. A.2. Calculation of χ 2 (∆W, ∆N) using the BBH model. Nonzero offsets are possible, but the size of the offset must be the same as the radius of the BBH system calculated at this point. This is the case if the offsets are ∆W 1 ≈ 0.010 mas and ∆N 1 ≈ 0.070 mas. We determined the offset at i o ≈ 6 • ; it does not depend on the value of the adopted inclination angle. The minimum of χ 2 (∆W, ∆N) is ≈ 49.5, and we see from Figure A.2 that the corresponding non-zero offsets are with ∆N ≥ 0.060 mas. However, all points with the smallest χ 2 (∆W, ∆N) are not possible. Indeed, for a point with the smallest χ 2 , the size of the offset offset must be equal to the radius of the BBH system calculated at this point. This is the case if the offsets are ∆W 1 ≈ +0.010 mas and ∆N 1 ≈ +0.070 mas. The radius of the BBH system at this point is R bin ≈ 70 µas and the offset size is ≈ 71 µas, i.e., the offset and the radius of the BBH system are the same at this point. Therefore we conclude that the VLBI component S1 is not ejected from the VLBI core, but from the second black hole of the BBH system, the radius of the BBH system is R bin ≈ 71 µas. It is about twice the smallest error bars of the observed VLBI component coordinates (the component positions), but it is significantly detected (2 σ from Figure A.2). We must correct the VLBI coordinates from the offset before we continue. Note that determining the offset of the origin does not depend on the value adopted for the inclination angle. This was shown by calculating the offset at different inclination angles, i.e., i o ≈ 4 • , 5 • , 7 • . A.6. Determining T p /T b From this point onward, the original coordinates of the VLBI component S1 are corrected for the offsets ∆W 1 and ∆N 1 found in the previous section. In this section, we assumed that R bin = 71 µas, M 1 = M 2 and V a = 0.1 c. Previously, we found that Solution 2 is characterized by T p /T b ≈ 11.45 for i o ≈ 6 • . In this section we obtain the range of possible values of T p /T b when i o varies. We calculated the funtion χ 2 (i o ) in the inteval 2 • ≤ i o ≤ 7 • , assuming that the ratio T p /T b is free. The relation between T p /T b and i o is plotted in Figure A In this section, we assumed that V a = 0.1 c and the radius of the BBH system is R bin = 71 µas. We varied i o between 2 and 7 degrees and calculated χ 2 (i o ) for various values of T p /T b and M 1 /M 2 . The values of T p /T b investigated are T p /T b = 11. 
45, 8.88, 8.11, 7.76, and 7.62 The ratio M 1 /M 2 = 0.37. The function χ 2 (i o ) has a minimum for T p /T b ≈ 8.88 and i o ≈ 4. The robustness of this solution, defined as the square root of the difference χ 2 (γ = 30) − χ 2 (min), is ≈ 1.8 × σ. The main results are that when i o is larger than about 6 degrees, the bulk Lorentz factor increases and becomes greater than 30, which is excluded, the critical value of M 1 /M 2 ≈ 0.5, if M 1 /M 2 > 0.5, the solution χ 2 (i o ) is a mirage solution, if M 1 /M 2 < 0.5, the solution χ 2 (i o ) has a minimum, the solutions with a robustness larger than 1.7 σ are those with M 1 /M 2 < 0.37 (see Table 8), when M 1 /M 2 decreases, the solutions are more robust, but they are of lower quality, i.e., their χ 2 (min) increases (see Table 8), and when M 1 /M 2 < 0.5, the value of T p /T b that produces the best fit is T p /T b ≈ 8.88, independently of the value of M 1 /M 2 . We present in Table 5 the results of solutions corresponding to T p /T b ≈ 8.88 and M 1 /M 2 = 0.1, 0.25 and 0.37. In this section, we assumed V a = 0.1 c. Using the solution found in the previous section (Table 8), we can verify whether if there is an additional correction to the offset of the origin of the VLBI component. For this, we calculated χ 2 (∆W, ∆N), where ∆W and ∆N are offsets in the west and north directions. We assumed the radius of the BBH system to be free to vary. We found that a small additional correction is needed ∆W 2 ≈ −0.005 mas and ∆N 2 ≈ −0.010 mas. Finally, we found that the total offset is ≈ 60 µas and the radius of the BBH system is also R bin ≈ 60 µas. A.9. Final fit of component S1 of 1823+568 From this point onward, the coordinates of the VLBI component S1 are corrected for the new offsets ∆W 2 and ∆N 2 found in the previous section. In this section, we assumed V a = 0.1 c and R bin = 60 µas. We can now find the final solution for S1. We calculated χ 2 (i o ) for various values of T p /T b assuming M 1 /M 2 ≈ 0.25. We found that the best range for T p /T b is: 8.88 ≤ T p /T b ≤ 9.88. With this we can estimate the range of the mass ratio assuming T p /T b ≈ 8.88 and T p /T b ≈ 9.88. We defined the range of the mass ratio in the following way: 1. we found the mass ratio that produces a solution of at least 1.7 σ robustness, and 2. we found the mass ratio that produces a solution that is poorer by 1 σ than the previous one, but that is more robust. The results of the fit are presented in Tables 6 and 7. The improvement of the solutions of Tables 6 and 7 compared to the solutions of Table 5 is due to the new offset and the new value of the BBH system radius. We see that the solutions found with T p /T b ≈ 8.88 are slightly more robust, but both solutions can be used. The characteristics of the final solution of the BBH system associated with 1823+568 are given in section 4.4. We studied the two cases ±ω p (t − z/V a ). The final solution of the fit of component C5 of 3C 279 using a BBH system corresponds to −ω p (t − z/V a ), therefore we discuss only this case in this appendix. To fit the component C5, we assumed T d ≤ 2000 yr. In this section, we assume that V a = 0.1 c. The range of inclination explored is 0.5 • ≤ i o ≤ 10 • . To begin, we allowed the time origin of the VLBI component to be a free parameter. We assumed 1997.0 ≤ t o ≤ 1998.5. We found that the function χ 2 t (i o ) is characteristic of a function corresponding to case II (see Section 5.1), and the possible range for the inclination angle is [0.5, 5.5]. 
The time origin is t o ≈ 1997.52 when i o → 0.55, and the time origin is t o ≈ 1998.15 when i o → 5.5. We plotted the first peak flux corresponding to the solution t o ≈ 1998.15 and i o → 5.5 (the solution with the smallest χ 2 ) and found that it is too early by at least eight months (green curve in Figure B.2). As indicated in Section A.1, we do not aim to fit the flux light curve, but we wish to compare the time position of the modeled first peak flux with the observed first peak flux (Figure B.2). Next, we allowed t o to be a free parameter in the range 1998.80 ≤ t o ≤ 1999.10 and calculated the new function χ 2 t (i o ). The possible range for the inclination angle is reduced to 0.8 • ≤ i o ≤ 4.3 • . The plots of χ 2 t (i o ) and γ(i o ) are presented in Figure B.1. We plotted the first peak flux corresponding to the solution t o ≈ 1998.80 and i o → 4.3 (red curve in Figure B.2). From Figure B.2, we conclude that the minimum time for the ejection of C5 is t o ≥ 1998.80. The behavior of the functions χ 2 (i o ) and γ c (i o ) is the second signature of case II, i.e., that the offset is larger than the smallest error bars of the VLBI component coordinates. We see from Figure B.1 that the bulk Lorentz factor is γ c ≥ 22. Because the function χ 2 (i o ) does not show a minimum, we arbitrarily chose an inclination angle such that 22 ≤ γ ≤ 26 and the corresponding χ 2 (i o ) is the smallest. To continue, we chose i o ≈ 2.98 • and the corresponding parameters of the precession solution (the χ 2 of this solution is χ 2 ≈ 1211 and its bulk Lorentz factor is γ ≈ 22.6). We used this solution to apply the method explained in section 3, and we will see in the following how the BBH system model allows us to find the concave part of the function χ 2 (i o ).

B.2. Determining the family of solutions (precession model)

The solution is not unique. For the inclination angle previously found, i.e., i o ≈ 2.98 • , and using the parameters of the corresponding solution, we gradually varied V a between 0.01 c and 0.45 c. The function χ 2 (V a ) remains constant, indicating a degeneracy of the solution, and we obtained the range of possible values for the precession period given in Table 8.

In this section, we kept the inclination angle previously found, i.e., i o ≈ 2.98 • . We assumed that V a = 0.1 c and used the parameters of the solution previously found. To test whether the VLBI component is ejected from the VLBI core or if it is ejected with an offset of the origin, we calculated χ 2 (∆W, ∆N), where ∆W and ∆N are offsets in the west and north directions, using the precession model. The step used in the west and north directions is 10 µas. We see from Figure B.3 that non-zero offsets are possible and that the smallest offsets of the coordinates are ∆W ≈ +0.300 mas and ∆N ≈ +0.280 mas, which, a priori, corresponds to an offset of the space origin of ≥ 410 µas or to a BBH system of radius R bin ≥ 410 µas. This minimum offset corresponds to an improvement of about 28 σ. Although the offset of the space origin can be estimated using the precession model, it cannot be explained if we assume that the nucleus contains a single black hole; it can be explained if we assume that the nucleus contains a BBH system. It is important to note that the offset does not depend on the inclination angle chosen in section B.1. Indeed, we took the solution corresponding to i o ≈ 1.5 • , whose χ 2 is χ 2 ≈ 1610 and whose bulk Lorentz factor is γ ≈ 24, and we calculated χ 2 (∆W, ∆N), which yielded the same result.
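The χ 2 (∆W, ∆N) maps used here and in the following sections are simple grid scans. A schematic sketch of such a scan is given below; fit_with_offset is a placeholder for the actual minimization of χ 2 t over all remaining free parameters once the observed coordinates have been shifted by the trial offset (it is not implemented here), and the toy surface is only illustrative.

```python
import numpy as np

def offset_map(fit_with_offset, w_range, n_range, step=0.010):
    """Grid scan of chi^2 over trial offsets (Delta W, Delta N) of the origin
    of the VLBI component, in mas (0.010 mas = 10 microarcsec steps, as used
    for the precession-model map of 3C 279).  `fit_with_offset(dw, dn)` is
    assumed to shift the observed coordinates and return the chi^2 minimized
    over all other free parameters."""
    dws = np.arange(w_range[0], w_range[1] + step / 2, step)
    dns = np.arange(n_range[0], n_range[1] + step / 2, step)
    chi2 = np.array([[fit_with_offset(dw, dn) for dn in dns] for dw in dws])
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return dws, dns, chi2, (dws[i], dns[j])

# Toy stand-in for the real fit: a paraboloid with its minimum at (0.30, 0.28) mas
toy = lambda dw, dn: 240.0 + 800.0 * ((dw - 0.30) ** 2 + (dn - 0.28) ** 2)
dws, dns, chi2, best = offset_map(toy, (0.0, 0.5), (0.0, 0.5))
print(f"best offset: DW = {best[0]:.3f} mas, DN = {best[1]:.3f} mas")
```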
It is easy to prove that the value of the offset of the space origin is related to the time origin problem. Indeed, Figure B.3 shows that there is a significant offset of the space origin when we assume that the time origin of component C5 of 3C 279 is t o ≥ 1998.80. Now, using again the precession model, we calculated the possible offset of the space origin assuming that the time origin is a free parameter ( Figure B.4). We see from Figure B.4 that non-zero offsets are possible and the smallest offsets of the coordinates are ∆W ≈ 100 µas and ∆N ≈ +150 µas, at this point, the time origin is t o ≈ 1998.45. This time origin corresponds to an ejection that is about seven months too early. Fig. B.4. Calculation of χ 2 (∆W, ∆N) using the precession model and assuming that the time origin is a free parameter. We find that non-offset are possible and the smallest offset corresponds to the point ∆W ≈ 100 µas and ∆N ≈ +150 µas. At this point, the time origin is t o ≈ 1998.45 which is ≈ 7 months too early. To continue, two possibilities arise: 1. either we keep the original VLBI coordinates and determine the parameters of the BBH system and the χ 2 (T p /T b ) -diagram. Then, we determine a first offset correction using the BBH model, and after a preliminary determination of T p /T b and M 1 /M 2 , we determine a second offset correction using the BBH model; 2. or we apply the precession offset correction to the VLBI coordinates and then we determine the parameters of the BBH system and the χ 2 (T p /T b ) -diagram. Then we determine a first offset correction using the BBH model, and after a preliminary determination of T p /T b and M 1 /M 2 , we determine a second offset correction using the BBH model. For component C5 of 3C 279, the two possibilities were followed. We found that they provide the same result in the end. In this article, we present the first one. The determination of the offsets of the origin of the ejection does not depend on the inclination angle. B.4. Determining the BBH system parameters Because the precession is defined by −ω p (t − z/V a ), the BBH system rotation is defined by +ω b (t − z/V a ). In this section we kept the inclination angle previously found, i.e., i o ≈ 2.98 • and V a = 0.1 c. In the previous section, we saw that the BBH system has a large radius, i.e., R bin ≥ 410 µas. Therefore, we determined the parameters of a BBH system with small T p /T b and a radius for the BBH system that is a free parameter (solutions with small T p /T b have large radii), i.e., we determined the parameters of a BBH system with T p /T b = 1.01 and calculated the corresponding In this section, we kept the inclination angle previously found, i.e., i o ≈ 2.98 • , V a = 0.1 c and assumed M 1 = M 2 . Furthermore we assumed that the radius of the BBH system is a free parameter. We calculated χ 2 (T p /T b ) for 1 ≤ T p /T b ≤ 300. We started for BBH system parameters corresponding to the values of T p /T b = 1.01 and cover the complete interval of T p /T b . The result is shown in Figure B.5. Fig. B.5. Calculation of χ 2 (T p /T b ). The curve corresponds to the minimization when T p /T b varies from 1 to 300. There is one solution S1. We found the one solution given in Table 9. In this section, we kept the inclination angle previously found, i.e., i o ≈ 2.98 • . We assumed that V a = 0.1 c, M 1 = M 2 . We calculated χ 2 (∆W, ∆N), where ∆W and ∆N are offsets in the west and north directions. The step used in the west and north directions is 5 µas. 
The radius of the BBH system and T_p/T_b are free parameters during the minimization. We calculated χ²(ΔW, ΔN) starting with the parameters of solution S1 found in the previous section. The result is shown in Figure B.6. We see from Figure B.6 that non-zero offsets are possible. However, not all points with the smallest χ²(ΔW, ΔN) are acceptable: for a point with the smallest χ², the offset size must be equal to the radius of the BBH system calculated at that point. This is the case if the offsets are ΔW_1 ≈ +0.490 mas and ΔN_1 ≈ +0.005 mas. The radius of the BBH system at this point is R_bin ≈ 487 µas and the offset size is ≈ 490 µas, i.e., the offset and the radius of the BBH system are the same at this point. Therefore we conclude that the VLBI component C5 is not ejected from the VLBI core, but from the second black hole of the BBH system, and the radius of the BBH system is R_bin ≈ 490 µas. This is more than ten times the smallest error bars of the VLBI component coordinates. Note that although the size of the offset found with the BBH model is the same as the size of the offset found with the precession model, the first offset corrections to the individual coordinates are not the same. However, after the preliminary determination of the ratios T_p/T_b and M_1/M_2, the second and third offset corrections provide the same final offset corrections (the two methods indicated in Section B.3 provide the same corrections in the end).

Fig. B.6. The contour levels 246, 247, 250, 255, etc. correspond to the minimum, 1 σ, 2 σ, 3 σ, etc. There is a valley of possible offsets, but the size of the offset must be the same as the radius of the BBH system; this is true when the offsets are ΔW_1 ≈ +0.490 mas and ΔN_1 ≈ +0.005 mas.

B.7. Preliminary determination of i_o, T_p/T_b and M_1/M_2

From this point onward, the original coordinates of the VLBI component C5 are corrected for the offsets ΔW_1 and ΔN_1 found in the previous section. In this section, we assumed that V_a = 0.1 c and that the radius of the BBH system is R_bin = 490 µas. For given values of the ratio M_1/M_2 = 1.0, 1.25, 1.50, 1.75, and 2.0, we varied i_o between 3.0° and 10° and calculated χ²(i_o) assuming that the ratio T_p/T_b is variable. We found that χ²(i_o) is minimum for the parameters i_o ≈ 5.9°, M_1/M_2 ≈ 1.75, and T_p/T_b ≈ 14.6.

B.8. Determining a possible new offset correction

In this section, we assumed V_a = 0.1 c. With i_o ≈ 5.9°, a variable ratio T_p/T_b, M_1/M_2 ≈ 1.75, and the parameters of the solution found in the previous section, we can verify whether there is an additional correction to the offset of the origin of the VLBI component. We calculated χ²(ΔW, ΔN), where ΔW and ΔN are offsets in the west and north directions. We assumed that the radius of the BBH system is left free to vary. The result is shown in Figure B.7. We found that an additional correction is needed, namely ΔW_2 ≈ −0.085 mas and ΔN_2 ≈ +0.105 mas. At this point the total offset is ≈ 418 µas and the radius of the BBH system is R_bin ≈ 420 µas.

Fig. B.7. The contour levels 162, 163, 166, 171, etc. correspond to the minimum, 1 σ, 2 σ, 3 σ, etc. There is a valley of possible offsets, but the size of the offset must be the same as the radius of the BBH system; this is the case when the offsets are ΔW_2 ≈ −0.085 mas and ΔN_2 ≈ +0.105 mas.

B.9. Final fit of component C5 of 3C 279

The coordinates of the VLBI component C5 are corrected for the new offsets ΔW_2 and ΔN_2. In this section, we assumed V_a = 0.1 c and R_bin = 420 µas. We can now find the final solution for the fit of C5.
We calculated χ²(i_o) for various values of T_p/T_b and M_1/M_2, namely T_p/T_b ≤ 1000 and M_1/M_2 ≤ 3.5, with a typical step Δ(M_1/M_2) = 0.25. The first important result is that when the ratio T_p/T_b is high enough, we can find non-mirage solutions in relation to the variable γ. To illustrate this result, we plot in Figure B.8 the function γ(T_p/T_b) corresponding to M_1/M_2 = 1.75. The determination of the solution used an iterative method. Starting with a given value of T_p/T_b, we calculated χ²(i_o) for various values of M_1/M_2. Then, for the parameters corresponding to the solution found, we calculated the function χ²(T_p/T_b) to determine the new value of T_p/T_b that minimizes it. Starting with the new value of T_p/T_b, we repeated the procedure. At each step of the procedure, we calculated χ²(γ) to check that the solution corresponds to the concave part and is not a mirage solution. The best fit is obtained for T_p/T_b ≈ 140 and M_1/M_2 ≈ 2.75. The results of the fits are presented in Table 10. We plot in Figure B.9 the calculation of χ²(γ) corresponding to the solution characterized by T_p/T_b = 140 and M_1/M_2 = 2.75. It shows that the solution is not a mirage solution in relation to γ. The best fit is obtained for T_p/T_b ≈ 140 (see Figure B.10). When the ratio T_p/T_b increases, the χ² remains mostly constant but the robustness of the solution in relation to γ increases. Finally, we plot in Figure B.11 the function χ²(i_o). The characteristics of the final solution of the BBH system associated with 3C 279 are given in Section 5.4.

C.1. Minimum error bar values

Observations used to fit the components S1 of 1823+568 and C5 of 3C 279 were performed at 15 GHz. We adopted for the minimum values of the error bars, Δ_min, values in the range Beam/15 ≤ Δ_min ≤ Beam/12. There are three important points concerning the minimum values used for the error bars:
1. The minimum values are chosen empirically, but the adopted values are justified a posteriori by comparing the value of χ² of the final solution with the number of constraints used to make the fit. Indeed, the reduced χ² has to be close to 1.
2. The minimum value of the error bars used at 15 GHz produces a value of (χ²)_final consistent with the value of the realistic error obtained from the VLBI Service for Geodesy and Astrometry (Schlüter & Behrend 2007), which is a permanent geodetic and astrometric VLBI program. It has been monitoring the position of thousands of extragalactic radio sources for more than 30 years. In 2009, the second realization of the International Celestial Reference Frame (ICRF2) was released (Fey et al. 2010), obtained after the treatment of about 6.5 million ionosphere-corrected VLBI group delay measurements at 2 and 8 GHz. This catalog is currently the most accurate astrometric catalog, giving absolute positions of 3414 extragalactic bodies at 8 GHz. The observations at 2 GHz are used for the ionospheric correction only; therefore, the positions at 2 GHz are not provided. The ICRF2 is found to have a noise floor of only 40 microarcseconds (µas), which is five to six times better than the previous ICRF realization (Ma et al. 1998). The positions of more than 200 radio sources are known with a precision (inflated error, or "realistic" error) better than 0.1 mas.
Since the ICRF2 release, the positional accuracy of the sources has increased, and it is likely that the next VLBI realization of the ICRF will have a noise floor lower than 40 µas.
3. The adopted minimum value of the error bars also includes typical errors due to opacity effects, which shift the measured position at different frequencies (Lobanov 1998).

Thus the minimum values for the error bars adopted at 15 GHz, using equation (19), are correct. The fit of VLBI coordinates of components of 3C 345 (work in progress) indicates that the adopted minimum values of the error bars, using equation (19), are correct for frequencies between 8 GHz and 22 GHz. At lower frequencies, the minimum values may be higher than Beam/12 due to strong opacity effects, and at 43 GHz the minimum values are also probably higher (≈ 20 µas). It has been suggested by Lister & Homan (2005) that the positional error bars should be about 1/5 of the beam size. To study the influence of the minimum values of the error bars on the characteristics of the solution, we calculate in the next sections the solution of the fit of the component C5 assuming for the minimum values of the error bars the value suggested by Lister & Homan (2005), i.e. Δ_min = Beam/5, or (ΔW)_min ≈ 102 µas and (ΔN)_min ≈ 267 µas.

C.2. Fit of C5 using the precession model

We look for a solution with −ω_p(t − z/V_a) and t_o ≥ 1998.80 (see Section B.1). In this section, we assumed that V_a = 0.1 c. The range of inclination explored is 0.5° ≤ i_o ≤ 10°. We allowed t_o to be a free parameter in the range 1998.

In this section, we kept the inclination angle used in Section B.3, i.e., i_o ≈ 3.3°. We assumed that V_a = 0.1 c and used the parameters of the solution found in Section C.2. We calculated χ²(ΔW, ΔN), where ΔW and ΔN are offsets in the west and north directions, using the precession model. The step used in the west and north directions is 10 µas. The result of the calculation is plotted in Figure C; the smallest offsets of the coordinates are ΔW ≈ +0.320 mas and ΔN ≈ +0.280 mas. They are similar to the offsets found assuming that the minimum error bars are Δ_min = Beam/15 (see Section B.3). The value of χ² at the minimum is χ²_min ≈ 67, instead of χ²_min ≈ 400 when the minimum error bars are Δ_min = Beam/15. The reduced χ², χ²_r ≈ 67/152 ≈ 0.44, indicates that these minimum error bars are too large. Accordingly, with high values for the minimum error bars, we find using the precession model that the component C5 is ejected with an offset of the space origin of at least 0.425 mas, with a robustness higher than 11 σ. The offset of the space origin can be estimated using a single black hole and the precession of the accretion disk, but it cannot be explained when we assume that the nucleus contains a single black hole; it can be explained when we assume that the nucleus contains a BBH system.

The contour levels 68, 71, 76, etc. correspond to 1 σ, 2 σ, 3 σ, etc. Non-zero offsets are possible and the smallest offsets are ΔW ≈ +0.320 mas and ΔN ≈ +0.280 mas, which corresponds to a BBH system of radius R_bin ≥ 425 µas.

C.4. χ²(T_p/T_b) diagram

Because the precession is defined by −ω_p(t − z/V_a), the BBH system rotation is defined by +ω_b(t − z/V_a). As in Section B.4, we calculated the BBH parameters for the inclination angle i_o ≈ 2.98° and the ratio T_p/T_b = 1.01 and calculated the corresponding χ²(T_p/T_b) diagram assuming M_1 = M_2 and V_a = 0.1 c.
We calculated χ²(T_p/T_b) for 1 ≤ T_p/T_b ≤ 300. The result is shown in Figure C.3. We found a possible solution of the BBH system, given in Table 11. Comparison of Tables 9 and 11 and of Figures B.5 and C.3 shows that the solutions S1 and Sol1 are mostly identical.

C.5. Determining the offset of the origin of the ejection (BBH model)

In this section, we kept the inclination angle previously found, i.e., i_o ≈ 2.98°. We assumed that V_a = 0.1 c and M_1 = M_2. We calculated χ²(ΔW, ΔN), where ΔW and ΔN are offsets in the west and north directions. The step used in the west and north directions is 5 µas. The radius of the BBH system and T_p/T_b are free parameters during the minimization. We calculated χ²(ΔW, ΔN) starting with the parameters of solution Sol1 found in the previous section. The result is shown in Figure C.4.

Fig. C.4. There is a valley of possible offsets, but the size of the offset must be the same as the radius of the BBH system; this is the case if the offsets are ΔW_1 ≈ +0.495 mas and ΔN_1 ≈ +0.005 mas. The corresponding radius of the BBH system is R_bin ≈ 495 µas.

We see from Figure C.4 that non-zero offsets are possible. However, not all points with the smallest χ²(ΔW, ΔN) are acceptable: for a point with the smallest χ², the offset size must be equal to the radius of the BBH system calculated at that point. This is the case if the offsets are ΔW_1 ≈ +0.495 mas and ΔN_1 ≈ +0.005 mas. The radius of the BBH system at this point is R_bin ≈ 495 µas and the offset size is ≈ 495 µas, i.e. the offset and the radius of the BBH system are the same at this point. Comparison of the result found in Section B.6 and of Figures C.4 and B.6 shows that the offset determined using large error bars is the same as the offset calculated with the small error bars. Therefore we conclude that the VLBI component C5 is not ejected from the VLBI core but from the second black hole of the BBH system.

C.6. Determining a possible new offset correction

From this point onward, the original coordinates of the VLBI component C5 are corrected for the offsets ΔW_1 and ΔN_1 found in the previous section. In this section, we assumed V_a = 0.1 c. As in Section B.7, we preliminarily determined the parameters T_p/T_b, M_1/M_2 and i_o. With i_o ≈ 5.9°, the ratio T_p/T_b left free and M_1/M_2 ≈ 1.75, we can verify whether there is an additional correction to the offset of the origin of the VLBI component. We calculated χ²(ΔW, ΔN), where ΔW and ΔN are offsets in the west and north directions. We assumed that the radius of the BBH system is left free to vary. The result is shown in Figure C.5. We found that an additional correction is needed, namely ΔW_2 ≈ −0.085 mas and ΔN_2 ≈ +0.085 mas. At this point the total offset is ≈ 419 µas and the radius of the BBH system is R_bin ≈ 419 µas.

Fig. C.5. There is a valley of possible offsets, but the size of the offset must be the same as the radius of the BBH system; this is the case if the offsets are ΔW_2 ≈ −0.085 mas and ΔN_2 ≈ +0.085 mas. The corresponding radius of the BBH system is R_bin ≈ 419 µas.

Thus, using the highest values for the minimum error bars, Δ_min = Beam/5, we found that the final offset is ΔW_t ≈ +0.410 mas and ΔN_t ≈ +0.090 mas, and the radius of the BBH system is R_bin ≈ 0.419 mas. These values have to be compared with the values obtained assuming the lowest minimum error bars used, Δ_min = Beam/15, which are ΔW_t ≈ +0.405 mas and ΔN_t ≈ +0.110 mas, with a BBH system radius of R_bin ≈ 0.420 mas.
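A quick check of whether an adopted minimum error bar is reasonable, as discussed in Sections C.1 to C.3, is to compute the reduced χ² of the final fit and verify that it is close to 1. The sketch below is illustrative only; the coordinate arrays and the number of fitted parameters are placeholders, not the values used in this work.

```python
import numpy as np

def reduced_chi2(w_obs, n_obs, w_mod, n_mod, sigma_w, sigma_n, n_free_params):
    """Reduced chi^2 of a fit to VLBI component coordinates (west, north).

    A value close to 1 suggests the adopted minimum error bars are sensible;
    a value well below 1 (e.g. ~0.44 for Beam/5 in Appendix C) suggests the
    error bars are too large, and a value well above 1 that they are too small.
    """
    w_obs, n_obs = np.asarray(w_obs), np.asarray(n_obs)
    chi2 = np.sum(((w_obs - w_mod) / sigma_w) ** 2 +
                  ((n_obs - n_mod) / sigma_n) ** 2)
    n_constraints = 2 * len(w_obs)          # W and N coordinates at each epoch
    dof = n_constraints - n_free_params
    return chi2 / dof
```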
Comparative Study of the Effect of Cereals (Guinea Corn and Wheat) on the Physico-Mechanical Properties of Tunga and Chanchaga Core Making Sands

The study presents the effect of cereals (Guinea corn and wheat) on the properties of clay-bonded foundry core sand from deposits in Niger State. The research analyses the physico-mechanical properties of prepared sand samples obtained from the two locations, namely the Tunga and Chanchaga core making sands. Results obtained from tests conducted on the various mixtures studied indicate that the cereals used significantly improved the strengths, both compressive and shear, in the green and dry states of the core samples produced. This shows that local additives in foundry core mixtures can effectively be used as substitutes for imported ores.

INTRODUCTION

Foundry practice in Nigeria can be traced back to before the pre-colonial era, but the country has not significantly benefitted from these foundries, partly due to the knowledge gap between local foundrymen and professionals. Foundry industries in Nigeria are faced with numerous challenges; [1] revealed that almost all foundries in Nigeria rely on the sand casting technique, with 60% of the raw materials needed being imported. The properties of sand for casting are a crucial factor in foundry practice. It is important to have good knowledge of these materials, as this greatly assists in determining their suitability for foundry use. Once known, the properties and behaviour of a sand indicate whether the mould and core moulding sand will be readily mouldable and produce defect-free castings. The properties of sand vary, and some sands tend to be defective in properties. Casting plays important roles in the production of modern equipment for transportation, communication, power, agriculture, agro-allied industries, construction, space, and the chemical and petrochemical sectors, among others. In casting, there is a need for a good binder with high temperature resistance and good collapsibility after casting, whilst giving a smooth surface finish to the cast. A binder is the second most important component after sand in the mould [2]. Proper selection of a good binder is as important as the moulding sand, as it serves to hold the sand grains together and to impart strength, resistance to erosion and breakage, and a degree of collapsibility [3][4][5][6]. Organic binders have excellent breakdown properties, which makes them ideal for cores of castings where accessibility for fettling is difficult, and the reduced fettling cost makes these binders economical. Inorganic binders (clay, aqueous sodium silicates, bentonite, silicon flour and iron oxide) give additional green strength, retard or increase the collapsibility of a core, and prevent cutting or penetration [4][5][6]. In this study, effort was made to improve the physico-mechanical properties of foundry sand by adding some additives (Guinea corn and wheat) to better enhance the foundry sand for good casting and also mitigate the high cost of raw materials (e.g. binders) whilst giving optimal casting output.

Materials

To conduct the experimental work, the sand samples were collected from two major locations in Niger State, Nigeria, namely: i. Chanchaga; ii. Tunga area. The cereals (Guinea corn and wheat) used as binder were sourced locally from seasoned farmers in the country. Other equipment included a Global Positioning System (GPS) receiver and an array of sieves.
Methods

The American Foundrymen's Society (AFS) standard was employed to determine the foundry sand properties of the samples in order to ascertain their suitability for foundry applications. Tests of the physical properties of the clay-bonded core sand were conducted and the results presented in bar graphs.

Experimental Section

The two deposits, namely Chanchaga and Tunga, were selected to provide information on their suitability for exploitation and use in foundry applications in Niger State and Nigeria at large. Five test sand samples, namely Test 1 to Test 5, were prepared from the bulk samples taken from the aforementioned locations. The proportionate percentage mixture of the wheat and Guinea corn was varied as represented below. The GPS coordinates of the two locations are: Chanchaga, latitude 9°36′50″ N, longitude 6°33′25″ E; Tunga, latitude 9°16′60″ N, longitude 6°34′60″ E.

Green and Dry Strength Test

Green compression strength tests were performed on the prepared test samples. The AFS specification was used to prepare the standard test specimens with a sufficient measured quantity of tempered sand from the prepared mixture. The mixture was placed in the specimen pedestal and rammed three times to the specified tolerance. The specimens were removed from the container and fixed into the compressive testing machine. Thereafter, for the dry compressive strength, the specimens were dried at 105 °C for about 2 hours.

Permeability Test

Permeability or porosity of the moulding sand is the measure of its ability to permit air to flow through it. The moulding sand specimen is placed in a specimen tube, and the time taken for 2000 cm³ of air at a pressure of 980 Pa to pass through the specimen is noted to determine the permeability of each sand sample. The permeability number P is given as P = (V × H)/(p × A × t), where V is the volume of air passed, H the height of the specimen, p the air pressure, A the cross-sectional area of the specimen and t the time taken for the air to pass.

Green Shear Strength

With sand similar to that used in the green compression test, a different adapter is fitted in the universal machine so that loading can now be applied to shear the sand sample. The stress required to shear the specimen along the axis is then taken as the green shear strength. The green shear strength may vary from 10 kPa to 50 kPa (1.45 psi to 7.25 psi).

Moisture Content

The moisture contents of the test specimens were determined using a moisture teller. The instantaneous moisture content values, in percent, were read from the instrument.

Chemical Constituents

The results of the chemical composition of the core-making sands from the two locations (Chanchaga and Tunga) are presented in Table 2. The results show silicon dioxide (SiO2) and alumina (Al2O3) as the major constituents, with SiO2 contents of 92.70% and 88.77% and Al2O3 contents of 2.50% and 4.24% for the two deposits respectively. Silica is very important in moulding as it imparts properties such as refractoriness, chemical resistivity and permeability to the sand [8]; the higher the percentage of silica, the better the refractoriness of the sand. Other constituents present are oxides of calcium, magnesium, sodium, iron, titanium and potassium. This is in line with recommendations in the literature [9]. From the chemical analysis of the core making sands, it can be concluded that the two deposits are safe for use in sand casting foundry applications.

Moisture Content

The moisture content of the core making sand was kept constant in this study. The aim was to bring to the fore the effect of the cereals, with the purpose of significantly enhancing the properties of the sand, as reported by [6]. It should be noted that moisture content is an extremely critical parameter that needs to be considered.
However, excess moisture content results in a lowering of the bonding between the sand particles, weakening of the bond strength and poor bonding properties [10]. This setback can be overcome using moisture-based additives while also improving bonding and other properties.

Green and Dry Compressive Strength

The green compressive strength (GCS) and dry compressive strength (DCS) determined for the Tunga and Chanchaga core making sands are represented in Figs. 1 and 2.

Permeability

The permeability number of a moulding sand depends on the degree of fineness of the sand as well as its moisture content [9]. Insufficient porosity (i.e. permeability) of moulding sand leads to casting defects such as holes and pores. The experimental results for the green permeability (GP) and dry permeability (DP) of the test samples prepared from the two deposits, as presented in the bar graphs (Figs. 1 and 2), indicate that the sand samples had good permeability; the values are within the standard range for most ferrous and non-ferrous metals [5,12,15,16].

CONCLUSION

A comparative study of the effect of cereals (Guinea corn and wheat) on the physico-mechanical properties of two different core making sand deposits has been carried out, and the following conclusions were drawn:
1) The experimental analysis shows that the physico-mechanical properties of the two deposits as core making sands agree with the standard recommended properties for various types of casting.
2) The chemical analysis results for the silica sand indicate its suitability for moulding sand applications.
3) The mixtures had good permeability for casting ferrous and non-ferrous metals. Also, the additives can be a good substitute for imported ores.
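As a worked illustration of the permeability test described in the Methods, the sketch below evaluates the AFS permeability number from the recorded test readings. The specimen dimensions and the example time are placeholder values for illustration, not measurements from this study.

```python
def afs_permeability_number(volume_cm3=2000.0, pressure_gcm2=10.0,
                            height_cm=5.08, area_cm2=20.27, time_min=1.5):
    """AFS permeability number P = (V * H) / (p * A * t).

    volume_cm3    : volume of air passed through the specimen (2000 cm^3 here)
    pressure_gcm2 : air pressure in g/cm^2 (10 g/cm^2 is approximately 980 Pa)
    height_cm     : height of the standard specimen (placeholder value)
    area_cm2      : cross-sectional area of the standard specimen (placeholder)
    time_min      : time for the air to pass, in minutes (placeholder value)
    """
    return (volume_cm3 * height_cm) / (pressure_gcm2 * area_cm2 * time_min)

# Example: a specimen that passes 2000 cm^3 of air in 1.5 minutes
print(round(afs_permeability_number(), 1))
```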
Evaluation of monte carlo to support commissioning of the treatment planning system of new pencil beam scanning proton therapy facilities Objective. To demonstrate the potential of Monte Carlo (MC) to support the resource-intensive measurements that comprise the commissioning of the treatment planning system (TPS) of new proton therapy facilities. Approach. Beam models of a pencil beam scanning system (Varian ProBeam) were developed in GATE (v8.2), Eclipse proton convolution superposition algorithm (v16.1, Varian Medical Systems) and RayStation MC (v12.0.100.0, RaySearch Laboratories), using the beam commissioning data. All models were first benchmarked against the same commissioning data and validated on seven spread-out Bragg peak (SOBP) plans. Then, we explored the use of MC to optimise dose calculation parameters, fully understand the performance and limitations of TPS in homogeneous fields and support the development of patient-specific quality assurance (PSQA) processes. We compared the dose calculations of the TPSs against measurements (DDTPSvs.Meas.) or GATE (DDTPSvs.GATE) for an extensive set of plans of varying complexity. This included homogeneous plans with varying field-size, range, width, and range-shifters (RSs) (n = 46) and PSQA plans for different anatomical sites (n = 11). Main results. The three beam models showed good agreement against the commissioning data, and dose differences of 3.5% and 5% were found for SOBP plans without and with RSs, respectively. DDTPSvs.Meas. and DDTPSvs.GATE were correlated in most scenarios. In homogeneous fields the Pearson’s correlation coefficient was 0.92 and 0.68 for Eclipse and RayStation, respectively. The standard deviation of the differences between GATE and measurements (±0.5% for homogeneous and ±0.8% for PSQA plans) was applied as tolerance when comparing TPSs with GATE. 72% and 60% of the plans were within the GATE predicted dose difference for both TPSs, for homogeneous and PSQA cases, respectively. Significance. Developing and validating a MC beam model early on into the commissioning of new proton therapy facilities can support the validation of the TPS and facilitate comprehensive investigation of its capabilities and limitations. Introduction Proton beam therapy (PBT) is becoming increasingly available for cancer treatment worldwide.The growing interest in making PBT more available reflects the favourable dose distributions that can be achieved, allowing for integral dose reduction in younger patients and the treatment of complex diseases (Foote et al 2012, Mohan 2022).With a remarkable number of facilities currently treating patients and many more in development stages (PTCOG 2023), efficient clinical commissioning standards and procedures become more critical.Commissioning of the multiple individual components of the proton therapy system is essential prior to clinical use.These include the treatment delivery system (accelerator, beamline and nozzle), the patient imaging components and the treatment planning system (TPS), amongst others (Farr et al 2021). 
The TPS is one of the key elements of the PBT system as it is used to plan, optimise, and assess patients' treatments.The clinical commissioning of the TPS includes beam data acquisition, modelling of the beam and validation of the calculated doses, for which a set of comprehensive but time-consuming experimental measurements are required.For the TPS dose validation in pencil beam scanning (PBS) systems, the American Association of Physicists in Medicine (AAPM) Task Group 185 recommends measurements of dose outputs in both uniform and non-uniform fields of varying complexity (Farr et al 2021).The uniform fields in homogeneous media should comprise monoenergetic layers and spread-out Bragg peaks (SOBPs) of different combinations of field size, range, and width (modulation).The non-uniform fields consist of a variety of patientspecific quality assurance (PSQA) plans, representative of clinical indications to be treated at the facility.Both types of fields should be verified for plans with and without beam modifying devices like range shifters (RSs) (Farr et al 2021).It is fundamental that the dose calculation algorithms used clinically are accurate and this must be assessed during commissioning, prior to starting patient treatments. Analytical pencil beam (PB) algorithms embedded within TPSs are the most popular tools for dose calculations in the clinic.The main limitation of analytical models is the poor modelling of the lateral scatter which leads to inaccurate dose calculation in complex and heterogeneous media (de Martino et al 2021, Saini et al 2018).Monte Carlo (MC) codes are considered the most accurate dose calculation engines in radiotherapy, as these simulate particle transport in matter based on fundamental particle interactions (Paganetti et al 2008, Paganetti 2012, Grassberger et al 2015, Yepes et al 2018, Tommasino et al 2018).Historically, MC algorithms were available in very specialised software packages tailored for research use, limiting their application in clinical settings (Waters et al 2007, Agostinelli et al 2003, Böhlen et al 2014).More recently, there has been an effort to develop toolkits to make general-purpose MC codes more user-friendly for researchers and clinical staff working in medical physics applications.Examples of such toolkits are TOPAS (Perl et al 2012) and GATE (Jan et al 2011, Sarrut et al 2014) for Geant4 (Agostinelli et al 2003) and Flair (Vlachoudis 2009) for FLUKA (Böhlen et al 2014).Similarly, commercial TPSs have started to incorporate MC algorithms for dose calculations-for example, AcurosPT in Eclipse (Varian Medical Systems) and the MC algorithm in RayStation (RaySearch Laboratories).To reduce computational times, strategies like simplifying or neglecting some particle transport processes (Schreuder et al 2019, Varian Medical Systems 2020) and/or GPU (Graphical Processing Units) implementations (Varian Medical Systems 2023, RaySearch Laboratories 2023) are often employed.Regardless of the superiority of MC, PB algorithms are still largely used clinically due to their convenience and availability. 
The simulation of clinical treatment plans in general purpose MC toolkits requires accurate modelling of the incident beam.MC models of clinical beams may be used to support decision making at clinical facilities.The most popular use of general purpose MC codes in proton therapy is for independent dose calculations, with multiple studies demonstrating in-house workflows for this application (Paganetti et al 2008, Tourovsky et al 2005, Grevillot et al 2012, Magro et al 2015, Fracchiolla et al 2015, Aitkenhead et al 2020, Verburg et al 2016, Guterres Marmitt et al 2020).However, there are other applications that have been less explored.The AAPM Task Group 185 suggests MC as an alternative to direct measurements to support the different stages of the commissioning process of new proton facilities (Farr et al 2021).In this context, some studies have shown that MC generated beam data could reduce the number of measurements required to configure a beam model in the TPS (Newhauser et al 2007, Clasie et al 2012).Alternatively, an adequately validated MC model of a PBS system could be developed early into the commissioning stage of a new facility to support the dose validation of the radiotherapy TPS.This could help reduce the number of TPS validation measurements performed during this process (for example, to specific field configurations where MC would indicate larger dosimetric differences), enhance the number and variety of cases tested, as well as allow a more comprehensive understanding of the dose calculation engine and its limitations before starting the treatment of patients.The number of measurements required during commissioning of new facilities are resource-and time-consuming.It is recognised that efficiency improvements during the early stages of a new facility can lead to transitioning to the routine clinical phase on schedule and reduce the risk of delays going clinical. In this work, our aim was to demonstrate the potential of MC during the TPS dose validation process for a new proton therapy facility.First, we tuned and benchmarked a model of the proton system installed in our institution using beam commissioning data and a limited number of homogeneous fields.Then, with a properly benchmarked MC implementation, we demonstrated the potential of MC to complement the extensive measurements that comprise the optimisation and validation of the TPS of new proton therapy facilities.The analysis was performed for two TPSs to validate our approach for multiple dose calculation engines.To the best of our knowledge, this is the first time an independent MC tool was investigated in detail to support the TPS dose verification and validation during the commissioning of a new PBS system. Methods and materials The proton beam system modelled in this work was the Varian ProBeam (Varian Medical Systems) installed at University College London Hospitals proton centre, in the UK.This clinical system was retrospectively modelled in GATE (Jan et al 2011, Sarrut et al 2014), an open-source MC package, and in two commercial TPS systems-Eclipse (Varian Medical Systems) and RayStation (RaySearch Laboratories). 
Beam commissioning data The beam commissioning data used consisted of measurements of integral depth-doses (IDDs) and absolute dose calibration in water and spot profiles in air, for nominal energies ranging from 70 to 245 MeV, in steps of 5 MeV.The IDDs of monoenergetic pencil beams were measured in a water phantom, using a 4.08 cm sensitive radius plane-parallel PTW Bragg peak chamber 34070 (PTW-Freiburg).The absolute dose was determined by measuring the dose at the reference depth of 2 cm water-equivalent depth, for 10 × 10 cm 2 layers, with 2.5 mm spot spacing, using a plane-parallel PTW Roos chamber 34001 (PTW-Freiburg).For each IDD, the whole curve was normalised to the absolute dose determined at the reference depth.Finally, the IDDs were scaled according to a relative biological effect (RBE) factor of 1.1 [units: Gy mm 2 MU -1 (RBE)].The spot profiles were measured, both with and without RSs in the beam's path, at the isocentre and at six distances both up and downstream from the isocentre plane (0 cm, ±10 cm, ±20 cm and ±40 cm), using the 42 × 32 cm scintillation detector XRV-4000 Hawk Beam Profiler (Logos Systems).The uncertainty of the measured profiles is within ± 0.1 mm at the full width at half maximum position, and within ±0.3 mm at centroid positions.The spot sizes in the x and y directions according to the beam's eye view coordinates were defined as one standard deviation (σ) of a Gaussian function fitted to the measured spot profiles.For spot profile measurements with each of the three RS options (40 × 30 cm 2 Lexan Polycarbonate blocks of 2, 3 and 5 cm thickness), the RSs were inserted into a fixed position within the ProBeam's fully retracted snout.The water equivalent thickness (WET) of each RS was measured using the Giraffe detector (Ion Beam Applications SA), which has an uncertainty in range determination of 0.5 mm. Beam modelling and dose calculation in Eclipse and RayStation Beam models were built for the proton convolution superposition (PCS) algorithm in Eclipse v16.1 and the MC algorithm in RayStation v12.0.100.0 following the specifications provided by the manufacturers (Varian Medical Systems 2020, RaySearch Laboratories 2019).In Eclipse, the user must provide measurements of IDDs (calibrated to units of Gy mm 2 MU −1 (RBE)) and spot profiles both with and without RSs.The PCS algorithm requires measurement data of spot profiles with RSs to model the lateral scattering of the beam in the presence of RSs.The lateral cut-off calculation parameter, σ Ecl , was set to the maximum value of 4 (unless stated otherwise).In RayStation, the user must provide measurements of IDDs, absolute dose calibration and spot profiles without RSs only, as it is not an option to import spot profiles measured with RSs.The material of the RSs was selected according to the details provided by the manufacturer. 
Beam modelling in GATE The MC model of the clinical beam was implemented using GATE v8.2.Our modelling approach consisted of a parameterisation of the pencil beam source as a function of the nominal energy.The pencil beam type source in GATE is characterised by its energy (E meanmean energy and σ Eenergy spread) and optical parameters (σ x and σ yspot size, σ θ andσ jspot divergence and ò x and ò yemittance in the x and y directions, respectively).The source was positioned beyond the nozzle exit, prior to the beam modifying devices, at 60 cm upstream from the isocentre, so that the RSs could be physically modelled in the simulations.The parametrisation was achieved by tuning iteratively the beam properties at the source to match the experimental beam commissioning data in the absence of any RSs.The RSs were then modelled at the nozzle exit as 40×30 cm 2 blocks of Lexan Polycarbonate (fraction by weight: 5.5% H, 75.5% C, 19% O) with varying thicknesses (2, 3 and 5 cm).The density of Lexan Polycarbonate is 1.21 g cm −3 according to the manufacturer specifications (Varian Medical Systems 2020); this density was empirically adjusted to 1.195 g cm −3 in the MC simulations to match the simulated and experimental WET. Figure 1 shows a summary of the beam parametrisation methodology, which consisted of three sequential optimisation steps: (1) optical parameters, (2) energy parameters and (3) absolute dose calibration.The fraction of energy scored during the optimisation of the IDDs is dependent on the optical properties (Grevillot et al 2011).Therefore, the optical parameters were determined first and were used in the tuning of the energy parameters (Grevillot et al 2011, Yeom et al 2020).The absolute dose calibration was obtained by normalising the area under the curve of the IDDs in GATE to that of the measured IDDs.The QGSP_BIC_EMZ physics list was used in all simulations and the production cuts on secondary particles (electrons, photons, positrons) were set to 0.01 mm according to the recommendation provided by Winterhalter et al (2020b).All simulations had a statistical uncertainty below 0.25%.The optimisation process was implemented in MATLAB 2019b (Mathworks).Complete details of the optimisation strategy can be found in supplementary material 1. 
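The optical part of the source parameterisation described above relies on spot sizes defined as one standard deviation of a Gaussian fitted to each measured profile. A minimal sketch of that fitting step is given below; the profile data are synthetic placeholders and the single-Gaussian form is an assumption for illustration, not the exact procedure used to tune the beam model.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_spot_sigma(position_mm, signal):
    """Fit a single Gaussian to a measured spot profile and return sigma in mm."""
    p0 = [signal.max(), position_mm[np.argmax(signal)], 3.0]  # rough initial guess
    popt, _ = curve_fit(gaussian, position_mm, signal, p0=p0)
    return abs(popt[2])

# Example with a synthetic spot profile (placeholder data, not commissioning data)
x = np.linspace(-30.0, 30.0, 121)
profile = gaussian(x, 1.0, 0.4, 6.1) + np.random.normal(0.0, 0.005, x.size)
print(f"fitted sigma = {fit_spot_sigma(x, profile):.2f} mm")
```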
Dose calculation parameters for tested plans All the plans used in this work to demonstrate the potential of MC to complement experimental measurements were optimised in Eclipse using the non-linear universal proton optimizer (NUPO) algorithm (Varian Medical Systems 2020, Nocedal and Wright 2006).Two types of plans were evaluated: homogeneous (such as layers and box-fields) and non-homogeneous (clinical PSQA fields).The doses were calculated using a 40×40×40 cm 3 water box (for homogeneous fields) or a 30 × 30 × 30 cm 3 solid water (PTW RW3) box (for non-homogeneous fields).The water material was defined by assigning to the box structure a relative stopping power (RSP) of 1 for dose calculations.The RSP for the solid water phantom was calculated for multiple solid water slabs as the ratio between the WET, measured with the Giraffe detector (Ion Beam Applications SA), and the physical thickness, and the average RSP of 1.041±0.002was considered.A resolution of 1 × 1 × 1 mm 3 was the default value used for 3D dose calculations in all algorithms.The plans were then delivered experimentally with the ProBeam at our institution, as well as recalculated independently in the other dose calculation engines-GATE and RayStation.In GATE, the parameterised beam model described in section 2.3 was utilised.For plans recalculated in water, a water box geometry was defined, whereas for plans recalculated in solid water, an image with the corresponding HUs was imported and positioned according to the plan.The water material was defined with a density of 1 g cm −3 and mean excitation energy of 78 eV (ICRU report 90 2016).The solid water material was defined as per specifications of the manufacturer (fraction by weight: 7.6% H, 90.4% C, 1.2% Ti, 0.8% O) and its density of 1.045 g cm −3 was tuned to 1.057 g cm −3 in GATE to match the experimental RSP.In all GATE simulations the number of primary particles simulated was chosen to achieve a statistical uncertainty below 0.5%.In RayStation, the doses were recalculated using the MC algorithm.Water was defined with a density of 1 g cm −3 and solid water (same elemental composition as in GATE) was defined with a density of 1.062 g cm −3 to match the experimental RSP.All plans were calculated in RayStation with a statistical uncertainty of 0.5%. Demonstration of the potential of MC to support TPS commissioning The experiments performed to demonstrate the potential of MC to support and complement the experimental measurements necessary during the TPS dose validation are summarised in figure 2 and described in detail in the following sections. Benchmarking of the beam models The first experiment consisted of benchmarking the beam models built in GATE and the two TPSs against experimental measurements with the aim of verifying the accuracy of their implementation and performance in a limited but representative number of scenarios.This step should be performed in the early stage of the commissioning to gain confidence in both the MC and the TPS's models.Therefore, the benchmarking data included the beam commissioning data and a limited number of IDDs and lateral profiles in SOBPs in water (with and without RSs). 
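The solid water RSP used above is simply the ratio of measured water-equivalent thickness to physical slab thickness, averaged over several slabs. A small sketch of that calculation is shown below; the slab values are illustrative placeholders rather than the measured data.

```python
import numpy as np

def relative_stopping_power(wet_mm, thickness_mm):
    """RSP of each slab = water-equivalent thickness / physical thickness."""
    return np.asarray(wet_mm, dtype=float) / np.asarray(thickness_mm, dtype=float)

# Placeholder slab measurements (WET from a multi-layer ionisation chamber)
wet = [10.4, 20.8, 52.1, 104.0]        # mm
thickness = [10.0, 20.0, 50.0, 100.0]  # mm
rsp = relative_stopping_power(wet, thickness)
print(f"RSP = {rsp.mean():.3f} +/- {rsp.std(ddof=1):.3f}")
```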
The three beam models were first compared against the beam commissioning data.The AAPM task group 224 recommends range tolerances for the IDDs to be within ± 1 mm and maximum differences of ±10% for the spot sizes (Arjomandy et al 2019).In GATE, IDDs and spot profiles (with and without RSs) were simulated using the parametrised beam model.In Eclipse, the calculated IDDs and spot profiles with and without RSs were exported from the system.In RayStation, the model fitted IDDs and spot profiles without RSs were exported from the system, similarly to Eclipse.However, since RayStation does not model range-shifting devices using measurement data, the modelled spot profiles with RSs (2, 3 and 5 cm) could not be directly exported.Instead, treatment plans of single spots in air with RSs of varying thickness for all energies were created.The 3D dose was calculated and the array of voxels in the x and y directions corresponding to the depth of measurements were extracted from the 3D dose files, using an in-house developed MATLAB code. The performance of the models was then comprehensively assessed in a set of seven box fields delivered to water: three 10×10 cm 2 spread-out Bragg peaks (SOBP) plans of 15, 20 and 30 cm range (R) and 10 cm width (W) (without a RS), and four 5×5 cm 2 SOBP plans of 12 cm R and 5 cm W (one without a RS and three with RSs of varying thickness).For each plan, the following data was measured experimentally: IDDs in the centre of the volumes using a PTW Roos 34001 ionisation chamber and lateral profiles using the PTW microDiamond 60019 detector.IDDs and lateral profiles were extracted from the 3D dose distributions by integrating the dose along the depth for the area of the PTW Roos ionisation chamber and by extracting the lateral array of voxels at the central plane, respectively.Each lateral profile, both measured and calculated, was normalised to its value at the central coordinate. MC to support TPS commissioning The second experiment consisted of demonstrating the potential of MC to complement the time-consuming and resource-intensive measurements of TPS dose validation in new facilities.Here we aim to investigate how centres may incorporate an adequately validated MC beam model to support the verification process of their TPS.We investigated how MC may inform the optimisation of TPS calculation parameters (in Eclipse) and the validation of the two TPS models in both homogeneous and non-homogeneous field plans.To demonstrate these application scenarios, the output of the two commercial TPSs was compared against measurements or GATE for an extensive set of plans with diverse complexity just like those used during commissioning to evaluate the TPS dose calculation.A summary of the scenarios and a description of their measurement and calculation in the three algorithms is presented in table 1; complete details are provided in the paragraphs below. 
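For the box-field comparisons above, IDDs were obtained from the calculated 3D dose grids by integrating each depth slice over the area of the ionisation chamber. The sketch below illustrates that extraction for a dose cube indexed as dose[x, y, z] with the beam travelling along z; the array layout, voxel size and default chamber radius are assumptions for illustration, not the in-house analysis code.

```python
import numpy as np

def extract_idd(dose, voxel_mm, centre_xy_mm, chamber_radius_mm=7.8):
    """Integrate a 3D dose grid laterally over a circular chamber area per depth.

    dose              : 3D array indexed as [x, y, z], beam travelling along z
    voxel_mm          : isotropic voxel size in mm
    centre_xy_mm      : (x, y) position of the beam axis in mm
    chamber_radius_mm : radius of the integration area (placeholder value;
                        set to the actual sensitive radius of the detector)
    """
    nx, ny, nz = dose.shape
    x = (np.arange(nx) + 0.5) * voxel_mm - centre_xy_mm[0]
    y = (np.arange(ny) + 0.5) * voxel_mm - centre_xy_mm[1]
    xx, yy = np.meshgrid(x, y, indexing="ij")
    mask = (xx ** 2 + yy ** 2) <= chamber_radius_mm ** 2
    # Lateral integral over the chamber area at each depth (arbitrary units)
    return np.array([dose[:, :, k][mask].sum() * voxel_mm ** 2 for k in range(nz)])
```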
Scenario 1: optimisation of calculation parameters-lateral cut-off and dose grid resolution Commercial dose calculation algorithms may provide the freedom for the user to select their preferred dose calculation parameters.The choice of the dose calculation parameters has a non-negligible influence on the dose calculation accuracy (Zhao 2013).Therefore, as part of commissioning, these parameters must be evaluated to identify the adequate balance between calculation accuracy and computation time.In Eclipse, we investigated two calculation parameters: the lateral cut-off, σ Ecl , and the grid resolution.The σ Ecl calculation parameter is defined as 'the cut-off value for the extent of the lateral dose calculation in units of the beamlet sigma or spot sigma' (Varian Medical Systems 2020) and may influence the absolute dose calculation.The resolution of the dose grid may influence the local dose distribution significantly, especially for small and non-homogeneous fields (Zhao 2013).In order to demonstrate that these calculation parameters could be tuned relying on MC only, a set of ten 10 × 10 cm 2 SOBP plans in water with varying range and width were simulated in GATE and calculated in Eclipse using a σ Ecl value of 2, 3 and 4. To evaluate the influence of the grid resolution on both homogeneous and non-homogeneous fields, the same ten SOBP plans and seven PSQA cases were recalculated in Eclipse with varying grid resolution (1 × 1 × 1, 2 × 2 × 2 and 3 × 3 × 3 mm 3 ), using the previously optimised σ Ecl parameter.The point dose outputs at the centre of the SOBP for each parameter from Eclipse were compared to both measurements and GATE, in terms of the mean difference value and standard deviation. Scenario 2: dose evaluation in homogeneous fields One step in the performance verification of a clinical TPS consists of evaluating the lateral effect of the beam halo by analysing the dose outputs at the centre of squared mono-energetic fields.The Gaussian fit approximation of the transverse spot profiles in analytical dose calculation algorithms disregards the broad tails of the spot profiles, also known as beam halo (Harms et al 2020).The beam halo is the result of particle scattering within the beam line and nuclear interactions, which have a larger weight for higher energy beams (Sawakuchi et al 2010, Gottschalk et al 2015).It is challenging to assess the effect of the beam halo in single spots.However, the cumulative contribution of the low-dose envelope is significant for larger fields, where there is superposition of single pencil beams (Sawakuchi et al 2010, Grevillot et al 2011, Harms et al 2020).The dose outputs for monoenergetic layer fields should increase with increasing field size, reaching a dose plateau when charged particle equilibrium is achieved -as the number of single beams is larger, a larger contribution of low dose is expected in total (Grevillot et al 2011).To evaluate the lateral effect of the beam halo, the dose at 2 cm WET for a 100 MeV mono-energetic layer was measured and obtained with the three calculation algorithms for seven different field sizes (3 × 3, 4 × 4, 5 × 5, 7 × 7, 10 × 10, 12 × 12, 15 × 15 cm 2 ).The 2 cm depth was chosen as it presents a low dose gradient, which decreases detector positioning uncertainties.All dose values were normalised to the output of the 10×10 cm 2 field.The calculated point dose outputs using GATE and both TPS were compared against measurements to evaluate the accuracy of each algorithm.Eclipse and RayStation outputs were also 
compared against GATE.The contribution of the beam halo for spots across multiple energy layers can be further evaluated in SOBP plans, which will be dependent on SOBP range and width. An extensive set of uniform SOBP fields of varying field size and varying range/width, covering a representative range of clinical energies, should be evaluated.The AAPM Task Group 185 suggested a list of fields to investigate in table V of their publication (Farr et al 2021).Considering the significant number of fields recommended and the associated resources and time required for measurements, a representative subset of plans could be first measured.One could make use of this experimental data to infer the dose outputs for the remaining plans through MC simulations, to gain a better understanding of the TPS dose calculation algorithm and its limitations.If large deviations in dose for these plans are observed when comparing to MC, then those cases should be verified against measurements.A total of 39, 10×10 cm 2 SOBP fields, with varying range (10-35 cm), width (2-20 cm) and range shifter options were tested.Within these, a subset of 10 SOBP fields were selected (the same ones as tested for the optimisation of the calculation parameters), ensuring variability in ranges and widths.We then investigated if the standard deviation of the differences between GATE and measurements could be used as threshold to consider when comparing TPS dose outputs with GATE, to identify plans that fail the established passing criteria of 2% (equation ( 1)).Equation (1) and the threshold value found considering the 10 fields was further applied to the full set of 39 SOBPs.The final verification of a clinical TPS before approving its clinical use consists of evaluating dose calculations for a range of preclinical PSQA plans against measurements.This step is essential to understand the behaviour of the dose calculation algorithm for different types of non-homogeneous plans and anatomical sites.Furthermore, standard PSQA procedure workflows should be developed and adopted by the centre, which may also include the use of MC as an independent dose calculation tool.Measuring a range of clinical plans allows one to understand the performance of both the TPS and MC in complex fields. 
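The screening idea described in the previous paragraph, using the standard deviation of the GATE-versus-measurement differences as a tolerance when comparing the TPS with GATE, can be written compactly. The sketch below uses placeholder difference values and leaves the tolerance as a parameter (the study derives it from a measured subset of fields); it does not reproduce equation (1) itself.

```python
import numpy as np

def flag_plans_for_measurement(dd_tps_vs_mc, tolerance, acceptance=2.0):
    """Flag plans whose GATE-predicted difference band may exceed the acceptance limit.

    dd_tps_vs_mc : per-plan point dose differences, TPS vs GATE (%)
    tolerance    : standard deviation of GATE vs measurement differences (%),
                   obtained from the measured subset of fields
    acceptance   : clinical acceptance criterion (%), e.g. 2% for SOBP plans
    """
    dd = np.asarray(dd_tps_vs_mc, dtype=float)
    band_low, band_high = dd - tolerance, dd + tolerance
    # Flag a plan when any part of its predicted band lies outside +/- acceptance
    return (band_high > acceptance) | (band_low < -acceptance)

# Example: plans with predicted differences of -1.0%, -1.8% and -2.3% vs GATE
print(flag_plans_for_measurement([-1.0, -1.8, -2.3], tolerance=0.5))
```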
A total of 11 PSQA plans of different anatomical sites, delivered to solid water, were tested.The anatomical sites included brain, spine, pelvis, head and neck and breast cases.2D plane doses were measured at up to three pre-defined depths in the solid water phantom per field, using the ionisation chamber matrix PTW OCTAVIUS detector 1500XDR.This detector contains 1405 ionisation chambers (4.4 × 4.4 × 3 mm 3 ) arranged on a 27 × 27 cm 2 matrix, with a centre-to-centre distance between each ionisation chamber of 7.1 mm.Then, for each plane, a dose point was also measured using a PTW Semiflex 3D 31021 cylindrical ionisation chamber.This small ionisation chamber (sensitive volume of 0.07 cm 3 ) provides high spatial resolution and is adequate for point dose measurements.A total of 72 planes and correspondent points were measured and analysed.All fields were simulated in GATE and calculated in both TPSs, and a 3D dose grid was obtained for each field.The point doses were derived from each 3D dose distribution by taking the coordinates of the measuring points and extracting the dose in a region of interest corresponding to the volume of the PTW Semiflex ionisation chamber.Similar to the SOBP plans, the standard deviation of differences between GATE and measurements for all fields considered was used as threshold when comparing TPS dose outputs with GATE.Point dose differences below 3% were considered as the clinical passing criteria.A 3%/3 mm local gamma analysis with a lower cut dose threshold of 5% was adopted, following our institution's protocol to evaluate PSQA plans.This analysis was performed for the 2D dose planes to test the agreement between the three dose calculation algorithms and the measured arrays, following Hussein et al 2017.Gamma pass rates above 95% were defined as within clinically passing criteria (Farr et al 2021). Benchmarking of the beam models Table 2 summarises the validation data of the beam models against commissioning beam data of IDDs and air profiles, plus a representative set of seven box fields in water.Examples of measured and calculated IDDs and spot profiles for 70 MeV and 245 MeV are shown in supplementary material 2. Overall, the three beam models matched the commissioning beam data within the tolerances described in section 2.5.1 for most cases.However, the Monte Carlo-based beam models (GATE and RayStation) presented a superior performance on average when considering the IDDs and lateral profiles for the tested box field scenarios. 
The IDDs modelled in GATE, Eclipse and RayStation were compared against the experimental IDDs in terms of R 80% , W 80% , area under the curve (AUC), dose at 2 cm depth and dose at the peak.The absolute differences in R 80% (figure 3(a)) were within 0.1 mm for the TPSs and 0.4 mm for GATE.GATE R 80% differences were larger and had a larger standard deviation across the energy range, likely because the energy parameters were optimised with the trade-off of balancing different quantities (R 80% , W 80% and the peak-to-plateau ratio).The absolute differences in W 80% (figure 3(b)) ranged from −0.1 to 0.4 mm for GATE and RayStation, and a maximum difference of -0.7 mm was observed for Eclipse.RayStation overestimated W 80% for most energies whilst Eclipse underestimated W 80% for energies higher than 150 MeV.Regarding the AUC (figure 3(c)), the absolute and mean differences across all nominal energies were within 0.4% and 0.1%, respectively, for all algorithms.GATE presented the largest standard deviation amongst all models, while Eclipse overestimated the area under the curve for most energies.Differences in dose at 2 cm depth and at the peak (figure 3(d) and (e), respectively) between the GATE model and the measured IDDs were within 2%, with a tendency to underestimate the dose at 2 cm depth and overestimate the dose at the peak.Differences in dose at 2 cm depth were within approximately 0.5% for RayStation and ranged from −2% to 0.8% for Eclipse, showing an energy dependence.For most energies, the differences in peak dose were less than 1% for both Eclipse and RayStation. RayStation underestimated the peak dose by up to 6% for the lower energies, likely because the modelled IDDs could not be exported from the system with a resolution finer than 1 mm.In comparison, the resolution of the modelled IDDs in GATE was 0.1 mm; in Eclipse it varied with depth and energy, however, it was approximately 0.1 mm in the peak region for the lower energies.The mean differences in spot size across all nominal energies, between the measured commissioning data and the three beam models, without any RSs and for the 5 cm RS, are shown in figure 4(a) and (b), respectively, as a function of distance from the isocentre (x direction only).Figure 4 also shows the absolute differences in spot size at the isocentre (x direction) as a function of nominal energy for the three algorithms against measurements, without any RSs (c) and using the RS = 5 cm (d).The mean differences for the case without RSs were within 0.1 mm for all models (maximum absolute differences of 0.3 mm), with Eclipse having the smallest standard deviation (maximum of 0.04 mm) and RayStation the largest (maximum of 0.14 mm), considering all depths.In GATE, the highest differences occurred for the extreme energy values (70 MeV and 245 MeV), both with and without RS, likely due to a poorer fit of the parametrisation in this region.The use of RS was generally associated with larger errors in spot size.Mean differences were within 0.4 mm for all models and generally smaller mean differences were observed for the RS = 2 cm and RS = 3 cm options.Maximum absolute differences in spot size were 0.8 mm for the TPSs and 1.7 mm for GATE when the RS = 5 cm was included, and these were typically found for the measuring planes furthest from the source.For the RS = 5 cm case, maximum differences of 0.7 mm were found in GATE for the lowest energy at the isocentre, which is equivalent to approximately 3.5% of the spot size (20 mm).Differences were within 0.2 mm for 
energies above 120 MeV, for all RS options.For Eclipse and RayStation, all differences were within 0.2 mm and 0.4 mm, respectively, independently of RS thickness.There was a tendency for all algorithms to underestimate slightly the spot size for energies below 120 MeV, for the three RS options. The 10 × 10 × 10 cm 3 plans with 15, 20 and 30 cm range calculated using the three models agreed well with measured data.IDDs and lateral profiles for the 20 cm range plan are presented in figures 5 (a) and (b), respectively.In general, for the IDDs measured on these box fields, GATE underestimated the dose in the buildup region and overestimated the dose in the SOBP by up to 2%.Eclipse tended to underestimate the dose, with the largest differences found for the plan with 30 cm range (up to 3.5% in the SOBP) and presented a flatter SOBP for all cases, unlike the trend seen in measurements.There was no trend for RayStation.If differences above 3.5% occurred, these were typically in the fall-off region.For the lateral profiles, differences in the tail region were the largest for Eclipse, which underestimated the dose for all plans.IDDs for the 5 × 5 × 5 cm 3 plan with RS = 5 cm (figure 5(c)) were within approximately 3% for GATE and RayStation.Similar differences were reported in other studies (Rahman et al 2020).Generally, GATE and RayStation underestimated the dose in comparison to measurements.Differences were within 5% for Eclipse, which overestimated the dose in the build-up and underestimated the dose in the SOBP.Overall, slightly smaller differences in IDDs were achieved for plans with the 2 and 3 cm RSs.For the corresponding lateral profiles (figure 5(d)), all algorithms presented point by point differences within 1.7% for all RS options and Eclipse showed a better agreement with measurements in the tail region in comparison to the results for the plans without RSs. Demonstration of the use of MC to complement commissioning measurements The following sections aim to demonstrate the potential of a benchmarked MC beam model to support the different stages of the validation of the TPS in a new proton therapy facility.The outputs of Eclipse and RayStation were compared against measurements and GATE for an extensive set of plans for different scenarios: (1) optimisation of TPS calculation parameters, (2) dose assessment in homogeneous fields and (3) dose assessment in non-homogeneous clinical fields. Scenario 1: optimisation of calculation parameters Ten SOBP plans were calculated in Eclipse using a lateral cut-off (σ Ecl ) of 2, 3 or 4. The point dose outputs were compared to either measurements or GATE simulations.Let DD TPS versus Meas.be the dose differences obtained when comparing the TPSs with experimental data and DD TPS versus MC the dose differences between the TPSs and GATE.There was a strong correlation between DD TPS versus Meas.and DD TPS versus MC for the three σ Ecl options (ρ = 0.93, Pearson's correlation coefficient).The mean percentage difference between Eclipse and experimental measurements was −16.6 ± 1.4%, −2.8 ± 0.9% and −1.5 ± 0.9% for σ Ecl = 2, 3 and 4, respectively.The corresponding differences against GATE were −16.6 ± 1.7%, −2.8 ± 1.2% and −1.5 ± 1.3%, indicating very similar trends.The smallest differences between Eclipse and measurements were for σ Ecl = 4, and this conclusion could be derived by comparing Eclipse to GATE.The value of σ Ecl = 4 was used for all subsequent Eclipse dose calculations. 
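The depth-dose comparisons in this section are driven by a few scalar metrics extracted from each IDD (R80%, W80% and the area under the curve). A minimal sketch of extracting such metrics from a sampled curve is shown below; the linear interpolation scheme is an assumption and not the exact analysis code used for the benchmarking.

```python
import numpy as np

def _cross(depth, dose, level, idx_range):
    """Interpolate the depth where the dose crosses `level` within idx_range."""
    for k in idx_range:
        d0, d1 = dose[k], dose[k + 1]
        if (d0 - level) * (d1 - level) <= 0 and d0 != d1:
            frac = (level - d0) / (d1 - d0)
            return depth[k] + frac * (depth[k + 1] - depth[k])
    return np.nan

def idd_metrics(depth_mm, dose):
    """R80% (distal 80% fall-off depth), W80% and area under an IDD curve."""
    level = 0.8 * dose.max()
    peak = int(np.argmax(dose))
    proximal = _cross(depth_mm, dose, level, range(peak - 1, -1, -1))   # back from peak
    distal = _cross(depth_mm, dose, level, range(peak, len(dose) - 1))  # beyond peak
    return {"R80_mm": distal,
            "W80_mm": distal - proximal,
            "AUC": np.trapz(dose, depth_mm)}
```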
The ten SOBP plans were also recalculated in Eclipse for a grid resolution of 1 × 1 × 1, 2 × 2 × 2 and 3 × 3 × 3 mm 3 .The dose grid resolution had an impact only on fields with small W 80% , where finer resolutions improved the dose calculation accuracy-in these cases, the excess error for using 3 × 3 × 3 mm 3 was 0.6%.The same trend was observed when comparing Eclipse dose outputs directly to GATE results.The Pearson's correlation coefficient between DD TPS versus Meas.and DD TPS versus MC was 0.95.For the seven PSQA cases tested, the mean absolute percentage differences between doses calculated in Eclipse and measurements were 1.3 ±1.0%, 1.4 ± 1.2% and 1.8 ± 2.6%, while between Eclipse and GATE the differences were 1.4 ± 1.0%, 1.6 ± 1.2 and 1.9 ± 2.5%.The two datasets, DD TPS versus Meas.and DD TPS versus MC , were strongly correlated as well for all grid resolutions (ρ = 0.84 Pearson's correlation coefficient).The standard deviation of the differences increased with increasing grid spacing-for example, an excess dose difference of 10% was observed for a field with large dose inhomogeneities when using a dose grid of 3 × 3 × 3 mm 3 .In summary, while dose outputs extracted from the 1 × 1 × 1 mm 3 agreed best with measurement and GATE for both homogeneous and heterogeneous fields, using such a fine grid was more important for heterogeneous fields, where the errors in the positioning of the point dose are larger.A resolution of 1 × 1 × 1 mm 3 was applied to the rest of the plans calculated in this work using Eclipse and RayStation. Scenario 2: Homogeneous fields Figure 6 (a) shows normalised dose values at 2 cm depth for a 100 MeV monoenergetic layer of field sizes ranging from 3 × 3 cm 2 to 15 × 15 cm 2 , obtained through measurements, GATE, Eclipse and RayStation.All dose values were normalised to the reference field size of 10 × 10 cm 2 .The measured dose generally increased with increasing field size and a similar trend was observed for GATE.Surprisingly, the dose for the largest field size (15 × 15 cm 2 ) was 0.2% lower than for the 12 × 12 cm 2 field.However, this difference was within the uncertainty limits of the measurements and all calculation algorithms.For both Eclipse and RayStation, the dose was constant for field sizes larger than 4 × 4 cm 2 .Figure 6(b) shows the percentage difference for the three dose calculations algorithms in comparison to measurements and figure 6(c) shows the percentage differences in dose for Eclipse and RayStation against GATE.For GATE, maximum differences of approximately 1.4% against measurements were observed for the smaller field sizes of 3 × 3 cm 2 and 4 × 4 cm 2 .The differences for Eclipse and RayStation against measurements were comparable with those detected with comparisons against GATE- i.e. increased dose differences with decreasing field size, following very similar trends.A maximum difference of 3.5% and 2.5% was obtained for Eclipse and RayStation, respectively, when comparing with measurements.These differences were underestimated by 1.4% when comparing against GATE.In this experiment with monoenergetic layer fields, the Pearson's correlation coefficient between DD TPS versus Meas.and DD TPS versus MC was also strong (ρ = 0.97 for Eclipse and ρ = 0.90 for RayStation). 
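A similar sketch, again with hypothetical numbers rather than the measured values, illustrates the field-size analysis for the 100 MeV monoenergetic layers: each algorithm's point dose at 2 cm depth is normalised to the 10 × 10 cm² reference field before percentage differences against measurement and against GATE are formed.

```python
import numpy as np

field_sizes = [3, 4, 5, 7, 10, 12, 15]          # cm, square fields
# Hypothetical point doses at 2 cm depth (arbitrary units), not measured data.
dose = {
    "meas":       np.array([0.985, 0.992, 0.996, 0.999, 1.000, 1.002, 1.000]),
    "gate":       np.array([0.971, 0.988, 0.994, 0.998, 1.000, 1.001, 1.001]),
    "eclipse":    np.array([0.950, 0.990, 0.998, 1.000, 1.000, 1.000, 1.000]),
    "raystation": np.array([0.960, 0.991, 0.997, 1.000, 1.000, 1.000, 1.000]),
}

ref = field_sizes.index(10)                      # 10 x 10 cm2 reference field
norm = {k: v / v[ref] for k, v in dose.items()}  # normalise to the reference field

for algo in ("gate", "eclipse", "raystation"):
    diff_vs_meas = 100.0 * (norm[algo] - norm["meas"]) / norm["meas"]
    diff_vs_mc   = 100.0 * (norm[algo] - norm["gate"]) / norm["gate"]
    print(algo, np.round(diff_vs_meas, 2), np.round(diff_vs_mc, 2))
```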
Figure 7(a) presents the percentage differences in the output dose for the three dose calculation algorithms against measurements for the total of 39 SOBP plans. The corresponding mean differences were −0.1 ± 0.5%, −2.0 ± 0.9% and −0.5 ± 0.5% for GATE, Eclipse and RayStation, respectively. GATE presented larger differences for SOBPs with the smallest width (W = 2 cm) and Eclipse underestimated the dose for most fields. Figure 7 also shows the dose differences at the centre of the 39 SOBPs for Eclipse (b) and RayStation (c) versus measurements (solid dots) or GATE (shaded area). The shaded area was created by applying a tolerance to the dose differences (DD TPS versus MC), defined as the standard deviation of the differences between GATE and measurements (±0.54%) for a subset of 10 fields (marked in bold on the axis of the figure). The Pearson's correlation coefficient between DD TPS versus Meas. and DD TPS versus MC was 0.92 for Eclipse and 0.68 for RayStation, indicating that GATE could better identify dose differences between Eclipse and measurements.

The coloured shaded regions in figures 7(b) and (c) represent the difference between the TPSs and GATE plus a tolerance to account for the fact that GATE itself presents a difference against measurements. Ideally, the solid-dot curves would fall within the confidence region of GATE (shaded coloured region). Out of the 29 points evaluated (i.e. after excluding the 10 plans used to define the tolerance), 21 points (72%) were within the GATE prediction shaded area and 8 points (28%) were outside, for both Eclipse and RayStation. Generally, the points were close to falling within the shaded area, and the maximum difference between the solid dots and the border of the shaded area was approximately 0.4% for both Eclipse and RayStation. The grey areas in subfigures (b) and (c) are the regions outside the acceptance criteria established for these plans (maximum 2% difference). If the shaded region overlaps the grey region, there is, based on the comparison with GATE, a likelihood that the difference will fall within the non-acceptance region. In total, 19 points were outside the 2% acceptance criteria when comparing Eclipse to measurements. According to the comparison with GATE, 24 points were predicted to be outside the acceptance criteria, and 17 of the 19 measured failures were correctly predicted. No points were outside the acceptance criteria when comparing RayStation to measurements, while one point had a small likelihood of being outside when comparing RayStation directly to GATE. These results show that homogeneous fields simulated in a properly commissioned MC system can be used to predict TPS deviations from measurements for validation purposes, since there were no cases for which point dose differences were within 2% when compared to GATE but outside tolerance when compared to experiments. This would remove the need for measuring the entire range of fields; instead, effort could be focused on the situations of predicted failure, reducing the amount of in-person time required for physical measurements.
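The prediction logic described above can be expressed compactly. In the sketch below (hypothetical numbers, illustrative only), the tolerance is taken as the standard deviation of the GATE-versus-measurement differences for the calibration fields, and a field is flagged as a predicted failure when its TPS-versus-GATE difference, broadened by this tolerance, extends beyond the ±2% acceptance band.

```python
import numpy as np

def predict_failures(dd_tps_vs_mc, tolerance, acceptance=2.0):
    """Flag fields predicted to exceed the acceptance criterion,
    based only on the TPS difference against MC.
    The prediction interval is dd_tps_vs_mc +/- tolerance."""
    lower = dd_tps_vs_mc - tolerance
    upper = dd_tps_vs_mc + tolerance
    # A field is predicted to fail if any part of its interval lies
    # outside the +/- acceptance band.
    return (upper > acceptance) | (lower < -acceptance)

# Hypothetical percentage differences, not the measured values.
dd_gate_vs_meas_cal = np.array([0.3, -0.5, 0.6, -0.4, 0.2, 0.7, -0.6, 0.1, 0.5, -0.3])
tolerance = dd_gate_vs_meas_cal.std(ddof=1)          # roughly +/- 0.5 % in this sketch

dd_eclipse_vs_mc   = np.array([-1.2, -2.4, -0.8, -2.8, -1.5])
dd_eclipse_vs_meas = np.array([-1.0, -2.6, -0.9, -3.0, -1.3])

predicted = predict_failures(dd_eclipse_vs_mc, tolerance)
observed  = np.abs(dd_eclipse_vs_meas) > 2.0
print("predicted failures:", predicted)
print("observed failures: ", observed)
print("correctly predicted:", int(np.sum(predicted & observed)))
```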
Figure 8 shows the mean dose differences for Eclipse and RayStation, considering the 39 SOBP plans, against measurements (solid line) or GATE (dashed line), as a function of R, W and RS option, which allows trends and limitations of the TPSs in the dose calculation of different field types (deep/shallow, wide/thin, with or without RS) to be identified. For instance, Eclipse presented the largest differences for plans with R = 25 cm, and there was a trend for differences to increase with increasing RS thickness; both limitations could be identified through comparisons with GATE alone. The largest disagreement between DD TPS versus Meas. and DD TPS versus MC was for W = 2 cm, in agreement with the results in figure 7(a), where it was shown that GATE presented larger differences for W = 2 cm.

Scenario 3: non-homogeneous fields

Figure 9(a) shows the percentage differences in dose measured in up to three points per field for 11 PSQA plans (a total of 72 points) for GATE, Eclipse and RayStation against measurements. The mean differences across the 72 points were −0.7 ± 1.2%, 0.4 ± 1.9% and −0.3 ± 1.0% for GATE, Eclipse and RayStation, respectively. The maximum absolute differences against measurements were 2.9% for both GATE and RayStation and 4.5% for Eclipse. Both GATE and RayStation tended to underestimate the dose for plans with the RS = 5 cm, in comparison to plans without RSs or with thinner RSs. In fact, the two algorithms presented a similar trend for differences against measurements across the entire dataset. Eclipse tended to overestimate the dose for plans without any RSs and underestimate the dose for shallow fields with the RS of 5 cm (breast case). Figures 9(b) and (c) show the differences for Eclipse and RayStation, respectively, when comparing against measurements (solid dots) and GATE (shaded area). The standard deviation of the differences between GATE and measurements was ±1.2%, and this value was applied as the tolerance interval when comparing TPS dose outputs directly to GATE, similarly to what was done in the case of homogeneous fields. Out of the 72 points, 43 points (60%) were within the GATE prediction shaded area and 29 points (40%) were outside, for both Eclipse and RayStation. Of the 29 points that were outside the GATE predicted area, 23 (approximately 80%) corresponded to fields containing the RS = 5 cm, for which GATE presented larger discrepancies in comparison to measurements, whilst still remaining within the established acceptance interval (maximum of 3% difference).

The 3%/3 mm gamma pass rate results for GATE, Eclipse and RayStation are presented in figure 10. The gamma pass rates for the GATE and RayStation calculated planes in comparison to the measured planes were all above 97% and 98%, respectively. Both the point dose differences (figure 9(a)) and the gamma pass rate results for the two algorithms followed a similar trend, most likely because both algorithms are MC-based. For most plans, the gamma pass rates fluctuated between 95% and 100% for Eclipse, with 8 out of 72 points presenting gamma pass rates slightly below the established passing criteria. Although all algorithms showed different trends for cases with or without RSs in the point dose evaluation (figure 9(a)), the same trends did not translate directly into the gamma pass rate results with the chosen specifications.
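For reference, a brute-force local gamma evaluation of the kind summarised in figure 10 can be sketched as below. This simplified version works on two dose planes sampled on the same grid, applies the 5% lower threshold cut to the measured plane and omits sub-pixel interpolation, so it is a coarse illustration of the 3%/3 mm criterion rather than the evaluation tool used in this study.

```python
import numpy as np

def local_gamma_pass_rate(ref, ev, spacing_mm, dose_crit=0.03, dta_mm=3.0,
                          low_cut=0.05, search_mm=6.0):
    """Brute-force local gamma pass rate between a measured (ref) and a
    calculated (ev) 2D dose plane on the same grid.  Reference points below
    low_cut * max(ref) are excluded.  No sub-pixel interpolation is done."""
    ny, nx = ref.shape
    r = int(np.ceil(search_mm / spacing_mm))          # search radius in pixels
    threshold = low_cut * ref.max()
    passed, evaluated = 0, 0
    for iy in range(ny):
        for ix in range(nx):
            d_ref = ref[iy, ix]
            if d_ref < threshold:
                continue
            evaluated += 1
            y0, y1 = max(0, iy - r), min(ny, iy + r + 1)
            x0, x1 = max(0, ix - r), min(nx, ix + r + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
            dose2 = (ev[y0:y1, x0:x1] - d_ref) ** 2
            gamma2 = dist2 / dta_mm ** 2 + dose2 / (dose_crit * d_ref) ** 2
            if gamma2.min() <= 1.0:
                passed += 1
    return 100.0 * passed / evaluated if evaluated else float("nan")

# Hypothetical planes (not measured data): a smooth field with 1 % noise added.
y, x = np.mgrid[0:60, 0:60]
measured   = np.exp(-((x - 30) ** 2 + (y - 30) ** 2) / 800.0)
calculated = measured * (1.0 + 0.01 * np.random.default_rng(0).normal(size=measured.shape))
print(f"gamma pass rate: {local_gamma_pass_rate(measured, calculated, spacing_mm=2.0):.1f} %")
```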
The analysis of the 2D dose planes indicated that, overall, GATE and RayStation tended to slightly underestimate the dose whereas Eclipse tended to overestimate it. Examples of 3%/3 mm gamma index and dose difference ((meas. − alg.)/alg.) maps can be found in figure 11 for a clinical brain plan without RS and for a pelvis plan with RS = 5 cm.

Discussion

In this study we have demonstrated the potential of Monte Carlo to support the commissioning of the treatment planning system of a new proton beam therapy machine. An MC model may be developed early in the commissioning process using the same beam data required to commission a new PBS system. The MC and TPS models should first be benchmarked against commissioning data and comprehensive measurements on a small number of representative homogeneous fields to verify the accuracy of the implementations. Then, by evaluating the dose calculation algorithms on an extensive set of homogeneous and non-homogeneous plans, we have shown that MC may be used as an independent dose calculation tool to complement (and potentially reduce) the number of measurements during the TPS dose validation. MC has the potential to identify the parameter space in which the TPS is expected to deviate from measurements and so focus in-person measurement efforts on these cases for the best use of commissioning time. Furthermore, it can help in understanding the limitations and outputs of the TPSs, as well as inform the optimisation of the clinical dose calculation algorithms. This potential was demonstrated for different dose calculation engines available in two commercial TPSs. To the best of our knowledge this is the first study to focus on the use of MC to support the dose validation and verification steps of the commissioning of a treatment planning system.

The first part of the work demonstrated a semi-automated process to develop a proton beam model in GATE and proposed a set of detailed measurements to benchmark its performance. This methodology is generalisable and could be applied to model beam data from other PBS-PT centres with similar technology. The beam modelling methodology applied in this work was based on that of Yeom et al (2020). It consisted of an iterative optimisation of the energy and optical properties of the beam, and the final model was a parametrisation of the optimal beam parameters as a function of nominal energy. To model the optical parameters in GATE, the best initialisation parameters were first roughly estimated, which accelerated the convergence of the optimisation. The IDDs were calibrated using the area under the curve, thus avoiding normalisation at a single point along the IDD, similar to Aitkenhead et al (2020). One limitation of this modelling approach is the parametrisation itself, as the error between the optimal values for the beam parameters and the fitted values can be considerable (up to 10%), particularly for the divergence and the energy spread. Similar findings were observed by Grevillot et al (2011) and Aitkenhead et al (2020) for the energy spread parameter, although exact error values were not reported in those publications.
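Because the benchmarking in this work relies on scalar metrics extracted from each IDD (R 80%, W 80%, area under the curve, dose at 2 cm depth and at the peak), a small sketch of how such metrics can be computed from a depth-dose curve may be useful. The curve below is an analytic stand-in rather than a measured IDD, and the exact definitions used here (distal depth at 80% of the peak dose, width between the proximal and distal 80% levels, trapezoidal area) are stated assumptions that may differ in detail from the in-house scripts.

```python
import numpy as np

def crossing_depth(z, d, level, distal=True):
    """Depth where the curve crosses `level`, by linear interpolation;
    distal=True searches beyond the peak, distal=False before it."""
    i_peak = int(np.argmax(d))
    idx = range(i_peak, len(d) - 1) if distal else range(i_peak, 0, -1)
    for i in idx:
        j = i + 1 if distal else i - 1
        lo, hi = sorted((d[i], d[j]))
        if lo <= level <= hi:
            return z[i] + (level - d[i]) * (z[j] - z[i]) / (d[j] - d[i])
    return np.nan

def idd_metrics(z_mm, dose):
    peak = dose.max()
    r80 = crossing_depth(z_mm, dose, 0.8 * peak, distal=True)    # distal 80 %
    p80 = crossing_depth(z_mm, dose, 0.8 * peak, distal=False)   # proximal 80 %
    auc = float(np.sum(0.5 * (dose[1:] + dose[:-1]) * np.diff(z_mm)))  # trapezoid rule
    return {
        "R80_mm": r80,
        "W80_mm": r80 - p80,
        "AUC": auc,
        "dose_2cm": float(np.interp(20.0, z_mm, dose)),
        "dose_peak": float(peak),
    }

# Analytic stand-in for an IDD (not a measured curve): slowly rising plateau plus a peak.
z = np.arange(0.0, 180.0, 0.1)                       # depth in mm
idd = 0.3 + 0.02 * z / 180.0 + np.exp(-((z - 150.0) / 6.0) ** 2)
print(idd_metrics(z, idd))
```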
Maximum differences of 0.3 mm were found when comparing spot sizes without RSs obtained with GATE, Eclipse and RayStation to measurements, for all measuring depths, and maximum differences ranging from 0.15 to 0.4 mm were reported in the literature (Grevillot et al 2011, Rahman et al 2020, Saini et al 2017, Yeom et al 2020).An underestimation of the spot sizes of profiles with RSs was observed for all algorithms, for energies below 120 MeV.The measured profiles were noisier for the lower energies, therefore, there is a larger uncertainty associated to these measured spot sizes.Differences in spot size against the air profiles obtained during commissioning were slightly lower for Eclipse in comparison to GATE and RayStation, which may be related to the way the different systems model the RSs.Eclipse system uses as input the measured spot profiles with and without RSs in the beam modelling process.However, in both GATE and RayStation the commissioning data of spot profiles with RSs were not used, and only the material of the RSs was modelled.In RayStation, the vendor optimised material was provided within the material options.In GATE, the density of the RS material was tuned to match the measured WET, however, this could be further improved to better model the true scattering properties of the material, perhaps by tuning the exact chemical composition and I -value.In our MC beam modelling process, we optimised optical parameters to match only the experimental data without RS.Improvements could be achieved by, for example, finding the best optical parameters that match spot profiles both with and without RS, or to generate an independently optimised model for each RS separately (Fracchiolla et al 2015, Winterhalter et al 2020a). The differences in R 80% between the measured and the modelled Bragg peaks were within 0.4 mm for all algorithms and maximum differences of 0.6 mm and 1 mm have been reported in the literature (Grevillot et al 2011, Saini et al 2017).These small differences were translated into errors in range of the SOBP plans within the calculated dose grid resolution.The overall shape of the IDDs and the way the absolute dose calibration is implemented may reflect on the performance of the models.In GATE, the IDDs were calibrated to the area under the curve, therefore there is a balance between the agreement in build-up and peak regions, when comparing to measurements.The underestimation of the dose at 2 cm depth in the IDDs was reflected in the lower dose in the build-up region of the SOBPs and the overestimation of the dose at the peak can be associated with the higher dose in the flat region of the SOBPs.Additionally, the overestimation of the dose in the peaks of the IDDS is reflected in the dose outputs in the centre of SOBP (figure 7).This overestimation was larger for SOBP fields with smaller width, where a greater proportion of the dose is coming from the peak region, and decreased with increasing width, where there is a larger contribution from the build-up regions of the individual beams.Despite a good agreement against measurements of the IDDs peak dose in Eclipse, a flat high dose region in SOBP plans was observed, unlike the pattern of the measurements or GATE and RayStation.Furthermore, it underestimated the dose in the centre of SOBPs by up to 4%.This can be associated with the fact that no correction factor was applied from these box-field results, which is a possible dose calculation refinement in Eclipse.We opted against applying this correction to our Eclipse 
beam model, since the dose underestimation found for the homogeneous box fields did not propagate to non-homogeneous fields.

It is of utmost importance to compare the final beam models built in both the TPSs and MC against the commissioning data itself, as any discrepancies present at this stage will be reflected directly in more complex homogeneous and non-homogeneous clinical plans. For instance, the current version of RayStation does not automatically compute spot profiles in the presence of the RSs, since these data are not used to build the models. Users must perform the dose calculations of monoenergetic pencil beams in air for the full energy range and extract the corresponding spot sizes independently, and from our experience this should not be overlooked. In an earlier version of our RayStation dose model we found large differences against both measurements and GATE for non-homogeneous complex fields with RS, for which we were struggling to find a justification. It was upon explicitly benchmarking the air profiles with RS that we realised there was an error in the definition of the distances between the isocentre and the RS tray position. This error was subtle when analysing simpler, homogeneous fields. Having a benchmarked MC when we started the commissioning process of RayStation was crucial to identify (and correct for) this error. More details on the differences in spot size in the presence of RSs pre- and post-correction of the RS position can be found in supplementary material 2.

When benchmarking the beam models, we tested their performance in seven representative SOBPs. Measuring IDDs and lateral profiles in SOBP fields is time consuming. We believe that performing these measurements in only four SOBPs would provide a good understanding of the models' performance (three SOBP plans of different ranges and one SOBP plan with the thickest RS). Other SOBPs with different configurations could be tested based on MC. In the second part of this work, an extensive range of measurements was performed to demonstrate the potential of MC to support the TPS dose validation and evaluation, and therefore not exclusively to validate the MC and TPS models. First, it was shown that MC can help optimise TPS calculation parameters using a limited amount of experimental data (10 SOBP fields). The conclusion regarding the most suitable TPS dose calculation parameters, like the lateral cut-off σ Ecl and the grid resolution, was straightforward from MC; we therefore believe that calculation parameters can be chosen based on comparisons of dose outputs in homogeneous and non-homogeneous fields against MC only, without the need for measurements. Additionally, MC can be used to understand the TPS performance in homogeneous fields, where from a smaller number of plans measured experimentally (the same 10 SOBP plans) one can gain confidence in the performance of the TPS on a wider range of fields (in this work, we investigated 29 more plans). For our delivery system, selecting and measuring only a quarter of the total fields of interest was adequate to find a GATE acceptance interval applicable to most fields. Furthermore, it was shown that MC can provide information on the impact of aspects such as field size, range and width of SOBPs, and the use of RS in complex plans. Regarding the experiment which aimed to understand the dose output variation in monoenergetic layers of different field size, MC could potentially be used to explore the dose variation for other energies and other depths along the Bragg peak, although we did not investigate this in our study.
Finally, MC can support the early development and streamlining of PSQA processes. Such a system allows a more efficient and thorough exploration of the TPS's performance over the full range of clinically relevant scenarios and helps identify any limitations ahead of going live.

Rich experimental data helps build confidence in the dose models used clinically. We found maximum differences between experimental measurements and the three beam models to be within 3.3% in homogeneous fields and 4.5% in non-homogeneous fields. This is in agreement with values reported in the literature (Trnková et al 2016, Winterhalter et al 2018, Aitkenhead et al 2020). It is important to add that, although experimental measurements are considered the gold standard in dosimetry, they also have an associated uncertainty. One source of uncertainty is the detectors, which are susceptible to positioning and setup errors. Detectors may have calibration uncertainties and variations due to operating and environmental conditions, which may also be accompanied by beam output variations. Coutrakon et al (2010) estimated the error in dose delivered to a water phantom by introducing multiple random beam delivery errors and calculating the root mean square of the dose variation. The authors verified that dose errors due to beam energy and spot positioning variations could be approximately 1.85%; errors due to beam spill non-uniformity, intensity regulation and finite scanning speed were below 0.5%. Although dosimetry uncertainties are recommended to be as low as possible, these could add up to approximately 2% (Arjomandy et al 2019).

It was demonstrated that MC tools have the potential to complement the time-consuming measurements, as long as there is awareness of, and confidence in, the level of uncertainty of the MC model itself. The strong correlation between DD TPS versus Meas. and DD TPS versus MC for homogeneous fields indicated that TPS doses can be confidently compared with MC to complement measurements. A tolerance was identified on a smaller number of fields (10 fields), and it was applied to understand dose differences over a wider range of scenarios (29 additional fields). By using a carefully calibrated MC system, one can study many more scenarios than can be measured under time constraints and gain a deeper understanding of the TPS and its limitations. In our work, most point dose outputs that fell outside the clinically established passing criteria when compared to measurements were also outside the same criteria when compared against GATE. Having preliminary knowledge from MC about the expected measurements may help to understand and anticipate which types of fields are more likely not to meet the set tolerance criteria, and inform the need for additional refinement of the TPS models.
MC workflows are increasingly being used as independent dose calculation tools for PSQA processes (Aitkenhead et al 2020, Xu et al 2022), and MC can also be applied during commissioning to support the early development of the PSQA protocols used at each centre. We analysed a small number of PSQA plans for a variety of anatomical sites for proof-of-concept purposes. Unlike for the homogeneous plans, where one tolerance value was applied for TPS comparisons against GATE and was applicable to most plans, the threshold established as the tolerance for the PSQA results (the standard deviation of the differences between GATE and measurements for the 11 cases) was not adequate for all types of plans. A comprehensive analysis of both TPS and MC doses for a small number of cases can give an early indication of the dose differences between dose models and measurements for different clinical sites or treatment configurations and help the clinical teams decide on the most adequate processes and criteria for PSQA. For example, our findings suggest that some plan configurations, like typical head and neck fields with RS, may require different thresholds for MC to be used as an independent dose calculation algorithm. Furthermore, once we started patient treatment in our institution, we realised that Eclipse tended to overestimate the point dose outputs for smaller-field plans without RSs and underestimate the dose outputs for large shallow plans with RSs, like breast plans. These discrepancies were not expected prior to collecting measurements for a significant number of plans, but with the MC information we could have had better insight into the PSQA procedure to adopt for these cases. Furthermore, planes not passing the established criterion of 95% for the gamma pass rate did not necessarily fail the passing criterion of 3% for the point dose measurements within the same 2D plane, and vice versa. More plans for the same anatomical site and with similar treatment configurations must be evaluated to gain better confidence and reproducibility, or to identify any peculiar trends in the dose calculation outputs.

The commissioning period is very intensive and there may be a compromise in the number of measurements that would ideally be performed for the TPS dose model validation. Additionally, it may be hard to identify when enough measurements have been done to be confident in the dose model. Measuring IDDs and lateral profiles in SOBP fields or clinical PSQA fields is extremely time consuming, as the entire field must be redelivered for each measurement point. Full experimental validation for a range of different field sizes, depths, axes of measurement, RS options, etc., would require numerous days of measurements. Due to constraints on commissioning timelines and staffing, the full set of planned measurements may not be performed, which reduces the chances of identifying any limitations in the TPS beam model ahead of going clinical and risks the clinical acceptance of a non-optimal solution. We believe that MC can help reach the required confidence level in the TPS dose model more quickly. MC can help troubleshoot any discrepancies present in the TPS model, test whether tuning TPS parameters will improve model accuracy, and overall explore more scenarios than can realistically be verified experimentally.
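As a small illustration for this part of the discussion, the per-plane screening criteria referred to above (a 3% point-dose difference and a 95% gamma pass rate) can be combined in a simple check; the plan names and values below are hypothetical, and the thresholds would need to be set per site, as discussed.

```python
# Hypothetical per-plane PSQA results; names and values are illustrative only.
planes = [
    {"plan": "brain_no_RS",  "point_diff_pct": 1.2, "gamma_pass_pct": 99.1},
    {"plan": "pelvis_RS5cm", "point_diff_pct": 2.8, "gamma_pass_pct": 96.4},
    {"plan": "hn_RS5cm",     "point_diff_pct": 3.4, "gamma_pass_pct": 94.2},
]

POINT_DOSE_TOL = 3.0   # percent, per-plane point-dose criterion
GAMMA_TOL = 95.0       # percent, gamma pass rate criterion

for p in planes:
    dose_ok  = abs(p["point_diff_pct"]) <= POINT_DOSE_TOL
    gamma_ok = p["gamma_pass_pct"] >= GAMMA_TOL
    # The two criteria are evaluated independently: a plane can fail one
    # and still pass the other, as observed in the results discussed above.
    status = "pass" if (dose_ok and gamma_ok) else "review"
    print(f"{p['plan']}: point dose {'ok' if dose_ok else 'FAIL'}, "
          f"gamma {'ok' if gamma_ok else 'FAIL'} -> {status}")
```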
There are further benefits to having a tailored MC model once a facility is clinically operating. MC can also be used to support translational research on applications such as linear energy transfer and relative biological effectiveness calculations (not available in all TPSs) (Smith et al 2022), out-of-field dosimetry studies and the assessment of radiation-induced late effects (Yeom et al 2020, De Saint-Hubert et al 2022). Additionally, it is common practice for centres with multiple gantries to commission these sequentially and to first acquire all the commissioning data required to build a beam model in the TPS from a single gantry, whilst the other gantries are being installed. Ideally, the beam properties would match exactly across all gantries; in practice, however, this will not be the case and there will always be some discrepancies in spot sizes, outputs, range, etc. An established in-house process for automated MC modelling can also facilitate future work evaluating the impact of differences between gantries, and a refined beam model that provides a more representative match to all gantries could be created.

In summary, in this work we have demonstrated that an adequately benchmarked MC model, developed early in the commissioning of a new PBT facility, can support the commissioning of the TPS in different applications, including the optimisation of TPS calculation parameters, the understanding of dose calculation limitations and the early development of PSQA protocols. However, regardless of the advantages that MC brings in both the shorter and the longer term, building an MC beam model may not be viewed as a priority during the busy commissioning period, particularly due to a lack of in-house MC expertise. Nevertheless, MC methods for beam modelling are becoming increasingly available and shared, and commercial MC tools for independent dose calculation are becoming available (Fuchs et al 2021). The detailed description of the MC implementation process and the evaluation of its performance and limitations on a comprehensive range of experimental data, as presented in this study, along with the development of tools to facilitate, advance and automate commissioning steps, will help proton centres achieve shorter commissioning periods and streamline their daily work.

Conclusions

In this work, we developed an MC model in GATE of the clinical beam at our institution and investigated how that MC model could be used to support the extensive and time-consuming experimental measurements during the commissioning of the TPS in a new proton therapy facility. We compared two commercial TPSs with different dose calculation engines (Eclipse PCS and RayStation MC) against experiments and GATE, for an extensive set of homogeneous plans in water and non-homogeneous PSQA fields in solid water. The three beam models were first benchmarked against experimental measurements, which verified their performance to be within clinically acceptable limits. This work demonstrates that establishing an MC system early in the commissioning process can greatly enhance a centre's ability to fully explore the performance and limitations of its TPS by reducing the number of time-intensive measurements that must be performed. It may also support the development of PSQA processes and acceptance criteria for different sites ahead of treatment start.

Figure 1. Workflow of the automated process developed to model a pencil beam scanning system in GATE.
Figure 2. Workflow of the steps and experiments performed to demonstrate the potential of an independent MC beam model to inform and complement the commissioning and dose validation of the TPS of a new proton therapy facility.

Table 1. Summary of scenarios investigated to demonstrate the potential of Monte Carlo to support the commissioning of clinical treatment planning systems (R = range, W = width, SOBP = spread-out Bragg peak).

Scenario 1, lateral cut-off value: SOBP plans with varying range and width in water (n = 10). Measurements: point doses in the centre of the SOBP (R − W/2) using a PTW Roos 34001 ionisation chamber, and point doses per selected PSQA plane using a PTW Semiflex ionisation chamber. GATE: 40 × 40 × 40 cm³ water box; dose scored in a cylinder (radius = 0.78 cm, thickness = 0.1 mm) positioned at R − W/2 of the SOBP. Eclipse: σ Ecl = 2, 3 or 4 and resolution of 1 × 1 × 1 mm³; dose extracted at the R − W/2 coordinate. RayStation: N/A.

Scenario 1, dose calculation resolution: SOBP plans with varying range and width in water (n = 10) and PSQA plans (n = 6). Eclipse: σ Ecl = 4 and resolution of 1 × 1 × 1, 2 × 2 × 2 and 3 × 3 × 3 mm³; dose extracted at the R − W/2 coordinate. RayStation: N/A.

Scenario 2, homogeneous fields: 100 MeV mono-energetic layer plans of different field sizes (3 × 3, 4 × 4, 5 × 5, 7 × 7, 10 × 10, 12 × 12, 15 × 15 cm²) delivered to solid water (n = 7). Measurements: point doses at 2 cm WET using a PTW Roos 34001 ionisation chamber. GATE: 40 × 40 × 40 cm³ solid water box; dose scored in a cylinder (radius = 0.78 cm, thickness = 0.1 mm) positioned at 2 cm WET. Eclipse: σ Ecl = 4 and resolution of 1 × 1 × 1 mm³; dose extracted at 2 cm WET in a structure representing the PTW Roos ionisation chamber. RayStation: resolution of 1 × 1 × 1 mm³; dose extracted at 2 cm WET in a structure representing the PTW Roos ionisation chamber. Also, SOBP plans with varying range (10 cm to 35 cm), width (2 cm to 20 cm) and range-shifter options delivered to water (n = 39). Measurements: point doses in the centre of the SOBP (R − W/2) using a PTW Roos 34001 ionisation chamber. GATE: 40 × 40 × 40 cm³ water box; dose scored in a cylinder (radius = 0.78 cm, thickness = 0.1 mm) positioned at R − W/2 of the SOBP. Eclipse and RayStation: dose extracted at the R − W/2 coordinate of the SOBP.

Scenario 3, heterogeneous fields: PSQA plans of different anatomical sites delivered to solid water (n = 11). Measurements: 2 to 3 planes per field using a PTW OCTAVIUS array detector 1500XDR, and one point dose per selected plane using a PTW Semiflex 3D 31021 ionisation chamber. GATE: 40 × 40 × 40 cm³ phantom extracted from the TPS and imported as a CT image; dose scored within a 1 × 1 × 1 mm³ grid; point doses extracted from the 3D dose map using an in-house script; matching plane closest to the measurement depth selected from the 3D dose map. Eclipse: σ Ecl = 4 and resolution of 1 × 1 × 1 mm³; point doses extracted from the 3D dose map using an in-house script; matching plane closest to the measurement depth selected from the 3D dose map. RayStation: resolution of 1 × 1 × 1 mm³; point doses extracted from the 3D dose map using an in-house script; matching plane closest to the measurement depth selected from the 3D dose map.

Figure 3. Comparison between the modelled and the measured IDDs in terms of R 80% (a), W 80% (b), AUC (c), dose at 2 cm depth (d) and dose at the peak (e).

Figure 4. Comparison between the modelled and the measured spot profiles. Average spot size difference across all energies, for the seven measuring depths, for the x profiles, without the presence of RSs (a) and considering RS = 5 cm (b); difference between the modelled and the measured spot sizes, for all clinical energies, at the isocentre plane, for spot profiles without RSs (c) and with the RS = 5 cm (d).

Figure 6. Normalised dose at 2 cm depth obtained through measurements and calculated using GATE, Eclipse and RayStation, for a 100 MeV monoenergetic layer, as a function of the field size (a); percentage dose difference for GATE, Eclipse and RayStation against measurements (b); percentage dose difference for Eclipse and RayStation against GATE (c).

Figure 7. Percentage differences in dose at the centre of 39 SOBP plans with varying SOBP range, R (10 cm to 35 cm), and width, W (2 cm to 20 cm), between measurements and the three dose calculation algorithms (a); point dose differences between the TPSs and measurements and the corresponding difference region estimated by GATE, for Eclipse (b) and RayStation (c). The GATE estimated intervals were obtained from the TPS difference against GATE plus or minus the standard deviation of the GATE differences against measurements for the subset of 10 fields. Plans marked in bold represent the selected sample of 10 SOBP plans.

Figure 8. Average difference in dose between the TPS and measurements (solid line) and the TPS and GATE (dashed line) across all SOBPs considered, as a function of R (a), W (b) and RS option (c).

Figure 9. PSQA point dose differences for the three dose calculation algorithms against measurements (a); differences between the TPS and measurements and the corresponding difference region estimated by GATE, for Eclipse (b) and RayStation (c).

Figure 10. Local gamma pass rates (3%/3 mm, with a 5% lower threshold cut) for GATE, Eclipse and RayStation against the measured planes. The data points are not correlated, and each point corresponds to an independent result within a treatment field.

Figure 11. Examples of local 3%/3 mm gamma index and local dose difference maps between the measured planes and the three dose calculation algorithms, for two fields; case 1 is a brain case without RS and case 2 is a pelvis case with the 5 cm RS.

Table 2. Benchmarking data of the dose calculation models against basic experimental data: IDDs, spot profiles with and without RSs, and box fields in water with varying range and width, with and without range shifting devices.
The Caulobacter crescentus DNA-(adenine-N6)-methyltransferase CcrM methylates DNA in a distributive manner The specificity and processivity of DNA methyltransferases have important implications regarding their biological functions. We have investigated the sequence specificity of CcrM and show here that the enzyme has a high specificity for GANTC sites, with only minor preferences at the central position. It slightly prefers hemimethylated DNA, which represents the physiological substrate. In a previous work, CcrM was reported to be highly processive [Berdis et al. (1998) Proc. Natl Acad. Sci. USA 95: 2874–2879]. However upon review of this work, we identified a technical error in the setup of a crucial experiment in this publication, which prohibits making any statement about the processivity of CcrM. In this study, we performed a series of in vitro experiments to study CcrM processivity. We show that it distributively methylates six target sites on the pUC19 plasmid as well as two target sites located on a 129-mer DNA fragment both in unmethylated and hemimethylated state. Reaction quenching experiments confirmed the lack of processivity. We conclude that the original statement that CcrM is processive is no longer valid. INTRODUCTION DNA methylation at position N 6 of adenine, or at position N 4 or C 5 of cytosine bases is a chemical modification of DNA present in a wide variety of prokaryotic and eukaryotic organisms (1,2). The methylation reaction is catalyzed by DNA methyltransferases (MTases) which employ S-adenosine-L-methionine (AdoMet) as methyl group donor. In bacteria, DNA methylation is most often associated with restriction-modification (RM) systems, which protect the bacterial cell against bacteriophages (3). However, there exists a distinct class of bacterial DNA MTases, known as solitary MTases, which are not part of an RM system. The best known examples of solitary MTases are the Escherichia coli DNA adenine MTase (EcoDam) which recognizes GATC sequences and regulates DNA repair, gene expression and DNA replication (1,4), and the Caulobacter crescentus cell-cycle regulating MTase (CcrM) which methylates the adenine in GANTC sites and has a central role in the regulation of the bacterial cell division cycle (5)(6)(7). Furthermore, CcrM is an essential protein in several a-Proteobacteria, including pathogens, which makes it a potential antibacterial drug target (8)(9)(10). One important property of DNA MTases is their processivity in the methylation of DNA molecules containing more than one target site. Processive enzymes stay bound to one DNA molecule after first turnover and methylate several target sites on that molecule without dissociation. Thereby, they directly convert unmethylated DNA into DNA modified at all target sites. Distributive enzymes, in contrast, always dissociate from the DNA after one methyl group transfer leading to an accumulation of methylation intermediates, i.e. DNA molecules that are modified at some but not all target sites. Since methylation intermediates are not released by processive enzymes, detection of the presence or absence of intermediates is the most direct and reliable experimental approach in processivity analysis. The processivity of DNA methyltransferases has a strong impact on their biological function, because DNA methylation is established in a radically different way by each type of enzyme. 
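The operational difference between the two mechanisms can be illustrated with a toy stochastic simulation of a two-site substrate (a sketch only, not a kinetic model of CcrM): a distributive enzyme methylates one site per binding event, so molecules methylated at only one site accumulate, whereas a processive enzyme methylates both sites in a single binding event and produces essentially no such intermediates.

```python
import random

def simulate(n_molecules=10000, n_binding_events=6000, processive=False, seed=1):
    """Toy model: each DNA molecule has two target sites (False = unmethylated).
    Each binding event picks a random molecule and methylates one unmethylated
    site (distributive) or all remaining sites (processive)."""
    rng = random.Random(seed)
    dna = [[False, False] for _ in range(n_molecules)]
    for _ in range(n_binding_events):
        mol = rng.choice(dna)
        free = [i for i, m in enumerate(mol) if not m]
        if not free:
            continue                      # fully methylated molecule, nothing to do
        if processive:
            for i in free:
                mol[i] = True
        else:
            mol[rng.choice(free)] = True
    both = sum(all(m) for m in dna)
    one = sum(sum(m) == 1 for m in dna)
    return both / n_molecules, one / n_molecules

for mode in (False, True):
    full, intermediates = simulate(processive=mode)
    label = "processive " if mode else "distributive"
    print(f"{label}: fully methylated {full:.2%}, intermediates {intermediates:.2%}")
```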
The EcoDam enzyme, for example, was shown to be highly processive, thus leading to efficient re-methylation of the GATC sites after DNA replication (11), although particular flanking sequences were shown to reduce processivity (12). T4Dam was shown to be processive as well (13), while most of the methyltransferases associated with RM systems are distributive, which may help prevent the methylation of incoming phage DNA before its cleavage by restriction digestion (1,11 In a publication of Berdis et al. (14), it was reported that CcrM methylated DNA in a processive manner. The assay applied in that work for detection of CcrM processivity probed the methylation of GANTC sites through protection against HindII (GTYRAC) cleavage at overlapping sites ( Figure 2A). However, although the substrate contained two CcrM sites, only one of them was flanked by a HindII site. Therefore, only one GANTC site was being probed for methylation and no conclusion could be drawn toward processivity. It is clear that this error is not just a typographical mistake in the 'Materials and Methods' section of the manuscript, because at the zero time point in Figure 5 of the Berdis et al. (14) publication, the long 51-mer HindII cleavage product was observed, which is indicative of the absence of cleavage at the second CcrM site. Since no methylation can be present at this point, this result can only be explained by the absence of the second HindII site. Thus, the issue of processivity of CcrM must be considered open and not resolved. In this work, we have re-investigated the processivity of CcrM and show that it methylates pUC19 plasmid DNA and a linear 129-mer substrate in a distributive manner. DNA substrates The 129-mer DNA substrate containing two CcrM target sites which was used for processivity analysis was produced by PCR using the pBAD24 vector as template. The hemimethylated 129-mer HM substrate was obtained by PCR using an in vitro synthesized DNA oligonucleotide as template. The M.TaqI methylation was carried out at 65 C for 2 h, with 80 mM AdoMet and using NEB4 buffer supplied by New England Biolabs. The methylated DNA was purified using a standard DNA extraction kit (Macherey-Nagel NucleoSpin ExtractII). All substrates described above are shown in Figure 2B-D. The 23-mer substrates used for determining the relative preference for hemimethylated over unmethylated DNA and for the quenching experiment were obtained by heating of an equimolar (20 mM) mixture of complementary oligonucleotides to 95 C for 5 min and allowing the mixture to slowly cool down to room temperature. The quality of the annealing procedure was assessed by polyacrylamide gel electrophoresis. 23-mer: Bt_d(GGCAGCTACGAATCGCAACAGCT) 23-mer revmet:_d(AGCTGTTGCG m ATTCGTAGC TGCC) 23-mer_rev: d(AGCTGTTGCGATTCGTAGCTGCC) In addition, 12 double-stranded 23-mer substrates were used, in which the base pairs at positions 1, 3, 4 or 5 were exchanged against all other base pairs. All substrates were biotinylated and hemimethylated with the methylation in the lower strand (except of variants at the fourth position, which did not contain an A in the lower strand). These substrates were used to investigate the sequence specificity of CcrM for the first, third, fourth and fifth position of the GANTC site. DNA binding experiments DNA binding was analyzed by using the nitrocellulose filter-binding assay. 
For the experiment, radioactively labeled hemimethylated 129-mer DNA substrate was prepared by phosphorylation of the hemimethylated DNA with γ-[32P]-ATP using T4 polynucleotide kinase (NEB) following the recommendations of the supplier. CcrM concentrations were varied between 0 and 10 µM, and 10 nM of DNA was used. The binding reactions were incubated in binding buffer (50 mM HEPES pH 7.5, 50 mM NaCl, 1 mM EDTA and 500 µM DTT) supplemented with 200 µM sinefungin (Sigma) for 30 min at ambient temperature. The nitrocellulose filter membrane (Macherey & Nagel, Düren, Germany) was soaked in binding buffer for 30 min. Afterwards the membrane was transferred into the dot blot chamber (BioRad) and the slots were washed with binding buffer. The samples were transferred into the wells of the dot blot apparatus using a multiple pipette, immediately sucked through the nitrocellulose filter membrane, and washed several times with 100 µl of binding buffer. The membranes were dried and the radioactivity of the spots was analyzed using a PhosphorImager (Fuji). The results were fitted to the equation describing a bimolecular association equilibrium to determine the binding constant.

CcrM DNA methylation reactions for processivity analysis

All methylation kinetics were performed in a buffer containing 50 mM HEPES pH 7.5, 50 mM NaCl, 1 mM EDTA and 500 µM DTT, in the presence of 5 ng/µl BSA. The AdoMet (Sigma-Aldrich) cofactor was used at a concentration of 200 µM for the processivity experiments. All methylation reactions were carried out at room temperature and started by addition of CcrM. Methylation of pUC19 was carried out using 2 µM CcrM and 600 nM pUC19. To assess the methylation state of the pUC19 vector, a double digestion was carried out using HinfI and NdeI (New England Biolabs), the latter being used for linearization of the vector. The double digestion was carried out overnight at 37 °C in buffer NEB4. The methylation reactions with the unmethylated 129-mer were carried out using 1 µM DNA and 2 µM CcrM; with the hemimethylated 129-mer HM, 1 µM DNA and 0.5 µM CcrM were used. Aliquots were taken at various time points, and reactions were stopped by shock freezing in liquid nitrogen and purified using a PCR purification kit (Macherey-Nagel NucleoSpin Extract II). All restriction endonuclease treatments were carried out for at least 1 h under the appropriate conditions, as recommended by the provider. For the quenching experiment, a standard methylation reaction was prepared and time points were collected and treated in the same way as described above, except that 3 min after starting the reaction 10 µM double-stranded 23-mer competitor substrate was added.

Methylation of oligonucleotide substrates using radioactively labeled AdoMet

In order to study the methylation of the unmethylated and hemimethylated 23-mer substrates by CcrM, the in vitro biotin/avidin methylation assay was performed, as previously described (15). The methylation reactions were performed in a 40 µl total volume, under single turnover conditions using 760 nM 3H-labeled AdoMet. The enzyme and DNA concentrations are given in the main text. The enzymatic activity was assessed by linear regression of the initial data points or by fitting the reaction progress curve to the following exponential equation: signal(t) = BL + F · (1 − e^(−k·t)), where BL indicates the background signal, F is the intensity factor, and k represents the catalytic rate constant, expressed as turnovers per minute. Calibration was done with completely methylated DNA, incubated with CcrM for several hours.
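Both fits mentioned in this section, the bimolecular association equilibrium for the filter-binding data and the exponential progress curve for the methylation kinetics, can be performed with standard least-squares routines. The sketch below uses hypothetical data points and the single-exponential form given above; the model functions, data and starting values are assumptions for illustration, not the authors' analysis scripts.

```python
import numpy as np
from scipy.optimize import curve_fit

# Exponential progress curve: signal(t) = BL + F * (1 - exp(-k * t))
def progress(t, bl, f, k):
    return bl + f * (1.0 - np.exp(-k * t))

# Bimolecular association equilibrium: fraction bound = K * [E] / (1 + K * [E])
def isotherm(enzyme_conc, k_ass):
    return k_ass * enzyme_conc / (1.0 + k_ass * enzyme_conc)

# Hypothetical methylation progress data (not the published measurements).
t_min = np.array([0, 1, 2, 4, 8, 15, 30, 60])
signal = np.array([0.05, 0.35, 0.60, 0.95, 1.30, 1.48, 1.55, 1.57])
(bl, f, k), _ = curve_fit(progress, t_min, signal, p0=[0.0, 1.5, 0.3])
print(f"k = {k:.2f} turnovers/min")

# Hypothetical binding data (not the published measurements).
conc_m = np.array([0.1, 0.3, 1.0, 3.0, 10.0]) * 1e-6   # CcrM concentration (M)
bound = np.array([0.05, 0.13, 0.34, 0.61, 0.84])        # fraction of DNA bound
(k_ass,), _ = curve_fit(isotherm, conc_m, bound, p0=[1e6])
print(f"K_ass = {k_ass:.2e} M^-1")
```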
RESULTS

We have expressed His6-tagged CcrM in E. coli and purified it using a Ni-NTA column (Figure 1A). We initially investigated the specificity of CcrM by using biotinylated oligonucleotide substrates which were incubated with CcrM and AdoMet, the methyl group of which was radioactively labeled. After methylation, the DNA was purified using an avidin microplate and the incorporation of radioactivity into the DNA was detected. We used four 23-mer substrates with an altered central position (N3), representing the different versions of the GANTC cognate site. In addition, nine substrates were used which contained variants of the GANTC target site in which one base pair was altered at the G1, T4 or C5 position (near-cognate sites). Substrates were hemimethylated in the lower strand, such that methylation of the upper strand was detected. The results shown in Figure 1B illustrate that CcrM has a high preference for the GANTC sequence, because the best near-cognate substrate was methylated 200-fold less efficiently than the worst cognate one. Many of the near-cognate substrates were methylated more than 1000-fold less efficiently. At the central position we observed only moderate variations in the methylation rate, which were close to the experimental fluctuations. Since hemimethylated DNA is the product of DNA replication of methylated GANTC sites in vivo, it is the major physiological substrate of CcrM. We compared the activity of CcrM using hemimethylated and unmethylated DNA substrates, indicating that CcrM methylates both substrates, but it displayed a weak preference of ~1.5-fold for methylation of hemimethylated DNA over the average of the methylation rates of both strands of the unmethylated DNA (Figure 1C).

Figure 1(B, C) legend (fragment): the experimental data were fitted by linear regression of the initial data points to derive the initial rate of DNA methylation. Experiments were conducted twice; the error bars display the standard deviation between the individual results. (C) Methylation of hemimethylated (black squares) and unmethylated (gray circles) 23-mer by CcrM. The methylation experiments were carried out as described in B. The experimental data were fitted to an exponential model (black line for hemimethylated DNA, gray line for unmethylated DNA), and the enzymatic turnover rate was extracted. The rate of the methylation reaction carried out using unmethylated DNA as substrate was 0.50 turnovers/min, whereas for hemimethylated DNA the rate was 0.76 turnovers/min. Thus, CcrM has an apparent 1.5-fold preference for hemimethylated over unmethylated DNA.

Methylation of a plasmid substrate

In order to assess whether CcrM methylates DNA in a processive or distributive manner, we performed methylation kinetics using pUC19 plasmid DNA as substrate, which contains six GANTC sequences (Figure 2B). After methylation for a defined period of time, the DNA was purified and digested with the HinfI (GANTC) and NdeI restriction endonucleases. HinfI cleavage is blocked by the adenine methylation introduced by the CcrM methyltransferase, which allows its methylation activity to be probed. The NdeI cleavage, in contrast, is insensitive to DNA methylation by CcrM and is used to linearize the plasmid DNA. In a distributive methylation reaction, the enzyme dissociates from the DNA after methylating each target site, and it needs to re-associate before carrying out another methylation reaction.
Since re-association occurs randomly, incompletely methylated DNA molecules, which are protected against HinfI cleavage at some but not all sites, accumulate in the initial phases of the reaction. In contrast, in a processive reaction, the enzyme methylates all available target sequences without releasing the DNA, leading to complete protection of the plasmid DNA against HinfI cleavage. As can be seen in Figure 3, without methylation (i.e. at time point 0.1 min), the pUC19 DNA was completely cleaved by the HinfI and NdeI enzymes. Within the first 4 min of the methylation reaction, the plasmid DNA became increasingly protected against HinfI cleavage. However, we did not observe a direct conversion of the DNA into the fully protected form, but rather an accumulation of methylation intermediates, leading to the appearance of DNA bands corresponding to incomplete cleavage of the plasmid DNA. This finding clearly indicates that CcrM methylated the plasmid DNA in a distributive manner.

Methylation of a 129-mer substrate

As described earlier, the pUC19 methylation experiments indicated that CcrM functions in a distributive mode. However, the distance between the GANTC target sites is relatively large in pUC19, making processive DNA methylation challenging for the enzyme, because it has to travel a long distance on the DNA after each methylation event to reach the next target site. In order to provide an experimental test system more supportive of a processive reaction mechanism, we designed and generated a 129-mer oligonucleotide substrate which contains two GANTC target sequences separated by 22 bp and flanked by 32 bp on one side and 74 bp on the other (Figure 2C). The DNA was incubated with CcrM for up to 2 h and aliquots were taken at various time points. The methylation reaction was stopped by flash-freezing in liquid N2 and the methylation state of the DNA was assessed by digestion with HinfI, as described above. A processive methylation reaction would lead to a gradual appearance of fully protected 129-mer and a corresponding loss of unprotected 74-mer, 32-mer and 22-mer, without generation of the 54-mer and 96-mer that are obtained from cleavage of partially methylated DNA molecules. As shown in Figure 4A, a large amount of methylation intermediates was observed, indicating that CcrM has a non-processive reaction mechanism. Interestingly, the results shown in Figure 4A also reveal a considerable excess of one of the two expected intermediate fragments, 96 bp in length, and only trace amounts of the shorter 54 bp fragment. This result indicates that the central one of the two CcrM target sites is methylated more efficiently. The second site is also methylated, but at a lower rate, since the fully protected form of the 129-mer substrate also appears, and after 2 h almost complete protection of the DNA is achieved (Figure 4A).

Reaction quenching studies

An alternative approach to study processivity is to quench a reaction by addition of an excess of an external substrate acting as a competitor. Thereby, an enzyme working distributively will be trapped and the reaction stopped. In contrast, a processive enzyme should be able to finish the methylation of the substrate it is bound to. In order to implement this assay, the 129-mer methylation reaction was performed as described and, after 3 min, a 10-fold excess of a competing 23-mer oligonucleotide containing a single, unmethylated GANTC target site was added.
Our results show that, before the addition of the competitor DNA, the 129-mer substrate was methylated with the same kinetics as seen before (compare Figure 4A for methylation without competitor and Figure 4B for methylation with competitor). However, after addition of the competitor, the level of protection of the 129-mer DNA remained constant throughout the duration of the experiment, with no additional methylation taking place. Most importantly, the partially methylated intermediates were not converted into the fully methylated state, indicating that the CcrM enzyme cannot move to the second site after methylation of one site of the 129-mer substrate. This behavior is characteristic of a distributive, rather than a processive, methyltransferase, confirming the conclusions from the previous experiments. (Figure 4B legend note: the weak band appearing in the last lanes at low molecular weights corresponds to the 23-mer competitor, which has become methylated and thereby protected against HinfI cleavage.)

Analysis of processivity on hemimethylated DNA

Although CcrM is able to efficiently methylate completely unmodified substrates (Figures 1C, 3 and 4), it could be argued that processive methylation might occur on the preferred substrate. To this end, a CcrM in vitro methylation experiment was conducted using a hemimethylated variant of the 129-mer substrate (Figure 2D). Hemimethylation was introduced into the 129-mer HM substrate by using GTTGACTCGA sites, which represent overlapping HincII (GTYRAC, positions 1-6 of the combined sequence, inhibited by adenine methylation in either strand), CcrM (GANTC, positions 4-9 of the combined sequence) and TaqI (TCGA, positions 7-11 of the combined sequence) sites. The 129-mer HM DNA was methylated by M.TaqI at the TCGA adenine residues in both strands. In the upper strand, the methylation is outside of the CcrM site and does not influence GANTC methylation by CcrM. However, M.TaqI methylation of the lower strand creates a hemimethylated GANTC site. Completeness of the M.TaqI methylation was confirmed by full resistance of the DNA against R.TaqI cleavage after incubation with M.TaqI (Figure 5A). The hemimethylated state of all the CcrM sites was assayed by a control digestion with HinfI, which has the same recognition sequence as CcrM and is inhibited by hemimethylation of its target sequence. As shown in Figure 5A, the 129-mer HM DNA was completely refractory to HinfI cleavage after incubation with M.TaqI, indicating that all GANTC sites were hemimethylated. As expected, M.TaqI methylation did not interfere with HincII cleavage because the hemimethylation resides outside of its recognition sequence (Figure 5A). In order to characterize the reaction process (cf. 'Discussion' section), we determined the binding constant of CcrM to this substrate by nitrocellulose filter binding after radioactive labeling of the hemimethylated DNA (Figure 5B). We observed relatively weak binding, with a binding constant of KAss = 5.4 × 10^5 M^-1. Next, this substrate containing hemimethylated GANTC sites, which are methylated in the lower DNA strand, was incubated with CcrM, which also leads to the methylation of the upper strand of the GANTC site. Since the upper-strand adenine of the GANTC site overlaps with the HincII site, and HincII is inhibited by adenine methylation in either strand, the conversion of hemimethylated to fully methylated CcrM sites can be followed by protection against HincII cleavage.
As before, processive methylation would cause the direct conversion of the 22, 32 and 74 bp fragments into fully protected 129-mer, while appearance of 54 and 96 bp fragments, which correspond to partially methylated DNA, would be expected in the case of a distributive reaction mechanism. As shown in Figure 5C, appearance of the 96-mer and 54-mer DNA fragments indeed was observed, which is indicative of distributive DNA methylation. This finding confirms that CcrM is distributive on unmethylated and hemimethylated DNA. Unlike the results shown in Figure 4A, CcrM appears to have no preference for either of the target sites on the 129-mer HM substrate, since the relative abundance of both intermediate fragments is approximately the same throughout the experiment. This is expected, since the inclusion of the HincII and TaqI sites on either side of both GANTC sites has placed them in an identical sequence context, whereas in the original 129-mer DNA, the flanking sequences of the two sites were different (compare Figure 2C and D).

DISCUSSION

Caulobacter crescentus is an established model system to study cell cycle regulation and differentiation in bacteria. In a series of seminal papers in the 1990s, Shapiro and coworkers (5,8,16-20) identified the CcrM DNA methyltransferase and described its biological role in the control of the cell cycle of Caulobacter. The enzymatic properties of CcrM were studied by Benkovic and coworkers (14,21), showing that CcrM is active as monomer and dimer and that it has some preference for methylation of hemimethylated targets. Here, we show that CcrM has a high specificity for GANTC sites, which is similar to results obtained with T4Dam and EcoDam (22,23). At the central position, CcrM prefers GANTC sites 200-fold over the best near-cognate substrate, and hemimethylated targets are modified about 1.5-fold faster than unmethylated ones. This slight preference of CcrM for hemimethylated targets makes physiological sense, because replication of the Caulobacter chromosome generates hemimethylated GANTC sites, which, therefore, are the main physiological substrate of CcrM. In addition, Berdis et al. (14) reported in a highly cited paper that CcrM methylates DNA in a highly processive manner. Unfortunately, due to a mistake in the design of the methylation substrates that has been described above (Figure 2A), the conclusion of processive DNA methylation was not justified. Berdis et al. (14) based their claim of processive DNA methylation also on a second observation, namely that CcrM methylated a substrate with two CcrM sites 2-fold faster than a single-site substrate used at the same concentration. However, this experiment was not conclusive, because the two-site substrate offers twice the number and concentration of target sites, which may explain the higher incorporation of radioactivity. The question of whether a DNA MTase acts in a processive or distributive manner has important consequences for its behavior in the biological context of a living cell and strongly influences the pathway of re-establishing DNA methylation after DNA replication. Therefore, we have re-examined the processivity of CcrM and show that CcrM distributively methylated six target sites on the pUC19 plasmid as well as two target sites located on a 129-mer DNA fragment, both in unmethylated and hemimethylated state.
One technical challenge in detecting potential processivity of CcrM was the relatively weak activity of the enzyme, which could mimic a distributive reaction mechanism because of incomplete reaction progress. However, this argument does not apply to our study, since we observed complete methylation of the substrate in all our experiments (cf. Figures 3-5). Another potential problem that could obscure a processive reaction mechanism is that two enzymes may bind to the same substrate and methylate independently of each other or block each other's movement on the DNA. To estimate whether such a problem could have occurred in our experiments, we determined the DNA binding constant of CcrM to the hemimethylated 129-mer and found a K_Ass of 5.4 × 10⁵ M⁻¹. In the corresponding methylation kinetics with the hemimethylated 129-mer, we used 1 mM DNA and 0.5 mM CcrM, which corresponds to an average occupancy of the DNA of 7.5%. This result indicates that on average only 0.6% of the substrate molecules (= 0.075²) had two CcrM molecules bound. Therefore, processive DNA methylation should have been detectable under these conditions. Since the reaction quenching experiments confirmed the lack of processivity, we conclude that the original statement that CcrM is processive is no longer valid. Our new finding of a distributive methylation mechanism of CcrM can be interpreted by considering the biological role of the enzyme. Since a processive enzyme methylates several sites after one DNA binding event, it typically has long residence times on the DNA. One paradigm of this type is the EcoDam enzyme (11), which follows shortly behind the replication fork and, therefore, probably acts on nascent DNA with few other proteins bound. In contrast, distributive enzymes have to bind to and dissociate from the DNA for each methylation event and typically have a short dwell time on the DNA. The distributive mechanism may be better suited for CcrM, because in C. crescentus DNA methylation happens after DNA replication, when other DNA binding proteins have had enough time to re-bind to the DNA. Under such conditions, processive DNA methylation may not be ideal, because bound proteins would act as roadblocks and processive MTases would be trapped due to their slow dissociation from the DNA. Thereby the entire process of DNA methylation could become impeded. On the other hand, the necessity for multiple DNA binding events of distributive enzymes suggests that there may exist mechanisms for targeting CcrM to the DNA, which so far have not been discovered.
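The double-occupancy argument above is simple enough to verify directly. The 7.5% occupancy is the value quoted in the text, and treating the two binding events as statistically independent is the same simplification made there; the snippet only reproduces that arithmetic.

```python
# Quick sanity check of the double-occupancy estimate quoted above.
# The 7.5% average occupancy is taken from the text; treating binding of two
# CcrM molecules to one substrate as independent events mirrors the paper's
# own simplification and is used here only for illustration.
occupancy = 0.075                      # average fraction of DNA molecules carrying one CcrM
double_occupancy = occupancy ** 2      # probability that a given molecule carries two
print(f"single occupancy : {occupancy:.1%}")
print(f"double occupancy : {double_occupancy:.2%}")   # ~0.56%, i.e. the ~0.6% quoted
```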
Nucleon parton distributions from hadronic quantum fluctuations

A physical model is presented for the non-perturbative parton distributions in the nucleon. This is based on quantum fluctuations of the nucleon into baryon-meson pairs convoluted with Gaussian momentum distributions of partons in hadrons. The hadronic fluctuations, here developed in terms of hadronic chiral perturbation theory, occur with high probability and generate sea quarks as well as dynamical effects also for valence quarks and gluons. The resulting parton momentum distributions $f(x,Q_0^2)$ at low momentum transfers are evolved with conventional DGLAP equations from perturbative QCD to larger scales. This provides parton density functions $f(x,Q^2)$ for the gluon and all quark flavors with only five physics-motivated parameters. By tuning these parameters, experimental data on deep inelastic structure functions can be reproduced and interpreted. The contribution to sea quarks from hadronic fluctuations explains the observed asymmetry between $\bar{u}$ and $\bar{d}$ in the proton. The strange-quark sea is strongly suppressed at low $Q^2$, as observed.

I. INTRODUCTION

The parton distribution functions (PDFs) of the nucleon are of great importance. One reason is that they provide insights into the structure of the proton and neutron as bound states of quarks and gluons, which is still a largely unsolved problem due to our limited understanding of strongly coupled QCD. Another reason is their use for calculations of cross-sections for high-energy collision processes, factorized into a hard parton-level scattering process, calculated in perturbation theory, and the flux of incoming partons, given by the PDFs. This involves the factorization of processes that occur at momentum-transfer scales of significantly different magnitudes. Of particular importance here is that the PDFs f(x, Q²) have the property that for Q² > Q₀² ∼ 1 GeV² the dependence on Q² can be calculated by the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equations [1-3] derived from perturbative QCD (pQCD), which is well established theoretically and experimentally confirmed. However, the x-dependence needed at the starting scale Q₀² is not known from fundamental principles and is instead parametrized to reproduce proton structure function data.
This typically requires x-shapes given in terms of five parameters for each parton flavor, resulting in ∼ 30 free parameters to account for valence quarks, gluons and sea quarks (u, d, g, ū, d̄, s, s̄). There are different collaborations [4-7] performing such PDF parametrizations with DGLAP-based Q²-evolution that give good fits of proton structure data and are excellent tools for cross-section calculations. However, the basic x-dependence at Q₀² originating from the bound-state proton is here only parametrized, but not understood.

To understand the basic shape of the parton momentum distributions in physical terms, we here follow up on earlier studies [8-10] giving phenomenologically successful results. The first basic idea is to use the uncertainty relation in position and momentum, ∆x∆p ∼ ℏ/2, to give the basic momentum scale of partons confined in the length scale ∆x given by the hadron diameter D. In the hadron rest frame it is natural to assume a spherically symmetric Gaussian momentum distribution with a typical width σ ∼ ℏ/(2D). The Gaussian is not only a convenient mathematical form which cuts off large momenta that correspond to rare fluctuations. It can also be motivated as a natural distribution resulting from many soft interactions within the hadron that, via the law of large numbers, add up to a Gaussian. The strength of this approach lies in its simplicity and its small number of parameters. The second basic idea is that whereas the valence quark and gluon distributions are essentially given by the bare nucleon, the sea quark distributions are given by the hadronic fluctuations of the nucleon. For example, the proton quantum state |P⟩ = α_bare|P_bare⟩ + α_{Pπ⁰}|Pπ⁰⟩ + α_{nπ⁺}|nπ⁺⟩ + ··· contains not only the bare proton but also nucleon-pion fluctuations with probability amplitudes α_{Nπ}. The point is that one should consider the dominant quantum fluctuations, i.e. those with the least energy fluctuation and thereby the most long-lived ones [11,12]. It is expected that pionic fluctuations dominate due to the small mass of the pion. In turn, its smallness compared to a typical hadronic scale ∼ 1 GeV is a consequence of spontaneous chiral symmetry breaking, which leads to the identification of pions as Goldstone bosons [13,14]. From these dominant fluctuations in the proton, with the presence of π⁺ but lack of π⁻, one expects an asymmetry in the proton sea such that d̄ > ū [8,10,15-18], as is also observed in data [19]. This kind of hadronic fluctuations has earlier [8,10] been handled by having the different baryon-meson (BM) fluctuation probabilities |α_BM|² as free parameters fitted to data. Here, we use the leading-order Lagrangian of three-flavor chiral perturbation theory [20-23] to develop a theoretical model for the proton state, Eq. (1). The different terms are here theoretically well defined and related to each other with only three coupling constants that are known from hadronic processes and weak decays of baryons. In addition to the probability for the different hadronic fluctuations, the theoretical formalism gives the hadron momentum distribution of the fluctuations. Incorporating the hadronic momentum distributions with the above parton momentum distributions in a hadron provides an improved model for the parton momenta of the proton quantum state.
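To put a number on the uncertainty-relation argument above, a one-line estimate suffices. The proton charge radius of 0.875 fm used as input here is the value quoted later in the text; identifying the relevant confinement length with the resulting diameter is the same illustrative choice made there.

```python
# Estimate of the Gaussian width sigma ~ hbar/(2D) discussed above.
# The diameter D = 2 * 0.875 fm uses the proton charge radius quoted later in
# the text; taking it as the confinement length scale is an illustrative choice.
HBAR_C_MEV_FM = 197.327          # hbar*c in MeV*fm
radius_fm = 0.875                # proton charge radius (fm)
D_fm = 2.0 * radius_fm           # diameter as the confinement length scale
sigma_MeV = HBAR_C_MEV_FM / (2.0 * D_fm)
print(f"sigma ~ {sigma_MeV:.0f} MeV")   # ~56 MeV, i.e. of order 0.1 GeV as stated
```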
The PDFs are closely related to the proton structure functions that are measured in deep inelastic scattering (DIS) of leptons on protons. The most precise data are from electron and muon scattering, where the exchanged virtual photon has high resolution power and couples to quarks in the proton. The photon may therefore couple to a quark in the bare proton or in either the baryon or the meson in a baryon-meson fluctuation. In this paper we present the complete model we have constructed based on these basic ideas. Section II presents the formalism for DIS on the proton with its hadronic fluctuations, where some more technical details are provided in appendices at the end of the paper. In Section III we present our model for the parton distributions in a probed hadron, i.e. the x-shape at the starting scale Q 2 0 for pQCD evolution. Results are then presented in Section IV in terms of obtained parton momentum distributions and their ability to reproduce data on proton structure functions and quark sea asymmetries. We give our conclusions in Section V. II. DIS ON A NUCLEON WITH HADRON FLUCTUATIONS The cross-section for deep inelastic lepton-nucleon scattering is theoretically well known as a product of the leptonic and hadronic tensors, dσ ∝ l µν W µν . The leptonic tensor is straightforward to calculate and well known for photon exchange, l µν = tr[ / p l γ µ / p l γ ν ]/2, as well as for W or Z exchange. We consider both electromagnetic and weak interactions. The hadronic tensor W µν is a much more complex object and is of prime interest here. In order to take into account the proton target with its hadronic fluctuations, as illustrated in Fig. 1, we decompose the hadronic tensor to include the possibilities to probe either the bare proton or the meson or baryon in a fluctuation as follows (2) where the notation M B and BM denotes probing the meson and baryon, respectively. The general form of the hadronic tensor is [24] in terms of the hadronic current J µ (ξ) as a function of the spacetime coordinate ξ. Using light-cone time-ordered perturbation theory [25] we calculate the here introduced part corresponding to the hadronic fluctuations giving where the first term is for DIS probing the meson (M ) and the second term for probing the baryon (B). Ex-pressions equivalent to (4) can be found in the literature [18]. The integration variable is the fraction y of the proton's energy-momentum carried by the meson or baryon. Following common practice in DIS theory we use lightcone momenta p + = p 0 + p 3 and p − = p 0 − p 3 , and thereby y i = p + i p + (i = B, M ). This has the advantage of being independent of longitudinal boosts, e.g. from the proton rest frame to the commonly used infinitemomentum frame. The light-cone momenta p − i are given by the on-shell condition which in the p ⊥ = 0 frame be- In (4) the sum runs over all baryon-meson pairs, with helicity λ of the baryon. We have included all baryons in both the octet and decuplet of flavor SU (3), and all the Goldstone bosons represented by the mesons in the spinzero octet. Naturally, the fluctuations with a pion will dominate due to its exceptionally low mass. Kaons are needed to get the leading contribution for the strangequark sea. Table I shows the relative strengths of different fluctuations due to the couplings to be discussed further below. The dynamical behavior depends on the hadronic dis-tribution functions which are probability distributions for the physical proton to fluctuate to a baryon-meson pair. 
The baryon carries a light-cone fraction y and the meson the remaining momentum fraction, i.e. satisfying the relation giving flavor and momentum conservation for each particular hadronic contribution. This ensures that all parton momentum sum rules come out correctly [26]. The hadronic distribution functions are explicitly given in Eqs. (12,13). Their explicit form depends on the Lagrangian used for the hadronic fluctuations, to which we now turn. The relevant part of the leading-order chiral Lagrangian describing the interaction of spin 1/2 and spin 3/2 baryons with spin 0 mesons (as Goldstone bosons) is given by [20][21][22][23] where 'tr' refers to flavor trace. Here, the B ab are the matrix elements of the matrix B representing the octet baryons. The decuplet baryons are represented by the totally symmetric flavor tensor T µ abc . Similarly, the spin 0 octet mesons are represented by a matrix Φ appearing in the Lagrangian through u µ given by u µ ≡ iu † (∇ µ U )u † = u † µ where u 2 ≡ U = exp(iΦ/F π ). For further details see Appendix A. From this Lagrangian we derive the non-zero terms when applied to our cases of a proton fluctuating into a meson together with an octet or decuplet baryon and respectively. The effective nature of the hadronic theory -manifested by the appearance of the derivative couplings of the form ∼ γ 5 γ µ ∂ µ M (z) in the Lagrangians-introduces a slight ambiguity for the meson momentum p M appearing in the numerators in the application of the light-cone time-ordered framework. In the literature, there are two common choices for the meson momentum appearing in the numerators [15,18], We find that these two choices give nearly identical results concerning the extracted values of our model's parameters and hence both choices yield similar conclusions. But even though choice (10) gives a slightly better shape for the flavor asymmetry, to be discussed in Section IV C, we will use choice (11) since this choice is in line with the Goldstone theorem [27] whereas choice (10) is not, as explicitly shown in Appendix B. As discussed in Appendix A, the parameter values are as follows [28]. The pion decay constant F π = 92.4 MeV and the couplings D = 0.80, F = 0.46 [29] and h A = 2.7 ± 0.3 with an uncertainty range to include partial decay width data on ∆ → N π and Σ * → Λπ as well as the large-N C limit [30,31] where g A = F + D = 1.26 is well constrained by the beta decay of the neutron [32]. Using light-cone time-ordered perturbation theory, the Lagrangians (8,9) lead to the hadronic distribution functions for the baryon and meson, respectively, probed in the fluctuation. As required, they satisfy f BM (y) = f M B (1− y). The various hadronic couplings g BM are provided in Table I and the vertex functions S λ (y, k ⊥ ) are given in Appendix B. The suppression of the energy fluctuation is seen as the propagator with the difference of the squared masses of the proton and the baryon-meson system given by The function G(y, k 2 ⊥ , Λ 2 H ) is a cut-off form factor, which is used to avoid the integral getting an unphysical divergence. The physics issue to account for is the fact that the description in terms of hadronic degrees of freedom is only valid at hadronic scales, whereas for higher momentum-transfer scales parton degrees of freedom should be used. To phase out the hadron formalism it is convenient to introduce a suitably constructed form factor. 
In practice, it is conceivable to cut on the virtuality of the fluctuation [15] or on the modulus of the threemomentum (in a proper reference frame). While the first option sounds plausible from a point of view of Heisen-berg's uncertainty relation (or Fermi's Golden Rule), this quantum-mechanical aspect is already accounted for by the just mentioned propagator in Eqs. (12,13). An additional such cut is therefore artificial. Instead we choose to cut off the three-momentum of the hadrons in the fluctuation as seen in the rest frame of the proton. If relevant at all, high-momenta fluctuations should be of partonic not hadronic nature. To conserve the condition f BM (y) = f M B (1 − y) it is necessary to use a symmetric combination of the meson/baryon three-momentum and a natural choice is to use the average of the squares of the three-momenta of the meson and baryon. To make this manifestly frameindependent we write its value in the proton rest frame expressed in a Lorentz-invariant form and take the form factor to be where Λ H is the parameter that regulates the suppression of larger scales. Since this is related to the switch to partonic degrees of freedom, one would expect it to be of the same order as the starting scale Q 0 of the pQCD formalism. The function A 2 in the form factor is given by where light-cone momenta p + B = yp + and p + M = (1−y)p + have been used to obtain the last expression. This form factor regularizes any potential end-point (y = 0, 1) singularities. Furthermore, high values of k ⊥ are largely suppressed which renders the integrals in Eqs. (12,13) finite and restricts the hadronic fluctuations to the lowmomentum scales where the hadronic language is applicable. Using this theoretical formalism we illustrate the total fluctuation probability for a proton to a BM pair by calculating for both momentum choices, Eqs. (10,11), giving the result shown in Fig. 2. One observes that the probability for a proton to fluctuate into a baryon-meson state is quite sizable. Notably, for a cut-off Λ H around 1 GeV, the contribution from the baryon-decuplet members (mainly from the ∆'s) is comparable in size to the nucleon-pion fluctuations. Due to the hadronic fluctuations, the PDFs for the proton are given by a convolution of the hadronic distributions, Eqs. (12,13), and the PDFs for the hadron being probed. Thus, the PDF for a parton i in the proton can be written in the form [15,18,33] taking into account the contributions from the bare proton and the BM fluctuations. In our approach, the PDF for the 'bare' part in any of these contributions (bare proton, baryon or meson in a fluctuation) is obtained from a Gaussian as mentioned in the Introduction and to be discussed in Section III. These bare distributions contain constituent quarks and gluons, but no sea quarks. In this work we include all the admissible octetbaryon-meson and decuplet-baryon-meson pairs in the fluctuations, i.e. the |N π , |∆π , |ΛK , |ΣK , and |Σ * K fluctuations. The |P η contribution can be neglected due to mass suppression and its very small coupling to the proton state, see Table I. The nucleon-pion and the Delta-pion fluctuations give the largest contributions, while the |ΛK , |ΣK and |Σ * K fluctuations act as small corrections. However, since neither |N π nor |∆π contribute to the strange sea, other fluctuations like |ΛK while being small are the leading hadronic contributions to the strange sea. 
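Before turning to the relative importance of the individual channels, the convolution structure just described can be sketched numerically. The explicit formula is not reproduced in this excerpt, and the functional forms below are invented placeholders rather than the distributions derived from the chiral Lagrangian; only the bookkeeping of folding a hadronic light-cone distribution with the parton density of the probed hadron is illustrated.

```python
import numpy as np

# Schematic sketch of the convolution of a hadronic light-cone distribution
# with the parton distribution of the probed hadron, as described in the text.
# All functional forms are placeholders for illustration only.
def f_BM(y):
    # toy probability for the baryon to carry light-cone fraction y
    return 12.0 * y**2 * (1.0 - y)          # peaked at large y (heavier baryon)

def f_MB(y):
    return f_BM(1.0 - y)                    # momentum conservation within the pair

def q_in_meson(z):
    # toy valence-quark distribution inside the meson
    return 6.0 * z * (1.0 - z)

def sea_quark_in_proton(x, n_y=400):
    # q_sea(x) = int_x^1 dy/y  f_MB(y) q_M(x/y): the probed quark sits in the
    # meson, which itself carries light-cone fraction y of the proton.
    ys = np.linspace(x, 1.0, n_y)[1:]       # avoid division by y = 0
    integrand = f_MB(ys) * q_in_meson(x / ys) / ys
    return np.trapz(integrand, ys)

for x in (0.01, 0.1, 0.3):
    print(f"x = {x:4.2f}  toy sea-quark density = {sea_quark_in_proton(x):.3f}")
```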
It is found that the |ΛK⟩ fluctuation is most important, while the |ΣK⟩ and |Σ*K⟩ fluctuations are suppressed due to a small coupling and the larger masses involved, respectively (see Table I). Once the starting distributions have been obtained from the convolution model at a particular starting scale Q₀², the PDFs are obtained for higher Q² by DGLAP evolution. The DGLAP evolution is performed at next-to-leading order (NLO) using the QCDNUM package [34].

III. GENERIC MODEL FOR PARTON DISTRIBUTIONS IN A HADRON

For any bare hadron we are considering the parton momentum distributions for its valence quarks/antiquarks and a gluon component. This applies both for the bare proton considered above and for the baryons and mesons in a hadronic fluctuation. In the rest frame of such a generic bare hadron there is no preferred direction. Therefore the spherical symmetry motivates the assumption that the parton's momentum distributions in k_x, k_y and k_z are the same. Assuming a Gaussian momentum distribution for these components provides a convenient mathematical form which suppresses large momenta that should correspond to rare momentum fluctuations. It can furthermore be motivated as the natural distribution arising from many soft interactions within the bound-state hadron that produce an additive effect such that, via the law of large numbers, a Gaussian distribution results. Based on the above discussion, the four-momentum distribution for a parton of type i and mass m_i is assumed to be given by [8,10], where N is a normalization factor. The width σ of this Gaussian is expected to be physically given by the uncertainty relation ∆x∆p ∼ ℏ/2 that enforces increasing momentum fluctuations for a particle confined in a smaller spatial range. Thus, for a hadron of size D (diameter) one expects σ ∼ ℏ/(2D), which is typically of order 0.1 GeV. Using light-cone momenta, x = k⁺/p⁺_H will be the convenient energy-momentum fraction (independent of longitudinal boosts) carried by a parton in a hadron. The PDF for a parton i = q, q̄, g of mass m_i in the hadron H is given by the corresponding integral over this four-momentum distribution. The physical conditions of having a kinematically allowed final state impose the constraint m_i² ≤ j² = (k+q)² < (p_H+q)² for the scattered parton to be on-shell or have a timelike virtuality (causing final-state QCD radiation) limited by the mass of the hadronic system. Likewise, the hadron remnant must have a kinematically allowed four-vector (cf. Fig. 1a). These constraints ensure that 0 < x < 1 and f(x) → 0 for x → 1. The normalizations N_q/H(σ_q, m_q) are fixed by the flavor sum rules, i.e. the integrals giving the correct numbers of different valence quark flavors. N_g/H(σ_g, 0) is fixed by the momentum sum rule, i.e. to get the sum of x-weighted integrals to be unity. Thus, the only free parameters are the Gaussian widths σ_g, σ₁, σ₂, where the indices refer to the widths of the distributions for the gluon and for the quark flavors represented by one quark (σ₁) or by two same-flavor quarks (σ₂) in the hadron. With this model we have chosen a minimalistic approach with the same Gaussian distributions for all partons, having a width that only depends on the number of same-flavor quarks, but not on the particular hadron considered. Of course, one could introduce more complexity requiring more parameters, but we find it more interesting to see what insights this minimal physics-motivated model can give. The above parametrization automatically conserves isospin (e.g. f_u/P^bare(x) = f_d/n^bare(x) and similarly for the other hadrons).
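A small Monte Carlo sketch shows how a spherically symmetric Gaussian momentum distribution in the hadron rest frame translates into a light-cone fraction x = k⁺/p⁺. The width, the constituent-quark mass and the simple on-shell treatment of the parton energy below are illustrative assumptions, and the paper's additional final-state kinematic constraints and sum-rule normalizations are not reproduced.

```python
import numpy as np

# Monte Carlo sketch: Gaussian parton momentum in the hadron rest frame ->
# light-cone fraction x = k+/p+, with p+ = M_H for a hadron at rest.
# Width, quark mass and the on-shell energy used for k0 are illustrative.
rng = np.random.default_rng(1)

M_H   = 0.938      # hadron mass (GeV), proton-like
m_q   = 0.300      # assumed constituent-quark mass (GeV)
sigma = 0.110      # assumed Gaussian width (GeV)

n = 200_000
k = rng.normal(0.0, sigma, size=(n, 3))          # (kx, ky, kz) in the rest frame
k0 = np.sqrt(m_q**2 + np.sum(k**2, axis=1))      # simple on-shell parton energy
x = (k0 + k[:, 2]) / M_H                         # light-cone fraction k+/p+

x = x[(x > 0.0) & (x < 1.0)]                     # keep the kinematically allowed range
hist, edges = np.histogram(x, bins=20, range=(0.0, 1.0), density=True)
for lo, hi, h in zip(edges[:-1], edges[1:], hist):
    print(f"x in [{lo:.2f},{hi:.2f}) : {h:5.2f}")
```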
With the above widths for all possible hadrons, the distributions only depend on mass effects via the mentioned kinematical constraints. It should be noted that these PDFs can be analytically evaluated in terms of error functions [10], but in practice it is more convenient to evaluate them numerically. As discussed, these bare distributions will only contain valence quarks and gluons, whereas the sea distributions will be entirely generated by hadronic fluctuations. All the resulting PDFs are at the low hadronic scale to be used as starting distributions at Q₀² for DGLAP evolution to large scales Q².

IV. MODEL RESULTS BASED ON DATA COMPARISON

A. The few adjustable parameters

The model introduced above has few parameters, which are expected to lie in a limited range in order for the model to make sense. The description of hadronic fluctuations is controlled by three coupling strengths with values already fixed by data from various hadronic processes. As discussed in connection with Table I above, the coupling g_A = F + D = 1.26 is constrained to the 1% level from the beta decay of the neutron [32], whereas D = 0.80 and F = 0.46 may vary independently by ∼ ±5% as long as their sum is fixed [29]. Since it is their sum that appears in the most probable fluctuations, a variation in D and F has a negligible effect on the results. For the decuplet coupling we take h_A = 2.7 ± 0.3. Since h_A/m_R, with m_R the resonance mass (basically m_∆), appears as the effective coupling in the decuplet Lagrangian (9), we vary the ratio to see the resulting sensitivity on this uncertainty (see Appendix A for details). The only newly introduced parameter in the hadron fluctuation model is the regulator for the high-momentum suppression, Λ_H. This parameter is constrained to have a value large enough to allow hadronic fluctuations of some baryon-meson configurations, i.e. energy fluctuations of at least a few hundred MeV. On the other hand, it must be small enough to ensure a separation between the hadronic and partonic degrees of freedom. Thus, a reasonable expectation is a value in the range from a few hundred MeV up to a value on the order of 1 GeV. The former is given, as discussed above, by the inverse size of hadrons and the latter by the factorization scale of non-perturbative bound hadron state dynamics from the pQCD description using DGLAP equations for the Q²-evolution of parton density functions. In addition, it is reasonable to expect that Q₀ ∼ Λ_H. But since Q₀ and Λ_H are defined in two different formalisms, the partonic and hadronic respectively, and there is no theoretically well-defined link between these two descriptions, one cannot a priori take them as being the same parameter. Still, as will be seen below they do come out to have the same value within their uncertainties.

FIG. 3. The proton structure function F₂ as a function of Q² for different x-bins. Our model curve compared to data on fixed-target µP scattering from the New Muon Collaboration (NMC) [35] and BCDMS [36].

B. Comparison with proton structure function data

The values of the just discussed parameters are obtained from inclusive deep inelastic lepton-proton scattering giving the proton structure functions F₂ and xF₃. Figures 3-5 show µP data from NMC [35] and BCDMS [36], neutrino data from CDHSW, NuTeV and CHORUS [37-39] and eP data from H1 [40] in comparison to our model results. Not all parameters affect the fit to all data sets. The parameters σ₁, σ₂, Λ_H, and Q₀ can be determined using F₂ and xF₃ data.
Q₀ and σ_g are given by the small-x F₂ data: with Q₀ given, it is always possible to fit the data by varying σ_g. We find that the following parameter values give the best overall result: σ₁ = 0.11 GeV, σ₂ = 0.22 GeV, σ_g = 0.028 GeV, Λ_H = 0.87 GeV, Q₀ = 0.88 GeV. (22) Notice that the fit results in Λ_H and Q₀ being practically the same, confirming our expectation that this scale constitutes the transition from hadron to parton degrees of freedom in the model. Moreover, the Gaussian widths are found to be of the expected magnitude ∼ 0.1 GeV. The gluon distribution is particularly soft, which may seem surprising. However, the above argument based on the uncertainty relation gives σ ∼ ℏ/(2D) = 56 MeV for the proton charge radius 0.875 fm [32]. In view of the symmetry properties of two-particle wave functions of indistinguishable states it should not be surprising that the momentum distribution for quark flavors that appear singly in the hadron differs from the one for quark flavors that appear pairwise.

FIG. 4. The proton structure functions (a) xF₃ and (b) F₂^ν as a function of Q² for different x-bins with data from the neutrino-scattering experiments CDHS [37], NuTeV [38] and CHORUS [39] compared to our model curves.

Considering the fact that the model has effectively only four parameters, which are also constrained by the physics assumptions of the model, it is remarkable that such a large amount of structure function data can be reasonably well described. Admittedly, there are some kinematical regions of some experimental data sets where deviations do occur, but the general behavior is reproduced and substantial (x, Q²) ranges are well fitted.

FIG. 5. The proton structure function F₂ as a function of x for various Q²-bins. Our model curve compared to data from the H1 eP collider experiment [40].

It is therefore of interest to look into some details of the x-shapes of individual parton densities as they emerge from the model, including both the hadronic fluctuations and the probed hadron's generic parton density description, but without any pQCD evolution. This is shown in the top panel of Fig. 6, where the overall shape of the valence quark distributions is quite similar to conventional PDF parametrizations. The gluon is quite large for smaller x. The sea quarks are suppressed, but not at all negligible. So there is a non-trivial contribution of non-perturbatively generated sea quarks in the bound-state proton. Examining the sea quark distributions one notes the different distributions for ū and d̄, on the one hand, and for s and s̄, on the other. This is the basis for asymmetries in the light sea and strange sea, as will be further discussed below. The effect on the PDFs from pQCD evolution using the DGLAP equations is shown in the middle and lower panels of Fig. 6. Due to the log Q²-dependent evolution there is a quick increase from Q₀² so that already at Q² = 1.3 GeV² the perturbatively generated sea quarks and gluons dominate at small x over the originally non-perturbative sea. The PDFs obtained at the starting scale Q₀² are evaluated numerically. However, for illustrative purposes the starting distributions for a parton i can be parametrized in the convenient form x f_i(x) = a x^b (1 − x)^c. The fitted coefficients for the various distributions are given in Table II.
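The bookkeeping behind such a parametrization can be checked numerically with the valence-number and momentum sum rules mentioned earlier. The coefficients below are invented placeholders, since Table II is not reproduced in this excerpt; only the way the two sum rules are evaluated is shown.

```python
import numpy as np
from scipy.integrate import quad

# Sum-rule check for starting distributions of the form x f(x) = a x^b (1-x)^c.
# The coefficients are invented placeholders (Table II is not shown here).
def xf(x, a, b, c):
    return a * x**b * (1.0 - x)**c

params = {"u_val": (2.5, 0.70, 3.0), "d_val": (1.3, 0.70, 4.0), "gluon": (0.9, 0.10, 5.0)}

momentum_total = 0.0
for name, (a, b, c) in params.items():
    momentum, _ = quad(xf, 0.0, 1.0, args=(a, b, c))          # integral of x f(x)
    momentum_total += momentum
    print(f"{name:6s}: momentum fraction = {momentum:5.3f}")

for name in ("u_val", "d_val"):
    a, b, c = params[name]
    number, _ = quad(lambda x: xf(x, a, b, c) / x, 0.0, 1.0)   # integral of f(x)
    print(f"{name:6s}: valence number   = {number:5.2f}")

# For a genuine fit, u_val would integrate to 2, d_val to 1, and the momentum
# fractions of all partons (including the sea, omitted here) would add up to 1.
print(f"total momentum fraction of this toy set = {momentum_total:5.3f}")
```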
C. The d̄-ū asymmetry

From a pQCD point of view, the momentum distribution of the d̄ and ū sea in the proton should be similar since m_u, m_d ≪ Λ_QCD, Q₀. This is, however, not the case as seen in data from e.g. [19] where a clear asymmetry is seen (cf. Fig. 7). Such an asymmetry arises naturally from hadronic fluctuations of the proton where the non-perturbative sea distributions are dominantly generated by the pions [8,10,15-18]. The energy-wise lowest fluctuations are Pπ⁰ and nπ⁺, where the former does not contribute to the d̄-ū asymmetry since the π⁰ is symmetric in dd̄ and uū. Taking only these nucleonic fluctuations into account already gives decent agreement with data on the difference x d̄ − x ū, as shown by the dotted curve in Fig. 7 (upper panel). However, these nucleonic fluctuations are not sufficient to explain the ratio d̄(x)/ū(x), as shown by the dotted curve in the lower panel of Fig. 7. The results become better when also including fluctuations with other baryons. In particular the |∆⁺⁺π⁻⟩ state, having the largest decuplet coupling (see Table I) and having ū in the π⁻, contributes significantly to bring the curves down to the data points. The full octet and decuplet contribution is shown in Fig. 7, where the band represents a variation in the decuplet coupling h_A/m_∆, with the largest (smallest) value in Eq. (21) corresponding to the solid (dashed) curve. As seen in the figure, an n% variation in the coupling results in an n% variation in the difference x d̄ − x ū for small x ≲ 0.15. The variation has a slightly smaller impact on the ratio d̄/ū, but the variation is essentially of the same order of magnitude as that of x d̄ − x ū.

Due to the possibility for the proton to fluctuate into |ΛK⁺⟩, |ΣK⟩ and |Σ*K⟩ states a non-perturbative strange sea will arise, as shown in Fig. 6. It is suppressed relative to the light-quark sea partly due to the kinematical suppression of these fluctuations with higher-mass hadrons, but also due to the smaller hadronic couplings shown in Table I. Moreover, the x-distributions of s and s̄ are not the same, but s has a harder momentum distribution than s̄ [9]. This is a kinematical effect arising from the fact that the s quark is in the baryon which, due to its higher mass, gets a harder y-spectrum in the hadronic fluctuation than the lighter meson containing the s̄. The dominance of kaons in the low-x region and similarly the dominance of strange baryons in the higher-x region is clearly seen in the ratio (s − s̄)/(s + s̄) in Fig. 8. Here one can also see how the additional symmetric ss̄ from g → ss̄ in pQCD reduces this ratio with increasing Q². Since pQCD fills up the low-x region to a higher degree, the kaon effect is more depleted than the 'baryon peak', which is, however, shifted to lower x. The symmetric ss̄ sea from the log Q² DGLAP evolution builds up quickly and dominates at small x already for Q² = 1.3 GeV², as shown in the middle panel of Fig. 8. Thus, the asymmetry is only expected to be visible at quite low Q² and therefore hard to observe experimentally. The extraction of the strange sea from data is not at all trivial since it requires some additional observable to signal that an s or s̄ has been probed. In Fig. 9 our model is compared to data on the total strange sea (x s(x) + x s̄(x))/2. The CCFR data [41] are obtained from neutrino-nucleon scattering producing a charm quark decaying semileptonically, giving an opposite-sign dimuon signature, i.e. ν_µ + N → µ⁻ + c + X where c → s + µ⁺ + ν_µ, or ν̄_µ + N → µ⁺ + c̄ + X where c̄ → s̄ + µ⁻ + ν̄_µ. The charged-current subprocess W⁺s → c or W⁻s̄ → c̄ is here the essential point.
Other sources of charm production, such as W⁺g → cs̄ or W⁻g → c̄s, or other sources of dimuon production from other decays must be taken into account to extract a proper measure of the strange sea, as discussed in [41]. The result shows that although the shape difference between the x s(x) and x s̄(x) distributions is consistent with zero, it has large uncertainties. CCFR assumed x s(x) = x s̄(x) for extracting the data points shown in Fig. 9 [41,42]. The more recent result of HERMES [42] is obtained from data on the multiplicities of charged kaons in semi-inclusive deep-inelastic electron-proton scattering. This requires a detailed and non-trivial analysis of the fragmentation function into kaons to extract the contribution from initial-state strange quarks in the basic DIS process γs → s or γs̄ → s̄. As seen in Fig. 9, the CCFR and HERMES results differ substantially and do not provide a clear result on the strange sea. Our model result agrees reasonably well with the HERMES result, but compared to CCFR it has a too small strange sea at low Q². Since the strange-quark sea is not yet well determined, we contribute with some further investigations. The strange-quark content of the proton can be characterized by the momentum fraction carried by the strange sea relative to the light-quark sea or the non-strange quark content [41], where κ = 1 would mean a flavor SU(3) symmetric sea. These ratios are shown in Fig. 10 versus Q², where the qualitative behavior is understandable within our model. At Q₀² there is, as discussed, only a small non-perturbative strange-quark sea from hadron fluctuations. With increasing Q² the perturbative log Q² evolution first builds up the ss̄ sea quickly and then flattens off at larger scales (note the logarithmic Q² scale in the figure). The proton sea is, however, not flavor SU(3) symmetric, as indicated by the value of κ and quantified by the strange-sea suppression factor r_s. Our results for this quantity are shown in Fig. 11 together with ATLAS data [43,44]. As seen, for Q² slightly larger than the starting value for the QCD evolution Q₀², the suppression factor is constant and near unity for x ≲ 0.01. For low x this is in agreement with the ePWZ fit of [43]. For larger x our model gives r_s(0.023, 1.9 GeV²) ≈ 0.62, which is consistent within uncertainties with experimental observations: r_s(0.023, 1.9 GeV²) = 0.56 ± 0.04 [45], r_s(0.023, 1.9 GeV²) = 1.00 +0.25/−0.28 [43] and r_s(0.023, 1.9 GeV²) = 0.96 +0.26/−0.30 [44]. As seen in Fig. 11, r_s → 1 as x → 0, which supports the hypothesis that the quark sea at low x is flavor symmetric. For completeness we show in Fig. 12 (top panel) the dependence of the strange-sea ratios κ and η on the hadron fluctuation regulator Λ_H. Whereas κ strongly depends on Λ_H, η is almost independent of Λ_H. This can be understood from the plot in the lower panel of the same figure, which compares the non-strange fluctuation probability P_ns (e.g. for |Nπ⟩ and |∆π⟩) and the probability P_s that the proton fluctuates into a hadron pair that does contain strangeness. Not only is P_s ≪ P_ns, but also its slope is much smaller, implying that for increasing Λ_H the rate of population growth is much larger for those fluctuations that contain ū and d̄ quarks than for those containing s and s̄ quarks (∆P_s/∆Λ_H ≈ 5% GeV⁻¹ and ∆P_ns/∆Λ_H ≈ 90% GeV⁻¹ between 0.5 GeV ≤ Λ_H ≤ 1.0 GeV). Hence κ depends much more strongly on Λ_H than does η due to the appearance of ū and d̄ distributions in its definition.
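The sign pattern of the (s − s̄)/(s + s̄) ratio discussed in connection with Fig. 8 can be illustrated with toy distributions: a harder s(x) from the heavier baryon and a softer s̄(x) from the kaon, with equal total strangeness in each. The functional forms below are invented for illustration only and are not the model's actual strange-sea output.

```python
import numpy as np

# Toy illustration of the (s - sbar)/(s + sbar) pattern discussed above:
# the s quark sits in the heavier baryon (harder x), the sbar in the kaon
# (softer x). The functional forms are invented placeholders.
x = np.linspace(0.01, 0.8, 9)
s    = x**0.5 * (1.0 - x)**4      # harder: more weight at larger x
sbar = x**0.2 * (1.0 - x)**7      # softer: concentrated at small x

# Normalize both to the same total strangeness, since each fluctuation
# produces exactly one s and one sbar.
s    /= np.trapz(s,    x)
sbar /= np.trapz(sbar, x)

ratio = (s - sbar) / (s + sbar)
for xi, ri in zip(x, ratio):
    print(f"x = {xi:4.2f}   (s - sbar)/(s + sbar) = {ri:+.2f}")
# Negative at small x (kaon dominance), positive at larger x (the 'baryon peak').
```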
As shown in the lower panel of Fig. 12, at the regulator value of Λ H = 0.87 GeV, roughly 1% of the fluctuations contain strangeness. This can be compared to the result obtained in Ref. [10], where the strangeness fluctuations had to constitute 5% in order to reproduce the then available CCFR data. If it turns out to be a need for a larger non-perturbative strange-quark sea than in our present model, this might be remedied by a minor modification of the model. One option could be a flavor-dependent momentum cutoff Λ H , but to keep our model as simple as possible we refrained from introducing more parameters. An alternative explanation might come from the importance of additional degrees of freedom not considered so far. In the strangeness S = −1 sector there are four baryonic states below the antikaon-nucleon threshold: Λ, Σ, Σ * (1385) and Λ * (1405). The first three have been taken into account in our approach as the strangeness counterparts of the nucleon and the ∆(1232) considered in the pion-baryon fluctuations. But we have not included the Λ * (1405) in our framework. On the one hand, we found that the comparatively heavy KΣ * (1385) fluctuation is much less important than the lighter KΛ. This suggests that also KΛ * (1405) is negligible. On the other hand, the negative-parity Λ * (1405) couples with an swave to nucleon-antikaon while all our interactions are of p-wave nature. This can enhance the importance of the Λ * (1405). The ultimate reason why we have not explored its influence in the present work is the absence of unambiguous experimental information about the coupling strength between a nucleon and KΛ * (1405). This is related to the long-standing question about the nature of the Λ * (1405). Being lighter than all non-strange baryons with negative parity, it has been speculated since a long time [46] that the Λ * (1405) is merely an antikaon-nucleon bound state instead of a three-quark state; see, for instance [47] for further discussion and references. This would point to a relatively large coupling strength. Yet in view of these theoretical uncertainties we have not pursued a detailed analysis of the KΛ * (1405) fluctuation as long as there is no clear need for an enhancement of the strange sea. V. CONCLUSIONS This study has demonstrated that the momentum distribution of partons in the proton, and thereby the observed proton structure functions, can be understood in terms of basic physical processes. We thereby obtain new knowledge regarding the poorly understood nonperturbative dynamics of the bound-state proton. Using the well-established pQCD DGLAP equations for the Q 2 -dependence above the scale Q 2 0 , our model developed here addresses the basic shape in the distribution of the energy-momentum fraction x carried by different parton species at Q 2 0 . Thus, the model treats the physics at the transition from bound-state hadron degrees of freedom to the internal parton degrees of freedom. It does so by convoluting hadronic quantum fluctuations with partonic fluctuations. To describe the former we use the leading-order Lagrangian of chiral perturbation theory, the low-energy effective theory that respects the symmetries of QCD as the underlying theory. The partonic fluctuations arise quantum mechanically due to confinement within the small size of a hadron, as given by the uncertainty relation in position and momentum. 
Interestingly the fit that gives best agreement with the structure functions F 2 and xF 3 data yields a value where the hadronic language ends and QCD evolution begins to be Λ H = Q 0 = 0.87 GeV. Thus, having a model with effectively only four dimensionful parameters with physically meaningful values, it is highly non-trivial that we obtain a very satisfying reproduction of a large amount of data. In particular we find that the nπ + and the ∆π fluctuations generate the flavor asymmetry xd−xū > 0 in the proton sea, to a large extent consistent with experimental data. This shows that the model captures the essential physics observed. Regarding the strange-quark sea of the proton which arises from proton fluctuations into strange hadrons, we find that it is substantially suppressed due to the larger masses of strange hadrons. An asymmetry in terms of different x-distribution for s ands, with s(x) being harder, is found. However, this effect is reduced at larger Q 2 due to the development of the symmetric ss sea from g → ss in pQCD. The remaining asymmetry at observed Q 2 is too small to be seen in present data. Further details of the non-perturbative strange sea of our model are given to promote future studies, including the potentially interesting inclusion of the Λ * (1405) in the hadronic fluctuations. We have here considered PDFs of the proton where most experimental information is available for testing our model. The model is, however, quite general and can give the parton momentum distributions in any hadron. Based on the phenomenological success of the model and its theoretical basis, spin degrees of freedom and the proton spin puzzle are studied in another paper [48]. ACKNOWLEDGMENTS We acknowledge helpful discussions with C. G. Granados and U. Aydemir at an early stage of this project. This work was supported by the Swedish Research Council under contract 621-2011-5107. Appendix A: The relevant Lagrangians For the metric we use g = diag(+1, −1, −1, −1) and 0123 = +1. The relevant part of the leading-order chiral Lagrangian describing the interaction of Goldstone bosons with nucleons and spin 3/2 baryons is given by [20][21][22][23] (A1) Relativistic Rarita-Schwinger fields exhibit some problematic features related to how to handle its spin-1/2 components. Apart from exchanging spin-3/2 resonances, the Lagrangian (A1) induces an additional unphysical contact interaction. This can be cured by subscribing to the Pascalutsa prescription, which in our case means making the substitution [21,22] where m R refers to the resonance mass (m R = m ∆ , m Σ * ). Note that this substitution induces an explicit flavor breaking but these effects are beyond leading order. In (A1) B ab is the entry in the ath row, bth column of the matrix representing the octet baryons The Goldstone bosons are contained in and u µ is essentially given by (A5) Finally, the decuplet is represented by a totally symmetric flavor tensor (A6) The couplings we use [28] are F π = 92.4 MeV, D = 0.80, F = 0.46 [29] and h A can be determined from the partial decay width Σ * → Λπ or from ∆ → N π to be h Σ * →Λπ A = 2.4 and h ∆→N π A = 2.88. (A7) In the large-N C limit [30,31], one also gets (N C = number of colors) where g A = F + D = 1.26. We will explore the range h ± A = 2.7 ± 0.3. Notice that after the substitution (A2) it is the ratio h A /m R that appears with each decuplet-baryon-meson term [cf. Eq. (7)]. 
That is, on the probability level one has schematically T Σ * (Λ H ) + · · · (A10) so that one could vary h A by ∼ 10% for each of the separate decuplet terms keeping the masses as shown but numerically it makes not much of a difference to instead use m R = m ∆ for both terms and vary the ratio between its smallest and largest values 1. (A11) The effect of this variation is shown in Fig. 7 and its effects on the probabilities are studied in Ref. [49]. Appendix B: The vertex functions The functions S λ (y, k ⊥ ) are the amplitudes for a particular hadronic fluctuation of a proton with positive helicity and we calculate them using (on-shell) light-front spinors and the Lagrangian of Eq. (7). These amplitudes were calculated for a wide variety of hadronic fluctuations in [15]. Our results are basically the same with minor differences due to a different choice of Lagrangian. Apart from different normalizations, our results for the functions S λ (y, k ⊥ ) agree with those found in [33]. We now present the vertex functions for both choices of the meson's 'derivative momentum'.
Improving the Neighborhood Environment for Urban Older Adults: Social Context and Self-Rated Health

Objective: By 2030, older adults will account for 20% of the U.S. population. Over 80% of older adults live in urban areas. This study examines associations between neighborhood environment and self-rated health (SRH) among urban older adults. Methods: We selected 217 individuals aged 65+ living in a deindustrialized Midwestern city who answered questions on the 2009 Speak to Your Health survey. The relationship between neighborhood environment and self-rated health (SRH) was analyzed using regression and GIS models. Neighborhood variables included social support and participation, perceived racism and crime. Additional models included actual crime indices to compare differences between perceived and actual crime. Results: Seniors who have poor SRH are 21% more likely to report fear of crime than seniors with excellent SRH (p = 0.01). Additional analyses revealed Black seniors are 7% less likely to participate in social activities (p = 0.005) and 4% more likely to report experiencing racism (p < 0.001). Discussion: Given the increasing numbers of older adults living in urban neighborhoods, studies such as this one are important for understanding well-being among seniors. Mitigating environmental influences in the neighborhood which are associated with poor SRH may allow urban older adults to maintain health and reduce disability.

Introduction

During the last decade, there has been a resurgence of interest in the impact of neighborhoods on health. Growing epidemiological and sociological evidence links the residential environment to an individual's health [1,2]. The effect of neighborhood on health is particularly salient among older adults because older individuals are most likely the longest-dwelling residents in the community and they have increased reliance on resources in their immediate neighborhoods [3]. Poor neighborhood conditions, which include a lack of social support, social networks, social cohesion and low perceptions of safety [4], may contribute to physical inactivity [5], obesity [6], and mental health disorders [1]. The geographic area most commonly referred to as the neighborhood is not precisely defined. In health research, the terms neighborhood and community are often used interchangeably to refer to a person's immediate residential environment, which is hypothesized to have material and social characteristics related to health [3]. Administratively defined areas, such as census tracts, block groups, and zip (postal) codes have been used as rough proxies for neighborhoods. Other criteria used to define a neighborhood can be historical, based on residential characteristics, or based on …

Methods

This study uses secondary data extracted from the 2009 Speak to Your Health Community survey [14]. Speak to Your Health is a telephone survey conducted by the Prevention Research Center of Michigan to collect demographic, environmental (i.e., neighborhood characteristics), services, and health information from a cross-section of individuals living in Genesee County, Michigan (USA). The survey uses random digit dialing to select a sample of households throughout the county. The Prevention Research Center of Michigan is a community-university partnership which includes the University of Michigan, School of Public Health, the Genesee County Health Department and the Greater Flint Health Coalition. Additional details on the Speak to Your Health survey were published in an earlier article [15].
The study also used crime statistics from Location, Inc. (Worcester, MA, USA), which is a provider of location-based statistical data that includes crime statistics, lifestyle and demographic data on neighborhoods across the United States. The actual crime indices for each neighborhood are based on data from the Federal Bureau of Investigation (FBI) and the U.S. Justice Department. The crime indices used in this study are the same as the FBI-defined crime index, which is composed of the eight offenses the FBI combines to produce its annual index. These offenses include willful homicide, forcible rape, robbery, burglary, aggravated assault, larceny, motor vehicle theft, and arson.

Setting

Flint, Michigan, the urban center of Genesee County, is a de-industrialized city whose economy and population declined during the latter part of the twentieth century. Flint has high unemployment and, based on local crime rates, was recently ranked in the top five most dangerous cities in the United States [15].

Subjects

From 1698 participants who answered questions on the 2009 Speak to Your Health survey, we focused on the 217 individuals over 65 years of age who lived within the city of Flint. Because of the low number of survey participants from other races (<5), we selected only White and Black participants and stratified the study subjects by racial categories. The terms Black and African American are used interchangeably and refer to the same group. Basic demographic characteristics collected as background information on participants included age, gender, education, marital status, and health status (see Table 1). The proposal for this study was submitted for review to the university's IRB and was determined to be exempt because of its use of de-identified survey data. The survey committee of the Prevention Research Center of Michigan also reviewed the manuscript to evaluate appropriate use of the data.

Self-Rated Health

Individuals were assessed on health status using self-report indicators (see Table 1). The Speak to Your Health survey asked subjects to self-rate their health. The indicators excellent, very good, good, fair, and poor were then converted to numeric values 1 through 5, with higher values indicating excellent health. For our analysis, we combined subjects who rated their health as excellent or very good into a single category labelled excellent self-rated health (SRH). We also merged subjects who rated their health as fair or poor into one category labelled poor self-rated health (SRH).

Social Capital

To measure individual and collective social capital, the researchers selected eleven items from the Speak to Your Health survey. Social support (individual social capital) was measured by using six items from the Speak to Your Health survey. Individuals were asked about their relationships with relatives, friends, community members, and the religious community [16]. Social participation (collective social capital) was measured with five items. Survey respondents were asked if they were "involved in neighborhood clean-up, beautification, or community garden project," "involved in meeting of a block or neighborhood group," "took action with neighbors to do something about a neighborhood problem," and "volunteer in a program at a local school" [16].

Actual Neighborhood Crime

The crime rate in the neighborhood was measured using an index which ranged from 1 to 100, with 1 being the most dangerous.
Crime indices for each neighborhood were based on data from the Federal Bureau of Investigation (FBI) and the U.S. Justice Department. The crime indices used in this study, gathered from Location, Inc., are the same as the FBI-defined crime index composed of the eight offenses the FBI combines to produce its annual index.

Perception of Neighborhood Crime

Perceptions of neighborhood crime and safety were assessed with items collected from the survey. Survey responses were collected from the following questions: "How fearful are you about crime in your neighborhood?" (very fearful, somewhat fearful, not very fearful, and not at all fearful); "How safe is it to walk around alone in your neighborhood during the daytime?" (extremely dangerous, somewhat dangerous, fairly safe, completely safe); and "How safe is it to walk around alone in your neighborhood after dark?" (extremely dangerous, somewhat dangerous, fairly safe, completely safe). The response indicators were then converted to numeric values 1 through 4, with high values indicating very fearful or extremely dangerous. For the final item, "Compared to other neighborhoods, the crime rate in my neighborhood is" (very high, high, about the same, low, and very low), the indicators were converted to numeric values 1 through 5, with higher values indicating very high or high crime. While using crime rates in a neighborhood is a more objective measure of neighborhood safety, subjective experiences and perceptions are more directly related to health [17] and are highly correlated with objective measures [4,18-20].

Perceived Racism

We assessed perceived racism using multiple items [21]. Respondents indicated the degree to which they were ignored, overlooked, or not given services, were treated rudely or disrespectfully, and were treated as if they were "stupid" or "talked down to" because of their race (never, rarely, sometimes, or often).

Health Status

The study included two additional physical and mental health outcome measures because of their relationship to SRH. Subjects were also asked whether or not they had been diagnosed with high blood pressure, heart disease, stroke, cancer, and diabetes (yes or no). For our analysis, each subject received a score based on the number of chronic conditions reported. For the final assessment of health status, we measured our study population on psychological conditions. Subjects reported (yes or no) whether they had been diagnosed with depression, anxiety or sleep disorders. As with our assessment of chronic conditions, each subject received a score based on the number of psychological conditions reported.

Demographic Variables

Demographic variables collected for our study population included race, gender, age, education, and marital status. Race included only Black and White participants because of the limited number of other races (<5). Age was examined as both a continuous and a categorical variable. The researchers subdivided the older adults into three groups: younger old (ages 65-74), older old (ages 75-84) and oldest old (ages 85+). In addition to looking at variations among populations by gender and race, researchers also examined differences between older adults from youngest old to oldest old. Education was collected as a categorical variable. For the purpose of this study, it was collapsed into four categories: less than high school, high school graduate, some college/technical school/associate's degree, and bachelor's degree or above. The study also included marital status: single (including divorced or widowed) and married or in a committed relationship.
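As an illustration of the kind of recoding described in the measurement subsections above, a small sketch follows. The column names and the toy rows are hypothetical, since the survey data themselves are not reproduced here; only the mapping from response labels to numeric scores and collapsed SRH categories is shown.

```python
import pandas as pd

# Sketch of the recoding described in the measurement subsections above.
# Column names and the toy rows are hypothetical; only the mapping logic is real.
srh_map  = {"poor": 1, "fair": 2, "good": 3, "very good": 4, "excellent": 5}
fear_map = {"not at all fearful": 1, "not very fearful": 2,
            "somewhat fearful": 3, "very fearful": 4}

df = pd.DataFrame({
    "srh":  ["excellent", "fair", "good", "poor"],
    "fear": ["not very fearful", "very fearful", "somewhat fearful", "very fearful"],
})

df["srh_score"]  = df["srh"].map(srh_map)    # 1-5, higher = better self-rated health
df["fear_score"] = df["fear"].map(fear_map)  # 1-4, higher = more fearful

def srh_category(label):
    # Collapse the five responses into the two analysis categories used above.
    if label in ("excellent", "very good"):
        return "excellent SRH"
    if label in ("fair", "poor"):
        return "poor SRH"
    return "good"

df["srh_cat"] = df["srh"].map(srh_category)
print(df)
```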
Statistical Analyses
We stratified our study population by race (White and African American) and calculated the average age for each group. Then we assessed the proportions of each group based on gender, education level and marital status (see Table 1). For the analysis of the effect of the neighborhood environment on SRH, we used a multinomial logistic regression model with SRH as the outcome of interest and demographic, socio-economic, neighborhood and health status variables as predictors. We used psychological and chronic conditions as health status variables in the model to control for differences between groups in addition to controlling for their effect on self-rated health (see Table 2).

Because the results of our analysis showed a strong association between poor SRH and fear of crime, we conducted additional analyses to examine this relationship. We used ArcGIS software to map the relationship between SRH and fear of crime by neighborhood (see Figure 1). Neighborhoods are shown at the census tract level. The neighborhood representation for fear of crime takes the average of all older survey participants living within the neighborhood. The average score of each neighborhood was assigned a category of very low, low, average, high, or very high. The neighborhood then received a symbol which was sized according to the category. Larger dots symbolized greater fear of crime. An additional GIS layer shows SRH. SRH takes the average of all older survey participants living within the neighborhood. The neighborhood then receives a color based on the average SRH. Darker colors represent poorer health.

For further statistical analyses of fear of crime in our study population, we also used a Poisson regression model (see Table 3). The model included fear of crime as our variable of interest. Similar to our previous model, we included demographic, socio-economic, neighborhood and health status variables. Because this analysis revealed differences based on race, we stratified the subjects by race and re-analyzed the data for each racial group. We reported adjusted odds ratios, confidence intervals, and p-values for each group.

For our final analysis, we wanted to evaluate whether actual crime in the community had the same relationship to SRH as fear of crime (see Table 4). To analyze these data, we used a multinomial logistic regression model which included all variables in our initial analysis, except that we replaced fear of crime with actual crime rate categories (low, medium, and high). All regression models were analyzed using SPSS, version 19 (IBM, Armonk, NY, USA).
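The neighborhood-level aggregation behind Figure 1 can be sketched as follows. The published analysis used ArcGIS and SPSS; this Python version, the jenkspy dependency, and the column names are assumptions for illustration only.

```python
# Sketch: aggregate individual scores to census tracts and classify the tract means
# with Jenks natural breaks. jenkspy is an assumed dependency; older releases name
# the argument nb_class rather than n_classes.
import pandas as pd
import jenkspy

def tract_means(df: pd.DataFrame) -> pd.DataFrame:
    # Average of all older survey participants living within each census tract.
    return df.groupby("census_tract")[["srh_score", "fear_crime"]].mean()

def jenks_classes(values: pd.Series, labels: list[str]) -> pd.Series:
    breaks = jenkspy.jenks_breaks(values.to_numpy(), n_classes=len(labels))
    return pd.cut(values, bins=breaks, labels=labels, include_lowest=True)

# Example use (survey is the recoded respondent-level data frame):
# tracts = tract_means(survey)
# tracts["fear_class"] = jenks_classes(
#     tracts["fear_crime"], ["very low", "low", "average", "high", "very high"])
```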
Results
Our study population consisted of 217 individuals ranging in age from 65 to 91 years. The average age was similar for Whites and Blacks, at 74.26 and 74.18 years respectively. Among White seniors, 70% were female, compared with 73% among Blacks. Seven percent of Whites did not complete high school or a GED, compared with 27% of Blacks. Of those with the highest levels of education, 22% of Whites held a bachelor's degree or higher, while 15% of Blacks had an equivalent level of education. Only 32% of Whites were married or in a committed relationship, compared with 40% of Blacks. There were no significant differences between White and Black seniors on these demographic variables. The mean scores for SRH were not significantly different between Whites (2.10) and Blacks (2.04). For psychological conditions, 37.5% of White seniors reported one or more of the listed conditions, but only 24% of Black seniors did the same. Finally, the average number of chronic conditions was not significantly different (2.22 for Whites and 2.08 for Blacks).

Figure 1 shows the relationship between neighborhood SRH and fear of crime. For the GIS analysis, we combined the scores of individuals living within each census tract and then divided by the number of participants within that neighborhood to get the average (mean) SRH for the neighborhood. This created a standardized unit for each neighborhood. We summarized SRH into three categories. The neighborhoods with excellent SRH had scores between 3.6 and 4.4. The scores for average SRH were between 2.7 and 3.5. The neighborhoods with scores from 1.5 to 2.6 were labelled as poor SRH. For the fear of crime data we also combined the scores of the individuals living within each of the census tracts and divided by the number of participants within that tract to get a score for the tract. Categories were created to summarize the data. The very low category scores fell between 5.8 and 7.0. The low scores were between 7.0 and 8.6. The average scores were 8.7 to 9.7. The high scores were 9.8 to 11.0. The very high scores were 11.1 to 12.0. There were no average neighborhood scores lower than 5.8 or above 12.0. We used Jenks natural breaks to determine the cut points for both the SRH and fear of crime categories.

Table 2 summarizes our analysis of neighborhood environment and self-rated health among older adults. Seniors with poor SRH were 21% more likely to report fear of crime compared with seniors with excellent SRH (p = 0.01). They were also twice as likely to have chronic conditions and three times more likely to report psychological conditions such as depression, anxiety, or sleep disorders. Seniors with poor SRH were also 4 to 5 times more likely to have a high school education or less. Because of the significant relationship between fear of crime and poor SRH, we conducted further analysis of this effect. Results are shown in Table 3. (Note: in the tables, "single" also includes widowed, separated and divorced participants.)

Table 3 shows fear of crime as our outcome measure. For seniors overall, social participation (p < 0.005) and racism (p < 0.001) are strongly associated with fear of crime. Seniors who report fear of crime are 6% less likely to participate in social activities in the neighborhood and 3% more likely to experience racism. Also, these seniors are 18% more likely to have lower levels of education (less than high school). Results also show a racial difference between Black and White seniors (p = 0.01) reporting fear of crime. Black seniors are 7% less likely to engage in social activities in their neighborhood (p = 0.005) and 4% more likely to report racism (p < 0.001). Although White seniors are less likely to participate in neighborhood activities, the difference is not significant. However, like Black seniors, they are more likely to report racism (p = 0.04). Furthermore, Black seniors reporting fear of crime are 38% more likely to have less than a high school education.
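The models reported in Tables 2 and 3 were fitted in SPSS; an equivalent specification in Python's statsmodels might look roughly as follows. The variable names are placeholders and the snippet is illustrative rather than a reproduction of the original analysis.

```python
# Sketch of the two models; variable names are assumptions.
import statsmodels.formula.api as smf

def fit_models(df):
    # Multinomial logistic regression: SRH category (0 = poor, 1 = good, 2 = excellent)
    # regressed on fear of crime plus demographic, socioeconomic and health-status controls.
    srh_model = smf.mnlogit(
        "srh_code ~ fear_crime + chronic_count + psych_count + C(race) + "
        "C(gender) + age + C(education) + C(marital_status)",
        data=df).fit()

    # Poisson regression with the fear-of-crime score as the outcome, as in Table 3.
    fear_model = smf.poisson(
        "fear_crime ~ social_participation + perceived_racism + C(race) + "
        "C(gender) + age + C(education)",
        data=df).fit()
    return srh_model, fear_model

# For the stratified re-analysis by race, refit the fear-of-crime model separately
# on the White and Black subsets (dropping the race term from the formula).
```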
Our final model examined associations between actual neighborhood crime and SRH. Again, we used SRH as our outcome measure. When we added actual neighborhood crime indices to our model, we found no significant relationship between actual crime and SRH. However, as expected, we found a relationship between SRH, chronic and psychological conditions, and level of education. Seniors with poor SRH were twice as likely to have chronic conditions and three times more likely to report psychological conditions. In addition, seniors with poor SRH were 5 to 6 times more likely to have a high school education or less.

Discussion
In summary, fear of crime was strongly related to poor SRH among older adults. This study supports the conceptual framework for understanding social inequalities in health and aging proposed by House [13]. This framework, based on a stress and adaptation model from social epidemiology, theorizes that socioeconomic position and race/ethnicity shape individuals' exposure to and experience of virtually all known psychosocial and environmental risk factors. These risk factors explain the size and persistence of social disparities in health. Our study also supports similar findings on the relationship between neighborhood factors and psychological distress [19]. Booth et al. found that neighborhood factors are associated with mental health outcomes, but concluded that more research was needed. Both studies support the social stress theory that chronic stressors outside the individual become internalized. Our findings provide additional information to previous studies by showing a relationship between fear of crime and poor SRH among older adults. Our study built on these previous studies by also comparing fear of crime and actual crime indices to poor SRH among seniors. Although actual rates of crime victimization among older adults are much lower than among younger people [18], older individuals express higher levels of fear of crime and lower levels of perceived safety [19]. Among both White and Black seniors, this results in lower social participation and perceptions of higher levels of racism. The relationship was even stronger among Black seniors. Furthermore, Black seniors with less than a high school education are 38% more likely (p = 0.01) to fear crime in their community. This study also shows that although women are more likely than men to experience fear of crime, the differences were not significant. We are unable to say from these results whether increased fear causes lack of participation or whether increased fear is the result of lack of involvement in neighborhood activities. Individuals who are involved in neighborhood activities are more likely to meet their neighbors and establish relationships. Previous studies have shown that knowing one's neighbors can decrease vulnerability to health risk by increasing social capital.

The strength of this study is that it uses subjective and objective measures of the neighborhood environment, particularly for crime. However, several limitations of the study should be noted. The first limitation is that the data are cross-sectional. Because of the cross-sectional data, we cannot infer that the findings of the study are causal in nature. However, the evidence is consistent with the conceptual frameworks of Diez-Roux [3], which suggests that a disadvantaged neighborhood environment is related to poor health, and House [13], which suggests that the neighborhood environment can and does "get under the skin," causing biological changes which contribute to poor health.
More studies are needed to further analyze the reciprocal influences of the neighborhood as a place of residence, perceptions of the neighborhood, and self-rated health. The second limitation of the study is that although our sample is population based, our analysis is focused on only Black and White urban residents over the age of 65 living in Flint, Michigan. The study could be strengthened by examining additional races or ethnicities in urban centers in other regions. However, because of the characteristics of this geographic area, the number of participants of other races or ethnicities was too small to analyze. Nevertheless, metropolitan areas, especially in the Midwest, are similar in population characteristics and socioeconomic structure. Also, contextual neighborhood effects have been examined in other settings [8,22,23]. Therefore, we believe that this study may provide information for other urban settings with inequities in neighborhood environments. In addition, using self-rated health rather than actual measures of health as an outcome variable may have introduced response bias. People in poor health may feel more negative about their neighborhoods [8].

Although these findings give insight into the relationship between fear of crime and SRH, they still raise questions that should be explored, chiefly whether there is a causal link between fear of crime and poor SRH. Future research should assess what creates fear of crime and whether this fear substantially changes an individual's health behaviors or health status. It is reasonable to assume that fear is caused by objective measures such as actual neighborhood crime, but it appears that other influences such as the media, neighborhood history, and/or history of victimization may play a greater role.

Implications for Neighborhood Environment and Health
Aging populations within urban neighborhoods will create a series of challenges to the provision of health and social care. As the population ages, the total amount of ill health and disability in the population will increase unless there is considerable improvement in the health of current and future urban seniors [24]. These changes are expected to occur because of the shift from acute infectious disease to complex chronic long-term illness and disability. This shift is expected to cause dramatic changes in the allocation of health care resources and the configuration of services [25]. It has also been predicted that even if increases in the urban older adult population do not exert pressure for additional resources in the health care system, they may create the need for the development and improvement of community services for seniors with complex health needs. Mitigating environmental influences in the neighborhood which are associated with poor SRH may allow urban older adults to maintain health and reduce disability. Extending healthy lives within this population will reduce costs associated with long-term health and social care [24].

Conclusions
A growing body of literature has reported associations between neighborhoods and health [26]. As the literature expands, it is worthwhile to consider populations such as older adults because the proportion of people aged 65 and older is growing. In the U.S., the number of people over the age of 65 is expected to reach 72 million, which will account for roughly 20% of the population [6]. Previous research relating neighborhoods to health in older adults has examined mortality [9], mental health [27], and health behaviors [28].
This study adds to the existing literature by examining perceived vs. actual effects of the neighborhood environment among urban older adults. We were able to show a relationship between fear of crime and poor SRH among urban seniors. In addition, we were able to show that poor SRH is not related to objective measures of actual crime but is related to perception of crime. This suggests that self-rated health is more affected by perception of the neighborhood. This finding supports the suggestion that self-rated health may be improved by improving senior adults' attitudes about their neighborhood environment. Specific strategies may include reducing fear by creating activities which focus on meeting other individuals in the neighborhood. We also found that race is a determinant in older people's perceptions. Understanding specific neighborhood influences on health will enable us to improve the lives of older adults, many of whom are aging in place [29], and is crucial in addressing growing populations of urban older adults.
Regional Demographic Differences: the Effect of Laestadians

Laestadianism, a conservative revival movement inside the Lutheran church, has an estimated 100,000 followers in Finland. Laestadians have characteristics differing from the followers of the mainstream state church in areas such as religious activity, regional concentration, fertility and family planning, but these are generally not quantified due to a lack of easily accessible data. This study highlights the importance of including location and religiosity, and not only religious affiliation, in the study of fertility behaviour. The research uses statistical tools to study the correlations between such variables as religious density and total fertility rate. It is found that on the regional level, the total fertility rate and the increasing number of small children in the family are positively associated with the proportion of Laestadians. The regional variation of religiousness, and the subsequent effects on population structure and socioeconomics, are discussed.

Background
Communities that share a tradition of faith and values may differ significantly in their demographic histories. This is attested by a wealth of empirics on religious differences in fertility (Frejka and Westoff 2006, Derosas and van Poppel 2006, McQuillan 2006). This study addresses regional fertility differences, and the role of the religious minority of Laestadians in creating them. Location and religiousness are shown to be influential elements in discussing the evolving patterns of the demography of regions.

Finland is still rather homogeneous in terms of its population composition. For example, in 2007 over 80 percent of the 5.3 million population were Lutheran by religious denomination (Kääriäinen et al. 2008). The next biggest denomination is Orthodox Christian, with some 58,000 followers, followed by Jehovah's Witnesses with under 20,000 followers. In terms of nationalities, Finland is also one of the most homogeneous countries in Europe, with only 2.5 percent of its population being of foreign origin at the end of 2007 (Kääriäinen et al. 2008).

Location and religious affiliation are often coupled. Countries have been made and destroyed based on religious beliefs, not to mention provinces, municipalities, cities and communities. People belonging to minorities form spatial sub-communities in order to support one another on the one hand, and to decrease outside influence on the other. To survive, they need successful biological investment (fertility-survival) and the capacity to transfer enough symbolic capital (such as language, religion, and customs) from generation to generation (Oris et al. 2005).

There has recently been a decline in the study of religious affiliation as a determinant of demographic behaviour, as a result of the smaller religious differences in parity (Mosher et al. 1992, McQuillan 2004, Philipov and Berghammer 2007). With the levelling off of the old Catholic-Protestant fertility differentials, a new era of religion-inspired population research is surfacing, namely that of cultural beliefs and practices (Goujon et al. 2007, Philipov and Berghammer 2007, Lutz et al. 2006).
As the main religious groups throughout Europe have come closer to each other in terms of their fertility behaviour, other distinctively differing groups have begun to emerge. One is the increasing minority of Muslims, who, in Finland, have immigrated to a great extent as asylum seekers, and are still few in number. The other important groups are religious fundamentalists and revivalists. In addition to movements and cultural groups, frequently attending followers of any mainstream religion could be classified as minorities based on their beliefs, intensity, and presumably distinctive fertility behaviour.

Minorities and their fertility behaviour have also been studied for several decades. Goldscheider and Uhlenberg concluded already in 1969 that a minority group status will operate to enhance the fertility differential with respect to the majority group when committed to a religious ideology or socio-cultural norm that encourages large families and/or restricts the choice and use of contraception (Goldscheider et al. 1969). Otherwise, they find that the fertility levels of a minority group are depressed, perhaps as a result of the insecurities associated with minority group status. Voas (2007) and Zhang (2008) find that in fact religiosity rather than religious identity may be the key in discovering the modern connections between population studies and religion. Similarly, Jewish studies in the past have shown a remarkable difference in fertility depending on religiosity, rather than religious affiliation. Mott and Abma (1992) conclude that Orthodox Jews have substantially more children than the Jewish majority despite belonging to the same church. Philipov and Berghammer (2007) found in their international study that the attendance of religious services has a strong impact on fertility ideals, and confirm that the measure is a slightly more relevant predictor of such ideals than affiliation alone. In addition, a higher intensity of self-assessed religiosity is positively associated with the measures of fertility (Philipov and Berghammer 2007). With the progressive loss of influence of religious institutions in the society, the degree of church attendance has become a more salient predictor of family norms, particularly for women (Adsera 2006).

Although religious minorities, e.g. revivalists, are often public and commonly known, the number of followers and the intensity of their religiousness is a different question. Surveys and census information on such groups are scarce, in part due to the very fact that the branches may still be affiliated with their church of origin, as is the case with the Laestadians. This study attempts to combine fertility differentials with not only the religious affiliation of the population, but sub-affiliation (revival movement), religiousness and spatial distribution.
Laestadians of Finland
A historical account of how Laestadianism has evolved has been documented earlier (Lohi 1997). However, in the context of religious minorities, little has been written about Laestadianism. This is surprising, since in terms of estimated numbers of followers, Laestadianism is a minority much bigger than other (registered) denominations, after the state church. It is generally believed that the different branches of Laestadianism have differing trends in their stand and recommendations concerning e.g. family planning. This article focuses on one of several branches, namely the Conservative Laestadians, who also make up the majority of Laestadians in Finland. Perhaps unsurprisingly, they are also known as the most conservative branch of the movement. Without plunging deeper into the ideology of the Laestadians, it is assumed that compared to the followers of the mainstream Lutheran denomination, the Laestadians are a religious people. They have chosen a revival movement to guide their path, and with it, a set of more rigid guidelines than imposed upon the followers of the mainstream church. Abiding by such guidelines will undoubtedly show in demographic analysis and, as this study points out, even in a regional analysis, provided the concentration of the population with distinctive fertility behaviour is substantial. In the absence of a better measure, the density of (Laestadian) churches is assumed to be a cue for high fertility in a region. For the purpose of this article I refer to the Conservative Laestadians as simply Laestadians. A Laestadian is someone who belongs to the Lutheran church and practises Laestadian faith.

The revival movements inside the state church are not documented separately in the population registry. Since the Laestadians belong to the Lutheran church, detailed information about the demographic composition of those participating in the movement is not available. Perhaps due to this data restriction, this revival movement has been much overlooked in studies of religious fertility and minority fertility differentials in the past. The people of Laestadian faith can nevertheless have a strong impact on the local development and demography of their town, municipality, even region. Their religion is known for its natural fertility, emphasizing family values and guidelines including pre-marital abstinence and refraining from the use of contraception. However, belonging to a religious group no longer equals adhering to all the teachings and recommendations provided by the group. For a measure of how religious a person is, we need to quantify religious intensity, or religiosity. Quantifying religiosity, however, is challenging, e.g. when coupled to the study of (marital) fertility, including inter-faith couples (Neuman 2007).

The concept of religiosity and adhering to religious guidelines is particularly interesting in contemporary Finland, where increasingly secular and non-religious values lead to more and more people leaving the church (Kääriäinen et al. 2008). It has been shown that religious affiliation alone is an insufficient measure in explaining current fertility patterns and norms. Laestadians are an example, belonging to the state church, but having characteristically different fertility behaviour from the majority.
Local fertility behaviour has been under scrutiny in Finland before. In 1991 an article was published in Population Studies that highlighted the demographic development of a small town of some 3,500 inhabitants, namely Larsmo-Luoto (Finnäs 1991). In this article, Finnäs describes a population of pro-natalists living in a rather closed community. He describes a town with an approximated 40% Laestadian population, and a total fertility rate (TFR) as high as 3.68 around 1980 (cf. an average TFR of 1.63 for Finland that year). Based on his research of historical church records, he concludes that whereas the TFR of Finland started to decline steeply in the early 1960s, the pattern did not apply to Larsmo. Before the 1960s, the Finnish total fertility rate was well above 2.5; it plummeted to 1.5 in 1973, from which it has slowly recovered to the current level of about 1.85 (Statistics Finland 2010). Comparing to the Hutterites, the small Protestant group in North America often quoted for their natural fertility (e.g. Espenshade 1971), Finnäs finds a differing pattern of parity among the Laestadians. The Hutterite distribution of women by parity is unimodal with the modal number between nine and eleven. For the Laestadian women the distribution is bimodal with peaks at four and ten. That is to say, the Hutterite women produce on average nine to eleven children, whereas the Laestadians may have a smaller family of four, or conversely a larger family of ten children. Laestadians therefore do not exhibit a fully natural fertility pattern, but have nevertheless a much higher fertility than the average Finn.

Other religious minorities and locations
Well-known examples of closed communities are religious ones, like the Amish in the United States. The Amish often appear in studies of religious fertility differentials. A comparison is provided by the Hutterites, as mentioned above. Ericksen et al. publish a (by now) historical account of Amish and Hutterite fertility, and make comparisons to the American average (Ericksen et al. 1979). They, too, set out to investigate whether there had been changes in the fertility behaviour of the Amish (cf. Finnäs, above). Historically, it can be seen that the maintenance of high fertility norms among the Amish results in a high rate of population increase, of which one manifestation could be the steep increase in the number of church districts in Lancaster County (Ericksen et al. 1979). Socioeconomic status does not necessarily have a negative impact on fertility, argues Heaton (1986) in the case of Mormons, and goes on to say that the acceptance of the Mormon theology of marriage, contact with other Mormons as a reference group, and socialization in a Mormon subculture all have a positive influence on their fertility. The pattern of high fertility depicted by Mormons in e.g. Utah, USA, strengthens the local interdependence between religion and family much in the same way as is investigated here in the case of the Laestadians.
Other religious communities, such as the Catholics, have been both leaders and laggards in the demographic transition, depending on location (van Heek 1956, McQuillan 2004). The same is true in the Muslim world. Islam is widely associated with persistently high fertility; however, in certain countries, such as Turkey, Tunisia and Lebanon, it has not stood in the way of fast declining fertility in the latter part of the 20th century (Obermeyer 1992, 1994). Muñoz (2009) studies the geographies of religion and ethnicity in the context of three different kinds of Indian faith in Scotland. She concludes that religion, not only ethnicity, plays a role in determining contemporary residential patterns and levels of segregation (Muñoz 2009). She finds that there is inherent value in incorporating religious affiliation into the study of geographies of population.

Data and methods
As the exact numbers of followers of the Laestadian revival movement are not known, much less so by region or municipality, it is difficult to formulate a "bigger picture" of how a community with preferences for restricting family planning and the use of contraceptives affects local demography. Based on municipality and regional level population data, as well as surveys on beliefs, the effect of the fertility behaviour that was observed by Finnäs (1991) is investigated to this day. Regional level religiousness is coupled to fertility, and it is proposed that the mere existence of several religious places of worship is a sign of concentrated religious intensity and population growth, as introduced by Ericksen et al. (1979).

To obtain some location-specific measure of Laestadian activity, a list of the membership totals of 188 current local Laestadian congregations was obtained. One of the data sources is thus a membership listing from 2008 by the Central Committee of Conservative Laestadian Congregations (SRK 2010). The total number of members of each local congregation only includes the number of adults, in this case over 15-year-olds.

In addition, this study is based on several other data sources. One is a survey conducted in 2004, namely the "Church Monitor" (Kääriäinen 2007). This is a survey with N = 2,569 adult respondents, ages 15 and above, depicting the decline, change and transformation of Finnish religiosity, and involves the Research Institute of the Evangelical Lutheran Church of Finland. Another source is a survey conducted by the Evangelical Lutheran Church periodically every four years, directed at the clergy and the congregations (Kääriäinen et al. 2008). The final data source is derived from Statistics Finland, the only Finnish public authority for statistics. Statistics Finland provides data on population on various geographical levels and measures, as well as on economic indicators. These data are used to compare and highlight regional demographic and socioeconomic differences. The data sources are together chosen to establish a base for estimating the intensity and locating the activity of the Laestadian church in Finland, and to compare to regional level population and socioeconomic data.
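As an illustration, the congregation membership list and the Statistics Finland population figures could be combined into the regional share of active Laestadians along the following lines. This is only a sketch: the file layout and column names are hypothetical.

```python
# Sketch with hypothetical file and column names; the actual inputs are the SRK membership
# listing (adults per congregation) and Statistics Finland regional population figures.
import pandas as pd

congregations = pd.read_csv("srk_congregations_2008.csv")   # columns: region, adult_members
regions = pd.read_csv("nuts3_population_15plus_2008.csv")   # columns: region, adults_total

laestadian_adults = congregations.groupby("region")["adult_members"].sum()
shares = (
    regions.set_index("region")
    .join(laestadian_adults.rename("laestadian_adults"))
    .fillna({"laestadian_adults": 0})
)
# Share of active Laestadians among all adults (age > 15) in each NUTS 3 region.
shares["laestadian_share"] = shares["laestadian_adults"] / shares["adults_total"]
print(shares.sort_values("laestadian_share", ascending=False).head())
```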
The applied methodology is essentially descriptive statistics. Hypothesis testing was applied to evaluate the statistical significance of the survey results for the case of the Laestadians, who represented a minority in the sample(s). Simple linear regression is used to investigate correlations between relevant datasets, thereby providing evidence of association. The dependent variables are the total fertility rate and two measures of family size, namely families with 4+ minor children, and families with 3+ children under the age of seven. These data are available for 2008 for all NUTS 3 regions in Finland. The main independent variable is the share of active Laestadians of the total local adult (age > 15) population.

Results
In terms of the Nomenclature of Territorial Units for Statistics, or NUTS, mainland Finland is divided into nineteen NUTS 3-size regions, as shown in Figure 1.

Figure 1. Finland with regional borders.

Hereafter the term 'region' refers to this size index. The nineteen regions vary a great deal in terms of population size, density and structure. Figure 2 a-d shows the population pyramids of four regions, selected based on different population structure, geographic location and socioeconomic affluence; see e.g. Heikkilä and Pikkarainen (2010) for a discussion of economically competitive regions and causes in Finland. Note the especially large difference at the base of the pyramids. (In order to be able to focus on the younger age groups and their variance, the 75+ age groups were omitted.)

Figure 3 shows the map of Finland with Laestadian activity in the congregations of the principal Lutheran church. An active region is defined by the congregation of the Lutheran church reporting collaboration activities with local Laestadians on a yearly to monthly basis, as published in the four-year report. The Laestadian movement had in 2007 reported activity in over two-thirds of all congregations, the same as in 2003 (Kääriäinen 2008). The map shows, however, that the frequency of the activities varies regionally. It is strongest in the less densely populated northern parts of the country. The black color indicates activity on a monthly basis, and is concentrated in the north-west of Finland. This coincides with the higher fertility levels throughout (North) Ostrobothnia and Lapland.

Fertility
The total fertility rate (TFR) in Finland is not at the lowest-low levels. In 2008 the reported TFR for the whole of Finland was 1.85 (Statistics Finland 2010). However, there are considerable variations in TFR across the nation. In addition to the very different population structures by region (cf. Figure 2), this adds to the regionally differing sizes of (future) cohorts. The total fertility rate exceeds the replacement level, i.e. is above 2, in four of the nineteen regions. The capital region of Uusimaa is only one place above the region with the lowest TFR in Finland, Varsinais-Suomi. The difference between the lowest and highest regional TFRs is as high as 0.7.

Religious beliefs
The "Church Monitor" survey (Kääriäinen 2007) included respondents from different branches of Laestadianism, in addition to others. The structure of the respondents was the following: N = 76 for all Laestadians, out of which N = 48 were Conservative Laestadians, and N = 2,515 others, i.e.
non-Laestadians. The proportion of Laestadians partaking in the survey was between two and three percent depending on the definition (Conservative or all Laestadians). This is in line with the estimated numbers of Laestadians in the whole Finnish population. The sample was thus considered representative, albeit the total numbers were small.

Descriptive statistical analysis reveals for the sample of (Conservative, unless otherwise indicated) Laestadians a significantly higher average number of family members compared to all Laestadians and the rest of the respondents, namely 4.02, compared to 3.47 and 2.48 respectively. Null-hypothesis testing confirmed that the difference is statistically significant.

When asked about believing in God, 44 out of 48, i.e. 92 percent of Laestadian respondents, said that they "believe in the God that the Christian faith has taught them" (Kääriäinen 2007). It is concluded that "Christian faith" here means the teachings of the Lutheran church in general, including, in this case, the branch of Laestadianism. Of all respondents, 39 percent believe in God. 87.5 percent of the Laestadians considered themselves religious, compared to 10.8 percent of all respondents. To summarize, based on the recent survey on beliefs and religiousness, the Laestadians stand out as being more religious and having, on average, bigger families.

In general, as a population's overall level of education rises, fertility declines, and income becomes more equally distributed (Sato et al. 2008). In order to witness possible regional variance, selected socioeconomic figures were obtained for all the regions in mainland Finland. In particular, the differences between the regions of highest and lowest TFR were compared, namely North Ostrobothnia and Varsinais-Suomi respectively.

Socioeconomic factors
North Ostrobothnia was recorded to suffer in 2007 a slightly higher unemployment rate than Varsinais-Suomi. However, the region roughly equals Varsinais-Suomi in levels of educational attainment and self-sufficiency in the job market (Statistics Finland 2010).

It is said that asset disposal and acquirement is jointly determined with reproduction decisions (Cigno and Rosati 1996). Considering financial indicators at the household level, disposable income (DI) is chosen as the most relevant measure for the purpose of this study, in order to have a measure for relating income with fertility (see Table 1, sorted in order of descending DI per capita). It can be noted that in terms of DI per capita, the two regions with the greatest difference in TFR are quite far from each other. However, at the same time, three other Finnish regions lowest in DI per capita share below-replacement-level fertility. Sato et al. (2008) find that parents with a lower level of human capital decide to have more children and invest less in education. In this study, the expectation that low DI per capita couples to higher parity is not true for the Finnish regions in general.
It should also be noted that family size has a definite impact on DI per capita. The other way around, personal income may be less dependent on family size than disposable income (see Shields and Tracy 1986). However, both personal and disposable income are, to different extents, endogenous with family size, because women's market earnings might fall with increased family size (Shields and Tracy 1986), and definitely do so in the case of many Laestadian families. The relatively low expected economic return on education to women may also be relevant, where women anticipate staying at home rather than working outside it (Voas 2007). Women may thus advance the statistics on the level of human capital, but not contribute fully to the family's disposable income. In terms of regional differences, one would expect the single-breadwinner communities to stand out, but not necessarily to couple to high parity.

In Figure 4, the family measures of the nineteen regions are correlated against the estimated Laestadian ratio in the adult population. This is quantified as the proportion of (active) adults in local congregations with respect to all adults (age > 15) in the region. It is found that the regression gives progressively better results moving from TFR (Figure 4 a) to the occurrence of big families (Figure 4 b), and on to the occurrence of 3+ children under the age of seven in the same family (Figure 4 c). Thus, Laestadian activity in a region is strongly correlated with higher TFR, big families, and many young children. Changing the dependent variable to unemployment or (lack of) education, no clear connection is found with Laestadian-rich regions, as witnessed by insignificant R-squared values (< 0.1) as well as weak Pearson and 1-tailed significance correlations (figures not shown). The variables thus have no linear relationship, unlike the family size and TFR statistics analyzed above. Simple socioeconomic correlations can thus be ruled out as the main explanatory factor for the demographic findings.

Discussion
The study of Finnish regional demography has shown strong correlations between Conservative Laestadians and fertility on the one hand and family size on the other. The results support the fact that Laestadian religious activity is abundant, but not confined to just a few regions in the country, North Ostrobothnia being the region of most activity. The same regions are shown to exhibit distinctive demographic variance compared to the rest of Finland. It is concluded that the geographical proximity of the community may emphasize the individual's tendency to comply with the standards and recommendations provided by the community, as observed earlier by Finnäs (1991). High fertility behaviour is given as an example of such compliance.
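Before turning to the broader implications, the simple region-level regressions behind Figure 4 can be expressed compactly. The sketch below is illustrative only; the data frame of nineteen regional observations and its column names are assumptions.

```python
# Sketch of the region-level regressions (19 NUTS 3 observations); column names are assumed.
from scipy import stats

OUTCOMES = ["tfr", "families_4plus_minor_children", "families_3plus_under_7"]

def laestadian_regressions(regions):
    results = {}
    for outcome in OUTCOMES:
        fit = stats.linregress(regions["laestadian_share"], regions[outcome])
        results[outcome] = {
            "slope": fit.slope,
            "pearson_r": fit.rvalue,
            "r_squared": fit.rvalue ** 2,     # compare with the R-squared values reported above
            "p_one_tailed": fit.pvalue / 2,   # halved two-tailed p; valid when the slope has the expected sign
        }
    return results
```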
It is seen that the geographical concentration of a high-fertility population can have an effect on the demographics of the area. Based on this particular case study of a religious minority, religion can still have a major impact on the local level in terms of population structure, and therefore long-term viability. Small-scale, local level trends can be visible even on the regional level in terms of demography, and religion, when coupled to religiosity and high fertility, can be a predictive factor in regional development. Here religiosity does not imply, nor correlate with, a distinctively lower socioeconomic status, nor does the fertility behaviour result from such a status.

Shields and Tracy (1986) analyse the effect of gender roles and income on family size as follows: "…the expanded opportunities for women in markets and education may permanently alter the effects of changing income and age structure on fertility. If there is to be a dominant new theme, it may concern the effect of changing gender roles on the composition of household activities. These activities may be becoming more diversified partly because the implicit cost of time rises with income inducing families to substitute goods for time in their activities. These substitutions may be altering the role of the family from a producer of child-rearing activities to a producer and consumer of a more diversified menu of activities. A rise in relative income may not result in a rise in family size if larger families do not enhance the family's enjoyment of this more diversified set of activities. The family might have to shift its focus back towards child rearing in order to benefit from more children."

The high emphasis on family values in the Laestadian community can be seen as a counter-trend to the ongoing "de-genderization" of breadwinning and household roles. It is unsurprising that, in order to in fact enjoy having bigger families, a lot of emphasis is placed on child rearing in the Laestadian community, resulting in one parent staying at home. Concomitantly, religious values are known to matter most when the religious institutions have the mechanisms to promote compliance and punish nonconformity (McQuillan 2004). This could, in the case of the Laestadians, be seen as a finger pointing at the practise of natural fertility, a topic much debated in anonymous chatrooms as well as on the pages of newspapers and magazines in Finland. However, the conclusion is mostly drawn that common guidelines are as significant a part of religiousness as adhering to them is to being a member.

Religious affiliation is widely accepted as a demographic variable; still, the quality and reliability of data on religious commitment is questionable (Voas 2007). In this study, the applied survey on religious beliefs was of good quality overall; however, the sample size applied to the Laestadians was small. Statistics on religious beliefs and activities are often far apart and difficult to find, even more so on small spatial scales. Therefore it is unsurprising that little research has been done based on religiousness and regional, much less local, differences in demographic trends, in addition to religious minority status. The study of fertility differentials and minorities thus calls for innovative approaches, and an open-minded classification of players.
The study of beliefs and attitudes in demography can benefit immensely from detailed, geographically sensitive studies, and the inclusion of religion as a source of common views, and consequently distinctive demographic features. Religion, and religiosity in particular, play a role in lifestyle choices, and can thus have a strong effect on fertility events. By nature, religion seems to be a localized attribute, and thus very well applicable to small-scale population studies. Further work on the topic may include small-scale population projections, and comparative studies of the contemporary fertility behaviour of minority communities in different parts of the world.

Figure 2 a-d. Population by gender and five-year age groups of selected NUTS 3 regions in 2008 (Statistics Finland).

Figure 3. Map of Finnish municipalities showing Laestadian activity in (Lutheran) congregations by (sub)denomination. Conservative Laestadians top left. Darker color indicates more frequent activity: black - once a month; yellow - once a year; white - no activity (Kääriäinen 2007).

Figure 4 a-c. Regression plots of the regional proportion of Laestadians versus total fertility rate (a), share of families with 4+ underage children (b), and share of families with 3+ children under 7 years old (c).
Planar and spherical stick indices of knots

The stick index of a knot is the least number of line segments required to build the knot in space. We define two analogous 2-dimensional invariants, the planar stick index, which is the least number of line segments in the plane to build a projection, and the spherical stick index, which is the least number of great circle arcs to build a projection on the sphere. We find bounds on these quantities in terms of other knot invariants, and give planar stick and spherical stick constructions for torus knots and for compositions of trefoils. In particular, unlike most knot invariants, we show that the spherical stick index distinguishes between the granny and square knots, and that composing a nontrivial knot with a second nontrivial knot need not increase its spherical stick index.

Introduction
The stick index s[K] of a knot type [K] is the smallest number of straight line segments required to create a polygonal conformation of [K] in space. The stick index is generally difficult to compute. However, stick indices of small crossing knots are known, and stick indices for certain infinite categories of knots have been determined:

Theorem 1.1 ([Jin97]). If T_{p,q} is a (p, q)-torus knot with p < q < 2p, then s[T_{p,q}] = 2q.

Despite the interest in stick index, two-dimensional analogues have not been studied in depth. In a recent paper, Adams and Shayler [AS09] defined a new invariant, the projective stick index. We modify their definition slightly:

Definition 1.3. A planar stick diagram of a knot type [K] is a closed polygonal curve in the plane, with crossing information assigned to self-intersections, that represents [K]. The planar stick index pl[K] of a knot type is the smallest number of edges in any planar stick diagram of [K].

Definition 1.4. A spherical stick diagram of a knot type [K] is a closed curve on the sphere composed of great circle arcs, with crossing information assigned to self-intersections, that represents [K]. The spherical stick index ss[K] of a knot type is the smallest number of great circle arcs in any spherical stick diagram of [K].

Remark 1.5. We could define the spherical stick index of the unknot to be either 1 or 2, depending on whether we allow entire great circles in spherical stick diagrams. As such, we leave ss[Unknot] undefined. If we were to consider the spherical stick indices of links, the choice would become important.

Figure 1b shows a spherical stick diagram of a trefoil. A spherical stick diagram can be obtained via radial projection of a stick knot in space onto a sphere from some point in space, or via radial projection of a planar stick diagram from some point not in the plane.

In Section 2, we establish bounds for the planar stick index in terms of other invariants, including crossing number, stick index, and bridge index. Section 3 establishes similar bounds for the spherical stick index. In Section 4, we construct planar stick diagrams and spherical stick diagrams for torus knots and compositions of trefoils, providing upper bounds for the planar stick index and spherical stick index of these knot types. In some cases, the bounds from Sections 2 and 3 show that the constructions are minimal.

Our results are as follows. Let T_{p,q} denote the (p, q)-torus knot. Let nT denote a composition of n trefoils (of any combination of handedness), and aT_L#bT_R denote the composition of a left-handed trefoils with b right-handed trefoils. Because composition of knots is commutative and associative (see [Ada94]), aT_L#bT_R is well-defined. The difference between Theorems 1.8 and 1.9 is striking: the planar stick index of a composition of trefoils is independent of the handedness of the trefoils composed, while our construction of a spherical stick diagram depends heavily on handedness.
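As an aside, the radial-projection observation above (a straight stick projects onto an arc of a great circle, because the stick and the center of projection span a plane through the sphere's center) can be checked numerically. The following sketch, with arbitrarily chosen endpoints, is illustrative and not part of the original paper.

```python
# Numerical illustration: radially projecting a straight segment onto the unit sphere
# yields points lying on a single great circle (a plane through the center).
import numpy as np

def project_stick(a, b, center, samples=50):
    """Radially project the segment from a to b onto the unit sphere around `center`."""
    ts = np.linspace(0.0, 1.0, samples)
    points = (1 - ts)[:, None] * a + ts[:, None] * b            # points along the stick
    vecs = points - center
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)   # unit vectors on the sphere

def lies_on_great_circle(arc, tol=1e-9):
    # All projected points lie in one plane through the center iff they lie on a great circle.
    normal = np.cross(arc[0], arc[-1])
    normal /= np.linalg.norm(normal)
    return float(np.max(np.abs(arc @ normal))) < tol

a, b = np.array([1.0, 2.0, 0.5]), np.array([-1.0, 1.5, 2.0])
arc = project_stick(a, b, center=np.zeros(3))
print(lies_on_great_circle(arc))  # True: the projected stick lies on a great circle
```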
It would be interesting to know if the bounds in Theorem 1.9 are sharp, and whether the spherical stick index of a composition of trefoils depends on handedness in general. This seems difficult to prove, since most invariants that we could use to obtain lower bounds do not detect handedness of composites. However, by classifying all knots with ss[K] ≤ 4 (as we do in Section 5), we can prove that the bound in Theorem 1.9 is sharp in the case of composing two trefoils. We also see a very unusual characteristic for a naturally defined physical knot invariant:

Corollary 1.12. There exist nontrivial knots K_1 and K_2 so that ss[K_1#K_2] = ss[K_1].

Planar Stick Index
In general, the planar stick index of a knot is difficult to compute. It is straightforward to construct a planar stick diagram, but hard to prove that it is minimal. In this section, we establish bounds on the planar stick index of a knot in terms of other invariants. These bounds enable us to compute exact values for planar stick index for certain categories of knots in Section 4.

Theorem 2.1. For any knot type [K], pl[K] ≤ s[K] − 1.

Proof. Consider a polygonal conformation of [K] that realizes the stick index. If we project the knot onto a plane normal to one stick, that stick projects to a single point. In the "generic case," the resulting polygonal curve in the plane is a diagram of [K] with at most s[K] − 1 edges. The diagram fails to be generic if three edges intersect at the same point, or if a vertex overlaps an edge. In such a case, however, we can tweak the original conformation slightly so that after projecting, we obtain a generic diagram with at most s[K] − 1 edges.

Theorem 2.3. If K_1#K_2 is the composition of two knots K_1, K_2, then pl[K_1#K_2] ≤ pl[K_1] + pl[K_2] − 2.

Proof. Consider planar stick diagrams for K_1 and K_2 that realize the planar stick index. Since any two adjacent sticks in the diagram of K_1 are non-parallel, we can perform an orientation-preserving linear transformation that makes two adjacent sticks of K_1 perpendicular. We do likewise for the diagram of K_2. Once we have these diagrams, we can rotate and attach them at the right angles such that the incident sticks line up. The point at which the corners were attached then becomes a crossing. If K_1 and K_2 do not overlap, the resultant diagram is a (pl[K_1] + pl[K_2] − 2)-stick representation of K_1#K_2. If K_1 and K_2 do overlap, first note that if necessary we can tweak the diagrams slightly so that the new diagram is generic. We can then choose the new crossings so that sticks from K_1 always cross over sticks from K_2. It is clear that the diagram represents K_1#K_2.

We get another bound on the planar stick index in terms of the bridge index. Let b(K, p) be the number of local maxima of a knot conformation K relative to a direction (taken to be a vector p on the 2-sphere S^2). The bridge index is given by

b[K] = min_{K ∈ [K]} min_{p ∈ S^2} b(K, p).

This definition is similar to Milnor's definition of crookedness (see [Mil50]). However, if an extremum occurs at an interval of constant height, we count it as one extremum rather than infinitely many.

Theorem 2.4. For any knot type [K], pl[K] ≥ 2 b[K] + 1.

Proof. For a planar stick diagram, the total curvature is the sum of the exterior angles. There are pl[K] vertices in a minimal planar stick diagram of a knot [K]. Since each vertex has an exterior angle strictly less than π, the total curvature of such a diagram is less than π pl[K]. We view this diagram as a curve in a plane in 3-space. Bending the sticks slightly out of the plane at each crossing yields a conformation of the knot (as opposed to a diagram).
Since we can bend the sticks by an arbitrarily small amount, the final total curvature can be made arbitrarily close to the original total curvature. Since the original total curvature was strictly less than π pl[K], the final total curvature can be made to be less than π pl[K]. Milnor showed in [Mil50] that for any conformation K with total curvature tc(K), tc(K) ≥ 2π b[K]. Since tc(K) < π pl[K], we find 2 b[K] < pl[K]. Since both quantities are integers, the result follows.

Spherical Stick Index
When studying spherical stick diagrams, it is helpful to consider their stereographic projections. Given a diagram of a knot on a sphere, we choose a point on the sphere not on the diagram to label ∞. Consider this the north pole. The stereographic projection relative to this point maps S^2 \ {∞} homeomorphically to R^2, which is the plane through the equator, and transfers the diagram into R^2. Moreover, stereographic projection preserves the knot type of a diagram. See Figure 2 for an example. The following fact can be proved relatively easily.

Fact 3.1. Stereographic projection gives a one-to-one correspondence between great circles on the sphere that do not pass through infinity, and circles in the plane that have a diameter with endpoints p, q that contains the origin and satisfies |p|·|q| = 1.

In some situations, it is more convenient to think about circles in the plane rather than great circles on the sphere. Most of our figures of spherical stick diagrams will be stereographic projections for clarity. We prove bounds on spherical stick index, many of which are analogous to those proven in Section 2 for planar stick index. The first bound relates the two indices: ss[K] ≤ pl[K].

Proof. Observe that given a polygonal curve in space, radial projection onto a sphere maps each edge to a great circle arc. Consider a planar stick diagram that realizes the planar stick index for [K]. We put it in space in a plane not containing the origin and radially project to the unit sphere. The diagram projects to a spherical stick diagram of [K] with pl[K] great circle arcs.

The stick index gives another upper bound: ss[K] ≤ s[K] − 2.

Proof. Consider a minimal stick realization of a knot in space. Using a similar trick as appears in [Cal01], we choose a vertex v of the knot, and radially project the knot (minus v) onto a sphere centered at v. Radial sticks project to points, and non-radial sticks project to great circle arcs. Since the two sticks adjacent to v are radial, the projection has at most s[K] − 2 arcs. However, it is no longer a closed curve, as there are two "loose ends" corresponding to the sticks incident at v. As projections of line segments, the great circle arcs must be strictly smaller than π radians. Since any pair of distinct great circles intersect at two antipodal points, and each arc traverses less than half of a great circle, no two arcs can intersect more than once. In particular, the arcs with "loose ends" intersect at most once. We extend these arcs until they meet, making the extended arcs understrands at any newly-created crossings (to preserve the knot type). The result is a spherical stick diagram of [K] with at most s[K] − 2 great circle arcs.

Figure 3. Composition of spherical stick diagrams. Great circle arcs appear locally as straight lines.

Spherical stick index also behaves subadditively under composition: ss[K_1#K_2] ≤ ss[K_1] + ss[K_2].

Proof. Suppose we have minimal spherical stick diagrams of K_1 and K_2 on the sphere. We position them so that two vertices of the diagrams overlap, as shown in Figure 3a. Note that the diagrams may overlap in many other places. As before, we move K_2 to ensure that the diagram is generic, and choose the new crossings so that arcs in K_1 cross above arcs in K_2. We then change the diagrams as shown in Figure 3b to obtain a diagram of K_1#K_2.
Our new diagram has ss[K_1] + ss[K_2] great circle arcs.

We can bound spherical stick index in terms of an invariant related to the bridge index. Using b(K, p) as previously defined, we let the superbridge index of a knot be

sb[K] = min_{K ∈ [K]} max_{p ∈ S^2} b(K, p).

The inequality b[K] < sb[K] holds for all knot types, as proven in [Kui87]. The superbridge index yields the lower bound ss[K] ≥ (2 sb[K] + 1)/3.

Proof. Consider a spherical stick diagram of [K] with n = ss[K] great circle arcs. We modify it as follows to obtain a conformation K in space. For any crossing of the diagram, there is an "overstrand" and an "understrand", relative to the outside of the sphere. We remove a small portion of the understrand, replacing it with a straight line. We do this for all crossings, and obtain a conformation of [K] that radially projects to our diagram. The conformation consists of n almost-circular arcs, like those in Figure 4. Given a direction p ∈ S^2, we want an upper bound on the number of extrema in the direction p. Each of the n points connecting two almost-circular arcs can be an extremum. Other extrema must occur on the interiors of the arcs, and each arc can have at most two interior extrema (see Figure 4). Thus K has at most 3n extrema in the direction p. Since b(K, p) counts the number of maxima, b(K, p) ≤ 3n/2. Therefore, sb[K] ≤ 3 ss[K]/2. Rearranging gives (2/3) sb[K] ≤ ss[K].

To improve the bound by 1/3, we use a small trick that guarantees an arc of length less than π. We stereographically project our original spherical stick diagram from a point not in the diagram whose antipode is in the diagram. We get a diagram in the plane consisting of circles and one line segment through the origin. We scale the diagram so the line segment is contained in the unit disc, and then stereographically project back to the sphere. The result is a spherical diagram of [K] with one great circle arc of length less than π, and n − 1 other circular arcs (not necessarily great circle arcs). We change these into almost-circular arcs to obtain a conformation K, and apply the same counting argument as before. Because the great circle arc of length less than π can have at most one interior extremum, we find b(K, p) ≤ (3n − 1)/2, which rearranges to the desired inequality.

Since bridge number is known for many more knots than is superbridge number, we note that Kuiper's inequality b[K] < sb[K] allows the bound above to be restated in terms of the bridge index.

Armed with bounds on planar and spherical stick indices, we examine some classes of knots, and prove Theorems 1.6, 1.7, 1.8, and 1.9 stated in the introduction. We begin with the torus knots, one of the most easily described and exhaustively studied classes of knots. We need a few well-known properties of torus knots. It is known that for any p and q, the (p, q)-torus knot is equivalent to the (q, p)-torus knot (see [Ada94]). So, without loss of generality, we always assume p < q. Also, we require p and q to be coprime (otherwise we get a torus link with gcd(p, q) components). We need the values of some invariants of T_{p,q} (for p < q). First, the bridge index has been shown (see [Sch54] or [Kui87]) to be b[T_{p,q}] = p. It was shown in [Mur91] that the crossing number is cr[T_{p,q}] = (p − 1)q. We prove Theorems 1.6 and 1.7, which pertain to planar and spherical stick indices of torus knots.

Proof of Theorem 1.6. Two of the inequalities follow directly from theorems established in Section 2 and the facts above. We can apply Theorem 2.1 to show that when q < 2p, pl[T_{p,q}] ≤ s[T_{p,q}] − 1 = 2q − 1. Similarly, Theorem 2.4 gives pl[T_{p,q}] ≥ 2 b[T_{p,q}] + 1 = 2p + 1. It remains to show that for 2p < q we can construct a planar stick diagram of T_{p,q} with q sticks. We consider q evenly spaced points on a circle, z_1, ..., z_q, labeled counterclockwise.
We then draw q line segments, connecting z n to z n+p for each n. The result is a q-pointed star, as in Figure 5a. We label the stick from z n to z n+p as stick n (and take these labels modulo q). We label the midpoint of stick n as n . We can see that the middle of the diagram is a regular q-gon for which the midpoint of each side is some n . By construction, stick n attaches to stick n + p, which attaches to stick n + 2p, and so on. Since p and q are coprime for torus knots, the q sticks form a single In this case, we let I m,n = I n,m denote the point of intersection. For each stick, we define directions of "clockwise" and "counterclockwise" relative to the origin. On stick n, the intersections I n,n−1 , . . . , I n,n−p+1 are clockwise from n , and I n,n+1 , . . . , I n,n+p−1 are counterclockwise from n (see Figure 5). This means our diagram has (p − 1)q intersections, which is equal to the crossing number of the standard projection of T p,q . We specify the crossings by letting stick n be the overstrand for the p−1 crossings clockwise from n , and the understrand for the p−1 crossings counterclockwise from n (see Figure 5b). From Figure 5c, we can see our diagram is a projection of a knot on the standard torus in R 3 . This knot winds around the torus p times in one direction and q times in the other, so it is T p,q . Proof of Theorem 1.7. To show ss[T p,q ] ≤ q, we use a similar construction as we used in the previous proof. We begin with a regular q-gon on the sphere, centered at the north pole. We extend the sides to great circles. The stereographic projection is shown for q = 7 in Figure 6. We label the circles counterclockwise from C 1 to C q . We let the basepoint of each circle be the farthest point on the circle from the origin (under the stereographic projection) and label it 1 through q as in the figure, and label the point antipodal to the basepoint n as n . As before, we consider these labels modulo q. Next, we establish some facts about intersections of the circles. Any two great circles in the sphere intersect twice, at antipodal points. (The intersections are no longer antipodal under stereographic projection.) If we fix a circle C n in our diagram, any other circle C m intersects it once clockwise from n and once counterclockwise from n. Furthermore, an intersection point of circles C m and C n that is clockwise from n must be counterclockwise from m. Hence for any m = n, there is a unique intersection i m,n of circle C m and circle C n so that i m,n is counterclockwise from m and clockwise from n. We construct a diagram by using an arc a n from each great circle C n . For circle C n , we use the arc that starts at i n,n−p and goes counterclockwise to i n+p,n . The diagram for T 6,7 is shown in Figure 7a. We can connect these arcs to form a closed loop by the same argument as in the previous proof, using the fact that p and q are coprime. Furthermore, for each n, the 2(p − 1) points contained in the interior of arc a n , i n,n−p+1 , . . . , i n,n−1 , i n+1,n , . . . , i n+p−1,n , are all intersections of this curve with itself. To see this, note that an intersection of two circles C n and C m is equidistant from n and m. Again, we get a diagram with (p − 1)q intersections. We choose the crossings for our diagram by letting arc a m be the overstrand for the p − 1 crossings counterclockwise from m and the understrand for the p − 1 crossings counterclockwise from m (see Figure 7b). We can then construct a conformation of T p,q that projects to our diagram, as in Figure 7c. 
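To make the star-polygon construction in the proof of Theorem 1.6 concrete, the following short sketch (ours, not from the original paper; all function names are illustrative) places the q points z_n on the unit circle, forms the sticks joining z_n to z_{n+p}, and checks that coprimality makes the sticks close up into a single curve.

```python
import cmath
from math import gcd

def star_diagram(p, q):
    """Sticks of the q-pointed star: stick n joins z_n to z_{n+p},
    where z_1, ..., z_q are evenly spaced points on the unit circle."""
    assert gcd(p, q) == 1 and 2 * p < q, "construction assumes coprime p, q with 2p < q"
    z = [cmath.exp(2j * cmath.pi * n / q) for n in range(q)]
    return [(z[n], z[(n + p) % q]) for n in range(q)]

def closes_up(p, q):
    """Follow stick n -> stick n+p (mod q); coprimality of p and q forces
    a single closed curve passing through all q sticks."""
    visited, n = set(), 0
    while n not in visited:
        visited.add(n)
        n = (n + p) % q
    return len(visited) == q

# Example: the diagram of T_{3,7} uses 7 sticks forming one closed curve.
assert closes_up(3, 7) and len(star_diagram(3, 7)) == 7
```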
To show that ss[T q−1,q ] ≥ q (and hence ss[T q−1,q ] = q), we note that cr[T q−1,q ] = (q − 2)q, and apply the lower bound of Theorem 3.4. Compositions of Trefoil Knots. Another class of knots that we will examine is the set of compositions of trefoil knots. Adams et al. (see [ABGW97]) computed the stick index of such knots to be s[nT ] = 2n + 4. We use this result to compute the planar stick index of compositions of trefoils. Proof of Theorem 1.8. By [ABGW97] and Theorem 2.1, we know pl[nT ] ≤ s[nT ] − 1 = 2n + 3. Furthermore, it is known (see [Sch54] or [Sch03]) that the bridge index of a composition of knots is given by When working with compositions of trefoils, we must be aware of a caveat. The trefoil knot is invertible, so there is a unique composition of two given trefoil knots. However, since the trefoil is chiral, we must make a distinction between left-handed and right-handed trefoil knots in compositions. For example, the square and granny knots are the two distinct compositions of two trefoils (see Figure 8). The 2n + 4stick construction of nT is independent of handedness (as discussed in [ABGW97]), so this distinction does not affect Theorem 1.8. The bulk of the proof of Theorem 1.9 is in the following lemma: Proof. We use the same conventions and notation as in the proof of Theorem 1.7 and demonstrated in Figure 6. We start with q great circles spaced symmetrically around the north pole, and label them from C 1 to C q counterclockwise. We assume throughout that q > 2m + 1. We define the basepoint of circle C r as its farthest point from the origin, labelled r and the point r as its closest point to the origin. Let i r,s denote the intersection of circles C r and C s that is counterclockwise from r and clockwise from s. Again, consider all labels modulo q. Let k be an integer, and m = k/2 . Then, we will prove the following set of statements for all k: For odd k, we have: (1) There is a diagram using k+2 arcs of circles that represents (m+1)T L #mT R . (2) The diagram uses one arc from each circle corresponding to −m, . . . , m + 2, and we can pick an orientation on the diagram so that the circles are traversed in this order. For even k, we have: (1 ) There is a diagram using k + 2 arcs of circles that represents mT L #mT R . (2 ) The diagram uses one arc from each circle corresponding to −m, . . . , m + 1, and we can pick an orientation so that the circles are traversed in this order. (3 ) If an arc of circle C r is in the diagram, it contains point r but not the basepoint. We start with the base case, k = 1. We connect the three vertices i 1,2 , i 2,q , and i 2,1 via arcs on circles corresponding to 1, 2, q that pass through points 1 , 2 , q respectively. This yields a three-crossing diagram, and by choosing crossings appropriately we obtain a left-handed trefoil that satisfies the conditions of the k = 1 case of our induction hypothesis. The q = 7 case is shown in Figure 9, and the picture looks similar for other q. Suppose that the inductive hypotheses hold for some odd k < q. By assumption, we have a diagram of (m + 1)T L #mT R using arcs corresponding to −m, . . . , m + 2. We modify our diagram by extending arc a m+2 clockwise from i m+1,m+2 to i m+2,−m−1 and arc a −m clockwise from i −m+1,−m to i −m−1,−m . We then add an arc of circle C −m−1 , which goes clockwise from i m+2,−m−1 to i −m−1,−m . Conditions (2) and (3) guarantee the new arcs of circles C m+2 and C −m are extensions of the old ones. 
Conditions (2 ), (3 ), and (4 ) of the k + 1 case immediately follow (see Figure 9). We must specify the crossings for our new diagram. We keep all crossings from the original diagram. We let arc a −m be the overstrand at i m+2,−m , arc a m+2 be the overstrand at i −m−1,m+2 , and arc a −m−1 be the overstrand at all of its other Figure 10. Demonstrating how the newly added arc can be modified via isotopy to reveal a composed trefoil. Figure 11. Adding a trefoil at a vertex using two great circle arcs, which locally appear as line segments. crossings (see Figure 9). By construction, condition (5 ) is satisfied for the k + 1 case. It remains to show that condition (1 ) holds. Because arc a −m−1 is the overstrand at all crossings except its crossing with arc a m+2 , and arc a m+2 is the overstrand at all crossings except those with arc a −m , we can move arc a −m−2 as in Figure 10. The result is the composition of a right-handed trefoil and the original diagram. By (1), this is [(m + 1)T L #mT R ]#T R . This proves the inductive hypothesis for k + 1. Finally, consider the case when k is even. The argument is nearly identical to the previous one. This time, we extend arcs a m+1 and a −m−1 to the points i m+1,m+2 and i m+2,−m−1 , and connect these points by adding an arc of circle C m+2 between them. We set arc a m+2 as the overstrand in all of its crossings except that with arc a −m−1 , and make a m+1 the overstrand at i m+1,−m−1 (see Figure 9). We can prove conditions (1)-(5) by the same arguments as above. This completes the induction. Since q is arbitrary, we have shown that ss[mT L #mT R ] ≤ 2m + 2 and ss[(m + 1)T L #mT R ] ≤ 2m + 3 hold for all m. Reflecting a 2m + 3-arc diagram of mT L #(m + 1)T R yields a 2m + 3-arc diagram of (m + 1)T L #mT R , proving the last inequality. Proof of Theorem 1.9. It remains to show that for n > m, ss[nT L #mT R ] = ss[mT L #nT R ] ≤ 2n + 1. We prove this by induction on n. The base case n = m + 1 was proven in Lemma 4.3. For the inductive step, it suffices to show that given a knot, we can compose it with a trefoil by slightly extending two arcs past a vertex and adding two great circle arcs (see Figure 11). Our construction in Theorem 1.9 depends on the handedness of the composed trefoils, but this does not imply that spherical stick index depends on handedness. However, we will show that T L #T R and T L #T L have different spherical stick indices (see Figure 12) Proof of Theorem 1.10. To construct an ss-4 diagram, we start with a configuration of four great circles on the sphere, as in Figure 13. It is not difficult to show that any generic configuration divides the sphere into triangular and quadrilateral regions, and that no two triangles or quadrilaterals can share a side. The only way to do this on a sphere (up to isotopy) is the arrangement shown in Figure 13. Thus, this is the only configuration that we need to consider. Given this diagram, we find all ways to form a closed loop out of one arc from each great circle G i . We note that the complement of such a closed loop, which consists of all arcs removed from the diagram to obtain the first loop, is itself a closed loop with one arc from each great circle. It will be convenient to describe loops by their complements, because the loops with the most crossings have simple complements. We choose a closed curve in the diagram. Suppose the complement contains n edges, and hence n vertices. 
Four of these vertices are the "turning vertices" where the complement switches from one circle to another, and the other n − 4 are "passing vertices" where the complement passes straight through an intersection. Note that the complement may pass through a single vertex twice, removing all four edges. Thus, at least (n − 4)/2 vertices are passed through. These vertices, along with the 4 turning vertices, are not crossings of the original curve, so the number of remaining crossings is at most 12 − (4 + (n − 4)/2 ) = 10 − n/2 . Note that for n > 8, there will be at most five crossings. We will see that we pick up all knots of five or fewer crossings from the complements with n ≤ 6. Figure 13. The stereographic projection of an arrangement of four great circles on a sphere. Figure 14. The three possible loops with at least 6 crossings. By symmetry, it is not hard to check that all pairs of an edge e and an adjacent vertex v are combinatorially equivalent. Thus, we need only consider complementary loops including v as a turning vertex and containing e, as in Figure 13. By explicitly considering complements constructed from 7 or 8 edges, we can see that they only produce knots of four or fewer crossings. Hence, we can limit consideration to complements of 4, 5, or 6 edges. There are none with 5 edges as any such would need to have two edges on one great circle and one on each of the others, and we cannot close such a complement up. Up to equivalence of diagrams we obtain one complement with four edges and two with six edges, as appear in Figure 14. We consider all possible crossing choices for the diagrams in Figure 14 and identify what knots result. This list of knots includes all knots of five or fewer crossings. We conclude that the nontrivial knots with ss[K] ≤ 4 are 3 1 , 4 1 , 5 1 , 5 2 , 6 1 , 6 2 , 6 3 , Using a computer (with the help of the program Knotscape), we carried out a similar process to classify knots with ss[K] = 5. We found that there are 666 prime knots and 17 composite knots with ss[K] = 5. In particular, we found that all knots of eight or fewer crossings (prime or composite) have ss[K] ≤ 5. The list also includes nine-crossing knots except for 9 2 , 9 3 , 9 4 , 9 15 , 9 18 , 9 23 , 9 36 , 4 1 #5 1 , 4 1 #5 2 , and T L #T L #T L and all ten-crossing nonalternating prime knots except for 10 152 and 10 154 . Since ss[T L #T L #T L ] > 5, spherical stick index distinguishes between the two distinct types of compositions of three trefoils, one consisiting of all left or all right trefoils and one with a mixture of the two. We found a few torus knots with ss[T p,q ] strictly less than q: ss[T 2,5 ] = 4 and ss[T 2,7 ] = ss[T 2,9 ] = 5. Note that these knots still satisfy sb[K] ≤ ss[K], as sb[T p,q ] = min{2p, q} is 4 in these cases.
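As a numerical sanity check on the star-polygon construction from the proof of Theorem 1.6, the following sketch (ours, not part of the paper; the names and tolerance are illustrative) rebuilds the q-pointed star diagram of T_{p,q} and counts pairwise interior crossings of its sticks, which should total cr[T_{p,q}] = (p − 1)q.

```python
import cmath
from itertools import combinations
from math import gcd

def star_sticks(p, q):
    """Sticks z_n z_{n+p} of the q-pointed star diagram of T_{p,q}."""
    assert gcd(p, q) == 1 and 2 * p < q
    z = [cmath.exp(2j * cmath.pi * n / q) for n in range(q)]
    return [(z[n], z[(n + p) % q]) for n in range(q)]

def crosses(seg1, seg2, eps=1e-9):
    """True if the two open segments meet transversally at an interior point."""
    (a, b), (c, d) = seg1, seg2
    cross = lambda u, v: u.real * v.imag - u.imag * v.real
    r, s = b - a, d - c
    denom = cross(r, s)
    if abs(denom) < eps:          # parallel sticks never cross transversally
        return False
    t = cross(c - a, s) / denom   # parameter along seg1
    u = cross(c - a, r) / denom   # parameter along seg2
    return eps < t < 1 - eps and eps < u < 1 - eps

def crossing_count(p, q):
    sticks = star_sticks(p, q)
    return sum(crosses(s1, s2) for s1, s2 in combinations(sticks, 2))

# The diagram realizes the crossing number (p - 1)q of T_{p,q}.
assert crossing_count(2, 5) == 5    # pentagram diagram of T_{2,5}
assert crossing_count(3, 7) == 14   # diagram of T_{3,7}
```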
2011-08-29T18:51:13.000Z
2011-05-01T00:00:00.000
{ "year": 2011, "sha1": "4b3922cbca7b80d11a9845483e9084b6d8262b9a", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1108.5700", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4b3922cbca7b80d11a9845483e9084b6d8262b9a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
227127683
pes2o/s2orc
v3-fos-license
Weak cuspidality and Howe correspondence We study the effect of the Howe correspondence on Harish-Chandra series for type I dual pairs over finite fields with odd characteristic. We define a bijection obtained from this correspondence, and enjoying the property of ``having minimal unipotent support''. Finally, we examine the interaction between the Howe correspondence and weak cuspidality. Introduction Let F q be a finite field with q elements and odd characteristic. A pair of reductive subgroups of Sp 2n (q), where each one is the centralizer of the other, is called reductive dual pair. We focus our attention on irreducible dual pairs (cf. [15]). One such pair can be linear (GL m (q), GL m ′ (q)), unitary (U m (q), U m ′ (q)) or symplectic-orthogonal (Sp 2m (q), O m ′ (q)), with n = mm ′ in all cases. The last two are also called type I dual pairs, and the groups belonging to them are called type I groups. For a reductive dual pair (G m , G ′ m ′ ), Roger Howe defined the a correspondence Θ m,m ′ : R(G m ) → R(G ′ m ′ ) between their categories of complex representations. Known as Howe correspondance, it arises from the restriction to G m × G ′ m ′ of the Weil representation ω of the symplectic group Sp 2n (q) (cf. [13]). Let G be a connected reductive group defined over F q , and G * its dual group. Denote by G and G * their groups of rational points. In [17] Lusztig defined a partition of the set E (G) of irreducible representations of G into Lusztig series E (G, (s)). These series are parametrized by rational conjugacy classes of semisimple elements s of G * . The elements of E (G, (1)) are called unipotent representations of G. In general, the Howe correspondence is not compatible with unipotent representations. Therefore, we make use of a similar correspondance Θ ♭ m,m ′ : R(G m ) → R(G ′ m ′ ) arising from a Weil representation ω ♭ introduced by Gérardin in [10]. This correspondence preserves unipotent representations (cf. Proposition 2.3 in [2]). Since in this paper we only work with this modified Howe correspondence we will drop the superscript and denote Θ ♭ m,m ′ , and ω ♭ by Θ m,m ′ , and ω respectively. In [16] Lusztig found that, for every irreducible representation π of a connected reductive group G, there is a unique rational unipotent class O π in G which has the property that x∈Oπ(q) π(x) is non trivial, and that has maximal dimension among classes with this property. This class is called the unipotent support of π. Such classes are ordered by the relation given by O ′ O, if and only if O ′ ⊂ O, referred to as the closure order. In [4] we defined a bijective mapping θ G,G ′ : E (G, 1) → E (G ′ , 1) between the unipotent series of the members of a type I dual pair (G, G ′ ); in such a way that, for a unipotent representation π of G -The representation θ G,G ′ (π) occurs in Θ G,G ′ (π). We want to extend this definition to the whole set of irreducible representations. Naturally, any attempt to do this must make use of the Lusztig bijection. This bijection, known as well as the Lusztig correspondence is a one-to-one map between the series E (G, (s)) and the series E (C G * (s), (1)), of unipotent representations of the centralizer C G * (s). For classical groups, this centralizer can be expressed as a product of smaller reductive groups. For instance, when G is a unitary group C G * (s) ≃ G # × G (1) where G # is a product of linear or unitary groups, and G (1) is a unitary group. 
We obtain in this way a modified Lusztig correspondence Ξ s sending π ∈ E (G, (s)) to π # ⊗ π (1) ∈ E (G # × G (1) , 1). For a unitary dual pair (G, G ′ ) this bijection fits in the commutative diagram This motivates us to define θ G,G ′ (π), for π in E (G), to be the unique irreducible representation of G ′ such that We also define such a mapping θ G,G ′ : E (G) → E (G ′ ), for symplecticorthogonal pairs. As we could expect, this bijection also selects representations with smallest unipotent supports. This is our first result (Theorem 4). In a recent paper [12], Howe and Gurevich presented the notion of rank for representations of finite symplectic groups (see Section 4). This conduced to the introduction of the eta correspondence. Consider a dual pair (G, G ′ ) formed by one orthogonal and one symplectic group; and suppose the pair is in the stable range. Howe and Gurevich show that for ρ in E (G), there is a unique irreducible representation in Θ G,G ′ (ρ) of maximal rank, it is denoted by η(ρ). The correspondences θ and η are defined in different ways. The former chooses a subrepresentation of Θ with smallest unipotent support, whereas the latter selects one with greatest rank. In [20], Shu-Yen Pan shows that these two agree on their common domain of definition (the stable range), i.e. among the irreducible constituents of Θ(π), the representation with the smallest unipotent support is the one having the greatest rank. This points out to an inverse relation between these two features. We discuss this in Section 4. In [11], Gerber, Hiss and jacon introduced the notion of weak cuspidality for modular representations in non-defining (including zero) characteristic. The spirit of the definition is the same as for cuspidal representations: the vanishing of parabolic restriction functors. For weakly cuspidal representations we nonetheless restrict our attention to certain kind of parabolic subgroups, called pure by the authors. This yields a weak Harish-Chandra theory that refines the usual one. Moreover, for representations of unitary groups in prime characteristic, this new definition provides a natural partition of the set of unipotent representations (see Proposition 5), in the same way usual series do for ordinary unipotent representations. A natural question arises: how does the Howe correspondence behaves with respect to this weak Harish-Chandra theory? We provide an answer to this question for ordinary representations. We first prove that the Howe correspondence respects weak cuspidality in case of first occurrence. This is our second result. Theorem. Let (G m , G ′ m ′ ) be a type I dual pair, and π be an irreducible weakly cuspidal representation of G m , and let m ′ (π) be its first occurrence index. 1. If m ′ < m ′ (π), then Θ m,m ′ (π) is empty 2. The representation Θ m,m ′ (π) (π) is irreducible and weakly cuspidal 3. If m ′ > m ′ (π), then none of the constituents of Θ m,m ′ (π) is weakly cuspidal The proof of this result can divided in two independent parts: existence and uniqueness, each making use of different tools. The former relies on the computation of coinvariants of the Weil representation presented in Section 1. The latter, on the study of orbits and stabilizers for the action of a block diagonal subgroup of a type I group on the set of maximal isotropic subspaces of the underlying module of this group. We need to mention that the proof is inspired by the equivalent result for cuspidal representations, but that it is more refined. 
Indeed, for cuspidal representations, the uniqueness proof used the fact that the stabilizers above are contained in a direct product of parabolic subgroups. For weakly cuspidal representations this is not enough, we had to calculate this stabilizers explicitly. In order to do so we had first to find explicit representatives for the orbits (which is also not needed in the cuspidal setting). From the last theorem above, the work required to establish the agreement between the Howe correspondence and weak cuspidal support is very much the same required in the cuspidal setting. Howe correspondance and cuspidal support Theorem 3.7 of [2] states that the Howe correspondence is compatible with unipotent Harish-Chandra series. In this section, we generalize this theorem to arbitrary Harish-Chandra series. The proof follows that of Theorem 2.5 in [15]. We fix two Witt towers, T and T ′ , such that (G m , G ′ m ′ ) is a type I dual pair for any G m ∈ T and G ′ m ′ ∈ T ′ . Let D be a field equal to F q when the dual pair is symplecticorthogonal, and equal to F q 2 when the pair is unitary. Let W m be the underlying D-vector space of G m . Let P k be the stabilizer in G m of the totally isotropic subspace of W m , spanned by the k first vectors of a hermitian base, k ≤ m. Denote by N k its unipotent radical, GL k = GL(D) and M k = GL k ×G m−k the standard Levi subgroup of P k . Denote by GL ′ k ′ , P ′ k ′ , N ′ k ′ , and M ′ k ′ the analogous groups for G ′ m ′ . Finally, denote by R G , the natural representation of the group G × G on the space S (G). This representation is isomorphic to the one obtained by inducing the trivial representation of G to G × G (diagonal inclusion). It decomposes as whereπ denotes the contragredient representation of π. a) The representation ( * R k ⊗ 1)(ω m,m ′ ) decomposes as : Let G m be a type I group in the Witt tower T . The set of standard Levi subgroups of G m can be parametrized by sequences t = (t 1 , . . . , t r ), such that |t| = r i=1 t i is not greater than m. The corresponding Levi subgroup is equal to GL t 1 × · · · × GL tr ×G m−|t| . Proposition 1 is a key result in the proof of the following. In [5] we showed how this result implies Theorem 3.7 in [2]. The latter basically says that the Howe correspondence preserves unipotent Harish-Chandra series. We also used Theorem 1 to obtain the bijective correspondence θ we present in Section 3. However, this was not necesary, we can define θ using only the Lusztig correspondence (as done below). Howe correspondence and Lusztig correspondence The purpose of this section is to see the effect of the Lusztig correspondence on the Howe correspondence for type I dual pairs. 2.1. Centralizers of rational semisimple elements. Let G be a reductive group defined over F q , and C G (x) be the centralizer of a rational element x in G. Denote by G and C G (x) their groups of rational elements. Assume that G is also connected. In Proposition 5.1 of [18], Lusztig found a bijection where s is a rational semisimple element of G * . Aubert, Michel and Rouquier extended this bijection to even orthogonal groups (Proposition 1.7 of [2]). It is known as the Lusztig bijection or Lusztig correspondence. Taking s = 1 yields a bijection between the series of unipotent representations of G with that of its dual : We can also extend (1) by linearity in order to obtain an isometry between the categories R(G, (s)) and R(C G * (s), (1)) spanned by the Lusztig series E (G, (s)) and E (C G * (s), (1)) respectively. 
Following Section 1.B in [2], let G be a classical group of rank n, and T n = F n q . Let s be a rational semisimple element of its Langlands dual G * . By definition, a rational semisimple element is conjugate to an element (λ 1 , . . . , λ n ) in T n . Let ν λ (s) the number of times λ appears in this list. There is a decomposition where [λ] is the orbit of λ by the action of the Frobenius endomorphism, (s) is a unitary or general linear group (possibly over some finite extension of F q ). Additionally : In all cases we see that G [1] (s) is a group of the same kind as G, but of smaller rank. Weil representation and Lusztig correspondence. Consider a type I dual pair (G, G ′ ). Let m (resp. m ′ ) be the Witt index of G (resp. G ′ ). According to Proposition 2.3 in [2], if s is a rational semisimple element in G * , then there is a rational semisimple element s ′ in G ′ * , such that the Howe correspondence relates E (G, (s)) and E (G ′ , (s ′ )). Moreover, in this case, s ′ = (s, 1) if m ≤ m ′ , and s = (s ′ , 1) otherwise. In particular, there is some l ≤ min(m, m ′ ), and t in T l with eigenvalues different from 1, such that s = (t, 1), and s ′ = (t, 1). Let ω G,G ′ ,t denote the projection of the Weil representation ω G,G ′ onto R(G, (s)) ⊗ R(G ′ , (s ′ )), and T l,0 the subset of T l whose elements have all their eigenvalues different from 1. Proposition 2.4 in [2] asserts that We now endeavour to study the effect of the Lusztig correspondence on the Weil representation ω G,G ′ ,t . We treat unitary and symplecticorthogonal groups independently. A. Unitary pairs. Suppose that G is a unitary group. Let s be a rational semisimple element in G * , let G # denote the product of G [λ] (s) for λ = 1, and G (1) be the Langlands dual of G [1] (s). The groups of rational elements of G # and G (1) will be denoted by G # and G (1) respectively. Considering the decomposition of centralizers discussed above, and since by (2) the unipotent Lusztig series of G (1) and G [1] can be identified, we obtain a modified Lusztig bijection For π in E (G, (s)) we will denote by π # and π (1) the (unipotent) representations of G # and G (1) such that Let (G, G ′ ) be a unitary dual pair, and s (resp. s ′ ) be a semisimple element of G * (resp. G ′ * ). Proposition 2. The groups G # and G ′ # are isomorphic. Moreover, the pair (G (1) , G ′ (1) ) can be identified with a unitary dual pair. Proof. Both assertions in the statement above follow from the explicit decomposition of centralizers given in the Section 2.1. The isomorphism between G # and G ′ # is a consequence of the fact that s and s ′ have the same eigenvalues different from 1 (with same multiplicities). Concerning the second assertion, it suffices to state that G [1] and G ′ [1] are unitary groups. Finally, the Weil representation ω G,G ′ ,t can be described in terms of a correspondence between unipotent characters, defined either by R G # ,1 , or by the unipotent projection of the Weil representation of the smaller unitary dual pair (G (1) and G (1) be the Langlands dual of G [1] (s). Again, considering the decomposition of centralizers discussed above, and that since by (2) the unipotent Lusztig series of G (1) and G [1] (s) can be identified, we obtain a modified Lusztig bijection For π in E (G, (s)) we will denote by π # , π (−1) and π (1) the (unipotent) representations of G # , G (−1) and G (1) such that Ξ s (π) = π # ⊗ π (−1) ⊗ π (1) . Proof. 
The proof of the first assertion is the same as that of Proposition 2, with the difference that the groups For the second assertion, since G is symplectic, the group G [1] (s) is special odd orthogonal and hence its dual is again symplectic. Likewise, the group G ′ [1] (s ′ ) is even orthogonal, and so is its dual. As for unitary pairs, we now describe the Weil representation ω G,G ′ ,t for symplectic-orthogonal pairs. Theorem 3. [19, Theorem 6.9 and Remark 6.10] Let t belong to T l,0 . For a symplectic orthogonal dual pair (G, G ′ ) the representation ω G,G ′ ,t is the image by the Lusztig correspondence Minimal representations Let G be a reductive group defined over F q , and P = LU be a Levi decomposition of the rational parabolic subgroup P. For a cuspidal representation ρ of L set By Corollary 5.4 in [14] and Corollary 2 in [8], there is an isomorphism In particular, irreducible representations in the Harish-Chandra series E (G, ρ) L are indexed by irreducible representations of W G (ρ). When G is a type I group and ρ is a cuspidal unipotent representation, the group W G (ρ) above is a type B Weyl group. It is known that irreducible representations of these groups are parametrized by bipartitions. We denote by ρ µ,λ the representation in E (W r ) corresponding to the bipartition (µ, λ) of r. For every irreducible representation π of a connected reductive group G, there is a unique rational unipotent class O π in G which has the property that x∈Oπ(q) π(x) is non trivial, and that has maximal dimension among classes with this property. This class, introduced by Lusztig in [16] is called the unipotent support of π. We now introduce a partial order on the set of unipotent conjugacy classes. It is crucial for results below. Let (G, G ′ ) be a type I dual pair. In [4] we defined a bijective correspondence θ G,G ′ : E (G, 1) → E (G ′ , 1) between the unipotent series of G and G ′ , in such a way that, for a unipotent representation π of G -The representation θ(π) occurs in Θ(π). Before extending this to arbitrary representations, we recall the definition for unipotent representations. We work with symplectic orthogonal and unitary pairs independently. 3.1. Unitary pairs. Unipotent representations of the unitary groups U n (q) are known to be indexed by partitions of n. Moreover, those belonging to the same Harish-Chandra series share a common 2-core and are therefore determined by their 2-quotient (of parameter one) [2]. If we let R µ be the representation of U n (q) indexed by the partition µ of n, then the bijection issued from (5) relates R µ to ρ µ(0),µ(1) , where (µ(0), µ(1)) is the 2-quotient (of parameter 1) of µ. Let (G m , G ′ m ′ ) denote a unitary dual pair. According to Theorem 3.7 in [2] the Howe correspondance relates the unipotent series E (G m ) ϕ to the series R(G ′ m ′ ) ϕ ′ , where ϕ ′ is the first occurrence of ϕ. Moreover, since ϕ ∈ E (G l ) and ϕ ′ ∈ E (G ′ l ′ ) are cuspidal and unipotent, the integers l and l ′ are triangular, i.e. l = k(k + 1)/2 and l ′ = k ′ (k ′ + 1)/2 for some k and k ′ . From the discussion in the previous paragraph, the theta correspondence between the series above can be identified to a correspondence between E (W r ) and R(W r ′ ), for suitable r and r ′ . We can now define a bijection θ : E (W r ) → E (W r ′ ) issued from the one above : -θ(λ, µ) = ((r ′ − r) ∪ µ, λ), if k is odd or zero. -θ(λ, µ) = (µ, (r ′ − r) ∪ λ), otherwise Let O µ be the unipotent support of the unipotent character R µ . 
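The combinatorial objects involved here, the 2-cores and 2-quotients that index these unipotent series and the dominance order on partitions used below to compare unipotent supports, can be computed explicitly. The following sketch is ours and not part of the paper: it computes the 2-core and 2-quotient of a partition on a two-runner abacus and tests dominance. Which component of the quotient plays the role of μ(0) and which of μ(1) is a convention that should be matched against [2].

```python
def beta_numbers(mu, n_beads):
    """First-column hook lengths of mu, padded with zero parts to n_beads beads."""
    mu = list(mu) + [0] * (n_beads - len(mu))
    return [mu[i] + (n_beads - 1 - i) for i in range(n_beads)]

def partition_from_beta(beta):
    beta = sorted(beta, reverse=True)
    parts = [b - (len(beta) - 1 - i) for i, b in enumerate(beta)]
    return [p for p in parts if p > 0]

def core_and_quotient_2(mu):
    """2-core and 2-quotient of mu via the two-runner abacus.
    An even number of beads is used so the runner labelling is stable;
    the labelling of the two quotient components is a convention."""
    n_beads = len(mu) + (len(mu) % 2)
    beta = beta_numbers(mu, n_beads)
    runners = [sorted(b // 2 for b in beta if b % 2 == r) for r in (0, 1)]
    quotient = tuple(partition_from_beta(run) for run in runners)
    # sliding all beads down on each runner gives the beta-set of the 2-core
    core_beta = [2 * j + r for r in (0, 1) for j in range(len(runners[r]))]
    return partition_from_beta(core_beta), quotient

def dominates(nu, mu):
    """nu >= mu in the dominance order (both partitions of the same integer)."""
    nu, mu = list(nu), list(mu)
    length = max(len(nu), len(mu))
    nu += [0] * (length - len(nu))
    mu += [0] * (length - len(mu))
    s_nu = s_mu = 0
    for a, b in zip(nu, mu):
        s_nu += a
        s_mu += b
        if s_nu < s_mu:
            return False
    return True

# Examples: (2,2) has empty 2-core and 2-quotient ((1),(1)),
# (2,1) is its own 2-core, and (2,2) dominates (2,1,1).
print(core_and_quotient_2([2, 2]))    # ([], ([1], [1]))
print(core_and_quotient_2([2, 1]))    # ([2, 1], ([], []))
print(dominates([2, 2], [2, 1, 1]))   # True
```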
The closure order among unipotent supports agrees with the dominance order on the indexing partition [22], i.e. O µ O ν if and only if µ ≤ ν. The definition of θ made above is done so that the unipotent support is at its smallest. Indeed, if θ(µ) is the partition whose 2-quotient is θ(µ(0), µ(1)), then for all representations R µ ′ in Θ(R µ ) we have θ(µ) ≤ µ ′ . We obtain in this way a bijection θ G,G ′ between the set of unipotent characters of G and G ′ . Let ι be an involution of sending a representation π to its dualπ. For the definition in the general case we use Theorem 2, which we can express as a commutative diagram 1). (1) (π (1) ) has alredy been defined since π (1) is unipotent. We are therefore extending the definition of θ G,G ′ to the whole set of irreducible representations, so that it is congruent with the Lusztig correspondence. It is only natural to ask if this extension also selects representations with smallest possible unipotent support. The answer is provided in a subsequent section. 3.2. Symplectic-orthogonal pairs. Let (G m , G ′ m ′ ) be a dual pair (Sp 2m (q), O ǫ 2m ′ ). Again, according to Theorem 3.7 in [2], the Howe correspondence Θ m,m ′ relates the unipotent Harish-Chandra series E (G m ) ϕ to the series E (G ′ m ′ ) ϕ ′ , where ϕ ′ is the first occurrence of ϕ. Moreover, since ϕ ∈ E (G l ) and ϕ ′ ∈ E (G ′ l ′ ) are cuspidal and unipotent, l = k(k + 1) and l ′ = k ′ 2 for some k and k ′ . Again, thanks to (5), both these series can be identified to the set of representations of certain Weyl groups of type B. Hence, the correspondence between unipotent representations becomes a correspondence between the set of irreducible representations of a pair (W r , W r ′ ) of type B Weyl groups, for suitable r and r ′ . We now define the one-to-one correspondence θ : Irr(W r ) → Irr(W r ′ ) as follows : -θ(λ, µ) = (λ, (r ′ − r) ∪ µ), when k = 1 2 (ǫ − 1) mod 2. -θ(λ, µ) = ((r ′ − r) ∪ λ, µ), otherwise. Using the Springer correspondence to calculate the unipotent support, in [4] we are able to prove that the choice above is made so that the support of θ(π) it the smallest in Θ(π) for unipotent π. We obtain in this way a bijection θ G,G ′ between the set of unipotent characters of G and G ′ . Again, Let ι be an involution of sending a representation to its dual. For the general case, we use Theorem 3, which again we express as a commutative diagram This moves us to define θ G,G ′ (π) to be the representation such that Ξ s (θ G,G ′ (π)) =π # ⊗π (−1) ⊗ θ G (1) ,G ′ (1) (π (1) ). We stress that the representation θ G (1) ,G ′ (1) (π (1) ) has already been defined since π (1) is unipotent. We are therefore extending the definition of θ from unipotent to arbitrary representations making sure it is compatible with the Lusztig correspondence. Again, it seems reasonable to ask if this extension also selects a representation with smallest unipotent support. We provide the answer in the following section. 3.3. Lusztig correspondence and unipotent support. Assume G is a connected reductive group. Let P be a parabolic subgroup of G with Levi decomposition P = LU. Following [21] we introduce an induction functor on unipotent classes from L to G as follows : for a unipotent class O in L, there exists a unique unipotent classÕ of G such thatÕ ∩ OU is dense in OU. We say thatÕ is the class obtained inducing O from L to G, and we writeÕ = Ind G L (O). This definition does not depend on the parabolic P containing L. 
Moreover, according to Proposition II.3.2 in [21] Ind Let s be a rational semisimple element of G * , and let G(s) be the Langlands dual of C G * (s). Since the unipotent series of these two groups can be identified, we have a Lusztig bijection between the series of G defined by s and the unipotent series of G(s). Denote by ρ u the unipotent representation of G(s) corresponding to ρ in E (G, (s)). If the group C G * (s) can be identified to a Levi subgroup of G * then, according to Proposition 4.1 in [9] O ρ = Ind G G(s) (ρ u ), i.e. the unipotent supports of corresponding characters are related by the induction of classes defined above. For both symplectic-orthogonal and unitary dual pairs (G, G ′ ), we have first defined the bijection θ G,G ′ on the set of unipotent representations combinatorially. We have then used the Lusztig correspondence to extend this definition to all irreducible representations. In the unipotent case the definition was made so as to minimize the unipotent support. If we aim at proving that this also holds for arbitrary representations, we must study the effect of the Lusztig correspondence on the unipotent support. The following result addresses this issue. As in (1), we consider the Lusztig correspondence between the series E (G, (s)) and E (C G * (s), 1). Theorem 4. Let π, τ belong to E (G, (s)), and π u , τ u denote the corresponding unipotent representations of C G * (s). If O πu O τu then O π O τ . In particular, for π in E (G), the subrepresentation θ G,G ′ (π) of Θ G,G ′ (π) has the smallest unipotent support. Proof. We consider first the case where the centralizer of s in G * is a Levi subgroup of G * . From the discussion of centralizers in Section 2.1, this is the case for type A groups. Let L be a Levi subgroup contained in the parabolic P = LU. Take two unipotent classes O and O ′ in L, such that O ′ is contained in the closure O. This implies that O ′ U is contained in OU . Hence, due to (6) above : Since in this case, as discussed above, the supports of π, and τ are obtained inducing those of π u , and τ u respectively, the result holds. In the general case, the unipotent support of characters corresponding by the Lusztig bijection are related by the generalized induction defined in [21]; as proven in Proposition 4.5 of [9]. The result follows from the fact that this induction is increasing (Remarque III.12.4.2 in [21]). Unipotent support and rank In a recent paper [12], Howe and Gurevich presented the notion of rank for representations of finite symplectic groups. This conduced to the introduction of the eta correspondence, defined by the property of "having maximal rank"; as explained below. Let L n (q) the group of symmetric matrices of size n with coefficients in F q ; or equivalently, of symmetric bilinear forms over a F q -vector space of dimension n . Consider the Siegel parabolic P of Sp 2n (q), with Levi decomposition P = LN, where L ≃ GL n (q) and We identify the group N to the group L n (q). Being abelian, all its irreducible representations are one-dimensional. Moreover, fixing a nontrivial additive character ψ of F q , we can define a bijection between N and E (N) relating a symmetric A to ψ A ; the latter being defined by ψ A (B) = ψ(tr(AB)), for B symmetric. Let ρ be a representation of Sp 2n (q). The restriction of ρ to N decomposes as a weighted sum of representations ψ A , for A symmetric. The orbits of the action of L on N can be identified with the orbits of GL n (q) on L n (q), i.e. 
with equivalence classes of symmetric matrices. The coefficients on the sum above are therefore constant on these classes. The first major invariant of a symmetric bilinear form is its rank. It is well known that, over finite fields of odd characteristic, there are just two isomorphism classes of symmetric bilinear forms of a given rank r. We denote by O + r and O − r , the two equivalence classes of symmetric matrices of rank r. The restriction of ρ to N decomposes as : -The rank of the character ψ A is defined as the rank of A. -The rank of ρ, denoted by rk(ρ), is defined as the greatest k such that the restriction to N contains characters of rank k, but of no higher rank. Consider the dual pair (G, G ′ ) = (O ± 2k , Sp 2n ′ ) where 2k ≤ n ′ , i.e., the dual pair is in stable range. Howe and Gurevich show that for ρ in E (O ± 2k (q)), there is a unique irreducible representation in Θ G,G ′ (ρ) of rank 2k; all other constituents having smaller rank. This gives rise to a mapping called the eta correspondence : . We now have two one-to-one correspondences θ and η between the set of irreducible representations E (G) of G, and the corresponding set E (G ′ ) of G ′ . They are defined in different ways. The former chooses a subrepresentation of Θ with smallest unipotent support, whereas the latter selects one with greatest rank. In [20], Shu-Yen Pan shows that these two "theta relations" agree on their common domain of definition (the stable range). This amounts to say that, among the irreducible constituents of Θ(π), the representation with the smallest unipotent support is the one having the greatest rank. This points out to an inverse relation between these two features. We could ask if this also holds for all representations. This is indeed the case for unipotent representations, the proof is based on results yet to be published by Shu Yen Pan (he has managed to calculate the rank of a unipotent representation in terms of its associated symbol). The general case still needs to be settled. The statement in the previous theorem is the best we can hope for. That is to say, we cannot ask the reverse implication to hold as well. The unipotent support is a geometrical object attached to a representation, whereas its rank is just a number. Since the closure order is a partial order, the reverse implication would tell us that the rank of the latter determines the former. 5. Howe correspondence and weak Harish-Chandra theory 5.1. Weak Harish-Chandra theory. Let G = G n a type I group. For an integer 0 ≤ r ≤ n, the pure standard Levi subgroup M r,n−r of G is a subgroup of the form where the linear group appears r times. A Levi subgroup of G is called pure if it is conjugated to a pure standard Levi. By the very definition, the set of pure Levi subgroups is stable by G-conjugation. It can also be proven [11,Proposition 2.2] that if M and M ′ are pure Levi then the intersection x M ∩ M ′ is pure as well. This property is crucial in showing that Harish-Chandra philosophy holds when we focus on set of pure Levi subgroups. Let π be a representation of G, we say that π is weakly cuspidal if the parabolic restriction of π to a proper pure Levi subgroup is trivial, i.e. * R G M (π) = 0 for every pure Levi subgroup M. A pair (M, π), where M is a pure Levi subgroup and π is a weakly cuspidal representation of M, is called a weakly cuspidal pair. As in the usual cuspidal setting, these pairs provide a partition of the set of irreducible representations of G. 
Indeed, defining the weak Harish-Chandra series corresponding by (M, π), as the set of irreducible subrepresentations of the parabolic induced R G M (π), we have. Proposition 4. [11,Proposition 2.3] a) The weak Harish-Chandra series partition the set of (isomorphism classes of ) irreducible G-representations. b) Every weak Harish-Chandra series is contained in some usual Harish-Chandra series. Item (b) implies that every usual Harish-Chandra series is partitioned into weak Harish-Chandra series, and hence shows that weak series refine the usual theory. These definitions are made for characteristic zero representations. However, the same applies to non-defining prime characteristic. In the non-zero characteristic case, weak Harish-Chandra prove better suited for studying unipotent representations. Let l be a prime different from p, and G n denote the unitary group U 2n (q) or U 2n+1 (q). From the work of Geck [7] we know that irreducible unipotent representations in characteristic l of G n are (just as in trivial characteristic) labelled by partitions of n. Calling π µ the unipotent representation corresponding to the partition µ we get a result analogous to [6, Appendice, proposition p. 224]. Proposition 5. If the unipotent representations π µ and π ν of G n lie in the same weak Harish-Chandra series, then µ and ν have the same 2-core. This result was originally part of a series of conjectures stated in [11], the main of which (Conjecture 5.7) is now a theorem [3]. Let X k (W ) denote the set of isotropic k-dimensional subspaces of a symplectic space W . Fix the maximal isotropic space X n spanned by {e 1 , . . . , e n 1 , f 1 , . . . , f n 2 }, and let P n be the parabolic subgroup of Sp 2n , formed by those matrices stabilizing X n . Using this Lagrangian we identify the quotient Sp 2n /P n with the set X n (W n ) of maximal isotropic subpaces of W n . Basic linear algebra shows that the set of maximal isotropic subspaces of W n can be identified to the the set of triplets (U 1 , U 2 , φ) where U 1 belongs to X d 1 (W n 1 ), U 2 belongs to X d 2 (W n 2 ), φ : U ⊥ 1 /U 1 → U ⊥ 2 /U 2 is an isomorphism, and d 1 − d 2 = n 1 − n 2 . Moreover, the action of (x 1 , x 2 ) in Sp 2n 1 × Sp 2n 2 on X n (W n ) corresponds to the following action on the set of triplets. Let i = 1, . . . , min{n 1 , n 2 }, and suppose that n 2 ≤ n 1 . Let K n be a matrix of size n with 1's on the antidiagonal and 0's elsewhere. Finally, let [15] asserts that the different V i P n for i = 1, . . . , n 2 form a set of representatives for the action of Sp 2n 1 × Sp 2n 2 on Sp 2n /P n . We are interested in calculating their stabilizers. The coset V i P n corresponds to the maximal isotropic subspace V i X n . The isotropic spaces in the triplet (U 1 , U 2 , φ) corresponding to the latter are An easy calculation shows that . . , f ′ n 2 −i+1 . Using coordinates in these basis, we define the isomorphism where K is of size 2i. Using the description (7) of the action of Sp 2n 1 × Sp 2n 2 on the set of triplets, we see that (x 1 , x 2 ) belongs to the stabilizer (Sp 2n 1 × Sp 2n 2 ) V i Pn of V i P n , if and only if As before, let P k , k = 1, . . . , m be the stabilizer in Sp 2m of the totally isotropic space spanned by the first k vectors of a symplectic base. The first two equalities on (8) above tell us that x 1 ∈ P n 1 −i ⊂ Sp 2n 1 , and x 2 ∈ P n 2 −i ⊂ Sp 2n 2 . 
Elements x in the parabolic P k factorize as a product x = m(a, A)u, for a ∈ GL k , A ∈ Sp 2(m−k) , u in the unipotent radical N k of P k , and m(a, A) = diag(a, A, K t a −1 K). Hence, we can express x 1 , x 2 as a product x 1 = m(a 1 , A 1 )u 1 , x 2 = m(a 2 , A 2 )u 2 , for suitable a 1 , a 2 , A 1 , A 2 , u 1 , and u 2 . The last equality on (8) becomes the identity The same discussion can be put forward for even-orthogonal groups, in this case the orbit representatives are Let G m be Sp 2m or O ± 2m . We summarize the above results in the following proposition. Proposition 6. The matrices V i P m , for i = 1, . . . , min{m 1 , m 2 } form a set of representatives for the orbits of G m 1 × G m 2 in G m /P m . Moreover the stabilizer of V i P m in G m 1 ×G m 2 is the subgroup of P m 1 −i ×P m 2 −i given by First occurrence of weakly cuspidal representations. In this section we endeavour to study the effect the Howe correspondence has on weakly cuspidal representations in the case of first occurrence. We also comment on the relation between the correspondence and weak series. Let G m and P m be as in Proposition 6. (1). be a type I dual pair, and π be an irreducible weakly cuspidal representation of G m , and let m ′ (π) be its first occurrence index. 1. If m ′ < m ′ (π), then Θ m,m ′ (π) is empty 2. The representation Θ m,m ′ (π) (π) is irreducible and weakly cuspidal 3. If m ′ > m ′ (π), then none of the constituents of Θ m,m ′ (π) is weakly cuspidal Proof. To avoid the excessive use or apostrophes we swap the groups in the dual pair and consider the pair (G ′ m ′ , G m ) instead. Consider a weakly cuspidal representation π ′ of G ′ m ′ . The proof of the item 1 in the statement is the definition of the first occurrence index. The proof of the other two items can be divided in two independent parts : existence and uniqueness. The methods used in each are different. Proposition 1 implies that the last term above is bounded by Since π ′ is weakly cuspidal the second term in this sum must be trivial. The first term yields ϕ 1 |Θ m ′ ,m(π ′ )−1 (π ′ ), which contradicts the minimality of m(π ′ ). B. Uniqueness. In order to establish uniqueness we prove that at most one weakly cuspidal irreducible representation can appear in the union of Θ m ′ ,m (π ′ ) for all non negative m. Let π 1 , and π 2 belong to Θ m ′ ,m 1 (π ′ ), and Θ m ′ ,m 2 (π ′ ) respectively. Following the arguments in Section 3 of [15], it can be shown that the representation π 1 ⊗π 2 is a constituent of the space of G ′ m ′ -coinvariants S G ′ m ′ of the Weil representation. Therefore, from Lemma 1 the representation π 1 ⊗π 2 is a constituent of where the V i P m , for i = 1, . . . , min{m 1 , m 2 }, are the representatives of the orbits of G m 1 × G m 2 in G m /P m , described above. By transitivity Ind Gm 1 ×Gm 2 (Gm 1 ×Gm 2 ) V i Pm (1) = Ind Moreover, from the explicit description of stabilizers already established, Since 1 GL k is a subrepresentation of R GL k GL k 1 (1), the character on the righthand side of the last equality is a constituent of We must now distinguish the following two cases. (a) If m 1 = m 2 then, for all i = 1 . . . , min{m 1 , m 2 }, one of the Levi subgroups M m 1 −i,i or M m 2 −i,i is going to be proper in G m 1 or G m 2 respectively. In this case Ind Gm Pm (1)| Gm 1 ×Gm 2 , π 1 ⊗π 2 = 0 since π 1 and π 2 are both weakly cuspidal. (b) If m 1 = m 2 = k, then for all i = 1 . . . , k, Ind Gm Pm (1)| Gm 1 ×Gm 2 , π 1 ⊗π 2 ≤ 1. 
Indeed, -In case 0 ≤ i < k, the Levi subgroups M k−i,i in both factors are proper, and again ⟨Ind G k ×G k (G k ×G k ) V i P m (1), π 1 ⊗π 2 ⟩ = 0. -In case i = k, we get ⟨Ind G k ×G k (G k ×G k ) V i P m (1), π 1 ⊗π 2 ⟩ ≤ ⟨R G k , π 1 ⊗π 2 ⟩ = ⟨π 1 , π 2 ⟩. Uniqueness follows from Items (a) and (b). Now that we have proven that the Howe correspondence preserves weak cuspidality on the first occurrence, we can ask whether it also behaves nicely with respect to weak Harish-Chandra series. It turns out that this is indeed the case. The statement and proof of this result are very much the same as those of Theorem 1 and will therefore be omitted.
2020-11-24T02:01:27.707Z
2020-11-23T00:00:00.000
{ "year": 2022, "sha1": "429b80b122509caeb67fd235b11aa3cbf94ace6f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "429b80b122509caeb67fd235b11aa3cbf94ace6f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
250525782
pes2o/s2orc
v3-fos-license
REAL POWER LOSS MINIMIZATION BY MUTUAL MAMMAL BEHAVIOR ALGORITHM This paper presents a Mutual Mammal Behavior (MM) algorithm for solving the reactive power problem in power systems. Modal analysis of the system is used for static voltage stability assessment. Loss minimization is taken as the main objective. Generator terminal voltages, reactive power generation of the capacitor banks and tap-changing transformer settings are taken as the optimization variables. A metaheuristic algorithm for global optimization, called Mutual Mammal Behavior (MM), is introduced. Mammal groups, including carnivores such as the African lion, cheetah, dingo and fennec fox, as well as the moose, polar bear, sea otter, blue whale and bottlenose dolphin, exhibit a variety of behaviors including swarming about a food source, milling around a central location, or migrating over large distances in aligned groups. These mutual behaviors are often advantageous to groups, allowing them to increase their harvesting efficiency, to follow better migration routes, to improve their aerodynamics, and to avoid predation. In the proposed algorithm, the searcher agents emulate a group of Mammals which interact with each other based on the biological laws of mutual motion. MM, a powerful stochastic optimization technique, has been utilized to solve the reactive power optimization problem. In order to evaluate the performance of the proposed algorithm, it has been tested on the standard IEEE 57- and 118-bus systems. The proposed MM algorithm outperforms other reported standard algorithms in reducing real power loss. Introduction The main objective is to operate the system in a secure mode and also to improve the economy of the system. The sources of reactive power are the generators, synchronous condensers, capacitors, static compensators and tap-changing transformers. Various mathematical techniques
In this algorithm, the searcher agents emulate a group of Mammals that interact with each other based on simple behavioral rules which are modeled as mathematical operators. Such operations are applied to each agent considering that the complete group has a memory storing their own best positions seen so far, by using a competition principle. The proposed approach has been compared to other well-known optimization methods. The performance of MM algorithm has been evaluated in standard IEEE 57,118 Bus test systems and the results analysis shows that our proposed approach performs well when compared to other reported algorithms. Problem Formulation The objectives of the reactive power problem is to minimize the real power loss. Active Power Loss The objective of the reactive power dispatch problem is to minimize the active power loss and can be written in equations as follows: Where F-objective function, P Lpower loss, g k -conductance of branch,V i and V j are voltages at buses i,j, Nbr-total number of transmission lines in power systems. Voltage Profile Improvement To minimize the voltage deviation in PQ buses, the objective function (F) can be written as: Where VD -voltage deviation, ω v -is a weighting factor of voltage deviation. And the Voltage deviation given by: Where Npq-number of load buses Equality Constraint The equality constraint of the problem is indicated by the power balance equation as follows: Where P G -total power generation, P D -total power demand. Inequality Constraints The inequality constraint implies the limits on components in the power system in addition to the limits created to make sure system security. Upper and lower bounds on the active power of slack bus (P g ), and reactive power of generators (Q g ) are written as follows: Upper and lower bounds on the bus voltage magnitudes (V i ) is given by: Upper and lower bounds on the transformers tap ratios (T i ) is given by: Upper and lower bounds on the compensators (Q c ) is given by: Where N is the total number of buses, N g is the total number of generators, N T is the total number of Transformers, N c is the total number of shunt reactive compensators. Mutual Mammal Behavior Algorithm (MM) The MM algorithm assumes the existence of a set of operations that resembles the interaction rules that model the Mutual Mammal behavior. In the approach, each solution within the search space represents a Mammal position. The "fitness value" refers to the Mammal dominance with respect to the group. The complete process mimics the Mutual Mammal behavior. The approach Http://www.granthaalayah.com ©International Journal of Research -GRANTHAALAYAH [91] in this paper implements a memory for storing best solutions (Mammal positions) mimicking the aforementioned biologic process. Such memory is divided into two different elements, one for maintaining the best locations at each generation (M g ) and the other for storing the best historical positions during the complete evolutionary process (M h ). Description of the MM Algorithm Following other Meta heuristic approaches, the MM algorithm is an iterative process that starts by initializing the population randomly (generated random solutions or Mammal positions). Then, the following four operations are applied until a termination criterion is met (i.e., the iteration number NI). 1) Keep the position of the best individuals. 2) Move from or to nearby neighbors (local attraction and repulsion). 3) Move randomly. 
4) Compete for the space within a determined distance (update the memory). Initialization of Population The With j and i being the parameter and individual indexes, respectively. Hence, aj,i is the jth parameter of the i th individual. All the initial positions A are sorted according to the fitness function (dominance) to form a new individual set X = {x1, x2, . . . , xN p }, so that we can choose the best B positions and store them in the memory M g and M h . The fact that both memories share the same information is only allowed at this initial stage. Preserve the Position of the Best Individuals Analogous to the biological metaphor, this behavioral rule, typical from Mammal groups, is implemented as an evolutionary operation in our approach. In this operation, the first B elements ({a1, a2, . . . , a B }), of the new Mammal position set A, are generated. Such positions are computed by the values contained inside the historical memory M h , considering a slight random perturbation around them. This operation can be modeled as follows: While represents the l-element of the historical memory M h . v is a random vector with a small enough length random vector with a small enough length. Transfer to Close Neighbors From the biological inspiration, Mammals experiment a random local attraction or repulsion according to an internal motivation. Therefore, we have implemented new evolutionary operators that mimic such biological pattern. For this operation, a uniform random number r m is generated within the range [0, 1]. If r m is less than a threshold H, a determined individual position is attracted/repelled considering the nearest best historical position within the group (i.e., the nearest position in M h ); otherwise, it is attracted/repelled to/from the nearest best location within the group for the current generation (i.e., the nearest position in M g ). Therefore such operation can be modeled as follows: Where i ∈ {B+1, B +2, . . . , N p }, ℎ and represent the nearest elements of M h and M g to x i , while r is a random number between [−1, 1]. Therefore, if r > 0, the individual position xi is attracted to the position ℎ or otherwise such movement is considered as a repulsion. Transfer Arbitrarily Following the biological model, under some probability P, one Mammal randomly changes its position. Such behavioral rule is implemented considering the next expression: With i ∈ {B +1, B+ 2, . . . , Np} r a random vector defined in the search space. Contend for the Space within a Resolute Distance (Update the Memory) Once the operations to keep the position of the best individuals, such as moving from or to nearby neighbors and moving randomly, have been applied to all N p Mammal positions, generating N p new positions, it is necessary to update the memory M h . In order to update memory M h , the concept of dominance is used. Mammals that interact within the group maintain a minimum distance among them. Such distance, which is defined as ρ in the context of the MM algorithm, depends on how aggressive the Mammal behaves. Hence, when two Mammals confront each other inside such distance, the most dominant individual prevails meanwhile other withdraw. In the proposed algorithm, the historical memory M h is updated considering the following procedure. 1 The use of the dominance principle in MM allows considering as memory elements those solutions that hold the best fitness value within the region which has been defined by the ρ distance. 
The procedure improves the exploration ability by incorporating information regarding previously found potential solutions during the algorithm's evolution. In general, the value of ρ depends on the size of the search space. A big value of ρ improves the exploration ability of the algorithm although it yields a lower convergence rate. In order to calculate the ρ value, an empirical model has been developed after considering several conducted experiments. Such a model is defined in terms of a_j^low and a_j^high, which represent the prespecified lower and upper bound of the j-parameter, respectively, within a D-dimensional space. Computation Process The computational procedure for the proposed MM algorithm can be summarized as follows: Step 1. Set the parameters Np, B, H, P, and NI. Step 2. Generate randomly the Mammal position set A. Step 3. Sort A according to the objective function (dominance) to build X. Step 4. Choose the first B positions of X and store them into the memory M_g. Step 5. Update M_h according to the dominance-based memory update described above (during the first iteration: M_h = M_g). Step 6. Generate the first B positions of the new solution set A ({a1, a2, . . . , aB}). Such positions correspond to the elements of M_h with a slight random perturbation around them. Step 7. Generate the rest of the A elements using the attraction, repulsion, and random movements. Step 8. If NI is completed, the process is finished; otherwise, go back to Step 3. The best value in M_h represents the global solution for the optimization problem. Simulation Results The proposed Mutual Mammal Behavior (MM) algorithm is tested on the standard IEEE-57 bus power system. The reactive power compensation buses are 18, 25 and 53. Buses 2, 3, 6, 8, 9 and 12 are PV buses and bus 1 is selected as the slack bus. The system variable limits are given in Table 1. The preliminary conditions for the IEEE-57 bus power system are given as follows: P_load = 12.229 p.u., Q_load = 3.015 p.u. The total initial generations and power losses are obtained as follows: ΣP_G = 12.5611 p.u., ΣQ_G = 3.3312 p.u., P_loss = 0.25828 p.u., Q_loss = −1.2039 p.u. Table 2 shows the various system control variables, i.e. generator bus voltages, shunt capacitances and transformer tap settings, obtained after MM-based optimization, which are within the acceptable limits. Table 3 shows the comparison of the optimum results obtained from the proposed MM with other optimization techniques. These results indicate the robustness of the proposed MM approach for providing a better optimal solution in the case of the IEEE-57 bus system. The statistical comparison results of 50 trial runs have been listed in Table 5 and the results clearly show the better performance of the proposed MM algorithm. Conclusion In this paper an innovative approach, the MM algorithm, is used to solve the reactive power problem. This article recommends a novel metaheuristic optimization algorithm called the Mutual Mammal Behavior algorithm (MM). The MM algorithm presents two important characteristics: a. MM operators allow a better trade-off between exploration and exploitation of the search space; b. the use of its embedded memory incorporates information regarding previously found local minima (potential solutions) during the evolution process. The performance of the proposed algorithm has been demonstrated by testing it on the standard IEEE 57 and 118 bus test systems. The proposed MM algorithm outperforms other reported standard algorithms in reducing real power loss.
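To make the interplay of the four MM operators concrete, the following is a minimal, illustrative Python sketch of the iteration loop on a generic box-constrained minimization problem. It is not the authors' implementation: the placeholder objective `sphere`, the parameter values, the size of the perturbation around memory elements, and the use of a fixed dominance radius `rho` (the paper's empirical ρ model is not reproduced here) are all assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Placeholder objective; in the reactive power problem this would be the
    # load-flow based real power loss (plus the weighted voltage deviation).
    return float(np.sum(x ** 2))

def nearest(mem, x):
    # Nearest memory element to position x (Euclidean distance).
    return mem[np.argmin(np.linalg.norm(mem - x, axis=1))]

def mm_optimize(f, low, high, Np=30, B=5, H=0.6, P=0.1, NI=200, rho=0.5):
    D = len(low)
    A = low + rng.random((Np, D)) * (high - low)        # random initial positions
    Mh = None                                           # historical memory
    for _ in range(NI):
        X = A[np.argsort([f(a) for a in A])]            # sort by fitness (dominance)
        Mg = X[:B].copy()                               # best of the current generation
        cand = Mg if Mh is None else np.vstack([Mh, Mg])
        cand = cand[np.argsort([f(c) for c in cand])]
        kept = []                                       # dominance inside radius rho:
        for c in cand:                                  # keep one (best) point per rho-ball
            if all(np.linalg.norm(c - m) > rho for m in kept):
                kept.append(c)
        Mh = np.array(kept[:B])
        A = np.empty_like(X)
        # 1) keep the best individuals: slight perturbation around memory elements
        picks = Mh[rng.integers(0, len(Mh), B)]
        A[:B] = picks + 0.01 * (high - low) * rng.standard_normal((B, D))
        for i in range(B, Np):
            if rng.random() < P:                        # 3) move randomly
                A[i] = low + rng.random(D) * (high - low)
            else:                                       # 2) attraction/repulsion
                target = nearest(Mh, X[i]) if rng.random() < H else nearest(Mg, X[i])
                r = rng.uniform(-1.0, 1.0)              # r > 0 attracts, r < 0 repels
                A[i] = X[i] + r * (target - X[i])
        A = np.clip(A, low, high)
    best = min(Mh, key=f)
    return best, f(best)

best_x, best_f = mm_optimize(sphere, np.full(6, -1.0), np.full(6, 1.0))
print("best position:", best_x, "objective:", best_f)
```

In a reactive power dispatch setting, the decision vector would collect the generator voltages, transformer tap ratios and shunt compensator settings, and the objective would be evaluated through a load-flow solver subject to the constraints listed above.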
2020-10-28T19:09:41.374Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "95c4a67b450ab96e1b8a20e374c9598990c30acf", "oa_license": "CCBY", "oa_url": "https://www.granthaalayahpublication.org/journals/index.php/granthaalayah/article/download/IJRG17_A05_275/1725", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "0bca7e3ba9acd204e8afe6c6cf0e7ea06ef4a1e9", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
234010899
pes2o/s2orc
v3-fos-license
The Changing Pattern of the Quantum of Biomedical Waste Generated from a Tertiary Care Hospital in Delhi Background  As a consequence of growth and advancement in health care, production of health care waste has seen an exponential upward trend. Waste from individual health care facilities can vary based on the nature and scope of health care services they provide. Objectives  To analyze the amount of biomedical waste generated by a tertiary care hospital. Methods  Biomedical waste generated by the hospital from 2005 to 2019 was quantified and analyzed to calculate the total amount of incinerable waste, recyclable plastic waste, and sharp and glass waste. The amount of waste generated per bed per day and the compound annual growth rate (CAGR) were also calculated. Results  The total amount of biomedical waste generated in 2005 was 65,658 kg, which has substantially increased to 374,712 kg in 2019, with a CAGR of 12.5%. The hospital was producing average biomedical waste of 0.179 kg/bed/day in 2005, which has increased four times in 2019 to reach 0.709 kg/bed/day. The overall estimated plastic waste was 31% of the total biomedical waste in 2005 and 53% in 2019. Conclusion  The generation of biomedical waste is likely to see significant upward trends unless diligent deliberations are held between different stakeholders in regard to the reintroduction of reusable materials and waste reduction strategies. Introduction Over the years, there has been immense growth and advancement in health care facilities. As a consequence of this betterment and expansion, production of health care waste has seen an exponential upward trend. Waste generated by a health care facility can be infectious or noninfectious. The infectious waste is hazardous and poses serious threat to patients, health care workers, public health, and the environment. As per the World Health Organization (WHO), approximately 75 to 90% of the total health care waste generated by the health facilities is nonhazardous. The remaining 10 to 25% waste is dangerous, infectious, toxic, or with radioactive components. 1 Waste generation from individual health care facilities can vary based on the type or level of health care facility and location of health care facilities, rural or urban. It may reflect upon the differences in the services provided, scale, organizational complexity, availability of resources, and the number of medical and other staff. Quantification of waste generation can be used to establish baseline data on the rates of production in different medical areas. It also helps in planning, budgeting, calculating revenues from recycling, optimizing waste-management systems, and assessing environmental impact. We have attempted to analyze the amount of biomedical waste (BMW) generated by a tertiary care hospital in New Delhi. Materials and Methods This is a retrospective study conducted in a tertiary care hospital in New Delhi from 2005 to 2019. The BMW generated and collected from various parts of the hospital was quantified and analyzed further. The waste generated before the Biomedical Waste Management Rules 2016 was segregated according to the provisions of Biomedical Waste Management and Handling Rules 1998. After the notification of the new rules in 2016, the hospital started complying with the requirements of the revised rules. For the ease of description, waste is classified under three categories: incinerable waste, recyclable plastic waste, and sharp and glass waste. 
Quantification is done in terms of the total amount of waste generated annually, the amount of waste generated per bed per day, and compound annual growth rate (CAGR) for total BMW and for all three types of waste categories. Results The total amount of BMW generated in 2005 was 65,658 kg, which substantially increased to 374,712 kg in 2019, with a CAGR of 12.5% (►Table 1). CAGR was calculated to be 8.4, 16.4, and 15.4% for incinerable waste, plastic waste, and sharp and glass waste, respectively. The overall estimated plastic waste was 31% of the total BMW in 2005 and increased to 53% in 2019. The total number of beds in the hospital increased from 1,000 in 2005 to 1,447 in 2019. Waste generated per bed per day in different categories is depicted in ►Fig. 1, and overall it was 0.179 kg in 2005, which has increased fourfold to reach 0.709 kg in 2019. CAGR for per bed per day waste has been calculated to be 5.7% for incinerable waste, 13.6% for plastic, 12.6% for sharp and glass waste, and 9.8% for total waste. Discussion According to a joint study conducted by ASSOCHAM (Associated Chambers of Commerce and Industry of India) -Velocity, various health sectors in India were generating approximately 550 tonnes of BMW per day in 2018. It is expected to be 780 tonnes per day by 2022, with an estimated CAGR of 9.13%. 2 The total BMW generated in our hospital from 2005 to 2019 has recorded an increase of 470.7%, with a CAGR of 12.5%. As per the WHO estimates, average hazardous waste production by a country varies from 0.2 to 0.5 kg/bed/ day based on their per capita income. 3 A study from a tertiary care hospital in India reported an average of 0.341 kg/bed/per day of infectious waste. 4 Another study from Nigeria reported medical waste generation ranged from 0.116 to 0.561 kg/ bed/day in seven hospitals, with an average generation of approximately 0.181 kg/bed/day. 5 Our hospital was producing an average BMW of 0.179 kg/bed/day in 2005, which has increased four times in 2019 to reach 0.709 kg/bed/day. This continued increase reflects advances in the delivery of health care provided by our hospital over the years, and being a public hospital, its bed strength has always been fully occupied. As of July 2018, there were 1,478 bedded and 3,916 nonbedded health care facilities in Delhi, which produced 24,667.05 kg of BMW every day. However, there are only two common biomedical waste treatment facilities (CBWTFs) to cater to these health care facilities. 6 The way BMW is growing as seen in our hospital, the number of CBWTFs is grossly inadequate to handle the current quantum of waste, and this capital city of Delhi would need to address this issue on the immediate priority of strengthening the number of these facilities. The infectious plastic waste generated by our hospital from 2005 to 2019 has increased by 874.34%, with a CAGR of 16.4%. In comparison to incinerable waste, the quantity of plastic waste has significantly increased over these years. The CAGR for plastic waste has been 16.4%, which is almost double the CAGR for incinerable waste (8.4%). The plastic waste has also increased at a greater rate of 4% annually as compared with the total BMW and constituted 31% of the total BMW in 2005, but the figure reached to 53% in 2019. Single-use items such as disposable syringes, needles, catheters, and body fluid collection bags, have become an integral part of the health care delivery and play a significant role in the control of hospital-associated infections. 
But over the years, single-use variations of some medical devices have been made available, replacing the previous models that were sterilized and reused repeatedly. This replacement of reusable materials with single-use disposables has resulted in a logarithmic expansion in the generation of plastic waste as is evident by the increase in quantities of plastic waste in our hospital. The majority of plastic waste produced by health care facilities, if properly segregated, is likely to be recycled as per the Biomedical Waste Management Rules 2016. Only blood bags and waste contaminated by cytotoxic drugs is supposed to be incinerated. Improper management of plastic waste may result in adverse health and environmental effects. Combustion of plastics, especially chlorinated ones, may cause a generation of various hazardous substances such as smoke, carbon monoxide, dioxins, furans, and free radicals such as benzene. Some of these substances have negative effects on human and animal health, mainly affecting the endocrine and reproductive systems. Some of these are also well-known carcinogens. Plastic is estimated to be persisting in the environment for hundreds of thousands of years, but it is likely to be far longer in deep sea and nonsurface polar environments. Plastic debris poses a considerable threat by choking and starving wildlife. [7][8][9] This creates a sad juxtaposition, in which we are contributing to the negative health effects created by the manufacture and disposal of plastics while delivering care to our patients. With the advancement in sterilization techniques, we should consider giving a serious thought about reverting back to the use of instruments that can be easily sterilized and reused or exploring the possibilities of biodegradable/ compostable plastics in health care. 7,9,10 As the demand for plastic in health care continues to grow, it is highly imperative that manufacturers of medical supplies are encouraged to produce and supply products that have minimal impact on the environment. In addition, medical scientists need to explore the possibilities of treatment modalities that result in reduced generation of plastic and other BMW. Conclusion This analysis of BMW data over a period of 15 years provides baseline information for policy development at individual hospitals as well as the national level. Generation of BMW is likely to see significant upward trends unless diligent deliberations are held between different stakeholders in regard to the reintroduction of reusable materials and waste reduction strategies. Health care waste management would require strengthening of capacity in areas of manpower and infrastructure development. It would also require intersectoral cooperation and coordination between different organizations. Funding None. Conflicting Interest None declared.
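The growth figures quoted above are compound annual growth rates (CAGR). The short sketch below reproduces the arithmetic for the total biomedical waste reported in the article (65,658 kg in 2005 and 374,712 kg in 2019); the choice of 14 compounding periods for 2005-2019 is an assumption of this example, and depending on the exact convention used the result lands close to, but not exactly at, the reported 12.5%.

```python
def cagr(start_value: float, end_value: float, periods: int) -> float:
    """Compound annual growth rate: the constant yearly rate linking start to end."""
    return (end_value / start_value) ** (1.0 / periods) - 1.0

total_2005_kg = 65_658
total_2019_kg = 374_712
periods = 2019 - 2005               # 14 annual compounding steps between the two totals

rate = cagr(total_2005_kg, total_2019_kg, periods)
print(f"Total BMW CAGR 2005-2019 ~ {rate:.1%}")   # ~13%, versus the reported 12.5%
```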
2021-05-10T00:04:42.578Z
2021-01-25T00:00:00.000
{ "year": 2021, "sha1": "bd67df1f92d3d8c0c7972afcb863d4cb9a43db11", "oa_license": "CCBYNCND", "oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0041-1723056.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c6f54dcde0841e8cf7dd640fab7e59f0c0bc0af3", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Environmental Science", "Medicine" ] }
824047
pes2o/s2orc
v3-fos-license
Abnormal expression of CDK11p58 in prostate cancer Background CDK11p58 is a member of the large family of p34cdc2-related kinases whose functions are linked with cell cycle progression, tumorigenesis and apoptotic signaling. Our previous investigation demonstrated that CDK11p58 repressed androgen receptor (AR) transcriptional activity and was involved in the negative regulation of AR function. Methods CDK11p58 expression was examined in prostate cancer tissues and adjacent tissues by IHC and qRT-PCR. Cell apoptosis was detected by flow cytometry. The metastasis of cancer cells was evaluated by the Transwell assay. Finally, we further investigated the underlying molecular mechanisms by examining the expression levels of relevant proteins using western blot analysis. Results We found that both the RNA and the protein expression of CDK11p58 were low in prostate cancer tissues compared with adjacent noncancerous tissues. CDK11p58 promoted prostate cancer cell apoptosis and inhibited metastasis in a kinase-dependent way. Finally, CDK11p58 could inhibit the metastasis of AR-positive prostate cancer cells through inhibition of integrin β3 and MMP2. Conclusions These data indicate that CDK11p58 is an anti-metastasis gene product in prostate cancer. Introduction CDK11 is encoded by two highly homologous p34cdc2-related genes, Cdc2L1 and Cdc2L2 [1,2], and is known as PITSLRE protein kinase due to the conserved PITSLRE motif within the protein kinase domain [3]. CDK11p58 is one isoform of CDK11 and is closely related to cell cycle arrest and apoptosis in a kinase-dependent manner [2,4]. Previous studies revealed that CDK11p58 promotes centrosome maturation and bipolar spindle formation [5,6]. Our study has shown that cyclin D3 is vital for the kinase activity of CDK11p58. Cyclin D3 and CDK11p58 were involved in the regulation of AR-mediated transactivation. The cyclin D3/CDK11p58 holoenzyme kinase complex represses AR function by phosphorylating AR at Ser-308 [7]. Also, we found that CDK11p58 was autophosphorylated at Thr-370. Thr-370 is responsible for the autophosphorylation, dimerization, and kinase activity of CDK11p58 [4]. Dysregulated AR signaling is implicated in several types of tumor, including carcinomas of the prostate, breast, liver and bladder [8]. Abnormal AR expression in prostate cancer is correlated with metastasis and aggressiveness [9]. Tumor metastasis is the main cause of lethality of prostate cancer, because conventional therapies like surgery and hormone treatment rarely work at this stage [9]. Tumor cell migration, invasion and adhesion are necessary processes for metastasis [10]. Metastasis is a consequence of many biological events, during which cancer stem cells are shifted into a malignant state [9,11]. Previous reports showed that an activated Wnt/β-catenin pathway and AR expression in prostate cancer are correlated with metastasis and aggressiveness. In addition, the expression of MMP-7 protein, a target of the Wnt/β-catenin pathway, is associated with PSA. AR is involved in the metastasis and invasiveness of prostate cancer. The overexpression of AR promotes the migration and invasion of prostate cancer cells [12]. As CDK11p58 was involved in the negative regulation of AR, we speculated that CDK11p58 might be involved in the regulation of prostate cancer metastasis. In the current study, we further examined the expression of CDK11p58 in prostate cancer tissues and investigated its biological functions in prostate cancer.
Our study indicated that the expression of CDK11p58 was decreased in prostate cancer and that CDK11p58 was involved in the negative regulation of prostate cancer. Results Expression of CDK11 in normal prostate tissues and prostate cancer tissues CDK11p58 is located on human chromosome 1p36.33, a region frequently mutated in various cancers. To investigate the potential role of CDK11p58 in prostate carcinogenesis, we examined the expression of CDK11p58 protein in prostate cancer tissues and adjacent non-cancerous tissues. By western blot assay, we found that CDK11p58 was higher in normal prostate tissues than in cancer tissues (Figure 1A). The qRT-PCR assay showed that the mRNA level of CDK11p58 was also higher in the normal tissues than in the cancer tissues (P < 0.05, Figure 1B). Immunohistochemical (IHC) staining of CDK11p58 protein was performed in 20 paired tumor/non-tumor clinical tissue samples. This analysis revealed that CDK11p58 was clearly expressed in the nucleus. The expression of CDK11p58 was decreased in prostate cancer tissues compared with paired, normal prostate epithelium, with decreased staining intensity and a lower proportion of positively stained cells in prostate cancer tissue (6/20) versus normal tissue (14/20) (Figure 1C). A similar trend in the expression of Cyclin D3, a partner protein of CDK11p58, was also observed. These data indicate that CDK11p58 is downregulated in prostate cancer. CDK11p58 inhibited the proliferation and promoted the apoptosis of LNCaP cells CDK11p58 is a member of the large family of p34cdc2-related kinases whose functions are linked with cell cycle progression, tumorigenesis, and apoptotic signaling. [Figure 1 legend: Expression of CDK11 in normal prostate tissues and prostate cancer tissues. (A) Prostate cancer tissues were lysed and subjected to immunoblotting analysis as indicated. Protein levels were normalized to GAPDH. (B) 20 pairs of prostate cancer tissues and normal tissues were subjected to qRT-PCR as described above. The relative mRNA level is shown; P < 0.05. (C) CDK11 and Cyclin D3 immunostaining was performed in normal prostate tissues (a) and paired cancer tissues (b); all photomicrographs were amplified 400×.] So we first examined the biological functions of CDK11p58 in prostate cancer cells. A CCK-8 assay showed that CDK11p58 could inhibit the proliferation of LNCaP cells. To test whether the inhibition was dependent on CDK11p58 kinase activity, we used CDK11p58 mutants previously constructed in our laboratory (the kinase-dead mutants T370A and D224N, and the kinase-active mutant T370D). Compared with the control, T370A, D224N and wild-type CDK11p58 inhibited the proliferation of prostate cancer cells. The constitutively kinase-active mutant T370D exhibited an even stronger inhibitory effect than the others (P < 0.05, Figure 2B). The effect on apoptosis was also tested. Wild-type CDK11p58 and T370D could promote the apoptosis of LNCaP cells, but T370A failed to promote cancer cell apoptosis (P < 0.05, Figure 2A). These data suggested that CDK11p58 promoted the apoptosis of prostate cancer cells and that the effect was kinase dependent. CDK11p58 inhibited the metastasis of LNCaP cells CDK11p58, a G2/M-specific protein kinase, has been shown to be associated with apoptosis in many cell lines. However, its role in cancer invasion and metastasis remains unclear. We investigated the role of CDK11p58 in the migration and invasion of prostate cancer.
Scar assay showed that CDK11p58 could inhibit the migration of AR+ LNCaP cells; however, it failed to inhibit the migration of AR− PC3 cells (Figure 3A). This suggested that CDK11p58 might inhibit cancer cell migration through AR signaling. Then the transwell assay was carried out. Invasion of LNCaP cells transfected with wild-type CDK11p58 and the T370D constitutively active kinase mutant was significantly lower compared with control cells (P < 0.05; Figure 3B). In contrast, the T370A kinase-dead mutant failed to suppress metastasis of LNCaP cells. These data suggest that CDK11p58 inhibits prostate cancer metastasis in a kinase-dependent manner. Interaction between CDK11p58 and AR Previously, we showed that CDK11p58 interacts with AR in prostate cancer cell lines. In the present experiment, after pcDNA3.0-HA-CDK11p58 or the pcDNA3.0 vector control was transfected into LNCaP cells, we carried out the IF assay. As shown in Figure 4A, AR (green) and CDK11p58 (red) were localized mainly in the nucleus. CDK11p58 was co-localized in the nucleus with AR (yellow). However, the control vector pcDNA3.0 (red) was not localized in the nucleus and showed no colocalization with AR (Figure 4B). To further investigate the interaction between CDK11p58 and its mutants and AR, we next performed Co-IP assays. The data showed that AR interacted with CDK11p58, T370D and even T370A (Figure 4C). However, in our previous study, the other kinase-dead mutant D224N failed to interact with AR. These experiments demonstrated that CDK11p58 was capable of interacting with AR in a manner not necessarily dependent on its kinase activity. The exact mechanism needs further investigation. CDK11p58 inhibited the expression of integrin β3 and MMP2 We further examined the metastasis-related proteins involved in metastasis signaling. Western blot analysis revealed that overexpression of CDK11p58 attenuated integrin β3 and MMP2 expression in a dose-dependent manner, but not MMP3 (Figure 4D). The expression inhibition was also kinase dependent (Figure 4E). These data indicated that CDK11p58 may repress integrin β3 and MMP2 to inhibit the metastasis of prostate cancer. Discussion The incidence of prostate cancer (PCa), one of the most common cancers in elderly men, is increasing annually in the world. CDK11p58 is a mitotic protein kinase which has been shown to be required for different mitotic events such as centrosome maturation, chromatid cohesion and cytokinesis [1][2][3]. [Figure 3 legend: (A) Six hrs after transfection, a wound was simulated with a pipette tip. Migration was monitored for 12 hrs. The distance was measured by phase-contrast microscopy. The data are shown as mean ± SEM of the distances in n > 5 separate experimental areas. (B) LNCaP cells were transfected with CDK11p58 and its mutants. Six hrs after transfection, the transwell assay was carried out as described above. Crystal violet staining of LNCaP cells that invaded in the transwell assays is shown. The data are shown as mean ± SEM of the number of invading cells in n > 5 separate areas. *P < 0.05 versus vector controls.] Our previous study found that CDK11p58 repressed AR transcriptional activity and that AR was phosphorylated at Ser-308 by cyclin D3/CDK11p58 [4]. Furthermore, androgen-dependent proliferation of PCa cells was inhibited by cyclin D3/CDK11p58 through AR repression. Cyclin D3/CDK11p58 signaling is involved in the negative regulation of AR function.
In this study, we demonstrated for the first time that CDK11p58 expression is involved in the negative regulation of prostate cancer invasion in a kinase-dependent manner. Androgens and AR are indispensable for the development, regulation, and maintenance of the male phenotype and reproductive physiology [5,6]. The AR signaling pathways play critical roles in the development and progression of PCa, a leading cause of cancer death second to lung cancer in men [7]. Metastasis is a consequence of many biological events, during which cancer stem cells are shifted into a malignant state. Dysregulated AR signaling is implicated in several types of tumor, including carcinomas of the prostate, breast, liver and bladder. [Figure 4 legend: LNCaP cells were transfected with CDK11p58. After 48 hrs, cells were subjected to an immunofluorescent staining assay. Cells were fixed and reacted with a mouse monoclonal anti-AR antibody and a rabbit polyclonal anti-CDK11 antibody. The secondary antibodies were anti-rabbit IgG conjugated to fluorescein isothiocyanate and anti-mouse IgG conjugated to rhodamine red. The images were captured with a Leica confocal microscope and software provided by Leica. (C) LNCaP cells were transfected with CDK11p58 and its mutants. 48 hrs later, cells were lysed and subjected to immunoprecipitation with an anti-CDK11 antibody, followed by Western blot analysis with an anti-AR antibody in the top panel. The bottom panels show the expression levels of AR and CDK11p58 from the prostate cancer tissue lysates. (D) LNCaP cells were transfected with pcDNA3 and increasing doses of CDK11p58. After 48 h, cells were harvested and lysates subjected to immunoblotting analysis as indicated. Protein levels were normalized to GAPDH. (E) LNCaP cells were transfected with wild-type CDK11p58 or its mutants. After 48 h, cells were harvested and lysates subjected to immunoblotting analysis as indicated. Protein levels were normalized to GAPDH.] Previous studies have shown that AR is involved in the metastasis and invasiveness of prostate cancer cells. The overexpression of AR promotes the migration and invasion of BFTC 909 cells. Inhibition of AR could inhibit AR-enhanced cell migration and invasion. Another study shows that an activated Wnt/β-catenin pathway and AR expression in PCa are correlated with metastasis and aggressiveness. AR mRNA expression is also significantly higher in prostate cancer when compared to benign prostatic tissue [8]. One report revealed the important role of endothelial cells within the prostate cancer microenvironment in promoting prostate cancer metastasis and provided new potential targets in the IL-6 → AR → TGF-β → MMP-9 signaling axis to battle prostate cancer metastasis [9]. Activated AR can downregulate E-cadherin expression to promote the activation of epithelial-mesenchymal transition and tumor metastasis [10]. As CDK11p58 could inhibit the transcriptional activity of AR, we speculated that CDK11p58 could inhibit the metastasis of prostate cancer through AR signaling. First, we examined the expression of CDK11p58 in prostate cancer tissues and found that the expression of CDK11p58 was decreased in prostate cancer tissues compared with paired, normal epithelium, with decreased staining intensity and a lower proportion of positively stained cells in prostate cancer tissue. A similar trend in the expression of Cyclin D3, a partner protein of CDK11p58, was also observed.
As we reported before, CDK11p58 could inhibit the proliferation and promote the apoptosis of prostate cancer cells. To our surprise, for a Ser/Thr kinase, the proliferation inhibition was not fully kinase dependent, but the promotion of apoptosis was dependent on its kinase activity. As we speculated, CDK11p58 indeed inhibited the migration and invasion of AR+ LNCaP cells, but not of the AR− PC3 cell line. These data demonstrated that CDK11p58 could inhibit the metastasis of prostate cancer cells through the AR signaling pathway, and that this inhibition was also kinase dependent. Then we examined the interaction between CDK11p58 and AR. The data showed that the T370A and T370D mutants were both capable of interacting with AR. The interaction was not totally dependent on its kinase activity. However, in our previous study, the other kinase-dead mutant D224N failed to interact with AR. This suggested that the aspartic acid D224 of CDK11p58 was necessary for the interaction with AR, not just because of its kinase activity. Possibly, the D224N mutation has changed the CDK11p58 protein conformation and thus influenced the interaction with AR. The exact mechanism needs further investigation. This result indicated that both CDK11p58 and T370A could interact with AR, but only the kinase-active CDK11p58 could inhibit the metastasis of AR-positive prostate cancer cells. Western blot assay showed that the metastasis-related gene products integrin β3 and MMP2, but not MMP3, were inhibited by CDK11p58 in a dose- and kinase-dependent manner. Conclusions Abnormal expression of CDK11p58 in prostate cancer tissue leads to dysfunction of cell apoptosis and to cancer metastasis. CDK11p58 inhibits the metastasis of AR-positive prostate cancer cells through inhibition of integrin β3 and MMP2 in a kinase-dependent manner. These data indicate that CDK11p58 is an anti-metastasis gene product in prostate cancer. Taken together, we demonstrate a new role for CDK11p58 as an anti-metastasis gene in prostate cancer. Materials Fetal bovine serum (FBS), Dulbecco's modified Eagle medium (DMEM), RPMI 1640 and LipofectAMINE reagent were purchased from Invitrogen. The anti-cyclin D3 monoclonal antibody and the mouse and rabbit secondary antibodies were purchased from Cell Signaling. The anti-CDK11 and anti-AR polyclonal antibodies were purchased from Santa Cruz Biotechnology. HRP-conjugated goat anti-rabbit and HRP-conjugated goat anti-mouse IgG secondary antibodies were also purchased from Santa Cruz Biotechnology. The anti-MMP2 polyclonal antibody was purchased from Bioworld. The anti-MMP3 polyclonal antibody was purchased from Epitomics. The qRT-PCR assay system was purchased from Tiangen. Cell culture and cell transfections The PC3 and LNCaP cell lines were obtained from the cell bank of our laboratory. Cells were grown in RPMI 1640 medium with 10% FBS, 100 units/ml penicillin, and 100 µg/ml streptomycin at 37°C and 5% CO2. Transfection for immunoprecipitation was performed in 100-mm dishes with 8 µg of total plasmid. CDK11 Immunohistochemistry Expression levels of CDK11 in postoperative paraffin-embedded tumor specimens from prostate cancer patients were detected with immunohistochemistry (IHC). The antibody concentrations were as follows: anti-CDK11, 1:100 dilution. Nuclear expression of CDK11 is defined as positive. The detailed staining procedures strictly followed the supplier's recommendations. Negative controls were obtained by incubating parallel slides with the primary antibodies omitted.
In addition, sections with confirmed positive staining from each run served as positive controls. Immunostaining of the whole slide area was evaluated by two independent pathologists who remained unaware of the tumor characteristics and other staining results. Immunoprecipitation and western blotting Prostate cancer cells were first lysed by ultrasound in 1 ml of co-immunoprecipitation (CoIP) buffer (50 mM Tris-HCl (pH 7.5), 150 mM NaCl, 0.1% NP-40, 5 mM EDTA, 5 mM EGTA, 15 mM MgCl2, 60 mM β-glycerophosphate, 0.1 mM sodium orthovanadate, 0.1 mM NaF, 0.1 mM benzamide, 10 μg/ml aprotinin, 10 μg/ml leupeptin, 1 mM PMSF). Detergent-insoluble materials were removed by centrifugation. Whole tissue lysates were incubated with 2 μg of the relevant antibody at 4°C for 2 h. Pre-equilibrated protein G-agarose beads were added, collected by centrifugation after overnight incubation and then gently washed three times with the lysis buffer. The bound proteins were eluted and analyzed using Western blots. An antibody to GAPDH was used to ensure equivalent loading. Transwell invasion Cell invasion was assayed using BD BioCoat growth factor reduced (GFR) Matrigel invasion chambers (BD, CA). LNCaP cell suspension (0.5 ml; 10 × 10^4 cells/ml) was added to the inside of the inserts. Assays with transfected cells were performed at 37°C and 5% CO2 for 24 h. After incubation, noninvading cells were removed from the upper surface of the membrane using cotton-tipped swabs. The cells on the lower surface of the membrane were stained with crystal violet. Cells were counted in the central field of triplicate membranes. Confocal microscopy LNCaP cells grown to 50% confluence on coverslips were transiently transfected with the indicated plasmids. After 48 h of transfection, cells were washed with PBS, fixed in 4% formaldehyde, permeabilized in 0.2% Triton X-100/PBS and blocked in 1% BSA for 1 h at room temperature. The coverslips were stained with anti-CDK11 and anti-AR antibodies for 2 h at room temperature, followed by incubation with fluorescein isothiocyanate (FITC)-conjugated goat anti-rabbit secondary antibody and rhodamine-conjugated goat anti-mouse secondary antibody for 1 h at room temperature. Cells were washed three times with PBS and stained with Hoechst 33258 solution (50 μg/ml) in a dark chamber. The coverslips were washed as described above, inverted, mounted on slides, and examined with a Zeiss or Leica TCS SP5 confocal microscope. Statistical analysis The experimental data were expressed as the mean ± standard deviation, and the statistical significance between different groups was determined using t-tests. All statistical tests were two-sided and P values less than 0.05 were considered significant.
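To illustrate the statistical analysis described above (two-sided t-tests with a P < 0.05 threshold), the following is a minimal sketch of how the tumor-versus-normal comparison of relative CDK11p58 expression could be run. The numeric values are invented placeholders, not the study's data, and whether the authors applied a paired or an unpaired test is not stated; a paired test is shown here only because the 20 tissue samples are paired.

```python
import numpy as np
from scipy import stats

# Hypothetical relative CDK11p58 mRNA levels (normalized to GAPDH) in 20 paired samples.
normal = np.array([1.00, 0.92, 1.10, 0.85, 1.05, 0.98, 1.20, 0.90, 1.15, 0.88,
                   1.02, 0.95, 1.08, 0.93, 1.12, 0.97, 1.04, 0.89, 1.06, 0.99])
tumor  = np.array([0.55, 0.70, 0.62, 0.48, 0.80, 0.66, 0.58, 0.73, 0.51, 0.69,
                   0.60, 0.75, 0.57, 0.64, 0.52, 0.71, 0.59, 0.68, 0.63, 0.56])

# Paired, two-sided t-test with the significance threshold used in the paper (P < 0.05).
t_stat, p_value = stats.ttest_rel(tumor, normal)
print(f"t = {t_stat:.2f}, P = {p_value:.3g}, significant: {p_value < 0.05}")
```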
2017-07-07T10:30:05.027Z
2014-01-08T00:00:00.000
{ "year": 2014, "sha1": "06e26d40903cd1ef76f882d2a561434ba3b90629", "oa_license": "CCBY", "oa_url": "https://cancerci.biomedcentral.com/track/pdf/10.1186/1475-2867-14-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "06e26d40903cd1ef76f882d2a561434ba3b90629", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234066284
pes2o/s2orc
v3-fos-license
Nasal Septal Perforation: Experience of Management Introduction Nasal septal perforation is the loss of composite tissue comprising the mucosa, bone or cartilage structures that form the nasal septum. Nasal septum perforation has many causes. Though it may be idiopathic, the most common causes are iatrogenic, such as nasal surgeries. Among the other reasons are septal hematoma, nasal picking habit, nasal cauterization due to nosebleeds, nasotracheal intubation, cocaine use, vasculitis and inflammatory diseases such as sarcoidosis. This study aims to review the approach to management of patients with nasal septal perforation who underwent repair of the perforation in a tertiary clinic, in the light of current literature. Materials and Methods In this study, the records of 27 patients who were diagnosed with nasal septal perforation and treated surgically in a tertiary clinic between January 2015 and June 2019 were reviewed retrospectively. Results The successful closure rate of perforations was 74%. In 4 of the 7 patients whose perforations were not completely closed, the perforation size was larger than 2 cm in diameter. Conclusion Successful repair of nasal septal perforation depends largely on the cause, location and size of the perforation, the cartilage and bone tissue at the perforation edges, the surgical technique and the surgeon's experience. Nasal septal perforation (NSP) is not common and its treatment is difficult. Trauma (nasal surgery, septal fracture, septal hematoma, nasal foreign bodies and nasal piercing, nasal picking, septal cauterization, nasotracheal intubation, etc.), long-term use of nasal spray, smoking habit, cocaine use, inflammation (vasculitis, collagen vascular diseases, sarcoidosis, Wegener granulomatosis), infection (tuberculosis, syphilis, lepromatous leprosy, diphtheria, etc.), chemical irritants and neoplasms are among the causes of septal perforation. [1][2][3][4] In recent years, nasal steroid and decongestant sprays have become increasingly important causes. Unfortunately, the most common causes of septal perforation are septal surgeries such as submucous resection and septoplasty. 5 In this study, 27 patients who underwent nasal septal perforation repair in a tertiary clinic were reviewed retrospectively and the approach to nasal septal perforation is presented in the light of current literature. Materials and Methods In this study, the records of 27 patients who were diagnosed with nasal septal perforation and treated surgically in a tertiary clinic between January 2015 and June 2019 were reviewed retrospectively. Age, gender, size of the nasal septal perforation, etiology of the perforation, surgical technique applied, surgical treatment results and period of follow-up were evaluated. According to the size of the nasal septal perforation, perforations < 0.5 cm were grouped as small, 0.5-2 cm as medium and > 2 cm as large. Perforation repair was performed in all patients with open rhinoplasty under general anesthesia with the help of mucosal flaps and interposition grafts. Surgical Technique: All cases were operated on under general anesthesia. Open rhinoplasty was started with a transcolumellar incision. This incision was combined with alar rim incisions on both sides and the back of the nose was elevated. Then, by entering between the medial crura of the alar cartilages, the septum was reached and the upper tunnel was carefully created superior to the perforation. An inferior tunnel was created at the lower edge of the perforation.
The mucoperichondrium and then the mucoperiosteum opposite the inferior meatus were elevated on both sides. After completion of the elevation on both sides, excess septal cartilage or perpendicular laminae of the ethmoid and in some cases, tragal cartilage was harvested for the reconstruction of the perforation. After the edges of the perforation were deepithelialized, four tunnels were created on both sides by the advancement flap technique, and extended up to the top of the nose, the bottom of the nose and the inferior turbinate. Then, longitudinal incisions were made on both sides of the nasal ceiling and mucosa at the base, and the septal flaps were shifted to close the perforation and an interposition graft was inserted. The mucosal flaps were then sutured to the residual mucosa with 5/0 Vicryl® (Ethicon, Inc., Somerville, NJ) in a primer fashion, without being stretched. The dehiscences in the flap donor site at the top and bottom were left for secondary healing. After the operation, internal splints were placed on both sides of the nasal septum after the columellar incision was sutured with a 5/0 prolene suture. Oral broad-spectrum antibiotics and daily saline irrigation were used for 14 days postoperatively. In the postoperative period, patients were warned against smoking, vasoconstrictor sprays, nasal scratching, and straining. Results A total of 27 patients (15 (56%) males and 12 (44%) females) were included in the study. The average age of the patient was 32 (24-55 years old). There was a history of septal surgery in 12 patients (44%), trauma in 5 patients (19%) and chronic nasal spray usage in 2 (7%) patients (Table I). Etiology could not be determined in 8 (30%) patients. Complaints were nasal obstruction in 10 patients, crusting and bleeding in the nose in 7 patients, only bleeding in 5 patients, and noise in the nose while breathing in 5 patients (Table II). Perforation sizes ranged from 0.5 cm to 3 cm. The size of the nasal septal perforation was small (<0.5 cm) in 10 (37%) patients, medium (0.5-2 cm) in 10 (37%) patients, and large (> 2 cm) in 7 (26%) patients. The successful closure rate of perforations was 74%. (Fig. 1,2). In 4 of 7 patients whose perforations were not completely closed, the perforation was larger than 2 cm in diameter (Table III). The mean postoperative follow-up period of the patients was 4.3 years (10 months-8 years). Discussion Nasal septal perforation is the loss of composite tissue comprising the mucosa, bone or cartilage that form the nasal septum.6 Nasal septum perforation has many causes. Though it may be idiopathic, the most common causes are nasal surgeries such as submucosal resection, septoplasty, functional endoscopic sinus surgery. 7 Among other reasons, septal hematoma, nasal picking habit, nasal cauterization for nosebleeds, nasotracheal intubation, cocaine use, vasculitis, inflammatory diseases such as sarcoidosis, Wegener granulomatosis and infections such as tuberculosis, leprosy, syphilis, diphtheria can be listed. 8,9 The majority of nasal septal perforations are asymptomatic and are diagnosed during examinations done for other reasons. Symptomatic patients have epistaxis, nasal obstruction, discharge, pain and whistling. 10 Most of the symptoms occur due to turbulence of nasal airflow, which is caused by the perforation. 11 Posterior perforations cause fewer symptoms than the anterior perforations because the nasal mucosa moistens the respiratory air rapidly. 
12 Symptoms may vary according to the location, size, and cause of the perforation. A small perforation in the posterior septum can be asymptomatic, but it can cause a distinct whistling sound when the perforation is anterior. As the size of the perforation increases, laminar air flow deteriorates and turbulence increases. This causes drying, crusting, and nasal congestion. Also, a large anterior perforation may cause a saddle nose deformity with loss of structural support. In cocaine users, mid- to low-grade chondritis may cause pain, as in infectious perforations. 1,2,13 In our study, there were complaints of nasal obstruction in 10 patients, crusting and bleeding in the nose in 7 patients, bleeding in the nose in 5 patients, and noise in the nose in 5 patients. Depending on the location and size of the nasal septal perforation and on the resulting change in the nasal airflow, patients may have nasal congestion, discharge, crusting, whistling during breathing, nosebleeds, headache, and a foreign body sensation in the nose. 8 Conservative approaches, prosthesis application, and surgery are included in the treatment of nasal septal perforation. Moisturizing and softening ointment applications to the nasal cavity are conservative approaches. The use of septal buttons as a prosthesis is an option. However, ointment application and prosthesis placement are not sufficient to treat symptoms in all patients. The most effective method of restoring the normal physiology of the nose is surgical repair. 14 Many methods of surgical treatment of nasal septal perforation are described in the literature. However, the most important factors affecting the success of septal perforation repair, in addition to the surgeon's ability and experience, are the amount of tissue in the rest of the septum and the size and location of the septal perforation. 7,15 The approach is as important as the technique to be preferred in surgery. A closed or open technique can be used for surgical repair of septal perforations. The advantages of the closed technique are the absence of an external scar, minimal tissue damage and minimal damage to anatomical integrity. In contrast to these advantages, an insufficient surgical field of view during suturing of the mucoperichondrial and mucoperiosteal flaps, along with difficulty in placing grafts, can be counted as disadvantages. The advantages of the open technique are a better field of view, easier access to the area of the perforation, and the surgeon's ability to use both hands. 8 The biggest disadvantage of the open technique is that it disrupts the supporting structures and creates an external scar. 6 In our study, an open approach was applied to all patients. Techniques commonly described include repair with bilateral septal mucosal flaps with interposition connective tissue grafts, repair with unilateral septal mucosal flaps with or without an interposition graft, repair with bilateral septal mucosal flaps without a nasal graft, nasolabial flaps and buccal mucosa flaps, and composite grafts or flaps. 16 In our study, perforation repair was performed with the help of advancement mucosal flaps and interposition grafts. Conclusion Successful repair of nasal septal perforation depends primarily on the cause, location and size of the perforation, the cartilage and bone tissue at the perforation edges, the surgical technique and the surgeon's experience.
Since nasal septal perforation is mostly caused by mucoperichondrial and mucoperiosteal tears in nasal surgeries, bilateral mucosal tears should be repaired immediately after detection. In the light of available literature, it is not possible to pick a single technique that provides guaranteed success in nasal septal perforation repair. The surgical method to be preferred should be determined according to the size, location of the perforation and the amount of tissue left in the remaining septum. To increase the success of surgical results attention should be paid towards creating a contralateral flap prepared from different regions, not stretching the flaps in the repaired area, suture lines not crossing each other and providing a comfortable surgical view. Whichever surgical method is applied, repair of chronic perforation is difficult. For this reason, it would be a more appropriate approach to prevent a septal perforation in the first place.
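The reported overall closure rate corresponds to 20 successful repairs out of 27 (7 perforations were not completely closed). As a small worked example of that arithmetic, the sketch below recomputes the rate and adds a 95% Wilson confidence interval; the interval is an illustration only and is not part of the original report.

```python
import math

closed, total = 20, 27            # 27 repairs, 7 not completely closed (from the paper)
p_hat = closed / total
z = 1.96                          # 95% confidence

# Wilson score interval for a binomial proportion (illustrative, not from the paper)
denom = 1 + z**2 / total
centre = (p_hat + z**2 / (2 * total)) / denom
half = z * math.sqrt(p_hat * (1 - p_hat) / total + z**2 / (4 * total**2)) / denom
print(f"closure rate = {p_hat:.1%}, 95% CI ~ [{centre - half:.1%}, {centre + half:.1%}]")
```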
2021-05-10T00:02:57.098Z
2021-02-05T00:00:00.000
{ "year": 2021, "sha1": "348e510bde9eefdaf9b8a3dc2f57da31db5f58ca", "oa_license": "CCBYNC", "oa_url": "https://bjohns.in/journal3/index.php/bjohns/article/download/296/293", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "28da21bd71def5e000478b14085630e040eb483d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119183560
pes2o/s2orc
v3-fos-license
Status and Prospects for Heavy Flavour Physics at LHC The Large Hadron Collider will be a unique place to find new physics in the next decade. A huge production of b and c quarks will allow a rich programme of Heavy Flavour Physics to be carried out either by the multipurpose experiments ATLAS and CMS or by LHCb, the experiment designed for such physics. An overview of the LHC machine and of the experiments' performance with the first 2010 data will be given. The start-up has been very bright and some first LHC heavy flavour results will be presented. The b physics programme at the LHC will be illustrated with three examples: the searches for rare decays such as B_s → µ+µ−, the CP measurements from B_s → J/ψφ and CP measurements of the Cabibbo-Kobayashi-Maskawa (CKM) angle γ. Some prospects for an upgrade of the LHCb detector will also be given. Introduction Besides the direct search for new particles, high precision measurements in the b and c sector are a complementary way to find New Physics (NP). After many successes in heavy flavour physics over the past decade, conducted by the B factory experiments Babar and Belle, as well as the CDF and D0 experiments at the Tevatron hadron collider, the proton-proton Large Hadron Collider (LHC) installed at CERN will be the most intense source of b and c hadrons. Even though it has been confirmed that the CKM mechanism is the major source of CP violation observed so far and that the description of flavour-changing neutral currents in the Standard Model (SM) is correct, there is still room for NP. The accuracy of the tests is still limited. The b → d transition has been measured at the level of 10-20% accuracy; however, NP effects can still be large in other processes like the b → s transition. The LHC experiments will have a rich heavy flavour physics programme thanks to a design LHC luminosity of 10^34 cm^-2 s^-1 and a large bb cross section of ∼500 µb at a centre-of-mass energy of √s = 14 TeV, which represents about 1% of all visible interactions. As in the case of the Tevatron, all flavours of b-hadrons will be produced (B_d, B_u, B_s, B_c, Λ_b, ...). At design parameters, the LHC will collide proton bunches at a frequency of 40 MHz. Three of the LHC experiments will study CP violation and rare decays in the b sector: the two general purpose detectors ATLAS and CMS, and the LHCb detector dedicated to heavy flavour physics. ATLAS and CMS are hermetic and have full coverage up to a pseudorapidity of 2.5. They have been designed to run at the full LHC luminosity of 10^34 cm^-2 s^-1 for the direct search of new particles. Nevertheless they have a b physics programme in the first years of the LHC when running at a luminosity of 10^33 cm^-2 s^-1, when it is planned to collect around 10 fb^-1 per year. Both experiments have first a fast custom-built electronics trigger which reduces the event rate from 40 MHz to below 100 kHz, exploiting mainly the muon detectors. Software triggers reduce the rate to an output rate of around 200 Hz, with b physics accounting for 5 to 10% of the total trigger resources. Beyond this luminosity, first, the number of interactions per crossing can reach 10 or 20, which is not appropriate for precision measurements; second, triggering is even more challenging and only some specific rare b decays with two muons in the final state can still be studied. By contrast, LHCb is a detector optimized for b and c physics precision studies at the LHC.
It is a single-arm open spectrometer, covering the pseudorapidity range [1.9 − 4.9], in order to exploit the fact that bb pair production is sharply peaked forward-backward. It incorporates precision vertexing and tracking systems, particle identification over a wide momentum spectrum and the capability to trigger down to very small momenta. LHCb will operate at a reduced luminosity of 2 × 10^32 cm^-2 s^-1, which will be kept locally controlled by appropriately focusing the beam. At this luminosity the majority of the events have a single pp interaction. The fast custom-built electronics trigger of LHCb will reduce the trigger rate only to 1 MHz by applying much lower p_T/E_T trigger thresholds on muon, electron, photon and hadron candidates. The remaining reduction is performed by software triggers that exploit the full detector readout and lead to an output rate of 2 kHz of interesting heavy flavour events across a wide spectrum of final states. LHC machine and experiments' performance A lot of records were reached by the LHC in 2010. First, on 30 March 2010 beams collided for the first time at √s = 7 TeV in the LHC, marking the start of the LHC research programme. The first collisions were done with one bunch per beam, corresponding to a luminosity of 10^27 cm^-2 s^-1, far from the ultimate goal of 2808 bunches. The number of particles in each bunch was increased, the beam size at the interaction point was squeezed down, and the machine then ran with 13 bunches in each beam, allowing a new luminosity record of 2 × 10^29 cm^-2 s^-1 to be set in May. The LHC was able to run smoothly with bunches at the design intensity, that is, with 1.1 × 10^11 protons per bunch. A big jump in luminosity was obtained in mid-September when fills were delivered with 56 bunches arranged in trains of eight bunches per train, 47 bunches colliding at ATLAS, CMS and LHCb. The number of bunches and trains was increased and, on October 14th, the main objective of reaching a luminosity of 10^32 cm^-2 s^-1 by the end of the 2010 proton running was reached. On November 4th, the proton running for 2010 in the LHC came to an end. Physics running was interspersed with periods of machine development and around 40 pb^-1 were collected by ATLAS, CMS and LHCb. The experiments have been running with about 90% efficiency. During the year the triggers of the detectors, which are highly configurable, evolved considerably to match the exponential increase of the LHC luminosity and the physics requirements. The next target for the machine is to collect an integrated luminosity of 1 fb^-1 before the end of 2011. The detectors are performing very well. The calorimeters were among the first detectors to be calibrated. Quickly a mass resolution on the π^0 mass of 20 MeV/c^2 for ATLAS, 14 MeV/c^2 for CMS and 7.2 MeV/c^2 for LHCb was obtained, close to Monte Carlo (MC) expectations. The impact parameter resolution for 2 GeV/c tracks is 60 µm in ATLAS, 50 µm in CMS and 25 µm in LHCb, not far from MC expectations. Further improvement will be achieved by better alignment. In the LHCb VErtex LOcator detector the alignment among sensors is better than 4 µm and the fill-to-fill variations are less than this value. The di-muon spectrum has been measured over a wide range. J/ψ → µ+µ− events have been used as a standard candle for the calibration of the tracking system. In the whole detector acceptance a mass resolution on the J/ψ (resp.
ϒ) of 70 (170) MeV/c^2 for ATLAS, 47 (100) MeV/c^2 for CMS and 14 (47) MeV/c^2 for LHCb was obtained, approaching MC expectations. Particle identification is already performing amazingly well. For instance, LHCb, which has a very powerful particle identification system over a wide momentum range (2-100 GeV/c) based on two Ring Imaging CHerenkov detectors, obtained an average efficiency for K± detection of about 95% with a π → K misidentification probability of 7%, close to MC-level performance. First LHC heavy flavour results Clean charm signals reconstructed in the first nb^-1 of data [1,2] already make it possible to firm up exciting prospects for measurements of D^0−D̄^0 mixing and CP violation in the charm sector [3]. Bottom production has been measured with a few nb^-1 of data at √s = 7 TeV using the inclusive decay b → J/ψX [2,4]. In addition LHCb used a second method [5] which exploits the semileptonic channel b → D^0µνX, with D^0 → K−π+. In this analysis the D^0 mesons from a parent b are separated from those directly produced on the basis of the impact parameter of the reconstructed D tracks measured with respect to the primary vertex. Integrating over the LHCb pseudorapidity acceptance yields the value σ(pp → H_b X; √s = 7 TeV; 2 < η_b < 6) = 75.3 ± 5.4 ± 13 µb. Extrapolating to the full solid angle using PYTHIA 6.4 and averaging with the LHCb preliminary measurement obtained with the first method corresponds to a total bb production cross-section σ(bb; √s = 7 TeV) = 298 ± 15 ± 43 µb [2,6]. This value agrees with the expectations and confirms that the b yield assumed in the design of LHCb was correct. Future for Heavy Flavour Physics at LHC The B_s → µ+µ− decay is predicted to be very rare in the SM (BR = (3.6 ± 0.3) × 10^-9 [7]) since it involves flavour-changing neutral currents and experiences a large helicity suppression, but it is sensitive to NP and could be strongly enhanced in SUSY. The best current limit is achieved by CDF (BR < 3.6 × 10^-8 @ 90% CL) [8] while D0 achieved BR < 4.2 × 10^-8 @ 90% CL [9]. Since the final state of this mode contains only muons, it is easily accessible to ATLAS, CMS and LHCb. This experimental search has to deal with the problem of an enormous level of background, dominated by random combinations of two muons originating from two distinct B decays. Since an absolute branching ratio measurement would be experimentally challenging, it is planned to measure the branching ratio of this decay relative to the control channel B+ → J/ψK+, which was already observed in the first LHC data. ATLAS and CMS plan to perform cut-based analyses to separate signal from background using similar variables [10,11]. Assuming σ_bb = 500 µb at √s = 14 TeV, with 10 fb^-1 ATLAS expects to have 5.6 signal candidates for 14 background candidates, while CMS with 1 fb^-1 expects to have 2.4 signal candidates for 6.5 background candidates. This last number can be translated into an exclusion limit of BR < 1.6 × 10^-8 @ 90% CL, which corresponds to BR < 2.1 × 10^-8 @ 90% CL when rescaling with the quoted measurement of σ_bb by LHCb at √s = 7 TeV. The limited amount of MC is a cause of large uncertainty on these estimates. LHCb will exploit its excellent tracking and vertexing capabilities and will use an approach to measure this branching ratio which is philosophically similar to that of the Tevatron. A loose selection will be applied and then a global likelihood will be constructed, the analysis being made in a 3-parameter space [12]. Prospects from the data are encouraging.
The two-body hadronic decays B → h+h− are used as control channels. A mass resolution of 24 MeV/c^2 has been measured, very close to the MC expectation (22 MeV/c^2). The impact parameter resolution is in agreement with MC for p_T > 2 GeV and the background is at the expected level. With 1 fb^-1, and using the σ_bb measured at √s = 7 TeV, LHCb expects 6 signal candidates for 30 background candidates. This yields the sensitivity to exclude a branching ratio down to 3.4 × 10^-8 at 90% CL with 50 pb^-1 (close to the data set collected by the end of 2010), approaching the Tevatron limit. The data set collected by the end of 2011 should amount to 1 fb^-1. It should allow LHCb to observe this decay at 5σ down to 5 times the SM expectation (i.e. for BR > 1.7 × 10^-8), or to exclude branching ratios down to 7 × 10^-9 at 90% CL.

β_s measurements from B_s → J/ψφ

The interference between the B0_s decay to J/ψφ with or without mixing gives rise to a CP-violating phase φ_J/ψφ. In the SM this phase is very small and predicted to be −2β_s ≈ −0.04, where β_s is the smallest angle of the "b-s unitarity triangle". It can, however, receive sizable NP contributions through box diagrams. CDF and D0 have reported measurements of the B_s mixing phase with a large central value of β_s, but with poor precision [13]. The analysis is complicated by the fact that the final state involves two vector mesons, so that two orbital angular momentum states can occur. The determination of the CP violation parameter, as well as of the contributions from the different angular momentum states, is achieved by a time-dependent, flavour-tagged, angular analysis of the decay rates. All LHC experiments have access to this decay, as they can trigger on the J/ψ decaying into a muon pair, but the ATLAS and CMS studies on this topic [14] are quite old and need to be revisited, so the prospects quoted in the following refer to LHCb only. With 2 fb^-1 of data taking, LHCb should reconstruct 114k events with S/B = 2. Note that the LHCb background level is expected to be significantly higher because LHCb uses a lifetime-unbiased event selection, which, however, results in a signal sample with a higher per-event sensitivity to the parameter φ_J/ψφ. This will allow a precision of 0.03 rad on φ_J/ψφ. The first data are encouraging [3]: with the first 34 pb^-1 of data taking, LHCb has reconstructed around one thousand B_s → J/ψφ candidates (Figure 1). Already at start-up, with O(0.1 fb^-1), LHCb should rapidly achieve better sensitivity than the Tevatron and pin down whether there really is any sign of new physics.

γ measurements

The CKM angle γ is the least well constrained of the angles of the unitarity triangle. Many channels can contribute to γ measurements. The B0_{d,s} → h+h− family of decays, where h stands for a π or K meson, have decay rates with non-negligible contributions from penguin diagrams, making them sensitive to NP. The dependence on γ comes from time-dependent CP measurements of B0_d → π+π− and B0_s → K+K−, which allow γ to be extracted relying on U-spin symmetry. The decay B0_s → D±_s K∓ and its charge conjugate can proceed through two tree decay diagrams, the interference of which gives access to the phase γ if the B_s-B̄_s mixing phase is determined otherwise (e.g. with B_s → J/ψφ, see section 4.2). The charm decays of charged B mesons proceed through tree-level diagrams and enable a direct SM measurement of γ. Different strategies exist for measuring γ, depending on the final states.
Because all the decays involved in the measurement of γ have fully hadronic final states, ATLAS and CMS will not be able to contribute to these measurements, but LHCb will greatly improve the precision on γ. Particle identification with the RICH detectors and very good mass resolution will be the key ingredients of the LHCb measurements. In the first sample of ∼3.1 pb^-1, LHCb has observed clean B0_{d,s} → h+h− peaks, with yields so far matching expectations. In the 2011 run LHCb will collect the world's largest samples of both B0 and B_s decays. It is estimated [15] that the precision on γ achievable by LHCb will be (4-5)° with 2 fb^-1 of data taking.

Prospects for Heavy Flavour Physics at LHC: LHCb upgrade

Within a few years we will need even more precision, and hence more statistics, either to understand the nature of the NP that will have been found at the LHC and to elucidate its flavour structure, or to probe higher mass scales if NP has not yet been seen. In both cases it is very important to be able to run the LHCb detector at higher luminosity, the LHC being a natural b factory, in order to collect around 100 fb^-1. The LHCb upgrade strategy consists in running at 10 times the design luminosity (i.e. 2 × 10^33 cm^-2 s^-1). A first phase, around 2016, will allow running at 10^33 cm^-2 s^-1. In order to increase the efficiency, the trigger strategy has to be redesigned and based on software only, allowing the use of many discriminants including secondary vertex information. This is possible with a readout of the whole detector at 40 MHz. The first trigger level is thus removed and the High Level Trigger will reduce the output rate to 20 kHz, a factor of 10 higher than at present. This software trigger will be very flexible and able to cope with any scenario. As a consequence, the front-end electronics have to be rebuilt in order to deal with the 40 MHz readout. This is also the case for some very-front-end ASICs, which are limited to a readout frequency of 1 MHz. In addition, an increase of the luminosity will cause higher occupancy and a higher radiation dose for the detectors around the beam pipe. Therefore, the inner parts of the tracking system have to be redesigned, namely the vertex detector, the trigger tracker and the inner tracker. This is the baseline of the first phase of the LHCb upgrade, and R&D has started in many areas in order to achieve it. A Letter of Intent will be published early in 2011. This upgrade will be complementary to that of a Super B factory (SuperKEKB/Belle II or SuperB).
Beyond Conventional Security in Sponge-Based Authenticated Encryption Modes

The Sponge function is known to achieve 2^{c/2} security, where c is its capacity. This bound was carried over to its keyed variants, such as SpongeWrap, to achieve a min{2^{c/2}, 2^κ} security bound, with κ the key length. Similarly, many CAESAR competition submissions were designed to comply with the classical 2^{c/2} security bound. We show that Sponge-based constructions for authenticated encryption can achieve the significantly higher bound of min{2^{b/2}, 2^c, 2^κ}, with b > c the permutation size, by proving that the CAESAR submission NORX achieves this bound. The proof relies on rigorous computation of multi-collision probabilities, which may be of independent interest. We additionally derive a generic attack based on multi-collisions that matches the bound. We show how to apply the proof to five other Sponge-based CAESAR submissions: Ascon, CBEAM/STRIBOB, ICEPOLE, Keyak, and two out of the three PRIMATEs. A direct application of the result shows that the parameter choices of some of these submissions are overly conservative. Simple tweaks render the schemes considerably more efficient without sacrificing security. We finally consider the remaining one of the three PRIMATEs, APE, and derive a blockwise adaptive attack in the nonce-respecting setting with complexity 2^{c/2}, therewith demonstrating that the techniques cannot be applied to APE. We remark that ICEPOLE v1,v2 consists of three configurations (two with security level 128 and one with security level 256) and Keyak v1 of four configurations (one with an 800-bit state and three with a 1600-bit state). The analysis directly applies to SpongeWrap [19] and DuplexWrap [22], upon which Keyak v1 is built.
Our results imply that the initial submissions of these CAESAR candidates were overly conservative in choosing their parameters, since reducing c would have led to the same bound. For instance, Ascon-128 could take (c, r) = (128, 192) instead of (256, 64), NORX64 (the proposed mode with 256-bit security) could increase its rate by 128 bits, and GIBBON-120 and HANUMAN-120 could increase their rate by a factor of 4, without affecting their mode security levels (a small numerical illustration is given at the end of this discussion). These observations only concern the mode security, where characteristics of the underlying permutation are set aside. Specifically, the concrete security of the underlying permutations plays a fundamental role in the choice of parameters. For instance, the authors of Ascon [33,34], NORX [7,8], and PRIMATEs [2,3] acknowledge that non-random properties of some of the underlying primitives exist. Furthermore, the authenticity bound degrades as a function of the number of forgery attempts f: min{2^{(r+c)/2}, 2^c/f, 2^κ}. In practical applications, the number of forgery attempts may be limited, but if this is not possible, caution must be taken. We refer to [75] for a discussion.

Tightness of the Result

The earlier version of this article by Jovanovic et al. [53] had a security bound of the form min{2^{(r+c)/2}, 2^c/r, 2^κ}, showing a security loss logarithmic (in bits) in the rate. This loss was, however, not justified by any existing attack; it arose as an artifact of naively bounding the probability of a multi-collision occurring in the outer state, where multiple evaluations of the underlying primitive map to the same outer value. In this article, we thoroughly analyze multi-collisions and derive bounds on the size of multi-collisions for various possible choices of r and c. Most importantly, we can conclude that if r ≪ c or r ≫ c, multi-collisions have no effect on the security. If r ≈ c, the security loss approaches 1.4c/(log₂ c − 2), as opposed to the factor r loss from [53]. We refer to Table 2 for a comprehensive description of the bound. Note that for all schemes in Table 1, r ≪ c or r ≫ c. The rigorous analysis of multi-collisions relies on an application of Stirling's approximation and the Lambert W function. It is not only applicable to Sponge-based modes. For example, there are quite a few cryptographic schemes that have been attacked using multi-collisions, such as block-cipher-based hashing schemes [73], identification schemes [41], the JH hash function [58], the MDC-2 hash function [54], HMAC and the ChopMD MAC [68], the LED block cipher [70], iterated Even-Mansour [32], and strengthened HMAC [88]. Multi-collisions have also influenced various security upper bounds. Typical examples are the indifferentiability proof for the ChopMD construction [27], the collision resistance proof for the Lesamnta-LW hash function [46], and the indistinguishability proof for RMAC [52], where the bound is O(2^n/n) due to the existence of n-collisions. The compression function proposed by Hirose et al. [47] has a similar type of bound. Finally, the recent line of research on the keyed Sponge and Duplex constructions [6,18,20,26,31,38,60,69] strongly relies on "multiplicities." Some of these security analyses can be improved using our rigorous analysis of multi-collisions. For r < c, the old bound of [53] is dominated by 2^{(r+c)/2} and is in fact tight. The new bound improves over the one of [53] for r ≥ c, and in this work we additionally show that the new bound is tight for all possible choices of (r, c).
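As announced above, the parameter observations can be made concrete with a small calculation. The snippet below compares the leading exponents (in bits) of the classical bound min{2^{c/2}, 2^κ} and of the new bound min{2^{b/2}, 2^c, 2^κ}, ignoring the small multi-collision factor ρ; the two parameter sets are the Ascon-128 choices mentioned above.

```python
# Leading-order mode-security exponents, in bits, with the multi-collision
# factor rho ignored (this is only the dominant-term comparison).
def old_level(c, r, kappa):                 # classical min{2^(c/2), 2^kappa}
    return min(c // 2, kappa)

def new_level(c, r, kappa):                 # new bound min{2^(b/2), 2^c, 2^kappa}
    return min((c + r) // 2, c, kappa)

kappa = 128
for label, (c, r) in [("Ascon-128, original (c,r)=(256,64) ", (256, 64)),
                      ("Ascon-128, tweaked  (c,r)=(128,192)", (128, 192))]:
    print(label, "classical:", old_level(c, r, kappa), "bits,",
          "new:", new_level(c, r, kappa), "bits")
# Under the new bound both choices give 128 bits, so moving capacity into the
# rate does not lower the mode-security level; under the classical bound the
# tweaked choice would appear to drop to 64 bits.
```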
To demonstrate this tightness, we present a multi-collision-based adversary that meets the bound proven in our analysis. The attack is described for a generalized Sponge construction that covers CBEAM, ICEPOLE, Keyak v1, NORX, and STRIBOB. Even for variants with the additional XOR of the secret key at the end (Ascon, GIBBON, and HANUMAN, see Fig. 4), a similar adversary with slightly higher complexity can meet the bound. A comparison of the earlier bound of [53], the new bound, and the attack complexity for the case of c = 256 and r ≥ c is given in Fig. 1.

APE

One of the interesting questions triggered by the publication of [53] concerned APE, the third of the PRIMATEs. In more detail, the schemes listed in Table 1 are proven to achieve a beyond-2^{c/2} security level against nonce-respecting adversaries, but the schemes are insecure against nonce-misusing adversaries. In contrast, APE is proven to achieve 2^{c/2} security in the nonce-reuse scenario [4], and it is of interest to investigate what security guarantees APE offers against nonce-respecting adversaries. In this work, we include an analysis of APE in this setting and show that there exists a nonce-respecting blockwise adaptive adversary that can break the privacy with a total complexity of about 2^{c/2}. In other words, while APE is more robust against nonce-misusing adversaries up to common prefix, in the nonce-respecting setting the schemes listed in Table 1 achieve a strictly higher security level.

Publication History and Subsequent Work

An extended abstract of this article appeared in the proceedings of ASIACRYPT 2014 [53]. This article is the full version of [53], and additionally includes the proofs that were absent in the proceedings version. New with respect to the full version of [53] are (i) a more rigorous analysis of multi-collisions and the therewith induced improved security bound (Sect. 3), (ii) the generic attack on Sponge-based authenticated encryption schemes demonstrating tightness of the bound (Sect. 5), and (iii) a proof that, unlike the schemes of Table 1, APE does not achieve beyond-2^{c/2} security in the nonce-respecting setting (Sect. 7). Parts (i) and (ii) are due to Sasaki and Yasuda [90], with whom we have collaborated to combine their ideas for a complete analysis of the Sponge-based modes. In response to the observations made in [53], the designers of Ascon and NORX have reconsidered their parameter choices. The new parameter choices are also listed in Table 1 and testify to a significant security gain for Ascon v1.1 [34] without sacrificing efficiency, and a significant efficiency gain for NORX v2 [8] without sacrificing security. The adjustments will make the schemes faster and more competitive. Mihajloska et al. [61] recently generalized the analysis of [53] to the CAESAR submission π-Cipher [42,43], which is structurally different from NORX in the way it maintains state: a so-called "common internal state" is used throughout the evaluation. From a more general perspective, the work has triggered analysis in the direction of high-efficiency full-state keyed Duplexes [31,60,89]. The result of Mennink et al. [60] on the full-state keyed Duplex has prompted the designers of Keyak to perform a major revision of their scheme. In more detail, Keyak v2 [23] is built on top of the "Motorist" mode, an alternative to the full-state keyed Duplex that was analyzed by Daemen et al. [31]. We remark that the results on the full-state keyed Sponges and Duplexes are more general than the target design in this work.
The most important difference between [31,60] and our work is that we explicitly target nonce-based designs, and this allows for beyond 2 c/2 security. The work has, to certain extent, furthermore triggered the use of permutations for nonce-reuse secure authenticated encryption schemes [29,44,59] beyond APE. Parallel to the research on keyed Duplexes is the research on the keyed Sponges, i.e., keyed versions of the Sponge that only aim for authenticity. Bertoni et al. [18] introduced the original keyed Sponge. Chang et al. [26] suggested to put the key in the inner part of the Sponge. Andreeva et al. [6] formalized and improved the analysis of the outer-and inner-keyed sponges. The analysis was generalized to the full-state Sponge in [31,38,60,69], following upon ideas that date back to the donkeySponge [21]. Beyond authentication (and encryption), keyed versions of the Sponge have found applications in reseedable pseudorandom sequence generation [18,39]. Outline We present our security model in Sect. 2. In Sect. 3, we perform an in-depth analysis of multi-collisions with respect to Sponges. A security proof for NORX is derived in Sect. 4. Tightness of the bound is proven in Sect. 5. In Sect. 6, we show that the proof of NORX generalizes to other CAESAR submissions, as well as to SpongeWrap and DuplexWrap. We consider the security of APE against nonce-respecting adversaries in Sect. 7. The work is concluded in Sect. 8, where we also discuss possible generalizations to Artemia [1]. Security Model For n ∈ N, let Perm(n) denote the set of all permutations on n bits. When writing x $ ← − X for some finite set X , we mean that x gets sampled uniformly at random from X . For x ∈ {0, 1} n , and a, b ≤ n, we denote by [x] a and [x] b the a leftmost and b rightmost bits of x, respectively. For tuples ( j, k), ( j , k ) we use lexicographical order: Let Π be an authenticated encryption scheme, with an encryption function E and a decryption function D, where Here, N denotes a nonce value, H a header, M a message, C a ciphertext, T a trailer, and A an authentication tag. The values (H, T ) will be referred to as associated data. If verification is successful, then the decryption function D K outputs M, and ⊥ otherwise. The scheme Π is also determined by a set of parameters such as the key size, state size, and block size, but these are left implicit. In addition, we define $ to be an ideal version of E K , where $ returns (C, A) $ ← − {0, 1} |M|+τ for every query (N ; H, M, T ). We follow the convention in analyzing modes of operation for permutations by modeling the underlying permutations as being drawn uniformly at random from Perm(b), where b is a parameter determined by the scheme. An adversary A is a probabilistic algorithm that has access to one or more oracles O, denoted A O . By A O = 1 we denote the event that A, after interacting with O, outputs 1. We consider adversaries A that have unbounded computational power and whose complexity is solely measured by the number of queries made to their oracles. These adversaries have query access to (i) the underlying idealized permutations, (ii) E K or its counterpart $, and possibly (iii) D K . The key K is randomly drawn from {0, 1} κ at the beginning of the security experiment. The security definitions below follow [11,37,51,77,80]. Privacy Let p denote a list of idealized permutations, which Π may depend on. 
We define the advantage of an adversary A in breaking the privacy of Π as follows: where the probabilities are taken over the random choices of p, $, K , and A, if any. The fact that the adversary has access to both the forward and inverse permutations in p is denoted by p ± . We assume that adversary A is nonce-respecting, which means that it never makes two queries to E K or $ with the same nonce. By Adv priv Π (q p , q E , λ E ) we denote the maximum advantage taken over all adversaries that query p ± at most q p times, and that make at most q E queries of total length (over all queries) at most λ E blocks to E K or $. We remark that this privacy notion is also known as the indistinguishability under chosen plaintext attack (IND-CPA) security of an (authenticated) encryption scheme. Integrity As above, let p denote the list of underlying idealized permutations of Π . We define the advantage of an adversary A in breaking the integrity of Π as follows: where the probability is taken over the random choices of p, K , and A, if any. We say that "A forges" if D K ever returns a message other than ⊥ on input of (N ; H, C, T ; A) where (C, A) has never been output by E K on input of a query (N ; H, M, T ) for some M. We assume that adversary A is nonce-respecting, which means that it never makes two queries to E K with the same nonce. Nevertheless, A is allowed to repeat nonces in decryption queries. By Adv auth Π (q p , q E , λ E , q D , λ D ) we denote the maximum advantage taken over all adversaries that query p ± at most q p times, make at most q E queries of total length (over all queries) at most λ E blocks to E K , and at most q D queries of total length at most λ D blocks to D K . Multi-Collisions Consider the following game of balls and bins. Let R ≥ 1 be the number of bins and σ the number of balls. The σ balls are thrown uniformly at random into the R bins. By multcol(R, σ, ρ) we denote a ρ-collision, namely the event that there exists a bin that contains ρ or more balls after all σ balls are thrown. A folklore result [67, Theorem 3.1], [64, Lemma 5.1] states the following upper bound on the probability of a ρ-collision for ρ ≥ 2: where R ≥ 1 and σ ≥ ρ. Note that σ can be smaller or larger than R. The bound of (1) involves a binomial coefficient and hence factorials. To evaluate these factorials we rely on Stirling's approximation. Formally, Stirling's approximation can be written as an inequality as [71] x where π = 3.14 . . . and e = 2.71 . . ., which holds for all x ≥ 1. For the purpose of the paper we combine inequalities (1) and (2) in the following way. Let S be some positive number limiting the maximum value of σ , i.e., σ ≤ S. From (1) and (2), we get Remark 1. The probability that multcol(R, σ, ρ) occurs can also be bounded using the Chernoff bound [28]. Consider any fixed bin, and for i = 1, . . . , σ , denote Defining X = σ i=1 X i as the number of balls in that specific bin, the Chernoff bound states that for any t > 0 [64, Section 4.2], Pr (X ≥ ρ) ≤ Pr e t X ≥ e tρ ≤ Ex e t X e tρ . As in our case the events X i are mutually independent, One therefore finds, for any t > 0, Looking ahead, in our applications we will need an upper bound of this term of the form σ/S, where ρ is a function of R and S. The bound of (4) is more suited for that. An alternative approach to bound the probability that multcol(R, σ, ρ) occurs, is via the first and second moments, as done by Raab and Steger [74]. 
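As an aside, the balls-and-bins game described above is easy to sanity-check empirically for small parameters. The following sketch simply simulates the experiment and estimates Pr(multcol(R, σ, ρ)); the parameter values are toy choices for illustration, whereas the bounds in the text of course concern cryptographic sizes such as R = 2^r.

```python
# Monte Carlo estimate of Pr(multcol(R, sigma, rho)): throw sigma balls
# uniformly at random into R bins and report how often some bin ends up
# holding at least rho balls.
import random
from collections import Counter

def multcol_prob(R, sigma, rho, trials=10_000):
    hits = 0
    for _ in range(trials):
        loads = Counter(random.randrange(R) for _ in range(sigma))
        if max(loads.values()) >= rho:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    R, sigma = 2 ** 10, 2 ** 9          # toy parameters only
    for rho in (2, 3, 4, 5):
        print(rho, multcol_prob(R, sigma, rho))
```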
In detail, Raab and Steger demonstrate that Pr (multcol(R, σ, ρ(R, σ ))) = o(1) for various parameter settings and choices of ρ as a function of R and σ [74,Theorem 1]. This approach, as well as the related approaches in the field of cryptography [10,49], again does not fit our targeted upper bound. Lambert W Function Stirling's approximation contains a "self-exponential" function x x , and we will need to solve equations of the form for variable ξ . For this purpose we utilize the Lambert W function [71]. Consider the function f (w) = we w defined for complex numbers w. Then, the Lambert W function is the inverse relation of f . More precisely, Z = W (Z )e W (Z ) is the defining equation for W , and Eq. (6) can be solved, using W , as where D := ln d [30]. In this work, we can restrict the domain of W to real numbers X ≥ −1/e and the range to real numbers W (X ) ≥ −1, and we focus on the principal branch W p , which is a single-valued function. Hoornar and Hassani [50] derived the following inequality on W p (X ) for any X ≥ e: Back to (6), when ξ is restricted to real numbers, the solution (7) becomes It should be emphasized that this bound is valid only under the condition D ≥ e, or equivalently, d ≥ e e . Bounding Multi-Collision Probability We will derive Sponge-oriented bounds for ρ. In more detail, consider parameters b, r, c such that b = r + c, write R = 2 r , and S = min{2 b/2 , 2 c }. We will derive choices for ρ (depending on r and c), such that the probability of a multi-collision of (1) is bounded by σ/S. Then, where β := log 2 e + log 2 log 2 e. The proof of Lemma 1 is constructive, and the bounds for ρ are derived constructively rather than simply proven to hold. However, the reasoning is structurally different for the cases where r < c (cases (i-iv)) and for the cases where r ≥ c (cases (v-vii)). Proof of Lemma 1(i-iv). For the case r < c, our basic strategy is to bound Pr (multcol(R, σ, ρ)) by σ/S, where S = 2 b/2 , by means of setting for sufficiently large parameter θ . Note that, by the generalized pigeonhole principle, 2 (c−r )/2 is the minimum value of ρ when σ reaches S = 2 b/2 . Assume that ρ ≥ eS/R = e2 b/2 /2 r = e2 (c−r )/2 , i.e., θ ≥ e. Then, (4) becomes that is defined for real numbers ζ ∈ [0, 7.2]. It remains to show the following: Proof of claim. The derivative of ϕ is computed as Case (iv): c−2log 2 c+7.2 < r < c. The value of θ needs to increase as r approaches to c, and in general θ cannot be bounded by a constant but is rather a function of r and c. The Lambert W function can handle such a case, yielding a fairly sharp bound. Claim. Let c ≥ 13. The inequality holds for all r ∈ [c − 2 log 2 c + 7.2, c ]. (The condition c ≥ 13 is to make the range of r non-empty.) As we will show, the bound (16) "works" not only for r ≥ 2c but for all r > c. Moreover, it turns out that (16) is actually better than (15) for a large part of r ∈ (c, 2c ], except where r ≈ c. Claim. Let r > c. For ρ of (16), we have Pr (multcol(R, σ, ρ)) ≤ σ/S. Proof of claim. Define the function whose domain is the real numbers u ∈ (c, 2c ] with c ≥ 11 and β = log 2 e + log 2 log 2 e = 1.97 . . .. Then equation Δ c (u) = 0 becomes u = c + e log 2 u − eβ, whose solution we denote by u 0 . We differentiate Δ c with respect to u as Note that c + e log 2 r − eβ > c + e log 2 c − eβ, making the distinction between this case (vi) and the previous case (v) clear. NORX We introduce NORX at a level required for the understanding of the security proof and refer to Aumasson et al. 
[7,8] for the formal specification. Let p be a permutation on b bits. All b-bit state values are split into an outer part of r bits and an inner part of c bits. We denote the key size of NORX by κ bits, the nonce size by ν bits, and the tag size by τ bits. The header, message, and trailer can be of arbitrary length and are padded using 10 * 1-padding to a length of a multiple of r bits. Throughout, we denote the r -bit header blocks by Table 1. Although NORX starts with an initialization function init which requires the parameters (D, R, τ ) as input, as soon as our security experiment starts, we consider (D, R, τ ) fixed and constant. Hence, we can view init as a function that maps (K , N ) to where const is irrelevant to the mode security analysis of NORX, and will be ignored in the remaining analysis. After init is called, the header H is compressed into the rate, then the state is branched into D states (if necessary), the message blocks are encrypted in a streaming way, the D states are merged into one state (if necessary), the trailer is compressed, and finally the tag A is computed. All rounds are preceded with a domain separation constant XORed into the capacity: 01 for header compression, 02 for message encryption, 04 for trailer compression, and 08 for tag generation. If D = 1, domain separators 10 and 20 are used for branching and merging, along with pairwise distinct lane indices id k for k = 1, . . . , D (if D = 1 we write id 1 = 0). In Fig. 2 we depict NORX for D = 1 and D = 2. The privacy of NORX is proven in Sect. 4.1 and the integrity in Sect. 4.2. In both proofs we consider an adversary that makes q p permutation queries and q E encryption queries of total length λ E . In the proof of integrity, the adversary can additionally make q D decryption queries of total length λ D . To aid the analysis, we compute the number of permutation calls made via the q E encryption queries. The exact same computation holds for decryption queries with the parameters defined analogously. Consider a query to E K , consisting of u header blocks, v message blocks, and w trailer blocks. We denote its corresponding state values by as outlined in Fig. 2. Here, 4 We denote the number of state values by σ E, j , where the dependence on D is suppressed as D does not change during the security game. In other words, σ E, j denotes the number of primitive calls in the jth query to E K . Furthermore, we define σ E to be the total number of primitive evaluations via the encryption queries, and find that This bound is rather tight. Particularly, for D = 0 an adversary can meet this bound by only making queries without header and trailer. For queries to D K we define σ D, j and σ D analogously. where σ E is defined in (18), and where ρ = ρ(r, c) is the function defined in Lemma 1. Theorem 1 can be interpreted as implying that NORX provides privacy security as long as the total complexity q p + σ E does not exceed min{2 b/2 , 2 κ } and the total number of primitive queries q p , also known as the offline complexity, does not exceed 2 c /ρ. The presence of the term ρ makes the bound a bit unclear; in Table 2 we give the main implication of this bound for the various possible values of r and c as outlined in Lemma 1. See Table 1 for the security level of the various parameter choices of NORX: for NORX v1 [7], we are concerned with case (vi), where ρ = 2.5 = 3 for both b ∈ {512, 1024}; for NORX v2 [8], we are in case (vii), where ρ = 2. 
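Before turning to the proof, it may help to see the single-lane (D = 1) data flow of such a Sponge-based mode in code form. The sketch below is emphatically not NORX: the permutation is a placeholder, padding and parameters are toy values, and only the structure (initialization with key and nonce, header absorption, streaming message encryption, tag extraction, and a domain-separation constant XORed into the capacity before each phase) mirrors the description above.

```python
# Heavily simplified single-lane duplex-style AE sketch (structure only; the
# "permutation" below is a stand-in, not a cryptographic permutation).
import hashlib

R, C = 16, 16                               # toy rate/capacity in bytes, b = R + C
HDR, MSG, TAG = 0x01, 0x02, 0x08            # domain-separation constants

def perm(state: bytes) -> bytes:            # placeholder for the b-bit permutation p
    return hashlib.sha256(state).digest()[: R + C]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def with_domain(state: bytes, const: int) -> bytes:
    return state[:-1] + bytes([state[-1] ^ const])       # XOR const into the capacity

def encrypt(key: bytes, nonce: bytes, header, message):
    state = perm((key + nonce).ljust(R + C, b"\x00"))     # init(K, N), then p
    for h in header:                                      # absorb full R-byte header blocks
        state = perm(xor(state[:R], h) + with_domain(state, HDR)[R:])
    ciphertext = []
    for m in message:                                     # encrypt full R-byte message blocks
        c = xor(state[:R], m)
        ciphertext.append(c)
        state = perm(c + with_domain(state, MSG)[R:])
    tag = perm(with_domain(state, TAG))[:R]               # extract the tag
    return ciphertext, tag

if __name__ == "__main__":
    ct, tag = encrypt(b"k" * 16, b"n" * 8, [b"H" * R], [b"M" * R])
    print([c.hex() for c in ct], tag.hex())
```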
The proof is based on the observation that NORX is indistinguishable from a random scheme as long as there are no collisions among the (direct and indirect) evaluations of p. Due to uniqueness of the nonce, state values from evaluations of E K collide with probability approximately 1/2 b . Regarding collisions between direct calls to p and calls via E K : while these may happen with probability about 1/2 c , they turn out not to significantly influence the bound. The latter is demonstrated in part using the principle of multiplicities [18]: roughly stated, the maximum number of state values with the same outer part. We use Lemma 1 to bound multiplicities. The formal security proof is more detailed. Furthermore, we remark that, at the cost of readability and simplicity of the proof, the bound could be improved by a constant factor. Proof. Consider any adversary A with access to either ( p ± , E K ) or ( p ± , $) and whose goal is to distinguish these two worlds. For brevity, we write We start by replacing p ± by a random function to simplify analysis. This is done with a "URP-URF" switch [13], in which we make a transition from p ± to a primitive f ± defined as follows (as done by Andreeva et al. [4]). The primitive f ± maintains an initially empty list F of query/response tuples (x, y) where the set of domain and range values are denoted by dom(F) and rng(F), respectively. For a forward query f (x) with x ∈ dom(F), the value in {y | (x, y) ∈ F} which occurs lexicographically first is returned. For a new forward query f (x), the response y is randomly drawn from {0, 1} b , then the tuple (x, y) is added to F. The description for f −1 is similar. We let abort denote the event that a new query f (x) results in a value y where y is already in rng(F), or a new query f −1 (y) results in a value x where x is already in dom(F). By applying the triangle inequality, we have The two rightmost terms are bounded above by the maximum advantage of any adversary distinguishing p ± and f ± in at most q p + σ E queries. Since p ± and f ± are identical until abort, by the Fundamental Lemma of Game Playing [12,13] we have that the two rightmost terms are in turn bounded by We restrict our attention to A with oracle access to ( f ± , F), where F ∈ {E K , $}. Without loss of generality, we can assume that the adversary only queries full blocks and that no padding rules are involved since the padding rules are injective, allowing the proof to carry over to the case of fractional blocks with 10 * 1-padding. We introduce some terminology. Queries to f ± are denoted (x i , y i ) for i = 1, . . . , q p , while queries to F are written as elements (N j ; H j , M j , T j ; C j , A j ) for j = 1, . . . , q E . If F = E K , the state values are denoted as in (17), subscripted with a j: If the structure of (22) is irrelevant we refer to the tuple as (s j,1 , . . . , s j,σ E, j ), where we use the convention to list the elements of the matrix column-wise. In this case, we write parent(s j,k ) to denote the state value that lead to s j,k , with parent(s j,1 ) := ∅ and parent(s T j,0 ) := (s M j,1,v 1 , . . . , s M j,D,v D ). We remark that the characteristic structure of NORX, with the D parallel states, only becomes relevant in the two technical lemmas that will be used at the end of the proof. We point out that s j,1 corresponds to the initial state value of the evaluation, which requires special attention throughout the remainder of the proof. The remainder of the proof is divided as follows. 
In Lemma 2 we prove that ( f ± , E K ) and ( f ± , $) are identical until event occurs. In other words, by applying the Fundamental Lemma of Game Playing [12,13], Then, in Lemma 3 we bound this term by where ρ = ρ(r, c) is the function defined in Lemma 1. Noting that , this completes the proof via equations (19,21,23). Lemma 2. The outputs of ( f ± , E K ) and ( f ± , $) are identically distributed until event occurs. Proof. The outputs of f ± are sampled independently and uniformly at random in ( f ± , $). This holds in the real world as well, unless a query to f ± collides with an f ± query made via E K . Therefore, until guess occurs, the outputs of f ± are distributed identically in both worlds. Furthermore, f ± 's outputs are independent of the distinguisher's query history, hence, assuming all past queries were identically distributed across worlds, a query to f ± will not change the fact that both worlds are identically distributed, until guess occurs. Let N j be a new nonce used in the F-query (N j ; H j , M j , T j ), with corresponding ciphertext and authentication tag (C j , A j ). Denote the query's state values as in (22). Let u, v, and w denote the number of padded header blocks, padded message blocks, and padded trailer blocks, respectively. Consider the jth query. By the definition of $, in the ideal world we have (C j , A j ) $ ← − {0, 1} |M j |+τ . We will prove that (C j , A j ) is identically distributed in the real world, under the assumption that guess ∨ hit has not yet occurred. Denote the message blocks of M j by M j,k, for k = 1, . . . , D and = 1, . . . , v k . We As the state value s M j,k, −1 has not been evaluated by f before (neither directly nor indirectly via an encryption query), f (s M j,k, −1 ) outputs a uniformly random value from We remark that similar reasoning shows that a ciphertext block corresponding to a truncated message block is uniformly randomly drawn as well, yet from a smaller set. The fact that A j $ ← − {0, 1} τ follows the same reasoning, using that s tag j is a new input to f . Thus, Looking at the reasoning of the proof of Lemma 2 above, we notice that if event has not yet occurred, then each state value in an F-query is sampled independently and uniformly at random. In particular, once the adversary fixes the inputs to an F-query, each state value in that F-query is independent of the adversary's input, and independent of each other. Furthermore, the inner part of those state values are never released to the adversary, hence the adversary's future queries are independent of the inner parts of the state values. Hence, we have the following result: Lemma 3. Pr Proof. Consider the adversary interacting with ( f ± , E K ), and let Pr (guess ∨ hit) denote the probability we aim to bound. For i ∈ {1, . . . , q p }, define and key = ∨ i key(i), which corresponds to a primitive query hitting the key. Let j ∈ {1, . . . , q E } and k ∈ {1, . . . , σ E, j }, and consider any threshold ρ ≥ 1, then define Event multi( j, k) is used to bound the number of states that collide in the outer part. Note that state values s j ,1 are not considered here as they will be covered by key. We define multi = multi(q E , σ E,q E ), which is a monotone event. By basic probability theory, Pr (guess ∨ hit) ≤ Pr (guess ∨ hit | ¬(key ∨ multi)) + Pr (key ∨ multi) . 
(25) In the remainder of the proof, we bound these probabilities as follows (a formal explanation of the proof technique is given in "Appendix"): we consider the ith forward or inverse primitive query (for i ∈ {1, . . . , q p }) or the kth state of the jth construction query (for j ∈ {1, . . . , q E } and k ∈ {1, . . . , σ E, j }), and bound the probability that this evaluation makes guess ∨ hit satisfied, under the assumption that this query does not set key ∨ multi and also that guess ∨ hit ∨ key ∨ multi has not been set before. For the analysis of Pr (key ∨ multi) a similar technique is employed. Event guess. This event can be set in the ith primitive query (for i = 1, . . . , q p ) or in any state evaluation of the jth construction query (for j = 1, . . . , q E ). Denote the state values of the jth construction query as in (22). Consider any evaluation, assume this query does not set key ∨ multi and assume that guess ∨ hit ∨ key ∨ multi has not been set before. Firstly, note that x i = s init j for some i, j would imply key(i) and hence invalidate our assumption. Therefore, we can exclude s init j from further analysis on guess. For i = 1, . . . , q p , let j i ∈ {1, . . . , q E } be the number of encryption queries made before the ith primitive query. Similarly, for j = 1, . . . , q E , denote by i j ∈ {1, . . . , q p } the number of primitive queries made before the jth encryption query. -Consider a primitive query (x i , y i ) for i ∈ {1, . . . , q p }, which may be a forward or an inverse query, and assume it has not been queried to f ± before. Therefore, the probability that guess is set via a direct query is at most -Next, consider the probability that the jth construction query sets guess, for j ∈ {1, . . . , q E }. For simplicity, first consider D = 1, hence the message is processed in one lane and we can use state labeling (s j,1 , . . . , s j,σ E, j ). We range from s j,2 to s j,σ E, j (recall that s j,1 = s init j can be excluded) and consider the probability that this state sets guess assuming it has not been set before. Let k ∈ {2, . . . , σ E, j }. The state value s j,k equals f (s j,k−1 )⊕v, where v is some value determined by the adversarial input prior to the evaluation of f (s j,k−1 ), including input from (H j , M j , T j ) and constants serving as domain separators. By assumption, guess ∨ hit has not been set before, and f (s j,k−1 ) is thus randomly drawn from {0, 1} b . It hits any x i (i ∈ {1, . . . , i j }) with probability at most i j /2 b . Next, consider the general case D > 1. We return to the labeling of (22 where v 1 , . . . , v D are some distinct values determined by the adversarial input prior to the evaluation of the jth construction query. These are distinct by the XOR of the lane numbers id 1 , . . . , id D . Any of these nodes equals x i for i ∈ {1, . . . , q p } with probability at most i j D/2 b . Finally, for the merging node s T j,0 we can apply the same analysis, noting that it is derived from a sum of D new f -evaluations. Concluding, the jth construction query sets guess with probability at most i j σ E, j /2 b (we always have in total at most σ E, j new state values). Summing over all q E construction queries, we get Here we use that σ E, j k=1 i j σ E, j = q p σ E , which follows from a simple counting argument. Event hit. We again employ ideas of guess, and particularly that as long as guess ∨ hit is not set, we can consider all new state values (except for the initial states) to be randomly drawn from a set of size 2 b . 
Particularly, we can refrain from explicitly discussing the branching and merging nodes (the detailed analysis of guess applies) and label the states as (s j,1 , . . . , s j,σ E, j ). Clearly, s j,1 = s j ,1 for all j, j by uniqueness of the nonce. Any state value s j,k for k > 1 (at most σ E − q E in total) hits an initial state value s j ,1 only if [s j,k ] κ = K , which happens with probability at most σ E /2 κ , assuming s j,k is generated randomly. Finally, any two other states s j,k , s j ,k for k, k > 1 collide with probability Event key. For i ∈ {1, . . . , q p }, the query sets key(i) if [x i ] κ = K , which happens with probability 1/2 κ (assuming it did not happen in queries 1, . . . , i − 1). The adversary makes q p attempts, and hence Pr (key) ≤ q p /2 κ . Event multi. Event multi can be related to multcol of Sect. 3, in the following way. Consider any new state value s j,k−1 ; then it contributes to the bin If a threshold ρ needs to be exceeded for some α, at least ρ/2 of them are either of the first kind or of the second kind. The event multi can henceforth be seen as a balls and bins game with 2 r bins, σ E balls, and threshold ρ = ρ/2: By Lemma 1, we know that Pr multcol (2 r where ρ is the function described in Lemma 1 (parameters r, c are implicit). Note that we put ρ = 2ρ . Addition of the four bounds via (25) gives where ρ = ρ(r, c) is the function defined in Lemma 1. Theorem 2. Let Π = (E, D) be NORX based on an ideal underlying primitive p. Then, where σ E , σ D are defined in (18), and where ρ = ρ(r, c) is the function defined in Lemma 1. The bound is more complex than the one of Theorem 1, but intuitively implies that NORX offers integrity as long as it offers privacy and the number of forgery attempts σ D is limited, where the total complexity q p + σ E + σ D should not exceed 2 c /σ D . See Table 1 for the security level for the various parameter choices of NORX. Needless to say, the exact bound is more fine-grained. Proof. We consider any adversary A that has access to ( p ± , E K , D K ) and attempts to make D K output a non-⊥ value. As in the proof of Theorem 1, we apply a URP-URF switch to find Then we focus on A having oracle access to ( f ± , E K , D K ). As before, we assume without loss of generality that the adversary only makes full-block queries. We inherit terminology from Theorem 1. The state values corresponding to encryption and decryption queries will both be labeled ( j, k), where j indicates the query and k the state value within the jth query. If needed we will add another parameter δ ∈ {D, E} to indicate that a state value s δ, j,k is in the jth query to oracle δ, for δ ∈ {D, E} and j ∈ {1, . . . , q δ }. Particularly, this means we will either label the state values as in (22) with a δ appended to the subscript, or simply as (s δ, j,1 , . . . , s δ, j,σ δ, j ). Observe that from (26) we get A bound on the probability that A sets event is derived in Lemma 4. The remainder of this proof centers on the probability that A forges given that event does not happen. Such a forgery requires that [ f (s tag D, j )] τ = A j for some decryption query j. By ¬event, we know that s tag D, j is a new state value for all j ∈ {1, . . . , q D }, hence f 's output under s tag D, j is independent of all other values and uniformly distributed for all j. As a result, we know that the jth forgery attempt is successful with probability at most 1/2 τ . 
Summing over all q D queries, we get and the proof is completed via (26,27) and the bound of Lemma 4, where we again use that c) is the function defined in Lemma 1. Lemma 4. Pr Proof. Recall that event = guess ∨ hit ∨ Dguess ∨ Dhit. Employing events key and multi from Lemma 3, we find: The proof builds upon Lemma 3, and in particular we will use the same proof technique of running over all queries and computing the probability that a query sets event, assuming event has not been set before. The bounds on Pr (guess ∨ hit | ¬(key ∨ multi)) and Pr (key ∨ multi) carry over from Lemma 3 verbatim, where we additionally note that for a given query, the previous decryption queries are of no influence as by hypothesis Dguess ∨ Dhit was not set before the query in question. We continue with the analysis of Dguess and Dhit. Event Dguess. Note that the adversary may freely choose the outer part in decryption queries and primitive queries. Indeed, the ciphertext values that A chooses in decryption queries define the outer parts of the state values. Consequently, Dguess gets set as soon as there is a primitive state and a decryption state whose capacities are equal. This happens with probability at most Pr (Dguess | ¬(key ∨ multi)) ≤ q p σ D /2 c . Event Dhit. A technicality occurs in that the adversary can reuse nonces in decryption. To increase readability, we first state that any decryption state s satisfies [s] κ = K only with probability at most σ D /2 κ , and in the remainder we can exclude this case. Next, we define an event innerhit. Let (δ, j, k) and (δ , j , k ) be two decryption query indices, and let const ∈ {0, 01 ⊕ 02, 01 ⊕ 04, 01 ⊕ 08, 01 ⊕ 10, 02 ⊕ 04, 02 ⊕ 08, 02 ⊕ 20, 02 ⊕ 20 ⊕ id i , 04 ⊕ 08}: Note that for any choice of indices and const, we have Pr(innerhit(δ, j, k; δ , j , k ; const)) ≤ 1/2 c . We consider the general case D = 1. Consider thejth decryption query (N ; H, C, T ; A). Say it consists of u header blocks H 1 . . . H u , v ciphertext blocks C 1 . . . C v , and w trailer blocks T 1 . . . T w , and write its state values as in (17). Let (N δ, j ; H δ, j , C δ, j , T δ, j ; A δ, j ) be an older ciphertext tuple that shares the longest common blockwise prefix with (N ; H, C, T ; A). Note that this tuple may not be unique (for instance if N is new), and that it may come from an encryption or decryption query. Say that this query consists of u δ, j header blocks, v δ, j ciphertext blocks, and w δ, j trailer blocks, and write its state values as in (22). We proceed with a case distinction. (a) = ∞. Note that s T min{w,w δ, j } = s T δ, j,min{w,w δ, j } ⊕ 04 ⊕ 08. If this input to f is old, it implies innerhit(δ, j, min{w, w δ, j }; δ , j , k ; 04 ⊕ 08) for some (δ , j , k ) older than the current query (D,j, min{w, w δ, j }), which is the case with probability at most 1/2 c (for all possible index tuples). Otherwise, f generates a new value and new state value s (s T w+1 if w > w δ, j or s tag if w < w δ, j ), which sets Dhit if it sets innerhit with an older state s δ , j ,k under const = 0. This also happens with probability at most 1/2 c for any (δ , j , k ). This procedure propagates to s tag . In total, thejth decryption query sets Dhit with probability at most 5 As before, s T is a new input to f , except if innerhit(δ, j, ; δ , j , k ; 0) for some (δ , j , k ) older than the current query (D,j, ). 
This is the case 5 Note that if (δ, j) were not unique, then we similarly have s T −1 = s T δ , j , −1 and s T = s T δ , j , ⊕ (T 0 c ) ⊕ (T δ , j , 0 c ) = s T δ , j , for all other queries (δ , j ) with the same prefix (possibly XORed with 04 ⊕ 08). with probability at most 1/2 c for all possible older queries. The procedure propagates to s tag as before, and the same bound holds; (3) (N; H) = (N δ,j ; H δ,j ) but C = C δ,j . The analysis is similar but a special treatment is required to deal with the merging phase. Consider the ciphertext C to be divided into blocks C k, for k = 1, . . . , D and = 1, . . . , v k . Similarly for C δ, j . For We make a further distinction between whether or not ( 1 , . . . , D ) = (∞, . . . , ∞). (a) ( 1 , . . . , D ) = (∞, . . . , ∞). As C = C δ, j , there must be a k such that v k = v δ, j,k and thus that C k is a strictly smaller substring of C δ, j,k or vice versa. Consequently, and there is no merging phase, or ⊕ 02 ⊕ 08 if there is furthermore no trailer). Then, this state is new to f except if innerhit(δ, j, k, v k ; δ , j , k ; const) is set for the const described above. (We slightly misuse notation here in that v k is input to innerhit.) This means that also s T 0 will be new except if it hits a certain older state, which happens with probability 1/2 c . The reasoning propagates up to s tag as before, and the same bound holds; The reasoning of case (2b) carries over for all future state values; (4) N = N δ,j but H = H δ,j . The analysis follows fairly the same principles, albeit using const ∈ {0, 01 ⊕ 02, 01 ⊕ 04, 01 ⊕ 08, 01 ⊕ 10}; (5) N = N δ,j . The nonce N is new (hence the query shares no prefix with any older query). There has not been an earlier state s that satisfies [s] κ = K (by virtue of the analysis in hit and key, and the first step of this event Dhit). Therefore, s init is new by construction and a simplification of above analysis applies. Summing over all queries: where the last term comes from the exclusion of the event that any decryption state satisfies [s] κ = K . Together with the bound of Lemma 3 we find via (28), Fig. 3. Target structure in key recovery attack. where ρ = ρ(r, c) is the function defined in Lemma 1. Tightness of the Bound We derive a generic attack on Sponge-based authenticated encryption schemes. The attack exploits multi-collisions on the outer part of the internal state. Using the multicollision bounds of Suzuki et al. [91,92], we demonstrate that the attack actually matches the proven security bound, meaning that the bounds of Sect. 4 are tight. Therefore, we first describe our simplified target structure in Sect. 5.1. The attack is described in Sect. 5.2 and evaluated in Sect. 5.3. Target Structure We consider the simplified structure of Fig. 3. Without loss of generality, we consider a key K ∈ {0, 1} κ , nonce N ∈ {0, 1} b−κ (hence ν = b − κ), and we assume that init initializes the state as (K , N ) → K N . (The attack can be generalized to the setting where the key is absorbed in multiple evaluations of p, or where the key is XORed into the state before outputting A. See also Sect. 5.4.) We consider no associated data, or in terminology of Sect. 2, we put H, T ← Null. The message size must be at least one complete block. Note that, in many schemes, the message of one complete block will expand to two blocks by a padding procedure. We consider a general setting where the τ -bit authentication tag A may be generated in multiple extraction rounds (two in Fig. 3), and we assume that τ ≥ c. 
We ignore minor issues irrelevant to our attack, such as padding, frame bits, domain separation for message processing and tag generation parts, and truncation of the tag. As shown in Fig. 3, the b-bit state after the first permutation call is denoted s 1 . Its outer and inner part are denoted [s 1 ] r and [s 1 ] c , respectively. Then, an r -bit message block M 1 is XORed into [s 1 ] r and the first ciphertext block C 1 = [s 1 ] r ⊕ M 1 is output. The state is evaluated using the permutation, and the resulting state is s 2 . Note that the values M i and C i reveal the outer part of state s i as [s i ] r = M i ⊕ C i . Distinguishing Attacks via Key Recovery Let ρ ≥ 2. If 2 κ ≤ 2 c /ρ a naive key recovery attack can be performed in complexity 2 κ , and we assume that 2 κ > 2 c /ρ. We first give an overview of the attack. Once a b-bit state in the structure of Fig. 3 is recovered, the secret key K can be recovered immediately by computing the inverse of the permutation. Our attack aims to recover the internal state s 1 after the first permutation call. It consists of an online phase followed by an offline phase. In the online phase, the adversary searches for a ρ-collision on the r -bit value C 1 . It makes a certain amount of encryption oracle queries for different N and possibly different M 1 . Let q denote the total number of encryption queries needed. The online phase results in ρ pairs of (N , M 1 ) which produce the same C 1 but different [s 1 ] c . The adversary also stores the tag A for each pair. In the offline phase, the adversary recovers an inner part [s 1 ] c . Using the value C 1 , the same for all tuples, the value [s 1 ] c is exhaustively guessed. In a bit more detail, the adversary computes the authentication tag A from C 1 [s 1 ] c offline, and checks if there is a match with any stored tag. Because ρ tags are stored, the attack cost is about 2 c /ρ. The formal description of the attack is given below. Here, we denote the data D for the kth block in the jth query by D j,k . We omit the second subscript for the data where the block length is always 1, e.g., nonce N j . 1, 2, . . . , q and receive (C i,1 , A i,1 A i,2 . . .); 3. Find a ρ-collision on C ·,1 ; 4. Store ρ triplets of (N i , M i,1 , A i,1 A i,2 . . .) contributing to the ρ-collision. We denote the colliding value of C ·,1 by C, which is also stored. Offline If the resulting value matches nonce N i , output the first κ bits of the state as the recovered key K . Attack Evaluation In the online phase, the adversary does not strictly need to choose N and M 1 , a given list of q different tuples suffices. Thus, the attack is a known plaintext attack. The data complexity is q one-block messages and the memory to store q triples (N i , M i,1 , A i,1 A i,2 . . .) for i = 1, . . . , q is required. The time complexity of at least q memory access is also required. Intuitively, all the complexities in the online phase are q. In the offline phase, because ρ candidates are stored in the online phase and 2 c /ρ guesses are examined, one match is expected. If the internal state values match, the corresponding tag values also match. Thus, the right guess is identified. Due to the assumption that the tag size is at least c bits, the match likely only suggests the right guess. In addition, we can further filter out the false positive by r bits with the match of N in the last step. Thus, with a very high probability the key is successfully recovered. 
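The attack just described can be exercised end-to-end on a toy instance. In the sketch below all sizes (b = 18, r = 6, c = 12, κ = 8, ν = 10) are artificially small and purely illustrative, the permutation is a random table, and the "scheme" is only the simplified one-block structure of Fig. 3, not any concrete CAESAR candidate; at these toy sizes exhaustive key search would of course be cheaper, and the sole point is to see the ρ-collision online phase and the roughly 2^c/ρ-cost offline phase in action.

```python
# Toy demonstration of the multi-collision key-recovery attack on the
# simplified one-block structure of Fig. 3 (all parameters are illustrative).
import random
from collections import defaultdict

R_BITS, C_BITS = 6, 12                     # toy rate and capacity
B_BITS = R_BITS + C_BITS
KAPPA, NU = 8, B_BITS - 8                  # key and nonce sizes, kappa + nu = b
C_MASK = (1 << C_BITS) - 1

rng = random.Random(1)
p = list(range(1 << B_BITS))
rng.shuffle(p)                             # random permutation, stored as a table
p_inv = [0] * len(p)
for x, y in enumerate(p):
    p_inv[y] = x

KEY = rng.randrange(1 << KAPPA)            # the secret key to be recovered

def encrypt(nonce, m1):
    s1 = p[(KEY << NU) | nonce]            # init(K, N) = K || N, then p
    c1 = (s1 >> C_BITS) ^ m1               # C1 = [s1]_r xor M1
    tag = p[(c1 << C_BITS) | (s1 & C_MASK)]  # second permutation call; A := full state
    return c1, tag

# Online phase: nonce-respecting queries with known M1 = 0, collecting a
# multi-collision on the outer value C1.
buckets = defaultdict(list)
for nonce in rng.sample(range(1 << NU), 512):
    c1, tag = encrypt(nonce, 0)
    buckets[c1].append((nonce, tag))
c_star, group = max(buckets.items(), key=lambda kv: len(kv[1]))
stored = {tag: nonce for nonce, tag in group}      # the rho stored triplets

# Offline phase: guess the inner part of s1; expected cost about 2^c / rho.
for guess in range(1 << C_BITS):
    s1_guess = (c_star << C_BITS) | guess          # M1 = 0, so [s1]_r = C1
    tag = p[s1_guess]
    if tag in stored:                              # tag match identifies s1
        kn = p_inv[s1_guess]                       # invert p once: K || N
        if (kn & ((1 << NU) - 1)) == stored[tag]:  # extra filter on the nonce
            print("recovered key:", kn >> NU, "actual key:", KEY)
            break
```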
For the complexity, the only important factor is the time complexity of 2 c /ρ tag generation functions. What remains is to appropriately choose parameters for q and ρ so that the total complexity max{q, 2 c /ρ} is minimized. Suzuki et al. [91,92] showed that, when c ≤ r , the complexity q to find a ρ-collision with probability about 0.5 is given by c = r. We demonstrate tightness of the bound for the cases c = r = 128, c = r = 256, and c = r = 512. Note that, provided κ is large enough, the bound of Theorem 1 is dominated by 2 c /α with α = 1.4r log 2 r +r −c−2 (cf., Table 2). In Table 3 we evaluate the attack complexity so that max{q, 2 r /ρ} is minimized. This complexity is always bigger but very close to the proven bound, which shows tightness of security bound. c < r. It is common practice to enlarge the rate of Sponge-based authenticated encryption so that more data can be processed per permutation call. We demonstrate tightness of our attack for the case of c = 256 and r ∈ [257, 768]. Figure 1 depicts the evaluated attack complexity and our security bound for c = 256. For the sake of completeness, it also includes the 2 c /r bound of the original ASIACRYPT 2014 article [53], which decreases by approximately a logarithmic factor log 2 r . Note that the adversary needs to find a multi-collision on r bits with only 2 c trials. When the rate increases, and particularly when r > 2c, the adversary cannot even find an ordinary collision within 2 c trials. In this case, the multi-collision-based attack will not be influential. Due to this, our bound is getting close to 2 c when r becomes large. The advantage of the attack comes from the number of generated multi-collisions. Considering that the number of multi-collisions can only take discrete values while our bound can take sequential values, our bound is strictly tight. c > r. Note that, for c > r , the security bound of Theorem 1 is not dominated by 2 c /α but rather by 2 b/2 , omitting constants (cf., Table 2). Tightness of the bound follows by a naive attack that aims to find collisions on the b-bit state. Distinguishing Attacks Without Key Recovery As later explained in Fig. 4, several practical designs use key K for the initialization as well as for the tag generation. Those schemes cannot be distinguished with a straightforward application of the above generic procedure, yet it is still possible to distinguish them by increasing the attack complexity only by 1 bit or so. We focus on Ascon, GIBBON and HANUMAN, in which K in the tag computation prevents the adversary from computing tag A offline. This can be solved by extending the number of message blocks in each query. Instead of the tag A i,1 A i,2 . . ., outer parts of the subsequent blocks [s i,2 ] r [s i,3 ] r . . . take a role of filter to identify the correct guess. If the number of filtered bits is much bigger than c, a match suggests the correct guess with very high probability. Owing to the additional message blocks, the attack complexity increases by 1 bit or so, depending how many message blocks are added. In HANUMAN, K can be recovered from the internal state by inverting the permutation to the initial value. Meanwhile in Ascon and GIBBON, K cannot be recovered and the adversary only can mount distinguishing attacks. Other CAESAR Submissions In this section we discuss how the mode security proof of NORX generalizes to the CAESAR submissions Ascon, the BLNK mode underlying CBEAM/STRIBOB, ICE-POLE, Keyak (v1 only), and two out of the three PRIMATEs. 
Before doing so, we make a number of observations and note how the proof can accommodate small design differences. -NORX uses domain separation constants at all rounds, but this is not strictly necessary and other solutions exist. In the privacy and integrity proofs of NORX, and more specifically at the analysis of state collisions caused by a decryption query in Lemma 4, the domain separations are only needed at the transitions between variable-length inputs, such as header to message data or message to trailer data. This means that the proofs would equally hold if there were simpler transitions at these positions, such as in Ascon. Alternatively, the domain separation can be done by using a different primitive, as in GIBBON and HANUMAN, or a slightly more elaborated padding, as in BLNK, ICEPOLE, and Keyak; -The extra permutation evaluations at the initialization and finalization of NORX are not strictly necessary: in the proof we consider the monotone event that no state collides assuming no earlier state collision occurred. For instance, in the analysis of Dhit in the proof of Lemma 4, we necessarily have a new input to p at some point, and consequently all next inputs to p are new (except with some probability); -NORX starts by initializing the state with init(K , N ) = (K N 0 b−κ−ν ) ⊕ const for some constant const and then permuting this value. Placing the key and nonce at different positions of the state does not influence the security analysis. The proof would also work if, for instance, the header is preceded with K N or a properly padded version thereof and the starting state is 0 b ; -In a similar fashion, there is no problem in defining the tag to be a different τ bits of the final state; for instance, the rightmost τ bits; -Key additions into the inner part after the first permutation are harmless for the mode security proof. Particularly, as long as these are done at fixed positions, these have the same effect as XORing a domain separation constant. These five modifications allow one to generalize the proof of NORX to Ascon, CBEAM and STRIBOB, ICEPOLE, Keyak, and two PRIMATEs, GIBBON and HANU-MAN. The only major difference lies in the fact none of these designs accommodates a trailer, hence all are functions of the form except for one instance of ICEPOLE which accommodates a secret message number. Additionally, these designs have σ δ ≤ λ δ + q δ for δ ∈ {D, E} (or σ δ ≤ λ δ + 2q δ for CBEAM/STRIBOB). We always write H = (H 1 , . . . , H u ) and M = (M 1 , . . . , M v ) whenever notation permits. In below sections we elaborate on these designs separately, where we slightly deviate from the alphabetical order to suit the presentation. Diagrams of all modes are given in Fig. 4. The parameters and achieved provable security levels of the schemes are given in Table 1. We remark that the attack of Sect. 5 carries over to CBEAM and STRIBOB, ICE-POLE and a simplified version of Keyak v1 (with only one round of key absorption). It does not apply to Ascon, GIBBON, and HANUMAN due to the additional XOR of the secret key at the end. Ascon Ascon is a submission by Dobraunig et al. [33,34] and is depicted in Fig. 4a. It is originally defined based on two permutations p 1 , p 2 that differ in the number of underlying rounds. We discard this difference, considering Ascon with one permutation p. Ascon initializes its state using init that maps (K , N ) to (0 b−κ−ν K N ) ⊕ const, where const is determined by some design-specific parameters set prior to the security experiment. 
The header and message can be of arbitrary length and are padded to length a multiple of r bits using 10 * -padding. An XOR with 1 separates header processing from message processing. From the above observations, it is clear that the proofs of NORX directly carry over to Ascon. ICEPOLE ICEPOLE is a submission by Morawiecki et al. [65,66] and is depicted in Fig. 4c. It is originally defined based on two permutations, p 1 and p 2 , that differ in the number of underlying rounds. We discard this difference, considering ICEPOLE with one permutation p. ICEPOLE initializes its state as NORX does, be it with a different constant. The header and message can be of arbitrary length and are padded as follows. Every block is first appended with a frame bit: 0 for header blocks H 1 , . . . , H u−1 and message block M v , and 1 for header block H u and message blocks M 1 , . . . , M v−1 . Then, the blocks are padded to length a multiple of r bits using 10 * -padding. In other words, every padded block of r bits contains at most r − 2 data bits. This form of domain separation using (K , N , H )) u , and 11 for message blocks M 1 , . . . , M v−1 and 10 for M v . Then, the blocks are padded to length a multiple of r bits using 10 * 1-padding. In other words, every padded block of r bits contains at most r − 2 data bits. This form of domain separation using frame bits suffices for the proof to go through. Due to above observations, our proof readily generalizes to SpongeWrap [19] and DuplexWrap [22], and thus to Keyak. Without going into detail, we note that the same analysis can be generalized to the parallelized mode of Keyak [22]. Additionally, Keyak also supports sessions, where the state is re-used for a next evaluation. Our proof generalizes to this case, simply with a more extended description of (17). BLNK (CBEAM and STRIBOB) CBEAM and STRIBOB are submissions by Saarinen [81,[83][84][85][86]. Minaud identified an attack on CBEAM [62], but we focus on the modes of operation. Both modes are based on the BLNK Sponge mode [82], which is depicted in Fig. 4b. The BLNK mode initializes its state by 0 b , compresses K into the state (using one or two permutation calls, depending on κ), and does the same with N . Then, the mode is similar to SpongeWrap [19], though using a slightly more involved domain separation system similar to the one of NORX. Due to above observations, our proof readily generalizes to BLNK [82], and thus to CBEAM and STRIBOB. PRIMATEs: GIBBON and HANUMAN PRIMATEs is a submission by Andreeva et al. [2,3], and consists of three algorithms: APE, GIBBON, and HANUMAN. The APE mode is the more robust one, and significantly differs from the other two, and from the other CAESAR submissions discussed in this work, in the way that ciphertexts are derived and because the mode is secure against nonce-misusing adversaries up to common prefix [4]. (See Sect. 7 for a discussion on APE.) We now focus on GIBBON and HANUMAN, which are depicted in Fig. 4e, f. GIBBON is based on three related permutations p = ( p 1 , p 2 , p 3 ), where the difference in p 2 , p 3 is used as domain separation of the header compression and message encryption phases (the difference of p 1 from ( p 2 , p 3 ) is irrelevant for the mode security analysis). Similarly, HANUMAN uses two related permutations p = ( p 1 , p 2 ) for domain separation. 6 GIBBON and HANUMAN initialize their state using init that maps (K , N ) to 0 b−κ−ν K N . 
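All of the modes discussed in this section pad the header and message with some 10*-style rule, possibly with a closing 1 bit or per-block frame bits. The following bit-level sketch is a generic illustration of the two padding rules, not the exact convention of any one submission.

```python
def pad_10star(bits: str, r: int) -> str:
    """10*-padding: append a 1 bit, then 0 bits up to a multiple of r.
    (Some modes instead let this padding spill into the capacity when the
    input is already a multiple of r; here we always append at least the 1.)"""
    out = bits + "1"
    return out + "0" * (-len(out) % r)

def pad_10star1(bits: str, r: int) -> str:
    """10*1-padding: append 1, then 0 bits, then a final 1, reaching a multiple of r."""
    out = bits + "1"
    return out + "0" * ((-len(out) - 1) % r) + "1"

def blocks(bits: str, r: int):
    return [bits[i:i + r] for i in range(0, len(bits), r)]

msg = "10110"                              # a 5-bit toy message, r = 8
print(blocks(pad_10star(msg, 8), 8))       # ['10110100']
print(blocks(pad_10star1(msg, 8), 8))      # ['10110101']
```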
The header and message can be of arbitrary length, and are padded to length a multiple of r bits using 10 * -padding. In case the true header (or message) happens to be a multiple of r bits long, the 10 * -padding is considered to spill over into the capacity. From above observations, it is clear that the proofs of NORX directly carry over to GIBBON and HANUMAN. A small difference appears due to the usage of two different permutations: we need to make two RP-RF switches for each world. PRIMATEs: APE Unlike GIBBON and HANUMAN, the APE authenticated encryption scheme follows a different design strategy. It is depicted in Fig. 5. APE is based on one permutation p, and characteristic to the design is the way the ciphertexts are derived and verified. APE uses a key of size c bits, and the initialization init places K into the inner part of the state. In case of a present nonce N , in APE it is prepended to the header H , denoted N H . The nonce is of fixed length, and of suggested size 2r bits [2,3]. The header and message can be of arbitrary length and are padded to length a multiple of r bits using 10 * -padding. In case the true header (or message) happens to be a multiple of r bits long, the 10 * -padding is considered to spill over into the capacity. In case the message is not a multiple of r bits long, the last ciphertext is derived slightly differently, and we refer to [2,3]. The scheme is designed and proven to be 2 c/2 secure against nonce-misusing adversaries up to common prefix [4]. We now consider the security of APE in the noncerespecting setting, and present an adversary that breaks the privacy with a complexity of about 2 c/2 . We assume that the adversary can make blockwise queries to the scheme. In more detail, upon an authenticated encryption of M 1 , . . . , M v , it only needs to input the jth message block after it receives the j − 1 ciphertext block, for j = 2, . . . , v. Proposition 1. Let Π = (E, D) be APE based on an ideal underlying primitive p. Then, where all q E queries are of length (2 (c+1)/2 + 1)/q E + ρ + 1. Conclusions In this work we analyzed one of the Sponge-based authenticated encryption designs in detail, NORX, and proved that it achieves security of approximately min{2 b/2 , 2 c , 2 κ }, significantly improving upon the traditional bound of min{2 c/2 , 2 κ }. Additionally, we showed that this proof straightforwardly generalizes to five other CAESAR modes, Ascon, BLNK (of CBEAM/STRIBOB), ICEPOLE, Keyak v1, and PRIMATEs. Our findings indicate an overly conservative parameter choice made by the designers, implying that some designs can improve speed by a factor of 4 at barely any security loss. It is expected that the security proofs also generalize to the modes of Artemia [1]. However, this mode is based on the JH hash function [96] and XORs data blocks in both the rate and inner part. It does not use domain separations, rather it encodes the lengths of the inputs into the padding at the end [9]. Therefore, a generalization of the proof of NORX to Artemia is not entirely straightforward. The results in this work are derived in the ideal permutation model, where the underlying primitive is assumed to be ideal. We acknowledge that this model does not perfectly reflect the properties of the primitives. For instance, it is stated by the designers of Ascon, NORX, and PRIMATEs that non-random (but harmless) properties of the underlying permutation exist. 
Furthermore, it is important to realize that security proofs for the modes of operation in the ideal permutation model do not connect directly to the cryptanalysis performed on the concrete permutations, in the way that proofs for block cipher modes of operation connect to the analysis of the underlying block cipher. Nevertheless, these proofs serve as heuristics that direct cryptanalytic effort toward the underlying permutations rather than the modes themselves.
2018-06-19T13:26:40.271Z
2018-06-15T00:00:00.000
{ "year": 2018, "sha1": "ff7643f3533feafec973883830d22d98659d7cbc", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00145-018-9299-7.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "9a715fec7b4aea44b6e85f53dbb5f739dd63af9d", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
59221075
pes2o/s2orc
v3-fos-license
Distributed Database Access Technology Research Based on the .Net . For big data analysis and processing of the SQL database has the very good read and write performance and scalability, but cannot support a complete SQL queries and inter-bank transactions across the table, which is mainly composed of a relational database for traditional financial business have limits on the application. The OceanBase is facing huge amounts of data query of distributed database, combines the advantages of relational database and relational database, support relational query and an inter-bank transactions across the table at the same time, the extensible. However, at present OceanBase supports only simple, the nested children the SQL query which can't very well to support the application. Therefore, we design a novel system to overcome the drawbacks in this paper. The experiment shows the feasibility of our approach. Introduction With the further development of cloud computing and web technology, no database is growing stronger. No database to abandon the traditional relational database strictly transactional consistency and paradigm of constraints, the weak consistency model, support distributed and horizontal extension, meet the needs of a lot of data management. To information security and reduce the database system to upgrade maintenance, domestic Banks began to advances the strategy of "go to the IOE". No data than traditional relational database which is ultra-high cost performance and good scalability. These qualities make no database be the first choice of the domestic banking sector deal with huge amounts of data. Database OceanBase is Alibaba group developed a massive distributed relational database system, adopted the no database architecture, based on the lateral extension mode, can be adjusted by means of dynamic increase/decrease online server system load which has a good scalability. Moreover, the system realized the important characteristics of a relational database, the SQL query language support, relative to other no (database, to better meet the needs of the financial business. OceanBase scalability, strong standard SQL query and transaction consistent sexual function, in response to the banking financial business, has a big advantage [1][2][3]. A financial business is characterized by the extensive use of nested children query, but for now, OceanBase supports only simple nested SQL, for complex query embedded within the cover of unrealized, thus hinders the financial business of import. Cloud computing environment, in response to the challenges brought about by the huge amounts of data and user request, to solve the large-scale data access bottleneck problem facing the traditional database, distributed cache technology to introduce, to provide users with high performance, high availability, scalable data cache service. Distributed cache the data distribution to multiple cache service node data in memory management, provides a unified access interface, based on redundancy backup mechanism to achieve high availability support, also known as memory data grid. Compared with the relational database technology, however, because of the lack of relatively uniform data model and standardization of data access technology, in many sources, heterogeneous, distributed spatial data sharing and integration and interoperability lags far behind in terms of relational database technology, restricts the further develop the technology and application of geographic information [4]. 
At present, the field of spatial information has more than 100 kinds of commonly used spatial data format and integrated information system of the application of various departments as demand differences and historical reasons tend to adopt different vendor's format data, communication between departments (Internet visits) become a thorny problem. Heterogeneous, multi-source spatial data sharing and interoperability technology solutions, data format conversion model, direct data access mode, relational database space expansion mode, and data integration technology of Web Service. These methods to a certain extent solve the problem of the spatial data sharing and interoperability, but there are also some limitations. The figure 1 shows the situation. Fig. 1The Structure and Development of Distributed Cache To overcome the mentioned drawbacks and difficulties, we conduct research on distributed database access technology in this paper. The detailed research will be introduced in the following. Technique Description and Corresponding Organization With the rapid development of computer network technology, more and more network information, database access technology is our concern all the people more and more, ADO. Net data access technology has gained wide support and praise. In the past, most Web pages are static letter pastor, composition, the site only allows visitors to read data, site interaction is not strong, and the information is not stored visitor, if you must store the visitor information, you need to use the database and data access strategy, allowing data access strategy, allowing the programmer and database connection, and to provide search, insert, update and delete data command. ADO. Net effective data manipulation, data access is decomposed into multiple can be used alone. ADO. Net SQL Server, OLEDB and XML data source access, user applications can use the ADO. It is easy to connect to the data source, data retrieval and update operations [5]. The network structure as shown in figure 2 Figure 2. System structure of ADO.Net Model and Structure of the System. There are two most common ways of system data management: using file system data and make use of the database system management data. The former can satisfy the requirement of system real time, but to write the data type of the file system should be according to the project need to define, a large number of programming jobs are spent on the data organization, break up, make whole programming work greatly increased, and the file number is likely to lead to chaos in the management. Existing measurement task is generally adopted the latter to store data for management and use. The formula 1 shows the cluster structure. (1) Powerful database function such as data storage, query, calls for industrial automation and testing and measuring system with a powerful technical support, the user can be used to create a database to manage complex testing tasks, storage, test data and can summarize the test result of automatic test system. The figure 3 shows the organization. Fig. 3 The Structure of the System Organization Dynamic extension online add or remove nodes to achieve the adjustment of the cache capacity, at the same time support services available. Capacity planning to provide support for elastic supply, its goal is to determine the initial and the adjusted cache capacity and number of nodes. 
Due to the size of the data access and model with time-varying, lead to some cache node and become a bottleneck restricting the throughput of the whole system. Elimination of load balance board on the system's performance and service quality assurance plays an important role. Such as load balance goal is to make the cache data and access load uniformly distributed, as far as possible between the nodes contain client equilibrium and the service side equilibrium two implementation methods. System Working Mechanism. Data migration is the key to realize the dynamic extension of nodes and load balance technology. At present, most of the distributed cache support online data migration, but a lack of balance of hot data migration processing. It should be pointed out that, in the process of data migration with a large number of state synchronization can bring some impact to the system performance, therefore, how to effectively reduce the migration overhead is the need to address the problem. The figure 4 shows the flowchart. Cloud computing has the characteristics of open, dynamic and changeable these characteristics determine the distributed cache needs to be agile, initiative, and adaptability. Described the system agility and adaptability in accordance with the change of internal and external condition, such as the request data quantity and proportion of reading and writing, etc. to respond in a timely manner and the ability to adjust, the dimension is closely associated with elastic supply dimensions. In-depth analysis of the characteristics of different caching strategies and the application of scene is conducted which is of great significance to the implementation of the adaptive mechanism. According to different strategy target, the caching policy progress can be divided into several dimensions, such as topology, cache replacement algorithm and data consistency strategy, etc. Near district strategy based on partitioning strategies to increase the capacity of a smaller front-end cache, is used to accelerate the hot data access [6][7]. User request object, can be no delay in the front-end cache loading, won't be loaded automatically from the back-end cache loading. The back-end cache capacity is larger but not as good as the front-end cache access speed. Replication technology's main goal is to improve the availability, at the same time can be balanced by creating a copy of the data node load and improve the system performance. From the perspective of replication model replication techniques can be divided into master-slave replication and more master replicates. Master-slave replication, read and write operations are executed in the master copy, to update master copy is forwarded to this; Master replicates, allowing any copy executed more read and write operations, forward to other copies. Each copy will update replication technology from the time dimension and can be divided into synchronous replication and asynchronous replication. Synchronous replication, the client needs to wait for a copy operation is complete to get response, performance overhead, but can guarantee the instant a sex; Asynchronous replication without waiting for the copy operation completed can return a result, has a better performance and scalability, and can guarantee the data consistency, eventually but at the expense of the instant consistency, this means that applications may read at a certain moment to inconsistent data. The experiment will be conducted in the next section. 
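The text does not name a concrete mechanism for keeping data migration small when cache nodes are added or removed; one widely used choice, assumed here purely for illustration, is consistent hashing with virtual nodes, sketched below.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes for a distributed cache.
    Adding or removing a node only remaps keys in the affected arc, which keeps
    data migration small compared with modulo-based placement."""

    def __init__(self, replicas: int = 100):
        self.replicas = replicas
        self._ring = []                     # sorted list of (hash, node)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        for i in range(self.replicas):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def remove_node(self, node: str) -> None:
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def node_for(self, key: str) -> str:
        h = self._hash(key)
        idx = bisect.bisect_right(self._ring, (h, chr(0x10FFFF)))
        return self._ring[idx % len(self._ring)][1]

ring = ConsistentHashRing()
for n in ["cache-a", "cache-b", "cache-c"]:
    ring.add_node(n)

keys = [f"user:{i}" for i in range(10_000)]
before = {k: ring.node_for(k) for k in keys}
ring.add_node("cache-d")                    # elastic scale-out
moved = sum(before[k] != ring.node_for(k) for k in keys)
print(f"{moved / len(keys):.1%} of keys migrated after adding one node")
```

Adding one node to a three-node ring remaps roughly a quarter of the keys, whereas a naive modulo placement would remap most of them; this is the property that makes online node addition and removal practical.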
Experiment and Result In order to validate the proposed method, we conducted a comparison experiment in this section. The experiment measures the performance of the nested sub-query strategy on a massive sub-query data set, with all strategies evaluated under the same experimental environment. The performance test results for the nested query strategy on the large-scale sub-query data set are shown in Figure 5. Conclusion and Summary The introduction of cloud computing has effectively driven profound changes in the field of IT and, at the same time, has brought rare opportunities for the development of distributed cache technology. As an important means of improving application performance on cloud platforms, distributed cache technology has received extensive attention from industry and academia in recent years. To improve on the current methodology, we conducted research on distributed database access technology in this paper. The experimental results demonstrate the feasibility of the proposed system.
2018-12-28T23:31:30.921Z
2015-06-25T00:00:00.000
{ "year": 2015, "sha1": "00b063c2be200babf60e80c334ba47393c061b13", "oa_license": "CCBYNC", "oa_url": "https://download.atlantis-press.com/article/24806.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "00b063c2be200babf60e80c334ba47393c061b13", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
7272693
pes2o/s2orc
v3-fos-license
Proposed low-cost premarital screening program for prevention of sickle cell and thalassemia in Yemen In Yemen, the prevalence of sickle cell trait and β-thalassemia trait are high. The aim of this premarital program is to identify sickle cell and thalassemia carrier couples in Yemen before completing marriages proposal, in order to prevent affected birth. This can be achieved by applying a low-cost premarital screening program using simple blood tests compatible with the limited health resources of the country. If microcytosis or positive sickle cell is found in both or one partner has microcytosis and the other has positive sickle cell, so their children at high risk of having sickle cell or/and thalassemia diseases. Carrier couples will be referred to genetic counseling. The outcomes of this preventive program are predicted to decrease the incidence of affected birth and reduce the health burden of these disorders. The success of this program also requires governmental, educational and religious supports. INTRODUCTION Premarital screening and genetic counseling (PMSGC) programs for prevention of blood genetic diseases have been implemented in many populations with high prevalence rates of inherited blood disorders worldwide since the 1970s. (1,2) They aim to identify hemoglobinopathies carriers, in order to evaluate the risk of having children with severe anemia. Sickle cell disorder (SCD) and thalassemia are the most common hemoglobin (Hb) gene diseases in the world forming an important public heath problem in certain regions of the world including the Mediterranean and Middle East countries. (3) SCD and thalassemia are autosomal recessive inherited Hb disorders. SCD is a group of Hb disorders resulting from the inheritance of the sickle b-globin gene. The clinical manifestations of SCD are due to the tendency for Hb S variant to polymerize and deform red cells into the sickle shape under deoxygenated conditions. The homozygous sickle cell anemia (Hb SS) is the most common severe type of SCD. Thalassemias are a heterogeneous group of inherited disorders of globin synthesis that lead to a reduction of one or more of the globin chains. This unbalanced production of globin chain results in a decrease in Hb synthesis and microcytosis and hypochromia. The homozygous b-thalassemia is the most common severe form of thalassemias. The compound heterozygous sickle cell/b thalassemia also causes a severe anemia which is common in populations with high prevalence rates of both SCD and b-thalassemia. (4) The available treatment for sickle cell and thalassemia diseases are unsatisfactory and usually the patients depend on blood lifelong transfusion for survival, which causes a considerable stress for patients and their relatives; in addition to economical and emotional burdens on the society and the health system. Premarital screening is the best option for reducing Hb gene disorders than prenatal screening because the first is primary prevention whereas the latter is secondary or tertiary prevention. (5) Premarital screening for thalassemia was first established in 1975 in Latium, Italy (2) whereas for SCD in Virginia, USA in 1970. (1) They are currently carrying out with proven success in many parts of the world including the Mediterranean countries such as Greece, Italy and Cyprus with success preventive of 80-100%. 
In Arab countries like Egypt, Syria, Lebanon, Tunisia, Morocco, Saudi Arabia, Bahrain and United Arab Emirates and other developing countries, the success rates of these programs were satisfactory due to the economical obstacles or other heath priorities (infectious disease) or cultural and religious constraints. (6) Strategies of implementing premarital preventive programs in high-risk populations vary according to the economic status, culture and religion of these populations. In developed countries where advanced technologies are available advances in carrier diagnosis using hematological examination followed by DNA investigation has made population possible screening and prenatal diagnosis during pregnancy. This approach, in conjunction with genetic counseling, has lead to a steady decrease in the birth of affected homozygote in those countries, and raised the knowledge of the risk of being a carrier. However, the premarital preventive program has been introduced into many developing countries by using simple and inexpensive blood tests in combination with genetic counseling and has resulted in a noticeable reduction in the number of affected birth. For example in Saudi Arabia, complete blood count (CBC), sickle cell test and Hb-electrophoresis were used in the premarital screening leading to more than 70% reduction of the prevalence of b-thalassemia during the period of 2004 to 2009. (7) BACKGROUND Yemen is a poor country with a population of 24 million (8) and limited health resources. The prevalence of sickle cell trait (Hb AS) in Yemen is 2.2%, with a higher frequency in the western coastal and mid-western parts of the country where the incidence of affected homozygous births (Hb SS) may reach up 20/10,000. (9) Also the prevalence of b-thalassemia trait is 4.4% (10) with an estimated incidence of 11.3/10,000 of homozygous b-thalassemia births in the western coastal and mountainous regions of Yemen. The prevalence of thalassemia in SCD found to be high (19.4%) in Taiz region in mid-western area of the country. (11) The incidences of affected birth of either homozygous sickle cell anemia or b-thalassemia could be higher, depending on the proportion of consanguineous marriages and the frequency of heterozygous Hb S/b-thalassemia disease. The most effective factor for high prevalence and incidence of sickle cell and b-thalassemia diseases in Yemen is the consanguineous marriages, which was reported to be high (44.7%) and traditional. (12) To reduce the number of affected birth and their social, emotional and economical burden on the family and health system in Yemen, it is essential to apply PMSCG program. There is no any premarital screening preventive practice in Yemen yet. Therefore, this simple and low-cost premarital screening program is proposed to be suitable for the limited health resources of the country. Such proposal is essential to evaluate how a premarital screening program can be implemented in Yemen. counseling is proved to be an acceptable and effective process to reduce the number of affected birth. (6,13,14,15) This program consists of premarital screening and genetic counseling that is designed to produce a general infrastructure for accessible prevention of SCD and thalassemia before completing marriages proposal. Screening programs with genetic counseling can be implemented initially at high-risk populations in local communities in different parts of Yemen. To begin with, optional premarital screening using inexpensive and simple blood tests. 
Only couples at risk will receive information and genetic counseling about the consequences effects on the health of their children. Relevant instruments, methods, trained health workers, target groups and genetic counseling are required to carry out this preventive program. Hematological analyzer machines are required for CBC analysis to determine the microcytosis and/or hypochromia in red blood cells. Methods of sickle cell using either sickling test or solubility test also are required for detection of Hb S. Genetic counselors consist of trained doctors or professionals with Bachelor of Science degrees in health studies that will provide advices to carrier couples about the genetic condition, which may affect them, so that the couples have to make the appropriate choices concerning marriage and reproduction. Target groups in the country are required to be prepared for premarital screening which can be preformed by giving classes about SCD and thalassemia for young people in high schools, universities, sports clubs and military. In addition, the widespread of education programs through booklets, posters, TV and newspapers. Public and private laboratories equipped and licensed to screen for SCD and thalassemia can be the place for premarital screening. Carriers' data will be recorded for evaluation to adapt this preventive program to meet the public needs. PROCESS OF SCREENING All prospective couples have to be tested for both diseases and get the appropriate counseling (if needed) before completing their marriage proposals. Therefore, couples with marriage proposals will be referred to a local licensed equipped laboratory for premarital screening. After filling the premarital form (name, age, sex, national number, and address), the man's CBC and sickle cell test are tested first, because of high prevalence of iron deficiency among women of reproductive age, (16) which also causes microcytosis. If he has microcytosis [mean cell volume (MCV) ,80% and/or mean cell Hb (MCH) , 27 pg] and/or positive sickle cell test, then the woman is examined. If microcytosis or positive sickle cell test is found in both or one partner has microcytosis and the other has positive sickle cell test, so their children are at high risk of having sickle cell and/or thalassemia diseases ( Figure 1). The carrier couples will be referred to genetic counseling regardless to the results the couples has the choices concerning marriage and reproduction. INTERPRETATION PROBLEMS Because the common causes of microcytosis are thalassemia including aand b-thalassemia trait, and iron deficiency anemia, the important problems encountered are the interpretation of 1. Microcytosis due to b-thalassemia trait or non b-thalassemia The identification of b-thalassemia trait is very important because homozygous or compound heterozygous traits with sickle cell causes severe anemia; however, cases of a-thalassemia are very rare. A definitive diagnosis of b-thalassemia trait is based on microcytosis and high level of Hb A 2 . 3.5%. In very low health resources situations such as in Yemen where sophisticated equipments for measuring Hb A 2 level often unavailable, the b-thalassemia trait can be predicted by introducing one of well-known mathematical formulae as simple, fast and inexpensive method of discrimination between b-thalassemia trait and other causes of microcytosis and hypochromia. 
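The decision rule of Figure 1 and the cut-offs quoted above (MCV < 80 fL, MCH < 27 pg, sickling/solubility test) translate directly into a short screening routine; the sketch below also includes the red cell discrimination indices discussed in the next section. The RBC and Hb units, the example values and the referral wording are illustrative assumptions.

```python
from dataclasses import dataclass

MCV_CUTOFF = 80.0   # fL: microcytosis if MCV < 80
MCH_CUTOFF = 27.0   # pg: hypochromia if MCH < 27

@dataclass
class Partner:
    mcv: float             # mean cell volume, fL
    mch: float             # mean cell hemoglobin, pg
    rbc: float             # red cell count, 10^12/L (assumed unit)
    hb: float              # hemoglobin, g/dL (assumed unit)
    sickle_positive: bool  # sickling or solubility test result

def microcytic(p: Partner) -> bool:
    return p.mcv < MCV_CUTOFF or p.mch < MCH_CUTOFF

def suspected_carrier(p: Partner) -> bool:
    return microcytic(p) or p.sickle_positive

def couple_at_risk(man: Partner, woman: Partner) -> bool:
    """High-risk couple per Figure 1: both partners show microcytosis and/or a
    positive sickle cell test, in any combination."""
    return suspected_carrier(man) and suspected_carrier(woman)

# Discrimination indices (Table 1); values below the cut-off point toward
# b-thalassemia trait rather than iron deficiency.
def mentzer(p: Partner) -> float:
    return p.mcv / p.rbc                     # < 13 suggests b-thalassemia trait

def england_fraser(p: Partner) -> float:
    return p.mcv - p.rbc - 5 * p.hb - 3.4    # < 0 suggests b-thalassemia trait

def shine_lal(p: Partner) -> float:
    return p.mcv ** 2 * p.mch / 100          # < 1530 suggests b-thalassemia trait

man = Partner(mcv=68, mch=21, rbc=6.1, hb=13.5, sickle_positive=False)
woman = Partner(mcv=84, mch=28, rbc=4.6, hb=12.8, sickle_positive=True)
if couple_at_risk(man, woman):
    print("Refer couple for genetic counseling")
    print("Mentzer index (man):", round(mentzer(man), 1))
```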
Some of these formulae have been applied in similar circumstances like in some health centers in Isfahan, CBC Iran, as part of the premarital screening program. (17) The diagnosis of these formulae is based on red cell indices obtained by hematological analyzer analysis (Table 1). Although none of these formulae are absolutely accurate, but they can recognize b-thalassemia trait from iron deficiency and a-thalassemia with limitations. Each one of these formulae showed different sensitivity and specificity in different populations. For example, England and Fraser index showed sensitivity . 95% and specificity .95% among a population from Kuwait, (18) whereas a sensitivity of 87.2% and specificity of 62.9% among people from Iran. (17) In the meanwhile, any one of these formulae can be used in this proposed program, until their sensitivity and specificity among Yemeni population are determined. Microcytosis due to coexisting of iron deficiency anemia and b-thalassemia trait Iron deficiency anemia is common among women of reproductive age. (16) This can be evaluated by rechecking their red cell indices after iron therapy for six weeks to clarify persistent microcytosis that may be due to b-thalassemia trait. (22) EXPECTED OUTCOMES The success of this proposed premarital screening program depends on the health resources, governmental policies, religious beliefs, culture norms, traditions, literacy and education level and attitudes of individual couples. It is predicated that this premarital preventive program may be accepted with some difficulties because of its economical viability, accessibility within local health services and couples want a healthy family. The difficulties of accepting this program may be due to marriages among the same family or tribe are traditional; in addition to religious reasons, however, the number of affected birth will be decreased which would reduce the health burden on the individuals and society. They will also provide the research scientific data about SCD and thalassemia which promoting further development of genetic knowledge and technology that will lead to improve premarital preventive program. Disadvantage of using mathematical formulae for discrimination b-thalassemia trait from a-thalassemia or iron deficiency anemia in this proposed preventive program will miss some cases, which will lower the reducing number of affected birth. This will be a major problem until advanced technology is established. FUTURE ASPECTS Development of genetic preventive services will be in response to public demand. Future planning should be taken after evaluating the outcomes of implementation of this preventive program taking into account the introduction of advanced technology such as Hb-electrophoresis or high performance liquid chromatography (HPLC) into the local health services to determine the levels of Hb A 2 , F and S and other abnormal Hb for accurate diagnosis of b-thalassemia trait, sickle cell trait and other hemoglobinopathies carries. In addition, the establishment DNA techniques to confirm the primary screening results and to study the nature of the mutations involved for predicting the likely severity of the disorders resulting from their inheritance should be considered. Governmental legalization for mandatory premarital screening may be applied at high risk population with genetic Hb disorders or nationally (if necessary) to reduce as much as possible the number of affected births with these diseases. 
Prenatal diagnosis services may also be introduced into the health services, together with legalization allowing termination of pregnancy for affected fetuses. The premarital preventive program can also be extended to screen for viral infections, particularly human immunodeficiency virus (HIV) and hepatitis B and C viruses (HBV and HCV), because of their high prevalence rates, their capability for sexual transmission and their serious clinical consequences.
Table 1. Red cell discrimination indices for b-thalassemia trait versus iron deficiency
Index | Formula | b-thalassemia trait | Iron deficiency
England & Fraser (19) | MCV − RBC − 5Hb − 3.4 | <0 | >0
Mentzer (20) | MCV/RBC count | <13 | >13
Shine and Lal (21) | MCV² × MCH/100 | <1530 | >1530
CONCLUSION A premarital screening and genetic counseling program using simple and inexpensive blood tests, initially for preventing sickle cell and thalassemia diseases, can be introduced in high-risk populations in the western regions of Yemen. Applying this proposed preventive program is expected to decrease the number of affected births and thereby reduce the health burden of these diseases for the benefit of the Yemeni people. In the future, it may be applied nationally and extended to include screening for infections and other genetic diseases. The success of this preventive program needs governmental, educational and religious support.
2016-05-12T22:15:10.714Z
2013-12-23T00:00:00.000
{ "year": 2013, "sha1": "f054f04e389be224815ea6efa729178ef2b6dd95", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5339/qmj.2013.13", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f054f04e389be224815ea6efa729178ef2b6dd95", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3411747
pes2o/s2orc
v3-fos-license
Assessment of Vasculature of Meningiomas and the Effects of Embolization with Intra-arterial MR Perfusion Imaging: A Feasibility Study BACKGROUND AND PURPOSE: Embolization of meningiomas has emerged as a preoperative adjuvant therapy that has proved effective in mitigating blood loss during surgical resection. Arterial supply to these tumors is typically identified by diffuse areas of parenchymal staining after selective x-ray angiograms. We investigate the benefits that selective injection of MR contrast may have in identifying vascular territories and determining the effects of embolization therapy. MATERIALS AND METHODS: Selective intra-arterial (IA) injection of dilute MR contrast media was used to assess the vascular distribution territories of meningeal tumors before and after embolization therapy. Regions of the tumor that experienced loss of signal intensity after localized contrast injections into the external and common carotid as well as vertebral arteries were used to quantify the specific vessel's volume of distribution. Assessments were made before and after embolization to reveal changes in the vascular supply of the tumor. MR findings were compared with radiographic evaluation of tumor vascular supply on the basis of conventional x-ray angiography. RESULTS: MR proved to be an excellent means to assess tissue fed by selected arteries and clearly demonstrated the treated and untreated portions of the neoplasm after therapy. In some instances, MR revealed postembolization residual enhancement of the tumor that was difficult to appreciate on x-ray angiograms. Very low contrast dose was necessary, which made repeated assessment during therapy practical. CONCLUSION: MR perfusion imaging with selective IA injection of dilute contrast can reveal the distribution territory of vessels. Changes in tumor vasculature could be detected after embolization, which reveal the volumetric fraction of the tumor affected by the therapy. M eningiomas are highly vascular brain neoplasms that are often associated with substantial blood loss during surgical resection. Embolization of meningeal tumors 1,2 has emerged as a preoperative adjuvant therapy that has proved to be particularly effective in mitigating blood loss during surgical resection. 3,4 Embolization involves selective micro-catheterization of arteries feeding the neoplasm and subsequent injection of microspheres 5,6 or other embolic agents. The vascular bed of a meningioma is identified with conventional angiography by the presence of parenchymal staining after super-selective intra-arterial (IA) contrast injections. Embolization therapy is not appropriate for all arteries because of the potential for unintended ischemia in nonneoplastic tissue. Determination of whether embolization will be appropriate is based on a complete angiographic assessment of the neurovasculature. Neurologic deficits have been reported after embolization, 7 presumably as a result of improperly identified vascular territories, reflux of embolic agents, or tumor swelling after embolization. Dynamic susceptibility contrast (DSC) perfusion imaging 8 is an MR technique that has primarily been applied to the categorization of stroke 9 but also can provide useful information about neoplasms. 10 DSC perfusion studies are performed with systemic administration of a paramagnetic gadoliniumbased chelate through intravenous (IV) access. 
This method is convenient and minimally invasive, but quantitation is challenging partly because of difficulties in determining the arterial input function (AIF). 11 Moreover, the volume of contrast agent used to elicit a substantial response is typically double the recommended dosage, making repeated measures undesirable. IA contrast administration would have several potential advantages, including the ability to visualize the distribution territory of the vessel, decreased broadening of the AIF, and dramatically reduced volumes of contrast agent for effects comparable to IV studies. This feasibility study explores the possible benefits of MR guidance in embolization of meningiomas. It is specifically hypothesized that DSC perfusion imaging during selective injection of MR contrast media may offer better definition of arterial distribution territories with respect to the lesion and surrounding structures. A combined x-ray fluoroscopy/MR suite is used to augment conventional practices of meningioma embolization with selective IA DSC perfusion MR data. Patients A total of 6 patients undergoing embolization of a meningioma before surgical resection were studied. The local Institutional Review Board and the Cancer Center Oversight Committee approved the study protocol, and patients provided informed consent. Patients ranged in age from 36 to 65 years (mean, 49 years) and included 4 women and 2 men. All patients underwent a conventional cerebral angiogram with subsequent embolization dependent on whether this adjuvant therapy was deemed potentially efficacious on the basis of the x-ray an-giographic findings. Baseline MR acquisitions were performed the day before catheterization in all patients, and IA MR perfusion imaging was performed immediately before and after the delivery of embolic agents. Combined X-Ray and MR Suite We performed angiographic assessment, embolization of the meningiomas, and MR imaging in a combined "XMR" suite consisting of an x-ray catheterization laboratory (Integris V5000; Philips, Best, the Netherlands) and adjacent MR scanner (Intera; Philips). The 2 units can be connected, which enabled rapid transfer of the patients via a tabletop that floated between the 2 systems. The x-ray system was single plane with a 12-inch image intensifier. The MR system was 1.5T and equipped with 30 mT/m amplitude and 150 mT/m/ms slew rate gradients. MR before Catheterization An 8-element phased array head coil was used for all MR imaging before the procedure. The patients were positioned supine, and IV access was obtained. Perfusion Imaging Perfusion-weighted DSC imaging was performed with a single-shot echo-planar T2*-weighted acquisition (TR/TE/flip angle, 2000 ms/50 ms/90°; epi factor, 89; FOV, 24 cm; matrix, 128 ϫ 89; sections, 12-5 mm; number of dynamics, 60; axial plane; acquisition time, 2 min 6 s). The acquisition was designed to acquire a minimum of 20 seconds of baseline data before arrival of contrast and to continue for approximately 1 minute after arrival. Gadolinium-based contrast (Omniscan; GE Healthcare, Princeton, NJ) was injected intravenously, either through an antecubital or hand vein. IV injections commenced 10 seconds after the start of the first dynamic scan to permit the acquisition of baseline data. Contrast was injected at either 4 mL/s (antecubital) or 3 mL/s (hand) to a dose of 0.2 mmol/kg (typically ϳ20 mL). The contrast was followed by a 15-mL saline push at the same injection rate. 
Angiographic Procedure All patients received 2000 units of heparin via IV injection before the procedure to reduce the risk of clot formation. Vascular access was achieved with a transfemoral approach by the Seldinger technique. Vascular anatomy was determined by digital subtraction angiography (DSA) in combination with selective injection of iodinated contrast into the external and internal carotid arteries as well as into the vertebral arteries. This was performed bilaterally for all patients, irre-spective of the location of the meningioma. Superselective angiograms of vessels such as the middle meningeal artery were performed as warranted by these initial findings. The interventional radiologists (R.H., V.H., C.F.D.) then established a qualitative assessment of their impression of fractional contributions of all vessels feeding the tumor. This assessment was done without knowledge of MR findings. Embolization was performed in patients who demonstrated substantial tumor vascular supply through dural branches arising from either the external carotid or vertebral arteries in which a safe distal position of the catheter was achievable. Patients who were receiving embolization had an MR-compatible unbraided 5F catheter (Cook, Bloomington, Ind) placed into the vessel, through which the therapy was to be delivered. They were then transferred to the MR suite for IA MR perfusion imaging. Details of this imaging will be described below. After MR imaging, the patients were returned to the angiography suite for administration of therapy. Embosphere (BioSphere Medical, Rockland, Mass) was used as the embolic agent, and particle sizes ranged from 300 to 500 . A microcatheter was inserted into vessels that were associated with the tumor and appropriate for therapy. Embolization was administered until tumor parenchymal staining was obliterated. After completion of the embolization, the MR-compatible 5F catheter was returned to the same position used for preembolization therapy. The patients were then moved back to the MR suite for repeat IA MR perfusion imaging. After MR imaging, the patients were returned to the angiography suite. Intra-Arterial MR Perfusion To preclude the need for movement of patients during catheterization, a 2-element surface coil array consisting of two 20-cm circular loops was applied for all IA imaging. These coils were placed laterally against the patient's head and secured in place with tape. IA contrast injections were performed with dilute contrast. The contrast agent was diluted in physiologic saline to 0.05 mol/L (1/10 stock strength) for all studies. IA injections were performed through a catheter preloaded with the appropriate solution; injection rates varied between 1.0 mL/s (external carotid artery [ECA] and vertebral) and 3.0 mL/s (common carotid artery [CCA]). No saline push was necessary because the solution was released directly into the artery. A 5-second duration of injection was maintained to provide a bolus width comparable with IV injections and in balance with the temporal resolution of the whole-brain perfusion acquisition (2 s). These injection rates are also comparable with, but less than, those used for x-ray angiographic purposes. IA injections commenced 20 seconds after the start of the first dynamic scan because there was very little delay between injection and arrival of the contrast agent in the distal tissues. Patients receiving therapy via the ECA had perfusion scans performed in both the ECA and, subsequently, in the CCA. 
This was accomplished by initially placing the catheter in the ECA and then blindly retracting the catheter approximately 5 cm to enter the CCA. Perfusion Analysis Perfusion image data were fit to a standard gamma variate function on a pixel-by-pixel basis, and the following parameters were extracted: relative cerebral blood volume (rCBV), mean transit time (MTT), time to arrival (T0), and time to peak (TTP). The volume of tumor experiencing alteration in signal intensity during bolus passage was also quantified before and after embolization. We performed regionof-interest analysis on the tumors by dividing the lesion into the ECA, ICA, and whole-tumor sections. We obtained ECA territories on the IA ECA perfusion study and demarcated them from the image dem-onstrating peak signal intensity attenuation as well as from the calculated perfusion maps. The region of interest circumscribed the entire territory experiencing loss of signal intensity on the first pass of the contrast agent. These regions of interest were then copied to the CCA perfusion study, and the ICA territory was defined as the region of tumor experiencing signal intensity attenuation but external to the defined ECA territory. Finally, we copied the ECA and ICA territories to the IV study to investigate differences in perfusion properties of the ECA and ICA territories against the whole tumor. The IA segmentations were repeated after embolization to determine the effectiveness of the therapy and compared with changes evident on selective x-ray angiographic runs. All volumes were measured by an imaging scientist (A.J.M.) without knowledge of the radiologists' impressions of the fractional contribution of vessels. Results All patients received preintervention MR assessment including IV perfusion. Any patient experiencing even a mild reaction to IV injection of gadolinium-based contrast would have been excluded from further study, but this did not occur. One patient was not considered a good candidate for embolization therapy at the time of catheterization because of the predominant tumor supply coming from the internal carotid artery. This patient did not undergo any additional MR examinations. The remaining 5 patients received embolization therapy and additional MR examinations including IA DSC perfusion. Embolization was performed via a distal branch of the ECA (4/5) or vertebral artery (1/5) in all patients. One patient exhibited reflux into the ICA during an ECA injection and was excluded from further analysis. The IA injection rates were set to be compared with, but less than, those used for conven-tional angiography. The purpose of this strategy was to minimize the potential for streamlining effects and avoid reflux in any vessel. In this study, we used an injection rate of 3 mL/s into the common carotid artery and 1 mL/s into the external or vertebral arteries. Concentration of the injectate was initially estimated based on the anticipated flow in the carotid (5% to 6%) and vertebral (ϳ2%) arteries 12 as a fraction of cardiac output (assumed to be ϳ5 L/min). Therefore, an IV-injected bolus is anticipated to deliver the indicated fraction of the injectate to the corresponding territory. The prescribed injection rates were further anticipated to result in approximately a 1:1 dilution of the injectate in blood. Thus, to approximate arterial concentrations similar to those obtained with IV injections, we diluted the injected gadolinium solution to 1 part contrast in 9 parts saline (0.05 mol/L). 
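Returning to the pixel-wise analysis described under "Perfusion Analysis" above, the following sketch fits a gamma variate to a single voxel's signal-time curve and derives the summary parameters. The gamma-variate form, the signal-to-concentration conversion and the simplified definitions of rCBV and MTT (area and first moment of the fitted curve, without deconvolution by the AIF) are modeling assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, k, t0, alpha, beta):
    """First-pass bolus model: C(t) = k*(t-t0)^alpha * exp(-(t-t0)/beta) for t > t0."""
    dt = np.clip(t - t0, 0, None)
    return k * dt ** alpha * np.exp(-dt / beta)

def dsc_parameters(t, signal, te=0.05, baseline_pts=10):
    """Return (rCBV, MTT, T0, TTP) for one voxel's signal-time curve.

    Signal is converted to a relative concentration via C(t) = -ln(S/S0)/TE,
    with S0 the pre-bolus baseline. rCBV ~ area under the fitted curve (a.u.),
    MTT ~ first moment of the curve minus the arrival time (a simplification),
    T0 = fitted bolus arrival, TTP = time of the fitted peak."""
    s0 = signal[:baseline_pts].mean()
    conc = -np.log(np.clip(signal / s0, 1e-6, None)) / te

    p0 = (conc.max(), t[np.argmax(conc)] - 5.0, 2.0, 2.0)
    bounds = ([0.0, 0.0, 0.1, 0.1], [np.inf, t.max(), 10.0, 20.0])
    popt, _ = curve_fit(gamma_variate, t, conc, p0=p0, bounds=bounds)

    fit = gamma_variate(t, *popt)
    step = t[1] - t[0]
    area = fit.sum() * step
    rcbv = area
    mtt = (t * fit).sum() * step / area - popt[1]
    ttp = t[np.argmax(fit)]
    return rcbv, mtt, popt[1], ttp

# Synthetic example: TR = 2 s, 60 dynamics, bolus arriving at t = 24 s
t = np.arange(60) * 2.0
true_conc = gamma_variate(t, 1.0, 24.0, 3.0, 1.5)
signal = 100.0 * np.exp(-0.05 * true_conc)
signal = signal + np.random.default_rng(0).normal(0, 0.2, t.size)
print([round(float(x), 2) for x in dsc_parameters(t, signal)])
```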
This amount was initially tested in a set of 4 patients who received only postembolization IA perfusion imaging and proved to be a reasonable compromise. Therefore, each ECA and CCA perfusion scan requires only 2.5% and 7.5%, respectively, of the contrast used in conventional IV perfusion to create a similar effect. Patient Findings Patient 1. This patient (Fig 1) had a large right posterior meningioma that was determined on angiograms to have a significant pial supply. On the basis of complete angiographic evaluation, radiographic impression estimated that the right ICA contributed 85% of tumor blood supply, with the right ECA contributing the remainder. Superselective embolization of the right middle meningeal artery was performed, resulting in angiographically determined complete stasis. IA MR perfusion imaging was performed through the right external and common carotid arteries before and after embolization. ECA injections revealed the portion of the tumor associated with the vessel before embolization. The lack of changes in signal intensity after embolization supports the angiographic finding of obliteration of nonpial supply to this tumor. Patient 2. This patient had a large meningioma in the right posterior cranial fossa (Fig 2). A complete angiographic assessment led to a radiographic impression of blood supply from the following arteries: the right vertebral (50%), left vertebral (30%), right external carotid (15%), and left occipital (5%). Superselective embolization was performed via the right and left posterior meningeal arteries and resulted in a marked reduction of flow. There was residual supply from the right posterior inferior cerebellar artery (PICA) as well as the untreated right external carotid and left occipital arteries. MR perfusion imaging was performed via the right vertebral artery and revealed the portion of the tumor fed by this vessel before and after embolization. The residual enhancement after embolization likely reveals the distribution territory of the untreated right PICA. Patient 3. This patient had a large right frontal meningioma (Fig 3). A complete angiographic assessment led to a radiographic impression of blood supply from the following arteries: the right ECA (75%), right ICA (15%), and left ECA (10%). Superselective embolization was performed via the right and left middle meningeal arteries and produced a good angiographic result. IA MR perfusion imaging was performed through the right external and common carotid arteries before and after embolization. This revealed substantial involvement of the tumor in the right ECA before embolization, which was markedly reduced after treatment. However, MR perfusion provided evidence of residual supply from this artery after embolization that was difficult to appreciate on the postembolization right ECA angiographic run. Patient 4. This patient had a large meningioma in the posterior right parietal-occipital lobe that was heavily calcified. A complete angiographic assessment led to a radiographic impression of blood supply from the following arteries: the left occipital (70%), left ECA (20%), and right ECA (10%). Super-selective embolization of the left and right middle meningeal arteries was performed with a good angiographic result. Heavy calcification produced signal intensity void on MR perfusion images that encompassed 85% of the volume of the tumor. 
The signal intensity void precluded assessment of bolus passage in this portion of the tumor, which greatly limited the information obtained from MR perfusion. IA MR perfusion was performed through the left ECA, and evidence of bolus passage was evident on MR-visible portions of the tumor. Changes in signal intensity during bolus passage were not evident after embolization, consistent with a good angiographic result in this territory. Perfusion Analysis IA assessments before and after embolization provide insight into the portion of tumor affected by embolization therapy. Table 1 summarizes tumor vascularity on the basis of angiographic impression and MR perfusion imaging. Tumor volume was established from IV perfusion data acquired the day before embolization. X-ray data in this table are based on qualitative radiologic impressions of all vessels involved with the tumor but binned into the coarser arterial distributions assessed with IA MR techniques. The portion of the tumor fed by a specific vessel was quantitatively evaluated before and after embolization with IA MR perfusion methods. This was possible in only a limited subset of vessels, and vessels not interrogated by MR were marked with a dash. There were substantial differences in the portion of the tumor predicted to be fed by a specific artery, on the basis of angiographic impression versus MR perfusion. Many factors may affect these differences. For example, the MR-determined distribution territory of a vessel does not exclude that other vessels may also feed portions of this same region. The spatial extent of the region of signal intensity attenuation after selective ECA or vertebral injection was reduced in all patients after embolization. The tissue affected by embolization can be delineated by the region that experiences signal intensity attenuation with selective injection before, but not after, therapy. The effectiveness of embolization is reflected in the reduced fraction of the tumor vol- Fig 2. Patient 2. A large meningioma in the posterior fossa is evident on precontrast MR perfusion (A). Difference images between this baseline and peak signal intensity attenuation after IV (D) and IA right vertebral (B) contrast injection are shown. Selective x-ray angiograms obtained during a right vertebral injection reveal the pattern of the tumor (C). Embolization was administered more distally to this injection site (right posterior meningeal), and obliteration of the posterior component of the right vertebral distribution territory can be appreciated on the MR (E) and x-ray (F) images after embolization. ume associated with the vessel through which embolic agents were administered. Perfusion analysis was performed on all IV and IA contrast injections ( Table 2). Patient 4 was excluded from this Table because the portion of the tumor that provided some signal intensity still provided substantially less than normal tissue. Therefore, extraction of meaningful contrast dynamics was severely compromised. For the remaining patients, whole tumor assessments were determined from IV injections because several tumors were at least partially fed by contralateral circulation or a combination of carotid and vertebral sources. Thus, injection into the CCA would not necessarily highlight the entire lesion. 
Although the duration of injections was kept constant between the IV and IA techniques, the significant differences in bolus width and contrast concentration made direct comparison between these 2 groups difficult without improved techniques of quantitation. All relative measures of cerebral blood volume were normalized to the value obtained in white matter. Tabulated measures of ICA and ECA perfusion were extracted from the appropriate regions of interest on the CCA injection. rCBV within the lesion was substantially higher than the white matter in all patients. The white matter-normalized rCBV values increased in 2 patients after embolization. The significance of this finding is unclear because many factors that were not controlled for in this study (including cardiac output) could affect the arterial concentration of gadolinium. Thus, it is not clear whether this change was meaningful or reproducible.
Figure caption: The upper row depicts the lesion before embolization and includes a precontrast perfusion source image (A), perfusion difference images between baseline and peak enhancement after right CCA (B) and ECA (C) injection, and a lateral view x-ray angiogram with a right ECA injection (D). Preembolization peak perfusion contrast after IV contrast administration is revealed (E). After embolization, the CCA (F) and ECA (G) perfusion studies and lateral view x-ray angiogram with a right ECA injection were repeated. Contrast injection into the right ECA demonstrates some residual attenuation (G) but is substantially reduced compared with the baseline state. The blush pattern in the proximity of the lesion that was evident on x-ray angiograms is no longer appreciable after treatment (H).
MTT was an average of 18% shorter with IA delivery compared with IV techniques with identical durations of injection. This difference is directly attributable to a broadening of the input function as the bolus moves from the IV injection site to the target tissue. The MTT through the portion of the lesion assigned as part of the ECA territory was, on average, 6% (IA measures) or 1.5% (IV measures) longer than the ICA territory. This discrepancy may be related to differences in the blood-brain barrier between these 2 vessels but does not seem sufficient to demonstrate appreciable contrast on MTT maps. There was a substantial elongation in the MTT for patient 2 after embolization; the cause of this is unknown. This prolongation also seems to be largely responsible for the higher rCBV measure noted in this patient after therapy. Discussion MR techniques have some significant advantages in their ability to correlate soft tissue with the vessels providing its vascular supply. The ability to independently visualize tumor and normal tissue in a volumetric sense, coupled with dynamic imaging during localized injections of contrast, provides a powerful enhancement over the x-ray angiographic techniques that are currently used. The improved visualization of where a contrast agent is deposited after superselective contrast injection can allow a more complete embolization of tumors without increasing the risk to nonneoplastic tissue. These data should provide a more detailed preoperative map of tumor vascularity compared with conventional imaging techniques and, by identifying regions of persistent hypervascularity, provide a target for further embolization or a map of sites of potentially significant intraoperative bleeding.
The potential of embolizing meningiomas without subsequent surgery has already been proposed, 13 and this should only become more appropriate with better definition of vascular territories. Moreover, it is not unreasonable to expect that an increasing number of asymptomatic meningiomas will be discovered as a result of unrelated diagnostic procedures. This is because the prevalence of meningiomas at autopsy 14 is substantially greater than that in current clinical incidence. The increasing use of MR and CT to evaluate an assortment of unrelated conditions should increasingly reveal meningiomas at an earlier stage, when minimally invasive therapy may be more appropriate. The principal limitation of MR in the evaluation of vascular territories is that it is currently impractical to manipulate catheters under MR guidance. This greatly limits the number of vessels whose vascular territory can be explored because the catheter manipulations must be made under x-ray guidance, requiring shuttling of patients between the MR and x-ray systems for each independent vessel selection. Moreover, there are safety concerns about the presence of braided catheters in patients during MR scanning. This concern relates to the conductive nature of the metallic braid that is virtually ubiquitous with clinically used neurovascular catheters. This braiding provides the necessary mechanical properties to navigate neurovascular structures but may cause localized hot spots when exposed to the radio-frequency energy of an MR system. 15,16 Thus, we used an unbraided catheter in this study, which often required that the selected vessel be accessed initially with a conventional neurovascular catheter and then changed for the MR-compatible catheter by temporarily leaving a guidewire to maintain position. Evaluation of more distal vessels becomes increasingly difficult to access with MR-safe catheters. Therefore, we limited our injections to relatively proximal vessels. A simple blind pullback procedure proved effective at broadening the distribution territory within the MR scanner, but under ideal conditions branch vessels would be studied with independent injections. We are currently exploring the possibility of using braided catheters in these studies, given that the specific absorption rate associated with DSC perfusion imaging is extremely low. Other potential methods can determine the vascular supply to meningeal tumors. A combination of conventional angiography and CT has been proposed. 17 Placement of the catheter must again be performed in the catheterization laboratory, but CT evaluation of superselective injections provided more definite visualization of tumor perfusion in nearly half the cases studied. This benefit was largely because of the tomographic capabilities of the CT system, which revealed distribution territories beyond what could be appreciated with DSA. The heightened sensitivity of MR to the spatial extent of the tumor as well as the presence of contrast should only improve on these findings. However, all these methods require catheterization that may prove to be unnecessary if the evaluation of the tumor vasculature reveals that it is not amenable to embolization therapy. The use of arterial spin labeling methods 18 to selectively label specific vessels 19 is an intriguing idea that may provide a useful initial evaluation of tumor vasculature. 
This method could serve to screen out patients whose dominant supply is clearly from the ICA and potentially prevent catheterization without subsequent therapy. Improvements in the labeling specificity of these techniques will likely be necessary to realize this objective, however. Not surprisingly, the benefit of embolization therapy is greatest when a high degree of tumor vascularity has been obliterated. 20 Techniques that serve to improve the portion of tumors that can be embolized without affecting the surrounding tissue will only improve the efficacy of the technique. The interventional MR methods described here represent a step toward this goal and clearly demonstrate the correlation that can be achieved between tumor volume and vascular distribution territories. Conclusions Selective IA injection of dilute MR contrast media is an excellent means to assess the distribution territory of arteries feeding highly vascular lesions. Very low contrast dose is required, which makes repeated assessment during therapy practical. MR seems to be very sensitive for depicting tissue fed by a selected vessel and can also provide a volumetric assessment of both the tumor and distribution of vascular territories. These advantages, when coupled with conventional x-ray angiographic techniques, provide novel insight into tumor vascularity and the impact of embolic therapy.
2017-06-16T13:03:33.261Z
2007-10-01T00:00:00.000
{ "year": 2007, "sha1": "7c39cb83506844614a2419c939aa950395cbe15b", "oa_license": "CCBY", "oa_url": "http://www.ajnr.org/content/ajnr/28/9/1771.full.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "a0dea25283e2e5affe0c5df742a20a06c00b8b85", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259605557
pes2o/s2orc
v3-fos-license
A study to evaluate the effectiveness of educational intervention program on knowledge regarding behavioral problems of school children among primary school teachers of selected government schools, Hubballi
Behavioural problem is often seen as less stigmatizing, less severe, more socially acceptable and more practical than the term emotionally disturbed. The term grew out of a behavioural model, through which the teacher can see and describe a behavioural disorder but cannot easily describe disturbed emotions. In common usage today, behavioural problems is usually applied to less severely disturbed students, whereas emotionally disturbed is reserved for the most seriously impaired. An evaluative study was conducted among 45 primary school teachers of selected schools, Hubballi. The research design used for the present study was pre-experimental: one group pre-test, post-test design. A non-probability, purposive sampling technique was used to select the sample. The data were collected by using a structured knowledge questionnaire. Data analysis was done by using descriptive and inferential statistics. The overall results of the study revealed that in the pre-test the majority, 25 (55.56%), had average knowledge, 11 (24.44%) had good knowledge and 9 (20%) had poor knowledge, whereas in the post-test 44 (97.78%) had good knowledge, 1 (2.22%) had average knowledge and none of them had poor knowledge scores regarding behavioural problems of school children. There was a significant gain in knowledge of the school teachers who were exposed to the educational intervention program, i.e. 25.27%. The paired 't' value (t_cal = 17.21*) was significant at the p < 0.05 level, supporting the stated hypothesis that the mean post-test knowledge scores of school teachers in the selected schools who were exposed to the educational intervention program would be significantly higher than their pre-test knowledge scores at the 0.05 level of significance. The study concluded that the educational intervention program was effective for primary school teachers in increasing and updating their knowledge regarding behavioural problems of school children.
Introduction Children are like buds in a garden and should be carefully and lovingly nurtured, as they are the future of the nation and the citizens of tomorrow [1]. Each child is a unique person, a person whose future will be affected for better or worse by the influences that mould his or her life during the early years. Children can be lovable one minute and thoroughly disagreeable the next. They can be the source of immense joy but the cause of much frustration and irritation; they can make enormous demands on their parents, but equally they can give unconditional love and an immeasurable sense of importance [2]. Normal behaviour in children depends on the child's age, personality and physical and emotional development. Normal or good behaviour is usually determined by whether it is socially, culturally and developmentally appropriate. Knowing what to expect from a child at each age will help you to decide whether his or her behaviour is normal. Developmental and behavioural issues require an in-depth examination of a child's medical, social and family history [3]. All young children behave badly from time to time, and occasional temper tantrums, aggression and defiance of authority are a normal part of growing up. Developing a consistent approach to diagnosis in the area of problem behaviour is thus fraught with difficulty and not without controversy, since many 'problems or disorders' are hard to define and assign to a single medical condition or syndrome [4]. School children are emerging as creative persons who are preparing for their future role in society. During the school years the child develops a wholesome attitude toward self as a person and learns an appropriate masculine or feminine social role. The school years are a time of new achievements and experiences, and children's needs and preferences should be respected. In our country most of the population are children, and they are considered the future of our country, so their safety, basic needs and development are our priority, as the nation relies a great deal on its human resource strength. Therefore, it is necessary in the interest of our country to look after their health and welfare, as they form the most endangered segment of the population [5]. A child's behavioural problem represents a conflict between his developing personality and that of his parents, teachers and siblings and of other children with whom he comes into contact. Teachers who are young, relatively inexperienced, bachelors, coming from broken homes or unsatisfied in themselves may make a child more vulnerable to psychological maldevelopment. Children who are secure and emotionally satisfied in their home relationships are not usually affected by such unstable influences [6]. School teachers are like a second mother to every child, so children listen to every point that the teacher teaches; however, the unhealthy child cannot be expected to take full advantage of schooling. Health education must remain in the hands of the teachers and school health workers. Health education is part of general education. A growing understanding of the physical, mental, emotional and normal nature of children is the essence of professional teaching ability [7]. The term behaviour refers to the way a person responds to a certain situation or experience. Behaviour is affected by temperament, which is made up of an individual's innate and unique expectations, emotions and beliefs. Behaviour can also be influenced by a range of social and environmental factors including parenting practices, gender, and exposure to new
situations, general life events and relationships with friends and siblings [8] .Behaviour problems among children are a deviation from the accepted pattern of behaviour on the part of the children when they are exposed to an inconsistence social and cultural environment.But these are not to be equated with the presence of psychiatric illness in the child as these are only symptoms or reaction to emotional and environment stress.But these behaviours are allowed to continue, they are like to pose problems of adjustment to the child in school age [9] .There are various behavioural disorders evident in children.Major concerns of them are Temper Tantrum, Attention deficit hyperactivity disorder, Conduct disorder, Learning disorder, School Phobia, Tics, Pica, and Juvenile delinquency which are mainly found in school age children [10] .ADHD is a one of the most common neurodevelopment disorders of childhood.It is usually first diagnosed in childhood and often last into the adulthood.Children with ADHD may have trouble paying attention, controlling impulsive behaviour (may act without thinking about what the result will be), or be overly active [10] .A school phobia is when child is very nervous and refuse to go to school is called school phobia.The reasons behind why the child has school phobia can be many, and it is important to understand them from the child's perspective [11] . Conduct disorder is a group of behavioural & emotional problems that usually begins during childhood.Children's with these disorders have a difficult time following rules and behaving in a socially acceptable way.They may display aggressive, disruptive, and deceitful behaviours that can violate the rights of others.Child may perceive them as "having a mental illness [12] .Autism spectrum disorder (ASD) is a complex development conditions that involves persistent challenges in social interaction, speech and non-verbal communication and restricted / repetitive behaviours [13] .The etiological factors for behavioural problems of children are usually biologically risk factors, family relationship risks, experimental risks and social environmental risk factors [8] .The severity and nature of these problems vary very much from child to child and consequently, the methods used to deal with behavior problems must vary accordingly.To determine the most effective program of management for any one child, the teacher needs to use his observations of that child to first decide what the main problems are and the circumstances in which they occur.The application of an individual treatment approach to the problems shown by children involves an analysis of the problems in a specific way [14] .The National Survey on Drug Use and Health (NSDUH) Report, States that during the past two decades, there have been marked changes in Inpatient services for school children with emotional and behavioral problems.It indicated that an estimated 2.6% are receiving home services for emotional and Behavioral Problems in the past 12 months in a hospital [15] .A Survey reports that 51% of the primary school teachers had an average knowledge regarding Behavioral Problems among that 9% had Good knowledge, 37% had poor knowledge & 3% had very poor knowledge regarding Behavioral Problems.Thus study concluded that, it is advisable to provide Educational Programs for primary school teachers regarding Behavioral Problems [7] .Behavioral service facilities available in India are very few.Although children institute 40% of the population of India, there are very 
few child guidance facilities to provide psychological care to behaviourally disturbed children. It is generally noted that in developing countries more and more children are brought into the school system, but at the same time every section of the school is likely to have around 15-20% of students who are not able to maintain satisfactory scholastic progress. The teacher therefore becomes important in the mental health of the children; this is especially true in the Indian situation, where there is a considerable shortage of mental health facilities for children. So school teachers can make important contributions to the promotion of the mental health of children [3]. From the above statistical information and reviews the researcher understood that behaviour problems are potentially serious but treatable conditions. It is in this context that the importance of teachers becomes vital in safeguarding and promoting the mental health of children and in the early identification of deviations from normal. Hence, it is hoped that the trained teacher can manage and prevent behavioural problems of school children to a certain extent, if they get adequate and sufficient knowledge regarding behavioural problems among school children. Materials and Methods Study Design The research design is a blueprint for conducting the study that maximizes control over factors that can interfere with the validity of the findings. It is an overall plan the investigator used to obtain valid answers to research questions [16]. The research design used for the present study was pre-experimental: one group pre-test, post-test design. Setting and Sample In the present study, 45 primary school teachers of selected schools, Hubballi, were selected through a non-probability, purposive sampling technique. Measurements The subjects were given a socio-demographic sheet and the structured knowledge questionnaire. Section I consists of 12 items for obtaining information about socio-demographic variables. Section II consists of 45 items for measuring the level of knowledge of primary school teachers regarding behavioural problems of school children. Each correct answer carries 1 mark and each incorrect answer 0 marks. The tool and the educational intervention programme were validated by experts in the fields of Paediatric Nursing and Mental Health Nursing and by the members of the research committee of KLE'S Institute of Nursing Sciences, Hubballi. The tool was tested for reliability by using the split-half method and applying Karl Pearson's correlation coefficient formula. The reliability of the structured knowledge questionnaire was r = 0.80. Data Collection The research investigator had taken formal permission from the principals of Govt. H.P.S. Santoshnagar, Govt. H.P.S. Bharidevarkoppa, Govt. M.P.S. Goppankopa and Govt. M.P.S. Amargoal, Hubballi. The investigator introduced herself and explained the purpose of the study, and written consent was obtained from the participants. Data were collected using the structured knowledge questionnaire. The collected data were tabulated and analyzed. Data Analysis The data obtained were analyzed in terms of the objectives of the study using descriptive and inferential statistics. Data were tabulated in terms of frequency, percentage, mean, median, mode, standard deviation and range to describe the data. The classification of the knowledge scores (level of knowledge) was as follows: Good knowledge = (X + SD) and above; Average knowledge = (X - SD) to (X + SD); Poor knowledge = (X - SD) and below [Note: X = Mean, SD = Standard Deviation]. Inferential statistics were used to draw the following conclusions: a paired 't' test for testing the effectiveness of the educational intervention program on knowledge regarding behavioural problems of school children, and a chi-square test to find out the association between pre-test knowledge scores and socio-demographic variables.
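To make the analysis plan above concrete, the following is a minimal illustrative sketch in Python (Python, NumPy and SciPy are not part of the original study) of the mean +/- SD classification of knowledge scores and the paired 't' test. The score values are randomly generated placeholders, not the study data.

```python
# Illustrative sketch only (not the authors' analysis script): classify knowledge
# scores into Good/Average/Poor bands around mean +/- SD and run a paired t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.integers(10, 30, size=45).astype(float)            # hypothetical pre-test scores (max 45)
post = np.minimum(pre + rng.integers(5, 20, size=45), 45.0)  # hypothetical post-test scores

def classify(scores):
    """Label each score as Good/Average/Poor using mean +/- SD cut-offs."""
    mean, sd = scores.mean(), scores.std(ddof=1)
    return np.where(scores >= mean + sd, "Good",
           np.where(scores <= mean - sd, "Poor", "Average"))

for name, scores in [("pre-test", pre), ("post-test", post)]:
    labels, counts = np.unique(classify(scores), return_counts=True)
    print(name, dict(zip(labels, counts)))

# Paired t-test for the effectiveness of the intervention (H1: post > pre).
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```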
Results
Graph 1: The column graph represents the percentage distribution of subjects according to their level of knowledge scores in the pre-test and post-test.
The level of knowledge of primary school teachers regarding behavioural problems of school children was assessed during the pre-test and post-test. Most of them in the pre-test, 25 (55.56%), had average knowledge, 11 (24.44%) had good knowledge and 9 (20%) had poor knowledge. After the EIP, in the post-test 44 (97.78%) had good knowledge, 1 (2.22%) had average knowledge and none of them had poor knowledge scores.
Graph 2: The cone graph represents the mean percentage gain in knowledge scores of subjects according to their knowledge scores.
There was a significant gain in knowledge, i.e. 25.27%, among the primary school teachers who were exposed to the educational intervention program. The calculated paired 't' value (t_cal = 17.21*) was greater than the tabulated value (t_tab = 2.0211). Hence, H1 was accepted. This indicates that the gain in knowledge scores was statistically significant at the 0.05 level of significance. Therefore, the educational intervention program was effective in terms of gain in knowledge scores of the subjects. These findings are supported by a study conducted by Bindu Y and Shantvan P, who observed a significant gain in knowledge, i.e. 20.92%, among subjects who were exposed to a structured teaching programme. The calculated paired 't' value (t_cal = 26.572*) was greater than the tabulated value (t_tab = 2.04). Therefore, this showed that the structured teaching programme on knowledge regarding behavioural problems of school children brought about a significant gain in the knowledge of the subjects [18]. The computed chi-square test revealed an association with one variable, i.e. source of information; hence the hypothesis was accepted for this variable, whereas for the remaining variables no association was found and H2 was rejected. These findings are supported by a study conducted by Dhesmukh P, who observed a statistical association for two variables, i.e. educational qualification and source of information; hence the hypotheses were accepted for these variables, whereas for the remaining variables no association was found and H2 was rejected [2].
Table 1: Mean difference (d̄), standard error of difference and paired 't' values of knowledge scores of subjects regarding behavioural problems of school children (n = 45).
It is my pleasure and privilege to express my deep sense of gratitude to my guide and mentor Dr. Somashekarayya Kalmath, M.Sc. (N), Ph.D., HOD, Dept.
of Paediatric Nursing, KLES' Institute of Nursing Sciences, Hubballi, for his keen interest in me at every stage of my research. His prompt inspiration, timely suggestions, kindness, enthusiasm and dynamism have enabled me to complete my thesis. I wish to express my profound sense of gratitude and genuine thanks to Mr. Shivappa Bannur, Lecturer, Dept. of Medical Surgical Nursing, KLES' Institute of Nursing Sciences, Hubballi, whose timely support, encouragement, cooperation and guidance as a co-guide have immensely helped me to complete this dissertation successfully.
2023-07-11T16:25:13.368Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "67e27a103d83b66780166119153630c59497955f", "oa_license": null, "oa_url": "https://www.paediatricnursing.net/article/view/122/5-1-17", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e678be46edbb73ba36ced0a51d51a75328126cbf", "s2fieldsofstudy": [ "Education", "Psychology" ], "extfieldsofstudy": [] }
21681810
pes2o/s2orc
v3-fos-license
Social Algorithms This article concerns the review of a special class of swarm intelligence based algorithms for solving optimization problems and these algorithms can be referred to as social algorithms. Social algorithms use multiple agents and the social interactions to design rules for algorithms so as to mimic certain successful characteristics of the social/biological systems such as ants, bees, bats, birds and animals. • Ant colony optimization: Ant colony optimization (ACO) is an algorithm for solving optimization problems such as routing problems using multiple agents. ACO mimics the local interactions of social ant colonies and the use of chemical messenger -pheromone to mark paths. No centralized control is used and the system evolves according to simple local interaction rules. • Bat algorithm: Bat algorithm (BA) is an algorithm for optimization, which uses frequencytuning to mimic the basic behaviour of echolocation of microbats. BA also uses the variations of loudness and pulse emission rates and a solution vector to a problem corresponds to a position vector of a bat in the search space. Evolution of solutions follow two algorithmic equations for positions and frequencies. • Bees-inspired algorithms: Bees-inspired algorithms are a class of algorithms for optimization using the foraging characteristics of honeybees and their labour division to carry out search. Pheromone may also be used in some variants of bees-inspired algorithms. • Cuckoo Search: Cuckoo search (CS) is an optimization algorithm that mimics the brood parasitism of some cuckoo species. A solution to a problem is considered as an egg laid by a cuckoo. The evolution of solutions is carried out by Lévy flights and the similarity of solutions controlled by a switch probability. • Firefly algorithm: Firefly algorithm is an optimization inspired by the flashing patterns of tropical fireflies. The location of a firefly is equivalent to a solution vector to a problem, and the evolution of fireflies follows a nonlinear equation to simulate the attraction between fireflies of different brightness that is linked to the objective landscape of the problem. • Metaheuristic: Metaheuristic or metaheuristic algorithms are a class of optimization algorithms designed by drawing inspiration from nature. They are thus mostly nature-inspired algorithms, and examples of such metaheuristic algorithms are ant colony optimization, firefly algorithm, and particle swarm optimization. These algorithms are often swarm intelligence based algorithms. • Nature-Inspired computation: Nature-inspired computation is an area of computer science, concerning the development and application of nature-inspired metaheuristic algorithms for optimization, data mining, machine learning and computational intelligence. • Nature-inspired algorithms: Nature-inspired algorithms are a much wider class of algorithms that have been developed by drawing inspiration from nature. These algorithms are almost all population-based algorithms. For example, ant colony optimization, bat algorithm, cuckoo search and particle swarm optimization are all nature-inspired algorithms. • Objective: An objective function is the function to be optimized in an optimization problem. Objective functions are also called cost functions, loss functions, utility functions, or fitness functions. • Optimization: Optimization concerns a broad area in mathematics, computer science, operations research and engineering designs. 
For example, mathematical programming or mathematical optimization is traditionally an integrated part of operations research. Nowadays, optimization is relevant to almost every area of sciences and engineering. Optimization problems are formulated with one or more objective functions subject to various constraints. Objectives can be either minimized or maximized, depending on the formulations. Optimization can subdivide into linear programming and nonlinear programming. • Particle swarm optimization: Particle swarm optimization (PSO) is an optimization algorithm that mimics the basic swarming behaviour of fish and birds. Each particle has a velocity and a position that corresponds to a solution to a problem. The evolution of the particles is governed by two equations with the use of the best solution found in the swarm. • Population-based algorithm: A population-based algorithm is an algorithm using a group of multiple agents such as particles, ants and fireflies to carry out search for optimal solutions. The initialization of the population is usually done randomly and the evolution of the population is governed by the main governing equations in an algorithm in an iterative manner. All social algorithms are population-based algorithms. • Social Algorithms: Social algorithms are a class of nature-inspired algorithms that use some of characteristics of social swarms such as social insects (e.g., ants, bees, bats and fireflies) and reproduction strategies such as cuckoo-host species co-evolution. These algorithms tend to be swarm intelligence based algorithms. Examples are ant colony optimization, particle swarm optimization and cuckoo search. • Swarm intelligence: Swarm intelligence is the emerging behaviour of multi-agent systems where multiple agents interact and exchange information, according to simple local rules. There is no centralized control, and each agent follows local rules such as following pheromone trail and deposit pheromone. These rules can often expressed as simple dynamic equations and the system then evolves iteratively. Under certain conditions, emergent behaviour such as self-organization may occur, and the system may show higher-level structures or behaviour that is often more complex than that of individuals. Introduction To find solutions to problems commonly used in science and engineering, algorithms are required. An algorithm is a step-by-step computational procedure or a set of rules to be followed by a computer. One of the oldest algorithms is the Euclidean algorithm for finding the greatest common divisor (gcd) of two integers such as 12345 and 125, and this algorithm was first given in detail in Euclid's Elements about 2300 years ago (Chabert 1999). Modern computing involves a large set of different algorithms from fast Fourier transform (FFT) to image processing techniques and from conjugate gradient methods to finite element methods. Optimization problems in particular require specialized optimization techniques, ranging from the simple Newton-Raphson's method to more sophisticated simplex methods for linear programming. Modern trends tend to use a combination of traditional techniques in combination with contemporary stochastic metaheuristic algorithms such as genetic algorithms, firefly algorithm and particle swarm optimization. This work concerns a special class of algorithms for solving optimization problems and these algorithms fall into a category: social algorithms, which can in turn belong to swarm intelligence in general. 
Social algorithms use multiple agents and the 'social' interactions to design rules for algorithms so that such social algorithms can mimic certain successful characteristics of the social/biological systems such as ants, bees, birds and animals. Therefore, our focus will solely be on such social algorithms. It is worth pointing out that the social algorithms in the present context do not include the algorithms for social media, even though algorithms for social media analysis are sometimes simply referred to as 'social algorithm' (Lazer 2015). The social algorithms in this work are mainly natureinspired, population-based algorithms for optimization, which share many similarities with swarm intelligence. Social algorithms belong to a wider class of metaheuristic algorithms. Alan Turing, the pioneer of artificial intelligence, was the first to use heuristic in his Enigma-decoding work during the Second World War and connectionism (the essence of neural networks) as outlined in his National Physical Laboratory report Intelligent Machinery (Turing 1948). The initiation of non-deterministic algorithms was in the 1960s when evolutionary strategy and genetic algorithm started to appear, which attempted to simulate the key feature of Darwinian evolution of biological systems. For example, genetic algorithm (GA) was developed by John Holland in the 1960s (Holland, 1975), which uses crossover, mutation and selection as basic genetic operators for algorithm operations. At about the same period, Ingo Recehberg and H. P. Schwefel, developed the evolutionary strategy for constructing automatic experimenter using simple rules of mutation and selection, though crossover was not used. In around 1966, L. J. Fogel and colleagues used simulated evolution as a learning tool to study artificial intelligence, which leads to the development of evolutionary programming. All these algorithms now evolved into a much wider discipline, called evolutionary algorithms or evolutionary computation (Fogel et al. 1966). Then, simulated annealing was developed in 1983 by Kirpatrick et al. (1983) which simulated the annealing process of metals for the optimization purpose, and the Tabu search was developed by Fred Glover in 1986 (Glover 1986) that uses memory and history to enhance the search efficiency. In fact, it was Fred Glover who coined the word 'metaheuristic' in his 1986 paper. The major development in the context of social algorithms started in the 1990s. First, Marco Dorigo developed the ant colony optimization (ACO) in his PhD work (Dorigo 1992), and ACO uses the key characteristics of social ants to design procedure for optimization. Local interactions using pheromone and rules are used in ACO. Then, in 1995, particle swarm optimization was developed by James Kennedy and Russell C. Eberhardt, inspired by the swarming behaviour of fish and birds (Kennedy and Eberhart 1995). Though developed in 1997, differential evolution (DE) is not a social algorithm; however, DE has use vectorized mutation which forms a basis for many later algorithms (Storn and Price 1997). Another interesting development is that no-free-lunch (NFL) theorems was proved in 1997 by D.H. Wolpert and W.G. Macready, which had much impact in the optimization and machine learning communities (Wolpert and Macready 1997). This basically dashed the dreams for finding the best algorithms for all problems because NFL theorems state that all algorithms are equally effective if measured in terms of averaged performance for all possible problems. 
Then, researchers realized that the performance and efficiency in practice are not measured by averaging over all possible problems. Instead, we are more concerned with a particular class of problems in a particular discipline, and there is no need to use an algorithm to solve all possible problems. Consequently, for a finite set of problems and for a given few algorithms, empirical observations and experience suggest that some algorithms can perform better than others. For example, algorithms that can use problem-specific knowledge such as convexity can be more efficient than random search. Therefore, further research should identify the types of problems that a given algorithm can solve, or the most suitable algorithms for a given type of problems. Thus, research resumed and continues, just with a different emphasis and from different perspectives. At the turn of this century, developments of social algorithms became more active. In 2004, a honeybee algorithm for optimizing Internet hosting centres was developed by Sunil Nakrani and Craig Tovey (Nakrani and Tovey 2004), followed by the artificial bee colony (ABC) algorithm developed by D. Karaboga in 2005 (Karaboga 2005). All these algorithms are bee-based algorithms and they all use some (but different) aspects of the foraging behaviour of social bees. Then, in late 2007 and early 2008, the firefly algorithm (FA) was developed by Xin-She Yang, inspired by the flashing behaviour of tropical firefly species (Yang 2008). The attraction mechanism, together with the variation of light intensity, was used to produce a nonlinear algorithm that can deal with multimodal optimization problems. In 2009, cuckoo search (CS) was developed by Xin-She Yang and Suash Deb, inspired by the brood parasitism of the reproduction strategies of some cuckoo species (Yang and Deb 2009). This algorithm partly simulated the complex social interactions of cuckoo-host species co-evolution. Then, in 2010, the bat algorithm (BA) was developed by Xin-She Yang, inspired by the echolocation characteristics of microbats (Yang 2010b), and it uses frequency-tuning in combination with the variations of loudness and pulse emission rates during foraging. All these algorithms can be considered as social algorithms because they use the 'social' interactions and their biologically-inspired rules. There are other algorithms developed in the last two decades, but they are not social algorithms. As the focus of this work is on social algorithms, we will now explain some of the social algorithms in greater detail. Algorithms and Optimization In order to demonstrate the role of social algorithms in solving optimization problems, let us first briefly outline the essence of an algorithm and the general formulation of an optimization problem. Essence of an Algorithm An algorithm is a computational, iterative procedure. For example, Newton's method for finding the roots of a polynomial p(x) = 0 can be written as

x_{t+1} = x_t - \frac{p(x_t)}{p'(x_t)},

where x_t is the approximation at iteration t, and p'(x) is the first derivative of p(x). This procedure typically starts with an initial guess x_0 at t = 0. In most cases, as long as p'(x) ≠ 0 and x_0 is not too far away from the target solution, this algorithm can work very well. As we do not know the target solution x_* = lim_{t→∞} x_t in advance, the initial guess can be an educated guess or a purely random guess. However, if the initial guess is too far away, the algorithm may never reach the final solution or may simply fail. For example, for p(x) = x^2 + 9x − 10 = (x − 1)(x + 10), we know its roots are x_* = 1 and x_* = −10. We also have p'(x) = 2x + 9 and

x_{t+1} = x_t - \frac{x_t^2 + 9x_t - 10}{2x_t + 9}.

If we start from x_0 = 10, we can easily reach x_* = 1 in less than 5 iterations. If we use x_0 = 100, it may take about 8 iterations, depending on the accuracy we want. If we start from any value x_0 > 0, we can only reach x_* = 1 and we will never reach the other root x_* = −10. If we start with x_0 = −5, we can reach x_* = −10 in about 7 steps with an accuracy of 10^{-9}. However, if we start with x_0 = −4.5, the algorithm will simply fail because p'(x_0) = 2 × (−4.5) + 9 = 0. This has clearly demonstrated that the final solution will usually depend on where the initial solution is.
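The following is a small illustrative Python sketch (Python is not part of the original article) of this Newton iteration for p(x) = x^2 + 9x − 10, showing how the root reached depends on the starting point and how the iteration breaks down when p'(x_0) = 0.

```python
# Minimal sketch of the Newton iteration x_{t+1} = x_t - p(x_t)/p'(x_t)
# for p(x) = x^2 + 9x - 10: the root reached depends on the starting point,
# and the method fails when p'(x0) = 0 (e.g. x0 = -4.5).
def newton(p, dp, x0, tol=1e-9, max_iter=100):
    x = x0
    for t in range(max_iter):
        slope = dp(x)
        if slope == 0.0:
            raise ZeroDivisionError(f"p'({x}) = 0, iteration breaks down")
        x_new = x - p(x) / slope
        if abs(x_new - x) < tol:
            return x_new, t + 1
        x = x_new
    return x, max_iter

p  = lambda x: x**2 + 9*x - 10
dp = lambda x: 2*x + 9

for x0 in (10.0, 100.0, -5.0):
    root, iters = newton(p, dp, x0)
    print(f"x0 = {x0:>6}: root = {root:.6f} after {iters} iterations")
```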
This method can be modified to solve optimization problems. For example, for a single objective function f(x), the minimal and maximal values should occur at stationary points f'(x) = 0, which becomes a root-finding problem for f'(x). Thus, the maximum or minimum of f(x) can be found by modifying Newton's method as the following iterative formula:

x_{t+1} = x_t - \frac{f'(x_t)}{f''(x_t)}.

For a D-dimensional problem with an objective f(x) with independent variables x = (x_1, x_2, ..., x_D), the above iteration formula can be generalized to a vector form

x^{t+1} = x^t - [∇²f(x^t)]^{-1} ∇f(x^t),

where we have used the notation convention x^t to denote the current solution vector at iteration t (not to be confused with an exponent). In general, an algorithm A can be written as

x^{t+1} = A(x^t, x_*, p_1, p_2, ..., p_K),

which represents the fact that the new solution vector is a function of the existing solution vector x^t, some historical best solution x_* during the iteration history and a set of algorithm-dependent parameters p_1, p_2, ..., p_K. The exact function forms will depend on the algorithm, and different algorithms differ only in terms of the function form, the number of parameters and the ways of using historical data. Optimization In general, an optimization problem can be formulated in a D-dimensional design space as

minimize f(x), x = (x_1, x_2, ..., x_D) ∈ R^D,
subject to h_i(x) = 0 (i = 1, 2, ..., M), g_j(x) ≤ 0 (j = 1, 2, ..., N),

where h_i and g_j are the equality constraints and inequality constraints, respectively. In a special case when the problem functions f(x), h_i(x) and g_j(x) are all linear, the problem becomes linear programming, which can be solved efficiently by using George Dantzig's simplex method. However, in most cases, the problem functions f(x), h_i(x) and g_j(x) are all nonlinear, and such nonlinear optimization problems can be challenging to solve. There is a wide class of optimization techniques, including linear programming, quadratic programming, convex optimization, interior-point methods, trust-region methods and conjugate-gradient methods (Süli and Mayer 2003, Yang 2010c), as well as evolutionary algorithms (Goldberg 1989), heuristics (Judea 1984) and metaheuristics (Yang 2008, Yang 2014b). An interesting way of looking at algorithms and optimization is to consider an algorithm system as a complex, self-organized system (Ashby 1962, Keller 2009), but nowadays researchers tend to look at algorithms from the point of view of swarm intelligence (Kennedy et al. 2001, Engelbrecht 2005, Fisher 2009, Yang 2014b). Traditional Algorithms or Social Algorithms? As there are many traditional optimization techniques, a natural question is why we need new algorithms such as social algorithms. One may wonder what is wrong with traditional algorithms? A short answer is that there is nothing wrong.
Extensive literature and studies have demonstrated that traditional algorithms work quite well for many different types of problems, but they do have some serious drawbacks: • Traditional algorithms are mostly local search methods, so there is no guarantee of global optimality for most optimization problems, except for linear programming and convex optimization. Consequently, the final solution will often depend on the initial starting points (except for linear programming and convex optimization). • Traditional algorithms tend to be problem-specific because they usually use some information, such as derivatives, about the local objective landscape. They cannot solve highly nonlinear, multimodal problems effectively, and they struggle to cope with problems with discontinuity, especially when gradients are needed. • These algorithms are largely deterministic, and thus the exploitation ability is high, but their exploration ability and diversity of solutions are low. Social algorithms, in contrast, attempt to avoid these disadvantages by using a population-based approach with non-deterministic or stochastic components to enhance their exploration ability. Compared with traditional algorithms, metaheuristic social algorithms are mainly designed for global search and tend to have the following advantages and characteristics: • Almost all social algorithms are global optimizers, so it is more likely that they will find the true global optimum. They are usually gradient-free methods that do not use any derivative information, and thus they can deal with highly nonlinear problems and problems with discontinuities. • They often treat problems as a black box without specific knowledge, and thus they can solve a wider range of problems. • Stochastic components in such algorithms can increase the exploration ability and also enable the algorithms to escape any local modes (thus avoiding being trapped locally). The final solutions tend to 'forget' the starting points and are thus independent of any initial guess and of incomplete knowledge of the problem under consideration. Though they have obvious advantages, social algorithms do have some disadvantages. For example, the computational efforts of these algorithms tend to be higher than those for traditional algorithms because more iterations are needed. Due to their stochastic nature, the final solutions obtained by such algorithms cannot be repeated exactly, and multiple runs should be carried out to ensure consistency and some meaningful statistical analysis. Social Algorithms The literature of social algorithms and swarm intelligence is expanding rapidly; here we will introduce some of the most recent and widely used social algorithms. Ant Colony Optimization Ants are social insects that live together in well-organized colonies with a population size ranging from about 2 million to 25 million. Ants communicate with each other and interact with their environment in a swarm using local rules and scent chemicals or pheromone. There is no centralized control. Such a complex system with local interactions can self-organize with emerging behaviour, leading to some form of social intelligence. Based on these characteristics, ant colony optimization (ACO) was developed by Marco Dorigo in 1992 (Dorigo 1992), and ACO attempts to mimic the foraging behaviour of social ants in a colony. Pheromone is deposited by each agent, and such a chemical will also evaporate. The model for pheromone deposition and evaporation may vary slightly, depending on the variant of ACO. However, in most cases, incremental deposition and exponential decay are used in the literature. From the implementation point of view, for example, a solution in a network optimization problem can be a path or route. Ants will explore the network paths and deposit pheromone as they move. The quality of a solution is related to the pheromone concentration on the path. At the same time, pheromone will evaporate as (pseudo)time increases. At a junction with multiple routes, the probability of choosing a particular route is determined by a decision criterion, depending on the normalized concentration of the route and the relative fitness of this route compared with all others. For example, in most studies, the probability p_ij of choosing a route from node i to node j can be calculated by

p_{ij} = \frac{\phi_{ij}^{\alpha} d_{ij}^{\beta}}{\sum_{i,j} \phi_{ij}^{\alpha} d_{ij}^{\beta}},

where α, β > 0 are the so-called influence parameters, and φ_ij is the pheromone concentration on the route between i and j. In addition, d_ij is the desirability of the route (for example, the distance of the overall path). In the simplest case when α = β = 1, the choice probability is simply proportional to the pheromone concentration. It is worth pointing out that ACO is a mix of procedures and some simple equations, such as pheromone deposition and evaporation as well as the path-selection probability. ACO has been applied to many applications from scheduling to routing problems (Dorigo 1992).
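As an illustration of these ingredients, the following is a minimal Python sketch (not a full ACO implementation) of exponential pheromone evaporation with incremental deposition and the route-choice probability given above. The toy graph, NumPy usage and parameter values are placeholders for illustration only.

```python
# Illustrative sketch of ACO ingredients: route choice with probability
# proportional to phi_ij^alpha * d_ij^beta, exponential evaporation, and
# incremental deposition on the edges of a tour.
import numpy as np

rng = np.random.default_rng(1)
n_nodes = 5
alpha, beta = 1.0, 1.0          # influence parameters
rho, Q = 0.1, 1.0               # evaporation rate and deposition constant

dist = rng.uniform(1.0, 10.0, size=(n_nodes, n_nodes))   # toy distances
np.fill_diagonal(dist, np.inf)
desirability = 1.0 / dist                                 # shorter edge = more desirable
pheromone = np.ones((n_nodes, n_nodes))

def choose_next(current, visited):
    """Pick the next node with probability proportional to phi^alpha * d^beta."""
    weights = pheromone[current] ** alpha * desirability[current] ** beta
    weights[list(visited)] = 0.0
    return rng.choice(n_nodes, p=weights / weights.sum())

# One ant builds a tour, then pheromone evaporates and is deposited on its edges.
tour = [0]
while len(tour) < n_nodes:
    tour.append(choose_next(tour[-1], set(tour)))
length = sum(dist[a, b] for a, b in zip(tour, tour[1:]))

pheromone *= (1.0 - rho)                      # exponential evaporation
for a, b in zip(tour, tour[1:]):
    pheromone[a, b] += Q / length             # deposit more on shorter tours
print("tour:", tour, "length:", round(length, 2))
```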
Particle Swarm Optimization Many swarms in nature, such as fish and birds, can have higher-level behaviour, but they all obey simple rules. For example, a swarm of birds such as starlings simply follow three basic rules: each bird flies according to the flight velocities of its neighbour birds (usually about seven adjacent birds), birds on the edge of the swarm tend to fly into the centre of the swarm (so as to avoid being eaten by potential predators such as eagles), and birds tend to fly to search for food or shelter, so a short memory is used. Based on such swarming characteristics, particle swarm optimization (PSO) was developed by Kennedy and Eberhart in 1995, and it uses equations to simulate the swarming characteristics of birds and fish (Kennedy and Eberhart 1995). For ease of the discussions below, let us use x_i and v_i to denote the position (solution) and velocity, respectively, of a particle or agent i. In PSO, there are n particles as a population, thus i = 1, 2, ..., n. There are two equations for updating the positions and velocities of the particles, and they can be written as follows:

v_i^{t+1} = v_i^t + α ε_1 (g_* − x_i^t) + β ε_2 (x_i^* − x_i^t),
x_i^{t+1} = x_i^t + v_i^{t+1},

where ε_1 and ε_2 are two uniformly distributed random numbers in [0,1]. The learning parameters α and β are usually in the range of [0,2]. In the above equations, g_* is the best solution found so far by all the particles in the population, and each particle has an individual best solution x_i^* found by itself during its entire past iteration history. It is clearly seen that the above algorithmic equations are linear in the sense that both equations depend only on x_i and v_i linearly. PSO has been applied in many applications, and it has been extended to solve multiobjective optimization problems (Kennedy et al. 2001, Engelbrecht 2005). However, there are some drawbacks because PSO can often have so-called premature convergence when the population loses diversity and thus gets stuck locally. Consequently, there are more than 20 different variants that try to remedy this with various degrees of improvement.
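A minimal NumPy sketch of these two update equations is given below; the sphere function is used as a placeholder objective and the parameter values are purely illustrative (neither is prescribed by the article).

```python
# Minimal sketch of the PSO updates: velocity pulled towards the swarm best g*
# and each particle's personal best x_i*, then a position update.
import numpy as np

rng = np.random.default_rng(2)
f = lambda X: np.sum(X**2, axis=1)          # placeholder objective: sphere function

n, dim, alpha, beta = 20, 5, 1.5, 1.5
X = rng.uniform(-5, 5, (n, dim))            # positions (candidate solutions)
V = np.zeros((n, dim))                      # velocities
pbest, pbest_val = X.copy(), f(X)
g = X[np.argmin(pbest_val)].copy()          # swarm best g*

for t in range(100):
    eps1, eps2 = rng.random((n, dim)), rng.random((n, dim))
    V = V + alpha * eps1 * (g - X) + beta * eps2 * (pbest - X)
    X = X + V
    vals = f(X)
    improved = vals < pbest_val             # update personal bests
    pbest[improved], pbest_val[improved] = X[improved], vals[improved]
    g = pbest[np.argmin(pbest_val)].copy()  # update swarm best

print("best value:", pbest_val.min())
```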
Bees-inspired Algorithms Bees such as honeybees live in colonies and there are many subspecies of bees. Honeybees have three castes, including worker bees, queens and drones. The division of labour among bees is interesting: worker bees forage, clean the hive and defend the colony, and they have to collect and store honey. Honeybees communicate by pheromone, the 'waggle dance' and other local interactions, depending on the species. Based on the foraging and social interactions of honeybees, researchers have developed various forms and variants of bees-inspired algorithms. The first use of bees-inspired algorithms was probably by S. Nakrani and C. A. Tovey in 2004 to study web-hosting servers (Nakrani and Tovey 2004), followed by variants such as the bees algorithm (Pham et al. 2005) and the artificial bee colony (ABC) algorithm (Karaboga 2005). For example, in ABC, the bees are divided into three groups: forager bees, onlooker bees and scouts. For each food source, there is one forager bee, who shares information with onlooker bees after returning to the colony from foraging, and the number of forager bees is equal to the number of food sources. Scout bees fly randomly to explore, while a forager at a scarce food source may have to be forced to become a scout bee. The generation of a new solution v_{i,k} is done by

v_{i,k} = x_{i,k} + φ (x_{i,k} − x_{j,k}),

which is updated for each dimension k = 1, 2, ..., D for different solutions (e.g., i and j) in a population of n bees (i, j = 1, 2, ..., n). Here, φ is a random number in [-1,1]. A food source is chosen by a roulette-based probability criterion, while a scout bee uses a Monte Carlo style randomization between the lower bound (L) and the upper bound (U):

x_{i,k} = L_k + r (U_k − L_k),

where k = 1, 2, ..., D and r is a uniformly distributed random number in [0,1]. Bees-inspired algorithms have been applied in many applications with diverse characteristics and variants (Pham et al. 2005, Karaboga 2005). Bat Algorithm Bats are the only mammals with wings, and it is estimated that there are about 1000 different bat species. Their sizes can range from tiny bumblebee bats to giant bats. Most bat species use echolocation to a certain degree, though microbats use echolocation extensively for foraging and navigation. Microbats emit a series of loud, ultrasonic sound pulses and listen to their echoes to 'see' their surroundings. The pulse properties vary and correlate with their hunting strategies. Depending on the species, pulse emission rates will increase when homing in on prey, with frequency-modulated short pulses (thus varying wavelengths to increase the detection resolution). Each pulse may last about 5 to 20 milliseconds with a frequency range of 25 kHz to 150 kHz, and the spatial resolution can be as small as a few millimetres, comparable to the size of the insects they hunt. The bat algorithm (BA), developed by Xin-She Yang in 2010, uses some characteristics of the frequency-tuning and echolocation of microbats (Yang 2010b). It also uses the variations of the pulse emission rate r and loudness A to control exploration and exploitation. In the bat algorithm, the main algorithmic equations for the position x_i and velocity v_i of bat i are

f_i = f_min + (f_max − f_min) β,
v_i^{t+1} = v_i^t + (x_i^t − x_*) f_i,
x_i^{t+1} = x_i^t + v_i^{t+1},

where β ∈ [0, 1] is a random vector drawn from a uniform distribution so that the frequency can vary from f_min to f_max. Here, x_* is the current best solution found so far by all the virtual bats. From the above equations, we can see that both updating equations are linear in terms of x_i and v_i. However, the control of exploration and exploitation is carried out by the variation of the loudness A(t) from a high value to a lower value and of the emission rate r from a lower value to a higher value. That is,

A_i^{t+1} = α A_i^t,   r_i^{t+1} = r_i^0 [1 − exp(−γ t)],

where 0 < α < 1 and γ > 0 are two parameters. As a result, the actual algorithm can have a weak nonlinearity. Consequently, BA can have a faster convergence rate in comparison with PSO. BA has been extended to multiobjective optimization and hybrid versions (Yang 2011, Yang 2014b).
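The sketch below illustrates one common way to organize these updates into a loop: frequency-tuned velocity updates, an occasional local walk around the current best scaled by the mean loudness, and the loudness/pulse-rate schedules. The sphere objective, NumPy usage and parameter values are placeholders and not prescribed by the article.

```python
# Illustrative sketch of bat-algorithm updates: frequency tuning, a local walk
# around the best solution, loudness-gated acceptance, and the A/r schedules.
import numpy as np

rng = np.random.default_rng(3)
f = lambda X: np.sum(X**2, axis=1)           # placeholder objective

n, dim = 20, 5
f_min, f_max = 0.0, 2.0                      # frequency range
alpha, gamma = 0.9, 0.9                      # loudness decay / pulse-rate growth
A, r0 = np.full(n, 1.0), np.full(n, 0.5)     # loudness and initial emission rate
r = r0.copy()

X = rng.uniform(-5, 5, (n, dim))
V = np.zeros((n, dim))
vals = f(X)
best = X[np.argmin(vals)].copy()

for t in range(1, 101):
    freq = f_min + (f_max - f_min) * rng.random((n, 1))
    V = V + (X - best) * freq
    X_new = X + V
    # Local random walk around the best solution when rand > r_i.
    local = rng.random(n) > r
    X_new[local] = best + 0.01 * A.mean() * rng.standard_normal((local.sum(), dim))
    new_vals = f(X_new)
    # Accept improved solutions with probability related to loudness A_i.
    accept = (new_vals < vals) & (rng.random(n) < A)
    X[accept], vals[accept] = X_new[accept], new_vals[accept]
    A[accept] *= alpha                                   # A_i^{t+1} = alpha * A_i^t
    r[accept] = r0[accept] * (1 - np.exp(-gamma * t))    # r_i^t = r_i^0 [1 - exp(-gamma t)]
    best = X[np.argmin(vals)].copy()

print("best value:", vals.min())
```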
Firefly Algorithm There are about 2000 species of fireflies and most species produce short, rhythmic flashes by bioluminescence. Each species can have different flashing patterns and rhythms, and one of the main functions of such flashing light is to act as a signaling system to communicate with other fireflies. As the light intensity in the night sky decreases as the distance from the flashing source increases, the range of visibility is typically a few hundred metres, depending on weather conditions. The attractiveness of a firefly is usually linked to the brightness of its flashes and the timing accuracy of its flashing patterns. Based on the above characteristics, Xin-She Yang developed the firefly algorithm (FA) in 2008 (Yang 2008, Yang 2010a). FA uses a nonlinear system by combining the exponential decay of light absorption and the inverse-square law of light variation with distance. In the FA, the main algorithmic equation for the position x_i (as a solution vector to a problem) is

x_i^{t+1} = x_i^t + β_0 e^{−γ r_{ij}^2} (x_j^t − x_i^t) + α ε_i^t,   (17)

where α is a scaling factor controlling the step sizes of the random walks, while γ is a scale-dependent parameter controlling the visibility of the fireflies (and thus the search modes). In addition, β_0 is the attractiveness constant when the distance between two fireflies is zero (i.e., r_ij = 0). This system is a nonlinear system, which may lead to rich characteristics in terms of algorithmic behaviour. Since the brightness of a firefly is associated with the objective landscape, with its position as the indicator, the attractiveness of a firefly as seen by others depends on their relative positions and relative brightness. Thus, the beauty is in the eye of the beholder. Consequently, a pairwise comparison is needed for comparing all fireflies. The main steps of FA can be summarized as the pseudocode in Algorithm 1.

Initialize all the parameters α, β, γ, n;
Initialize a population of n fireflies;
Determine the light intensity/fitness at x_i by f(x_i);
while t < MaxGeneration do
    for all fireflies (i = 1 : n) do
        for all other fireflies (j = 1 : n) with j ≠ i (inner loop) do
            if firefly j is better/brighter than i then
                Move firefly i towards j using Eq. (17);
            end
        end
        Evaluate the new solution;
        Accept the new solution if better;
    end
    Rank and update the best solution found;
    Update the iteration counter t ← t + 1;
    Reduce α (randomness strength) by a factor;
end
Algorithm 1: Firefly algorithm.

It is worth pointing out that α is a parameter controlling the strength of the randomness or perturbations in FA. The randomness should be gradually reduced to speed up the overall convergence. Therefore, we can use

α = α_0 δ^t,

where α_0 is the initial value and 0 < δ < 1 is a reduction factor. In most cases, we can use δ = 0.9 to 0.99, depending on the type of problem and the desired quality of solutions.
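A minimal Python sketch of the firefly move in Eq. (17), with the pairwise brightness comparison of Algorithm 1 and the gradual reduction of α, is given below. The sphere objective, NumPy usage and parameter values are illustrative placeholders only.

```python
# Illustrative sketch of the firefly algorithm: firefly i moves towards any
# brighter firefly j with attraction beta0*exp(-gamma*r_ij^2) plus a random
# perturbation whose strength alpha shrinks by a factor delta each generation.
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sum(x**2)                   # placeholder objective

n, dim = 15, 5
beta0, gamma = 1.0, 1.0
alpha, delta = 0.5, 0.97

X = rng.uniform(-5, 5, (n, dim))
intensity = np.array([f(x) for x in X])      # lower objective value = brighter firefly

for t in range(100):
    for i in range(n):
        for j in range(n):
            if intensity[j] < intensity[i]:              # j is brighter than i
                r2 = np.sum((X[i] - X[j])**2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] = X[i] + beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                intensity[i] = f(X[i])
    alpha *= delta                                       # alpha_t = alpha_0 * delta^t
print("best value:", intensity.min())
```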
If we look at Eq. (17) closely, we can see that γ is an important scaling parameter. At one extreme, we can set γ = 0, which means that there is no exponential decay and thus the visibility is very high (all fireflies can see each other). At the other extreme, when γ ≫ 1, the visibility range is very short. Fireflies are then essentially flying in a dense fog and they cannot see each other. Thus, each firefly flies independently and randomly. Therefore, a good value of γ should be linked to the scale or limits of the design variables so that the fireflies within a range are visible to each other. This range is determined by

γ = 1/√L,

where L is the typical size of the search domain or the radius of a typical mode shape in the objective landscape. If there is no prior knowledge about its possible scale, we can start with γ = 1 for most problems. In fact, since FA is a nonlinear system, it has the ability to automatically subdivide the whole swarm into multiple subswarms. This is because short-distance attraction is stronger than long-distance attraction, and the division of the swarm is related to the mean range of the attractiveness variations. After division into multi-swarms, each subswarm can potentially swarm around a local mode. Consequently, FA is naturally suitable for multimodal optimization problems. Furthermore, there is no explicit use of the best solution g_*; thus, selection is through the comparison of relative brightness according to the rule of 'beauty is in the eye of the beholder'. It is worth pointing out that FA has some significant differences from PSO. Firstly, FA is nonlinear, while PSO is linear. Secondly, FA has an ability of multi-swarming, while PSO does not. Thirdly, PSO uses velocities (and thus has some drawbacks), while FA does not use velocities. Finally, FA has some scaling control via γ, while PSO has no scaling control. All these differences enable FA to search the design space more effectively for multimodal objective landscapes. FA has been applied to a diverse range of applications and has been extended to multiobjective optimization and hybridization with other algorithms (Yang 2014a, Yang 2014b). Cuckoo Search In the natural world, among 141 cuckoo species, 59 species engage in the so-called obligate brood parasitism (Davies 2011). These cuckoo species do not build their own nests and they lay their eggs in the nests of host birds such as warblers. Sometimes, host birds can spot the alien eggs laid by cuckoos and thus can get rid of the eggs or abandon the nest by flying away to build a new nest in a new location, so as to reduce the possibility of raising an alien cuckoo chick. The eggs of cuckoos can be sufficiently similar to the eggs of host birds in terms of size, colour and texture so as to increase the survival probability of cuckoo eggs. In reality, about 1/5 to 1/4 of the eggs laid by cuckoos will be discovered and abandoned by hosts. In fact, there is an arms race between cuckoo species and host species, forming an interesting cuckoo-host species co-evolution system. Based on the above characteristics, Yang and Deb developed the cuckoo search (CS) algorithm in 2009 (Yang and Deb 2009). CS uses a combination of both local and global search capabilities, controlled by a discovery probability p_a. There are two algorithmic equations in CS, and one equation is

x_i^{t+1} = x_i^t + α s ⊗ H(p_a − ε) ⊗ (x_j^t − x_k^t),

where x_j^t and x_k^t are two different solutions selected randomly by random permutation, H(u) is a Heaviside function, ε is a random number drawn from a uniform distribution, and s is the step size. This step is primarily local, though it can become a global search if s is large enough. However, the main global search mechanism is realized by the other equation with Lévy flights:

x_i^{t+1} = x_i^t + α L(s, λ),

where the Lévy flights are simulated by drawing random numbers from a Lévy distribution

L(s, λ) ∼ \frac{λ Γ(λ) \sin(πλ/2)}{π} \frac{1}{s^{1+λ}},   (s ≫ 0).

Here α > 0 is the step size scaling factor.
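The following is a compact, illustrative Python sketch of these two moves: Lévy-flight steps (generated here with Mantegna's algorithm, one common way to approximate Lévy-stable steps, which is not specified in the article) and the local move gated by the discovery probability p_a. The objective, NumPy usage and parameter values are placeholders.

```python
# Illustrative sketch of cuckoo search: a global move by Levy flights and a
# local move in which a fraction p_a of components is perturbed with the
# difference of two randomly permuted solutions; better solutions are kept.
import numpy as np
from math import gamma as Gamma

rng = np.random.default_rng(5)
f = lambda X: np.sum(X**2, axis=1)                      # placeholder objective

def levy_steps(shape, lam=1.5):
    """Mantegna-style approximation of Levy steps with exponent lam."""
    sigma = (Gamma(1 + lam) * np.sin(np.pi * lam / 2) /
             (Gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.standard_normal(shape) * sigma
    v = rng.standard_normal(shape)
    return u / np.abs(v) ** (1 / lam)

n, dim, alpha, p_a = 15, 5, 0.01, 0.25
X = rng.uniform(-5, 5, (n, dim))
vals = f(X)

for t in range(200):
    # Global move: x_i^{t+1} = x_i^t + alpha * L(s, lambda).
    X_new = X + alpha * levy_steps((n, dim))
    # Local move: H(p_a - eps) realized as an elementwise mask.
    j, k = rng.permutation(n), rng.permutation(n)
    mask = rng.random((n, dim)) < p_a
    s = rng.random()                                     # step size of the local move
    X_new = X_new + s * mask * (X[j] - X[k])
    new_vals = f(X_new)
    better = new_vals < vals
    X[better], vals[better] = X_new[better], new_vals[better]

print("best value:", vals.min())
```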
By looking at the equations in CS carefully, we can clearly see that CS is a nonlinear system due to the Heaviside function, discovery probability and Lévy flights. There is no explicit use of global best Other Algorithms As we mentioned earlier, the literature is expanding and more social algorithms are being developed by researchers, but we will not introduce more algorithms here. Instead, we will focus on summarizing the key characteristics of social algorithms and other population-based algorithms so as to gain a deeper understanding of these algorithms. Characteristics of Social Algorithms Though different social algorithms have different characteristics and inspiration from nature, they do share some common features. Now let us look at these algorithms in terms of their basic steps, search characteristics and algorithm dynamics. • All social algorithms use a population of multiple agents (e.g., particles, ants, bats, cuckoos, fireflies, bees, etc.), each agent corresponds to a solution vector. Among the population, there is often the best solution g * in terms of objective fitness. Different solutions in a population represent both diversity and different fitness. • The evolution of the population is often achieved by some operators (e.g., mutation by a vector or by randomization), often in terms of some algorithmic formulas or equations. Such evolution is typically iterative, leading to evolution of solutions with different properties. When all solutions become sufficiently similar, the system can be considered as converged. • All algorithms try to carry out some sort of both local and global search. If the search is mainly local, it increases the probability of getting stuck locally. If the search focuses too much on global moves, it will slow down the convergence. Different algorithms may use different amount of randomization and different portion of moves for local or global search so as to balance exploitation and exploration (Blum and Roli 2001). • Selection of the better or best solutions is carried out by the 'survival of the fittest' or simply elitism so that the best solution g * is kept in the population in the next generation. Such selection essentially acts a driving force to drive the diverse population into a converged population with reduced diversity but with a more organized structure. These basic components, characteritics and their properties can be summarized in Table 1. No Free Lunch Theorems Though there are many algorithms in the literature, different algorithms can have different advantages and disadvantages and thus some algorithms are more suitable to solve certain types of problems than others. However, it is worth pointing out that there is no single algorithm that can be most efficient to solve all types of problems as dictated by the no-free-lunch (NFL) theorems (Wolpert and Macready, 1997). Even the no-free-lunch theorems hold under certain conditions, but these conditions may not be rigorously true for actual algorithms. For example, one condition for proving these theorems is the so-called no-revisiting condition. That is, the points during iterations form a path, and these points are distinct and will not be visited exactly again, though their nearby neighbourhood can be revisited. This condition is not strictly valid because almost all algorithms for continuous optimization will revisit some of their points in history. Such minor violation of assumptions can potentially leave room for free lunches. 
It has also been shown that under the right conditions, such as co-evolution, certain algorithms can be more effective (Wolpert and Macready 2005). In addition, as a comprehensive review of the no-free-lunch (NFL) theorems by T. Joyce and J. M. Herrmann points out (Joyce and Herrmann 2018), free lunches may exist for a finite set of problems, especially for algorithms that can exploit the structure of the objective landscape and knowledge of the optimization problems to be solved. If the performance is not averaged over all possible problems, then free lunches can exist. In fact, for a given finite set of problems and a finite set of algorithms, the comparison is essentially equivalent to a zero-sum ranking problem. In this case, some algorithms can perform better than others for solving certain types of problems. In fact, almost all published comparisons of algorithms use a handful of algorithms and a finite set of benchmarks (usually fewer than 100), so such comparisons are essentially rankings. However, it is worth pointing out that, for a finite set of benchmarks, the conclusions (e.g., rankings) obtained apply only to that set of benchmarks; they may not be valid for other benchmark sets, where the conclusions can be significantly different. If interpreted in this sense, such comparison studies and their conclusions are consistent with the NFL theorems.

Future Directions

The research area of social algorithms is very active, and there are many hot topics for further research concerning these algorithms. Here we highlight a few:

• Theoretical framework: Though there are many studies concerning the implementations and applications of social algorithms, mathematical analysis of such algorithms lags behind. There is a strong need to build a theoretical framework to analyze these algorithms mathematically so as to gain an in-depth understanding (He et al. 2017). For example, it is not clear how local rules can lead to the rise of self-organized structure in algorithms. More generally, a key understanding of how social intelligence and swarm intelligence arise in a multi-agent system, and under exactly what conditions, is still lacking.

• Parameter tuning: Almost all algorithms have algorithm-dependent parameters, and the performance of an algorithm is largely influenced by its parameter setting. Some parameters may have a stronger influence than others (Eiben and Smit 2011). Therefore, proper parameter tuning and sensitivity analysis are needed to tune algorithms to their best. However, such tuning is largely done by trial and error in combination with empirical observations. How to tune parameters quickly and automatically is still an open question.

• Large-scale applications: Despite the success of social algorithms and their diverse applications, most studies in the literature have concerned problems of moderate size, with the number of variables up to a few hundred at most. In real-world applications there may be thousands or even millions of design variables, and it is not yet clear how these algorithms can be scaled up to solve such large-scale problems. In addition, though social algorithms have been applied to combinatorial problems such as scheduling and the travelling salesman problem with promising results, these problems are typically NP-hard, and thus for larger problem sizes they can be very challenging to solve. Researchers are not yet sure how to modify existing social algorithms to cope with such challenges.
• Hybrid and co-evolutionary algorithms: The algorithms covered here are 'pure' and 'standard' in the sense that they have not been heavily modified by hybridization with others. Both empirical observations and studies show that combining the advantages of two or more different algorithms can produce a better hybrid, which exploits the distinct strengths of its component algorithms and potentially avoids their drawbacks. In addition, it is possible to build a proper algorithmic system that allows a few algorithms to co-evolve and obtain an overall better performance. Though the NFL theorems may hold for simple algorithms, it has been shown that there can be free lunches for co-evolutionary algorithms (Wolpert and Macready 2005). Therefore, future research can focus on how to assemble different algorithms into an efficient co-evolutionary system and then tune the system to its best.

• Self-adaptive and self-evolving algorithms: Sometimes the parameters in an algorithm can vary to suit different types of problems. In addition to parameter tuning, such parameter adaptivity can be advantageous. There are some basic forms of adaptive algorithms in the literature, and they mainly use random variations of parameters within a fixed range. Ideally, an algorithm should be self-adaptive and able to tune itself automatically to a given type of problem without much supervision from the user. Such algorithms should also be able to evolve by learning from their past performance histories. The ultimate aim for researchers is to build a set of self-adaptive, self-tuning, self-learning and self-evolving algorithms that can solve a diverse range of real-world applications efficiently and quickly in practice.

Social algorithms have become a powerful tool set for solving optimization problems, and their study forms an active area of research. It is hoped that this work can inspire more research concerning the important topics above.
2018-04-22T13:44:18.000Z
2018-04-22T00:00:00.000
{ "year": 2018, "sha1": "070e68dc78ee89f8d9ce283f76125b65d03847ea", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1805.05855", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "070e68dc78ee89f8d9ce283f76125b65d03847ea", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
74362479
pes2o/s2orc
v3-fos-license
The Diencephalic Syndrome of Russell : A Case Report Diencephalic Syndrome (DS) is also known as Russells Syndrome. This is associated with marked emaciation, locomotor hyperactivity, vomiting, and absence of obvious neurological signs and loss of subcutaneous fat. A 15-month old child who presented with hyperactivity, loss of weight and failure to thrive since bi rth is reported. On Computed Tomography he had a large supra sellar mass with extension into the third ventricle causing gross hydrocephalus. He underwent biventricular shunting followed by microscopic near total excision of the tumor. The histopathology revealed it to be fibrillary meningioma. Although DS is uncommon it must be kept as a differential diagnosis in all children who fail to grow despite adequate intake. Introduction D iencephalic syndrome (DS) which is also known as Russell syndrome was fi rst described in 1951 1 .This syndrome consisted of hyperkinesis, increased motor activity, loss of subcutaneous fat and marked emaciation with normal food intake.This disorder affects the age groups from early infancy to childhood (mean age of onset = 6.2 months) with 60% affected within fi rst 6 months, 26% in the second 6 months and the rest within 3 years.One of the earliest series was by Addy and Hudson who reviewed and summarized 48 similar cases in 1972 2 .Details of DS were described by Burr et al., who reported fi ve of their own cases and summarized 67 from the literature 3 .Their diagnosis was done on the basis of skull and optic foramina plain Skiagrams, pneumoencephalography and fi nal confi rmation with surgery.DS is usually associated with sellar, hypothalamic, optic chiasmal, third ventricular and even posterior fossa tumors 4,5,6 .Due to the rarity of this syndrome the literature consists of case reports and series only. The Case A 15-month old child was brought to the clinic with failure to thrive, loss of weight and increasing head size for the past 8 months.There was no history of fever, vomiting, seizure or other signifi cant antenatal history.He was delivered at full term, vaginal, without complications in this hospital.There was no family history of neurofi bromatosis.On examination the child was alert and crying with head circumference of 50 centimeters.The anterior fontanel was wide and tense with dilated scalp veins.(Figure 1) Gross emaciation was present with absence of subcutaneous fat.Computed Tomography (CT) revealed a large suprasellar mass extending to the third ventricle causing biventricular obstruction.The tumor was well capsulated with ring enhancement and central hypodense area on contrast scan.There was gross hydrocephalus and severe papilledema.Nystagmus was absent.Magnetic Resonance Imaging was not done because the parents could not afford to do it.The child underwent biventricular ventriculoperitoneal shunting without any complications. He was followed up again after two months and was found to have moderate increased appetite, weight gain and partial reduction in head circumference (Fig. 2).The CT showed normal placement of the biventricular shunts and with no change in the tumor characteristics (Fig. 
3).Pterional craniotomy and gross microscopic excision of the tumor was done.Intraoperatively there was a thick vascular capsule with thick, vascular semisolid jelly like contents.There was no local infi ltration and the tumor was dissected free from the optic nerve, third cranial nerve, fi fth cranial nerve and the left internal carotid branches.Postoperatively he was well except for fever.He developed seizure on the second day which turned to status epilepticus which was controlled but he developed aspiration pneumonia and despite all efforts he expired.The histopathology revealed the tumor to be fi brillary type of meningioma (Fig. 4). Discussion Diencephalic syndrome is a rare cause for growth retardation in children who despite adequate intake are unable to grow normally.The common causes are tumors located in optico chiasmatic hypothalamic and the third ventricular areas 4,5,6 .The tumors associated with DS range from low grade pilocytic astrocytoma to malignant differentiated astrocytomas.The differentials include glioma (majority of which are astrocytomas), ependymoma, dysgerminoma, ganglioglioma and other rare tumors like spongioblastomas, astroblastomas, oligodendrogliomas, mixed astrocytoma/ spongioblastoma and mixed astrocytoma/ oligodendroglioma.DS has also been associated with lesions in the posterior fossa, and thalamic tumors 6 .Rare cases of its occurrence in adults secondary to third ventricular craniopharyngioma has also been reported 7 . These tumors usually manifest before the fi rst year of life although delayed presentation have also been reported.In a series by Poussaint et al the mean age of presentation was 26 months 6 .The common clinical manifestation of DS are progressive emaciation, altered appearance, hyperactivity, vomiting, euphoria, pallor, nystagmus and hydrocephalus.The possible cause for failure to thrive is partial deafferentiation of the hypothalamic pathway by the tumor leading to altered feeding habits, increased metabolic rates and hormonal changes.Most of the studies have reported growth hormone resistance and other hormonal abnormalities 8,9 .The normal IGF-1 levels and consistent linear growth in DS suggest a more selective GH-resistant state than in anorexia nervosa and other forms of emaciation.This fi nding also differentiates this diagnosis from that of other chronic illnesses or oncologic processes 10 . Many of these tumors are unresectable due to their location and proximity to vital neurovascular structures.Some of them have been known to be multifocal in nature with even spinal dissemination 11 .This implies that a search be made for distal metastasis in all cases of tumors associated with DS and presenting with atypical symptoms.All cases with high grade glioma and residual tumor postoperatively need chemotherapy including carboplatin and vincristine which may lead to clinical improvement in these patients 12 .Radiotherapy has also been proven to be an useful adjunct although the probability of transformation into a higher grade though small is still present 13 . Meningiomas are rare in children and representing 0.4-4.1% of pediatric tumors and 1.5-1.8% of all intracranial meningiomas 14 .Most of them are atypical and malignant in nature when compared with adult meningiomas.With complete excision the recurrence rate is low and the majority is associated with neurofi bromatosis. 
Although DS is uncommon it must be kept as a differential diagnosis in all children who fail to grow despite adequate intake.This is important in developing countries where the failure of growth is always attributed to malnutrition.In our case too the child would have passed as such if the there had been no enlargement of the head which led the parents for treatment.Early investigation, imaging and management can help to treat these children and assist them to lead a near normal life.Those with unresectable tumors need to undergo further chemotherapy or radiotherapy with long term followup. Fig. 3 : Fig. 3: CT showing the normal placement of biventricular shunt and the tumor in the third ventricle. Fig. 2 : Fig. 2: Picture showing the child with emaciation, loss of subcutaneous fat and normal linear growth.The abdominal end of the VP shunt is also seen (arrow).
2018-03-23T04:57:29.013Z
2009-12-25T00:00:00.000
{ "year": 2009, "sha1": "90e3e0358576b4d7268adc31c606c8e219d434d5", "oa_license": "CCBY", "oa_url": "https://www.nepjol.info/index.php/JNPS/article/download/2464/2198", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "90e3e0358576b4d7268adc31c606c8e219d434d5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119270651
pes2o/s2orc
v3-fos-license
Energy transport and fluctuations in small conductors The Landauer-B\"uttiker formalism provides a simple and insightful way for investigating many phenomena in mesoscopic physics. By this approach we derive general formulas for the energy properties and apply them to the basic setups. Of particular interest are the noise properties. We show that energy current fluctuations can be induced by zero-point fluctuations and we discuss the implications of this result. I. INTRODUCTION The study of quantum effects in small conductors is generally referred to as mesoscopic physics. The wave nature of the electrons is relevant and many counterintuitive results appear: the quantization of the conductance, persistent currents in small loops, the quantum Hall effect and the weak localization effect, to cite but a few (for a review see Ref. [1] and references therein). In the Landauer-Büttiker formalism the motion of the electrons in the conductors is described as a scattering process. This approach was originally proposed to investigate the conductance of a single-channel wire [2,3] and then extended to other structures [4][5][6][7] and properties [8][9][10][11]. Instead, much less interest have received the thermoelectric properties. The first investigations of energy transport in mesoscopic conductors appeared in Refs. [12][13][14]. The average properties were studied also in Refs. [15,16], while the noise properties in Ref. [17]. In this paper we extend the Landauer-Büttiker formalism to account for energy transport and fluctuations. We derive the energy counterpart of several results characterizing the electrical properties of mesoscopic conductors. The role of irreversible processes is at the center of our attention, especially in equilibrium at low temperatures. In this regime we show that in a two-terminal conductor energy exchange can happen, of course, under the constraint of no net flow of energy. II. GENERAL RESULTS The model. We consider a multi-terminal many channel coherent conductor. This means that the energy carriers can enter or leave the sample through M leads with N α transverse channels, α = 1, . . . , M, and their motion from one lead to another is phase coherent. Each lead is connected to an electron reservoir characterized by the temperature T α , the chemical potential µ α and the Fermi-Dirac distribution func- The reservoirs absorb all incident electrons irrespective of their phase and energy. Furthermore, the reservoirs are incoherent, that is, the electrons emerging from different reservoirs do not have any phase relationship and their phase is also independent of that of absorbed electrons. We neglect any interaction of the electrons with other electrons or with phonons, magnetic impurities, et cetera. At the conductor elastic scattering processes take place. The elastic scattering properties of the conductor are described by the scattering matrix S. It relates the amplitude of the outgoing states to the amplitude of the incoming states. Let S αβ (E) be the submatrix of dimension N α × N β defined as S αβ (E) mn = S αβ ,mn (E), m = 1, . . . , N α and n = 1, . . . , N β . S αβ (E) connects the incident amplitudes in lead β to the outgoing amplitudes in lead α. An energy carrier arriving at the conductor in contact β in channel n has probability R β β ,mn =| S β β ,mn | 2 to be scattered back into contact β in channel m and probability T αβ ,mn =| S αβ ,mn | 2 to be scattered into contact α in channel m. 
Evidently, for a carrier in contact β the total probability of reflection and of transmission into contact α are given by, respectively, R β β = ∑ mn R β β ,mn = ∑ mn | S β β ,mn | 2 = Tr(S † β β S β β ) and T αβ = ∑ mn T αβ ,mn = ∑ mn | S αβ ,mn | 2 = Tr(S † αβ S αβ ) ; Tr stands for trace. The conservation of the energy carriers imposes that S is unitary. Average properties. We assume that the energy carriers are only electrons and we do not take into account the spin degeneracy. The classical expression of the energy current in lead α is given by W α (t) = (1/e) (E − µ)dI α (t, E). At low temperatures the chemical potential µ can be assumed to be approximately the energy value above which transport occurs, i.e., the Fermi energy E F . We subtract it from the total energy of the energy carriers since we are interested in the net energy flowing through the leads, to which the Fermi sea does not contribute. With calculations similar to those made in Refs. [8][9][10][11], we find that the average energy current in lead α is where δ αβ is the Kronecker delta. We consider the linear response regime, i.e., for all β we write µ β = E F + ∆µ β and T β = T + ∆T β . E F is the Fermi energy of the electrons in the reservoirs and T is approximately the average temperature of the system. When ∆µ α and ∆T α are supposed to be small, we find that we can write the above expression as with the thermal conductance matrices K ∆µ αβ and K ∆T αβ defined and (2) we see that, in the linear response regime, there are two, clearly independent, contributions to energy transport due to a temperature or a chemical potential gradient. In the zero temperature limit, of course, one has to use Eq. (1), as we shall see in the next section. Fluctuations. The spectral density of energy current fluctuations S W αβ is defined by We indicate by ∆Ŵ α (ω) =Ŵ α (ω) − Ŵ α (ω) the Fourier transform of the fluctuating part of the energy current operator in lead α. We introduce the matrix ; I α is the identity matrix α × α. By following closely the analysis proposed in Refs. [8,9,11], we find that From the physical quantities entering this formula we see that energy current noise is determined by the transmission properties of the conductor and the statistics of the energy carriers. It is straightforward to verify that our result satisfies In the zero-frequency limit there is another useful identity. For the unitarity of the scattering matrix we obtain We conclude this section by making the remark that noise evokes the idea of disorder, but S W αβ is also a measure of the correlation of the deviations away from the average value of the energy current in the leads. Another important point is that noise is determined by both the particle and the wave properties of the energy carriers. Indeed, S W αβ is derived starting from operators in the second quantization formalism. The details can not be given here exhaustively, but we address the reader to Refs. [9,11] for analogous calculations. III. APPLICATIONS The quantum of thermal conductance. As a first application of our general results, and notably of Eq. (2), we consider a two-terminal conductor. The leads have the same number of channels, N, and we suppose that energy transport is due only to a temperature gradient, that is, ∆µ 1 = ∆µ 2 = 0, ∆T 2 = 0 and ∆T 1 = 0, as shown in Figure 1. 
In the basis of the eigenchannels and by assuming that the scattering matrix is approximately constant over the energy range where transport occurs, the energy current through the two leads is given by We denote T n (E F ) the eigenvalues of the matrix S † 21 S 21 evaluated at the Fermi energy. They should not be confused with the temperature T . At low temperatures the integral in the above result can be estimated. We obtain the quantum of thermal conductance [12,18]: T . If now we apply a small voltage across the conductor, we readily obtain the Wiedemann-Franz law L is usually referred to as the Lorentz number. Dissipation and non-equilibrium noise. Let us consider a two-terminal conductor at zero temperature over which a small voltage V is applied. We choose ∆µ 1 = eV and ∆µ 2 = 0. The leads have the same number of channels. Making use of the Landauer formula, which yields the average current I = (e 2 /h)T 12 V , and of the unitarity of the scattering matrix, from Eq. (1) we readily obtain Ŵ 1 = − Ŵ 2 = (1/2)IV . We immediately see that Ŵ 1 + Ŵ 2 = 0, and thus the conductor does not absorb energy. This result is usually interpreted as follows. We might write the energy current flowing through the leads as (I/e) × eV/2. I/e represents the flow of particles through the conductor, and eV /2 is the average excess energy of the electrons. When an electron enters the sample, it leaves behind a hole with approximately the same energy. In order to obtain the total energy dissipated in the reservoirs we have also to take into account the energy released by holes. This is done by multiplying the energy current in the leads, to which contribute only the electrons, by a factor of 2. This yields the expected result IV . Nevertheless, this analogy with the ohmic behavior is only formal. In mesoscopic conductors we have a spatial separation between elastic and inelastic scattering. Our result of course depends on the geometry of the conductor via its transmissive behavior, but the energy is dissipated in the reservoirs. Let us come to the energy current noise properties. From Eq. (3), in the basis of eigen-channels, we obtain and from Eqs. (4) and (5) we see that S W 11 (0) = S W 22 (0) = −S W 12 (0) = −S W 21 (0). For completeness, let us point out that in the low transparency limit, i.e. T n ≪ 1, corresponding for example to the case of a tunnel barrier, we have S W 11 (0) = (2/3)(e 3 /h) ∑ n T n V 3 = (2/3)eIV 2 , where we have used Landauer formula G = (e 2 /h) ∑ n T n for the conductance, which yields the average current flowing through the conductor I = GV . The above result is usually referred to as the classical limit. It corresponds to the case where the emission of electrons is uncorrelated and, as a result, the instants of emission are random and governed by a distribution function of the Poisson type [11]. Inelastic scattering. We now study the effect of inelastic scattering on energy transport. Within the scattering formalism, neglecting any kind of interaction, it is possible to introduce inelastic scattering by adding a fictitious voltage probe to the mesoscopic conductor [19], as shown in Figure 2. This model for inelastic scattering has the advantage of reducing the study of inelastic scattering to an elastic scattering problem with the further requirement of local current conservation at the voltage probe. An ideal voltmeter has an infinite internal impedance and therefore at the voltage probe the current vanishes at any moment of time [19,20]: I 3 = (∆I 3 ) 2 = 0. 
This means that when an electron is absorbed by the voltage probe reservoir its phase and energy are randomized, and immediately another electron is injected into the conductor with an energy and a phase uncorrelated with those of the outgoing electron. The energy current flowing through the conductor has both a coherent and an incoherent component. A fraction of the electrons is scattered coherently from contact 1 to 2 and the others are scattered inelastically in the forward and in the backward direction. We concentrate ourselves on the case of completely incoherent transmission, i.e. T 21 = T 12 = 0, and thus T 3α = T α3 , at zero temperature. By using Eq. (1) we find for the energy current in the three leads The unitary of the scattering matrix guarantees that Ŵ 1 + Ŵ 2 + Ŵ 3 = 0, and so all dissipation processes occur in the reservoirs. Then, the voltage probe reservoir absorbs energy: the electrons entering the voltage probe are thermalized through inelastic scattering and release a fraction of their excess energy. Ŵ 3 is thus nothing but the Joule heat dissipated in the voltage probe (cf. Refs. [19,21]). We study instead energy current fluctuations in the quasielastic regime. This means that the electron entering the voltage probe is replaced by an electron with the same energy, but an uncorrelated phase [22]. This is the reason why this model is generally employed to simulate phase-breaking processes. Energy conservation is achieved by demanding that at the voltage probe current is conserved in each energy interval [22]. It is worth noting that phase-breaking processes do not affect the average energy current flowing through the conductor. In fact, we find that Ŵ 1 = − Ŵ 2 = IV /2, as obtained for the two-terminal conductor. For the noise properties, from Eq. (3), in the zero-frequency limit, we find that where T (1) n and T (2) n designate the transmission probabilities from contact 1 to 3 and from contact 3 to 2, respectively (see Fig. 2); then, R = G −1 = R 1 + R 2 is the total resistance of the conductor, with R 1 = (h/e 2 )/T 31 and R 2 = (h/e 2 )/T 32 . As before, S W 11 (0) = S W 22 (0) = −S W 12 (0) = −S W 21 (0). Interestingly, for a ballistic conductor the above result does not vanish, in contrast to Eq. (6), but reduces to S W 11 (0) = (2/3)eIV 2 R 1 R 2 (R 1 + R 2 ) −2 . This indicates that the presence of phase-breaking processes are associated with energy current fluctuations. Equilibrium noise. We recall that in equilibrium the power spectrum of current fluctuations is given by G is the conductance of a two-terminal conductor and E(ω, T ) is the average energy at temperature T of an oscillator of frequency ω, being the sum of the zero-point energy and the Planck spectrum. Equation (7) is known as the fluctuationdissipation theorem, stating that equilibrium is governed by irreversible processes at the microscopic level causing fluctuations because the system experiences a fluctuating force arising from the interaction with its environment [23,24]. At high temperatures Eq. (8) reduces to the classical equipartition value, indicating that the fluctuating force originates from thermal agitation, while at low temperatures we are left with the quantum of zero-point energy. We want to understand whether vacuum fluctuations are associated with energy exchange. Let us first consider energy current noise at a nonvanishing temperature T in the zero-frequency limit. A simple calculation shows that Eq. (3) yields S W αβ (0) = 2k B T 2 K ∆T αβ + K ∆T β α . 
This is the Johnson-Nyquist formula for energy current noise. Now, at zero temperature we find that For clarity we have written the result in terms of the conductance matrix G αβ = (e 2 /h)(N α δ αβ − T αβ ). The only fundamental constant that enters this result is the Planck constant, and we see that energy current noise is proportional tō h 2 , in line with what obtained in Ref. [25]. Equation (9) is the main result of our work. It is interesting to consider the case of a ballistic, single-channel, two-terminal conductor because this situation admits a simple interpretation. We find that S W 11 (ω) = S W 22 (ω) = −S W 12 (ω) = −S W 21 (ω) = (h 2 /12π) | ω | 3 > 0. This means that the energy current fluctuates and if a mode tends to enter the sample in a lead, the same mode tends to leave the sample from the other lead. It also follows that energy transport is forbidden only on the average. IV. CONCLUSIONS Within a unified framework we have investigated energy transport and fluctuations in mesoscopic conductors. Importantly, our results on noise can be of relevance for the debate on dephasing from vacuum fluctuations [25][26][27][28][29]. In the Landauer-Büttiker formalism there are no fluctuating forces appearing explicitly, but we neglect any kind of interaction in the leads. For this reason, Eq. (9) allows us to conclude that energy exchange between the reservoirs is forbidden only on the average. Finally, the conductor and the leads form a conservative hamiltonian system and ultimately we have shown with an example that the coherence of an open quantum system is not always fully preserved also in equilibrium at very low temperatures.
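As a quick numerical illustration of the single-channel results quoted in the applications section, the quantum of thermal conductance κ0 = π²k_B²T/(3h) and the Lorentz number L = π²k_B²/(3e²) can be evaluated directly from fundamental constants; the temperature used below is an arbitrary illustrative value.

```python
from scipy.constants import pi, k, h, e

# Quantum of thermal conductance per channel, kappa_0 = pi^2 k_B^2 T / (3 h),
# and the Lorentz number L = pi^2 k_B^2 / (3 e^2) from the Wiedemann-Franz law.
T = 1.0  # kelvin (illustrative)
kappa_0 = pi**2 * k**2 * T / (3 * h)
L = pi**2 * k**2 / (3 * e**2)
print(f"kappa_0 at T = {T} K: {kappa_0:.3e} W/K")    # ~9.46e-13 W/K
print(f"Lorentz number: {L:.3e} W Ohm / K^2")        # ~2.44e-8 W Ohm / K^2
```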
2010-12-22T14:03:35.000Z
2010-10-22T00:00:00.000
{ "year": 2011, "sha1": "c51b15fd4a313c4bfda4f0ef2fe6b553dc2113e6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1010.4652", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c51b15fd4a313c4bfda4f0ef2fe6b553dc2113e6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
252844120
pes2o/s2orc
v3-fos-license
Biogasification of methanol extract of lignite and its residue: A case study of Yima coalfield, China To investigate the biogas generation characteristics of the organic matter in lignite, methanol extraction was conducted to obtain the soluble fraction and the residual of lignite, which were subsequently taken as the sole carbon source for biogas production by a methanogenic consortium. Afterward, the composition of compounds before and after the fermentation was characterized by UV-Vis, GC-MS, and HPLC-MS analysis. The results indicated that the methanogenic microorganisms could produce H2 and CO2 without accumulating CH4 by utilizing the extract, and the methane production of the residue was 18% larger than that of raw lignite, reaching 1.03 mmol/g. Moreover, the organic compounds in the methanol extract were degraded and their molecular weight was reduced. Compounds such as 1, 6-dimethyl-4-(2-methylethyl) naphthalene, 7-butyl-1-hexylnaphthalene, simonellite, and retene were completely degraded by microorganisms. In addition, both aromatic and non-aromatic metabolites produced in the biodegradation were detected, some of which may have a negative effect on the methanogenesis process. These results revealed the complexity of the interaction between coal and organism from another point of view. Introduction Coal-bed methane (CBM), a new energy currently being promoted and developed in China, is a clean energy of high quality. Many reports have indicated that nearly 20% of the methane gas in the developed CBM resources is produced by microorganisms [1]. This has drawn the attention of many researchers worldwide to the promotion of CBM production by microorganisms and proposed a mechanism for the bioconversion of coal to methane [2]. Biogas obtained from in-lab gas production simulation experiments typically consists of methane and unconverted hydrogen and carbon dioxide [3][4][5]. Although the methane concentration in the biogas is lower than that in the original coal seam, the presence of carbon dioxide allows the syngas to be further produced by other means such as methane dry reforming, providing a new way for the clean and efficient utilization of coal [6,7]. Biogasification of coal is a process in which methane is produced by anaerobic degradation of organic matter in coal by methanogenic bacteria. During this process, soluble organic matter is released from coal and continuously degraded by microorganisms to form precursor substances such as acetic acid, carbon dioxide, hydrogen, and C1 compounds such as methanol, which are finally converted into methane [8][9][10]. Heteroatoms such as nitrogen, oxygen, and sulfur in coal macromolecules are considered to be the active sites of biodegradation [11][12][13][14][15]. The soluble oxygen-containing organics are readily released from coal and come into contact with microorganisms, however, their effects on each stage of the biogasification process are unknown. Methanol, a relatively highly polar, readily available organic solvent, has enriched oxygencontaining compounds in coal [16]. In addition, the methanol extract of coal usually contains alkanes, aromatics and other heteroatomic compounds, which are involved in the biodegradation of coal [17][18][19][20][21]. Although aromatic compounds seem more resistant, they can still be degraded under anoxic conditions through the cleavage of aromatic rings [22,23]. 
Solvent extraction also causes changes in coal organic composition and pore structure, which also have implications for microbial-coal interactions [24,25]. Therefore, the research on the biodegradation of methanol extracts of coal and its residue is beneficial for exploring the interaction mechanism between microorganisms and coal. In this research, methanol was employed as an organic solvent for the Soxhlet extraction to obtain the organic matter of Yima lignite. And microorganisms with good anaerobic gas production effect on lignite, pre-stored in the laboratory, were used as bacterial sources. Afterward, extracts and residue were utilized as substrates to conduct gas production simulation experiments. Finally, various analytical methods were combined to analyze the gas production of extracts and residue, as well as the composition and content of organic matter in the gas production process. This research provided an experimental basis for the subsequent analysis and degradation mechanism of biogas-generating active organic components in coal. Lignite methanol extraction and GC-MS analysis Lignite was collected from the No. 2-3 coal seams of Qianqiu Mine in Yima field in Henan province, with a buried depth of 798.5 m and a coal thickness of 6.82 m. The sedimentary age of the lignite sample with Ro = 0.48% was the Middle Jurassic. After sampling on site, coal was stored in a nitrogen-filled sealed tank. Before conducting the experiments, the oxide layer was removed and pulverized to below 120 meshes. After being dried at 70˚C to a constant weight, it was stored in a sample bag and named YM. The methanol extraction process of coal was as follows: (1) 50 g of lignite was weighed, and 250 mL methanol was employed as the extraction solvent to perform Soxhlet extraction at 68˚C for 80 h [17]; (2) after the extraction was completed, the extract was concentrated at 45˚C using a rotary evaporator. Then, the concentrated extract was made up to 100 mL with methanol and recorded as M1; (3) the residual coal was dried to a constant weight at 70˚C and stored in the sample bag, recorded as M2. The calculation formula for methanol extraction rate was as follows: where P is the extraction rate; m 0 is the mass of raw coal, and m 1 is the mass of residual coal. The methanol extraction rate of Yima lignite was 3.12%. After diluting M1 10 times with methanol, the organic components of the extracts were analyzed by gas chromatography-mass spectrometry (GC-MS, Agilent 7890A-5795C). The column was VF-WAXms (30 m×250 μm×0.25 μm), the post-operation temperature was 280˚C which was maintained for 5 min, and no split injection was applied. The inlet temperature was 250˚C, the injection volume was 0.8 μL, the purging rate was 15 mL/min, and the purging time was 0.2 min. Also, the carrier gas was helium with high purity, the column flow rate was 1.0 mL/min, and the initial temperature was 60˚C which was kept for 2 min. Then the temperature was raised to 250˚C at a rate of 10˚C/min and was maintained for 20 min. The MS was operated in the electron impact mode, with an ionization energy of 70 eV. The mass spectrometric identification was performed using the mass spectral database NIST2008. Biogas production experiment of methanol extract and residue from lignite The biogas production experiment consisted of four experimental groups: M1, M2, YM, and CK. All groups were set up in triplicate. The substrate of each experimental group was M1 2 mL, M2 2 g, YM 2 g, and methanol 2 mL, respectively. 
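For completeness, the extraction-rate expression referred to in the previous subsection (its displayed form appears to have been lost in text extraction) is presumably the usual mass balance P = (m0 − m1)/m0 × 100%. A one-line check with the 50 g charge reported above and a residue mass back-calculated from the stated 3.12% yield (illustrative, not a measured value):

```python
m0, m1 = 50.0, 48.44              # g raw coal, g residual coal (m1 back-calculated, illustrative)
P = (m0 - m1) / m0 * 100          # methanol extraction rate in %
print(f"extraction rate: {P:.2f}%")   # ~3.12%
```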
The components of the medium used in the experiment were as follows: K 2 HPO 4 (2.9 g), KH 2 PO 4 (1.5 g), NH 4 Cl (1.8 g), MgCl 2 (0.4 g), yeast extract (0.2 g), L-cysteine hydrochloride (0.5 g), deionized water (1000 mL). Gas chromatography (GC, Agilent 7890) was employed to analyze the gas composition in the process of biogas production, with a Carbonplot column (60 m×320 μm×1.5 μm), TCD detector, and gas-tight injection needle. The injection volume was 0.5 mL. The inlet temperature, column temperature, and the detector temperature were 150˚C, 30˚C, and 200˚C, respectively. Organic composition analysis of extract and residue gas production system The fermentation broth was initially aspirated from the anaerobic bottle with a sterile sampler. Then it was collected through a 0.22 μm microporous filter membrane. A dual-beam ultraviolet-visible light spectrophotometer (UV-Vis, Unico, UV4802) was employed to perform spectral scanning. The scanning range was 190-400 nm with an interval of 1 nm. The deionized water was used as a blank to qualitatively analyze the composition of organic matter in the fermentation broth. Quantitative analysis of polar organic components of the fermentation broth was also carried out by liquid chromatography-mass spectrometry (HPLC-MS, Agilent 1290 6530 QTOF equipped with electrospray ionization source) equipped with an Agilent Zorbax C8 (1.8 μm×4.6 mm×50 mm). The mobile phase was methanol and 0.1% formic acid with a 0.5 mL/min flow rate. The column temperature was 25˚C, and the injection volume was 10 μL. The mass spectrometry acquisition mode was positive ion mode, and the fragmentor voltage and the capillary voltage were 130 V and 3500 V, respectively. N 2 was used as collision gas and dryer gas. The mass-to-charge ratio scanning range was 50-450 m/z. The non-polar organic matter in the fermentation broth was initially enriched with a solid-phase extraction column (Agilent Bond Elut C18, 500 mg, 120 μm, 6 mL) through the following steps: (1) 6 mL of methanol and 6 mL of deionized water were added to the extraction column sequentially to activate it. (2) 10 mL of the sample filtered with a 0.22 μm microporous membrane was added and passed the column at a flow rate of about 2 mL/ min. (3) The column was rinsed with 10 mL of deionized water initially, and then the column was blow-dried with nitrogen for 10 min. (4) The organic matter was eluted with 2 mL of methanol. Then the eluent was concentrated to a volume of 1 mL with nitrogen at 45˚C. After the enrichment, the sample was analyzed using the GC-MS method described in section 2.1. The organic composition of methanol extract of lignite The GC-MS total ion current chromatogram (TIC) of the extract is shown in Fig 1. The library search was performed on the chromatographic peaks. The 20 compounds with a matching degree greater than 60 are shown in Table 1. Alcohols, esters, carboxylic acids, amides, phenols, aromatic hydrocarbons, and heterocyclic compounds were mostly detected in the methanol extract. More than half of these compounds contained oxygen atoms, indicating that methanol extraction had an enrichment effect on oxygen-containing compounds in coal [16]. The oxygen element in the extract primarily existed in the form of hydroxyl, carbonyl, ester and amide groups, and the nitrogen element chiefly existed in the form of the heterocycle. The presence of nitrogen and oxygen heteroatoms provided potential sites for biodegradation. 
The aromatic hydrocarbons in the extract were principally alkyl substituents of naphthalene and phenanthrene, of which 7-butyl-1-hexylnaphthalene showed the highest abundance. Analysis of biogas production composition of lignite methanol extract and residual coal The results of biogas production in each experimental group are illustrated in Fig 2. Among the experimental groups, the M1 group had the largest H 2 production (0.72%, the cumulative H 2 production was 0.05 mmol/g). Also, all groups produced CH 4 (methane content of CK>M2>YM), except the M1 group. The CK group had a large amount of CH 4 , CO 2 , and a small amount of H 2 , which indicated that the bacteria employed could directly use methanol as a substrate to produce methane. Furmann et al. [14] found that a small amount of CH 4 could be detected after the anaerobic degradation of the methanol extract of highly volatile bituminous coal. However, only H 2 and CO 2 were detected in the M1 group, which contained organic matter extracted from coal in addition to methanol. It was thought that the extracted organic matter was the substrate of the fermentation bacteria and hydrogen-producing acetogens in the flora. The substrate promoted the production of more H 2 , but it did not play a corresponding role in the methanogenic process in the system. PLOS ONE Methane was produced by acetoclastic, methylotrophic, and hydrogenotrophic methanogens [2]. The largest abundance of methanogens in the original flora in this study was Methanoculleus [3], the most reported methanogenic microorganism in the literature. It can produce methane using small molecules such as H 2 /CO 2 and formate. Theoretically, the M1 group should be able to produce methane. However, based on the current results, it was likely that the organic matter in the extract or the intermediate metabolites produced by the organic matter had a negative impact on the metabolic process of the methanogens. The specific reason still needed to be further elaborated on. In contrast, after methanol extraction, the methane production of the M2 group reached 1.03 mmol/g and increased 18% compared with the YM group. It was speculated that methanol extraction increased the contact area between coal and microorganisms [26,27]. UV-vis analysis of organic matter in the biogas production system of lignite methanol extract and residual coal The results of the UV-vis analysis of fermentation broth before and after biogas production are exhibited in Fig 3. Before the gas production, the M1 group fermentation broth showed a strong continuous absorption (peak at 225 nm-275 nm), which was generated by the conjugation of the chromophore's double bond with the benzene ring in the system [28]. Combined with the GC-MS analysis results of the extract, the absorption peak primarily originated from aromatic ketones and esters. The M2, YM, and CK groups had shoulder peaks at 220 nm and 254 nm, predominantly generated by the absorption of organic matter in the medium. After biogas generation, the absorption of the M1 group at 225-275 nm was significantly weakened. This result suggested that organic compounds such as aromatic ketones and esters in the extract were degraded. The M1 group had shoulder peaks near 220 nm and 280 nm, primarily from aromatic compounds [29]. However, the M2 group and YM group had no shoulder peaks in this wavelength range, which indicated that the products of the M1 group contained more aromatic compounds. 
The ratio of the absorbance of the fermentation broth at 250 nm and 365 nm (E 250 /E 365 ) was applied to characterize the molecular weight of soluble organic matter. The larger the ratio, the smaller the molecular weight of the organic matter is [29,30]. The changes of E 250 / E 365 before and after gas production in each experimental group are displayed in Fig 4. It can be observed that the fermentation broth of the M1 group contained a large amount of macromolecular organic matter before biogas production, and its E 250 /E 365 was substantially smaller than that of other experimental groups. E 250 /E 365 of the M1 group increased slightly after gas production, demonstrating that some organic substances were biodegraded and the molecular weight became smaller. While, E 250 /E 365 of the M2 and YM groups decreased after gas production, which may be due to the release of macromolecular substances in the coal under the biological action [29]. Composition and change of organic matter in the biogas production system of lignite methanol extract and residual coal The GC-MS total ion current chromatogram of the fermentation broth of each experimental group before and after gas production is shown in Fig 5. Before gas production, eight compounds were detected in the samples of the M1 group. They included trans-1,2,3,4,4a,5,8,8aoctahydro-4a-methylnaphthalene (No.5 in Table 1), 3,5-bis (1,1-dimethylethyl)-phenol (No. 6 in Table 1), 1-(1,3-dimethyl-1H-indol-2-yl) ethanone (No. 11 in Table 1), methyl hexadecanoate (No. 13 in Table 1), 1,6-dimethyl-4-(2-methylethyl) naphthalene (No. 17 in Table 1), 7-butyl-1-hexyl Naphthalene (No. 19 in Table 1), simonellite (No. 20 in Table 1), and retene (No. 21 in Table 1). No compounds were detected in the remaining experimental groups. After gas generation, the height of the main peaks in the chromatograms of the M1 group samples decreased significantly. The peaks of 1,6-dimethyl-4-(2-methylethyl) naphthalene (No. 17 in Table 1), 7-butyl-1-hexylnaphthalene (No. 19 in Table 1), simonellite (No. 20 in Table 1), and retene (No. 21 in Table 1) disappeared. This result demonstrated that these compounds could be degraded and utilized by the flora. Different studies have reported that microorganisms of the Firmicutes and Proteobacteria can degrade polycyclic aromatic hydrocarbons [31,32], and the microorganisms of the Bacteroidetes can degrade phenols [33][34][35]. These microorganisms in the inoculum might be involved in the degradation of organic matter in the extract. A large amount of indole which was a metabolite produced by the use of methanol by the flora was detected in the fermentation broth after gas production in the CK group ( Fig 5B). However, indole was not detected in the M1 group. It was speculated that some compounds might inhibit the production of indole. Phenolic organic compounds could be connected with the degradation of lignin [36]. Phenol was found in the M1, M2, and YM groups after the experiment (Fig 5B), indicating the decomposition of polymers in lignite [37]. Moreover, these authors identified over 100 compounds from kerogen due to the decomposition of polymers, covering the compounds obtained in this study. Nevertheless, no more metabolites except phenol were detected in all groups. The possible reason was that the produced metabolites were polar and difficult to be detected using the GC-MS method. Therefore, HPLC-MS was employed for further analysis of the fermentation broth. 
The HPLC-MS total ion current chromatogram of the fermentation broth of each experimental group was exhibited in Fig 6. For samples of the M1 group after the experiment, the peaks of C 11 H 17 N 5 O 3 (at 3.0 min), C 7 H 11 N 5 (at 3.3 min), and C 13 H 11 NO 3 (at 3.9 min) disappeared (Fig 6A), indicating that these compounds were degraded by the flora. Moreover, the abundances of C 9 H 14 N 2 (at 2.8 min), C 11 H 12 N 2 (at 2.8 min), C 9 H 17 N 3 (at 2.8 min), C 6 H 11 NO 2 (at 4.8 min), and C 6 H 9 N 3 O 3 (at 6.7 min) increased significantly after the experiment (Fig 6B), which proved that these compounds were produced during the degradation of the extract. At the same time, YM and M2 groups also showed similar changes. Except for C 6 H 11 NO 2 , the number of rings plus double bonds of other compounds was more than 4. Combined with the results of UV analysis (Fig 3B), these products may be aromatic compounds. The degradation of the extract might lead to the rapid accumulation of some metabolites, which may have a negative effect on the methane production process. Especially, C 6 H 11 NO 2 existed in the extract and accumulated significantly after biological action in all groups. In another study, the addition of excess pulverized coal resulted in the inhibition of methanogenesis and the accumulation of C 6 H 11 NO 2 , which also showed that C 6 H1 1 NO 2 was involved in inhibiting the methanogenesis process [38]. Interestingly, it was not released from coal during the sterilization, which indicated that it was firmly bound to the macromolecular structure. Considering the potential effect of C 6 H 11 NO 2 on the methanogenesis process, its existence in coal also provided insight into the complexity of the process of microbial action. The number of rings plus double bonds of C 6 H 11 NO 2 is 2, indicating that it is not an aromatic compound. The MS and MS/MS spectra of C 6 H 11 NO 2 are shown in Fig 7, and their specific structure needs to be further identified. Conclusions Research on bioavailable organic matter in coal and its metabolites is an important part of elucidating the mechanism of coal biogas formation. In this study, many alcohols, esters, carboxylic acids, amides, phenols, aromatic hydrocarbons, and heterocyclic compounds were detected in the methanol extract of Yima lignite, with aromatic compounds showing the most considerable abundance. The methanol extract of Yima lignite was used by the microbial flora to produce H 2 and CO 2 without accumulation of CH 4 . Both Aromatic hydrocarbons and other oxygen-containing compounds in the extract were biodegraded, and their molecular weights decreased. Certain soluble organic compounds in the extract and metabolites of the biodegradation process may negatively affect the methanogenesis process, where the compound C 6 H 11 NO 2 present in both the extract and the product may be the primary inhibitor. The methane production of residual coal increased by 18% compared with the raw coal. In addition, both aromatic and non-aromatic compounds were produced during the biodegradation process, and the appearance of phenol in the product indicated the depolymerization of lignin in coal. The current study demonstrated the complex role of soluble organic matter and its metabolites during the biogasificaition process.
2022-10-13T06:18:10.285Z
2022-10-12T00:00:00.000
{ "year": 2022, "sha1": "ee759d07a2b42bb7eececab391f3e77807c13280", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0275842&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "adfaabb9360f19496b4a56e43a0aec9a8becf4f1", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
12946503
pes2o/s2orc
v3-fos-license
Toward a Hajnal-Szemeredi theorem for hypergraphs Let $H$ be a triple system with maximum degree $d>1$ and let $r>10^7\sqrt{d}\log^{2}d$. Then $H$ has a proper vertex coloring with $r$ colors such that any two color classes differ in size by at most one. The bound on $r$ is sharp in order of magnitude apart from the logarithmic factors. Moreover, such an $r$-coloring can be found via a randomized algorithm whose expected running time is polynomial in the number of vertices of $\cH$. This is the first result in the direction of generalizing the Hajnal-Szemer\'edi theorem to hypergraphs. Introduction One of the basic facts of graph coloring is that every graph G with maximum degree d has chromatic number at most d + 1. An equitable r-coloring of G = (V, E) is proper coloring with r-colors for which each color class has size ⌊|V |/r⌋ or ⌈|V |/r⌉. A much deeper result is: Theorem 1 (Hajnal-Szemerédi [8]). For every integer r and graph G with maximum degree d, if d < r then G has an equitable r-coloring. The original proof was quite complicated, and did not yield a polynomial time algorithm for producing the coloring, but recently Mydlarz and Szemerédi [15], and independently Kierstead and Kostochka [9], found simpler proofs that did yield polynomial time algorithms. See [11] for an even simpler proof. These ideas were combined in [12] to obtain an O(r|V | 2 ) time algorithm. Kierstead and Kostochka [10] also strengthened the Hajnal-Szemerédi Theorem by weakening the degree constraint-if d(x) + d(y) ≤ 2r + 1 for every edge xy then G has an equitable (r + 1)-coloring. A k-uniform hypergraph (k-graph for short) is a hypergraph whose edges all have size k. A proper coloring of a hypergraph is a coloring of its vertices with no monochromatic edge. Hypergraph coloring has a long history beginning with the seminal results of Erdős [2,3] about the minimum number of edges in a k-graph that is not 2-colorable. Apart from giving rise to many of the (still open) major problems in combinatorics, attempts to answer questions in this area have led to fundamental new proof methods, most notably the semi-random or nibble method and the Lovaśz Local Lemma [4]. In [4], Erdős and Lovász obtained, as a corollary to the Local Lemma, that every k-graph with maximum degree d has chromatic number at most 3d 1/(k−1) . In this paper we merge these two important areas of research by studying equitable colorings of hypergraphs. The situation for hypergraphs is much more complicated. First, we do not even know sharp bounds for chromatic number in terms of maximum degree. Our results deal only with 3-graphs. An easy consequence of the Local Lemma is that every 3-graph with maximum degree d has a proper coloring with at most √ 3ed = (2.85..) √ d colors. There appears to be no proof of this that does not use the Local Lemma. On the other hand, complete 3-graphs show that one needs at least d/2 > (0.707..) √ d colors. It remains an open problem to obtain the best constant here. As the above discussion indicates, it is premature to hope for a tight analogue of the Hajnal-Szemerédi theorem for 3-graphs. Nevertheless, we begin to address the question in this paper. We prove: Theorem 2. Let d ≥ 2 and r be integers satisfying r > 10 7 √ d log 2 d. Then every n vertex 3-graph with maximum degree d has an equitable r-coloring. Moreover, such an r-coloring can be found via a randomized algorithm whose expected running time is polynomial in n. It remains an open problem to find a deterministic polynomial time algorithm above. 
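As a concrete aside on how Local-Lemma existence statements of this kind are typically made algorithmic, the sketch below is a Moser–Tardos-style resampling procedure for the Erdős–Lovász corollary quoted above (a proper coloring of a 3-graph of maximum degree d using on the order of √d colors): color uniformly at random and resample the vertices of any monochromatic edge. This is only an illustration of the general resampling idea, not the randomized algorithm of Theorem 2, and all names are ours.

```python
import random

def proper_coloring_lll(edges, n, r, max_rounds=10**6):
    """Moser-Tardos style resampling: random r-coloring of an n-vertex 3-graph,
    resampling the vertices of any monochromatic edge until none remains.
    For r above roughly sqrt(3*e*d) (d = max degree), the Local Lemma guarantees
    a proper coloring exists and the expected number of resamplings is polynomial."""
    color = [random.randrange(r) for _ in range(n)]
    for _ in range(max_rounds):
        bad = next((e for e in edges
                    if color[e[0]] == color[e[1]] == color[e[2]]), None)
        if bad is None:
            return color                      # proper coloring found
        for v in bad:                         # resample the violated edge
            color[v] = random.randrange(r)
    return None                               # give up (should essentially never happen)

# toy usage: 6 vertices, a few triads, plenty of colors
edges = [(0, 1, 2), (1, 2, 3), (3, 4, 5)]
print(proper_coloring_lll(edges, n=6, r=4))
```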
The bound on r in the theorem is likely not best possible, and we make the following conjecture.

Conjecture 3. Let d and r be integers satisfying r > 2.86√d. Then every 3-graph with maximum degree d has an equitable r-coloring.

Our approach does not seem to extend to k-graphs for k ≥ 4. Nevertheless, we believe a conjecture similar to the one above holds for k ≥ 4 as well (see Conjecture 10).

Notation and terminology. A k-edge or k-set is an edge or set of size k. We associate a hypergraph with its edge set, and refer to 3-edges as triads.

Probabilistic tools

We will use the Local Lemma [4] in the (standard) form below: Let A_1, . . . , A_n be events in an arbitrary probability space. Suppose that each event A_i is mutually independent of a set of all the other events A_j but at most d, and that P(A_i) < p for all 1 ≤ i ≤ n. If ep(d + 1) < 1, then with positive probability, none of the events A_i holds.

Our second tool is the Kim-Vu inequality [13]. This is needed to obtain exponential bounds for sums of not necessarily independent random variables. Let Υ = (W, F) be a hypergraph of rank 2, meaning that each f ∈ F satisfies |f| ≤ 2. Let z_v for v ∈ W be independent indicator random variables. Set Y = Σ_{f∈F} Π_{v∈f} z_v, where we allow f = ∅, in which case the empty product is 1. For A ⊆ W with |A| ≤ 2, let Z_A = Σ_{f∈F, f⊇A} Π_{v∈f\A} z_v. Then for any λ > 0, the Kim-Vu inequality gives an exponential (in λ) bound on the probability that Y deviates from its expectation, in terms of λ and the expectations of the Z_A.

Proof of Theorem 2

In this section we prove the existence of the r-coloring guaranteed by Theorem 2. In the next section we will prove the algorithmic part of the theorem. Let d ≥ 2 and H be a 3-graph with vertex set V and maximum degree d. Let r ≥ 10^7 √d log^2 d. Notice that we may assume that d ≥ 10^7: otherwise r > d, and so H has a cover graph with maximum degree at most d; by the Hajnal-Szemerédi Theorem this cover graph, and thus H, has an equitable r-coloring. To further simplify matters, first assume that r divides n = |V| and set s = n/r. At the end of this section we will discuss the minor modification that is required in the general case. Our goal is to color H with r colors so that every class has size s. This is accomplished in three steps. Throughout the rest of the proof, the parameters t, a and p are fixed as specified below.

Step 1. First we partition V into t sets X_1, . . . , X_t and define a graph Ĥ so that, among other properties, Ĥ[X_i] has maximum degree less than p = 10^5 log^2 d. Actually, we will need the more technical statement of Proposition 7. If |X_i| ≥ ps and s | |X_i| then we can use the Hajnal-Szemerédi Theorem to partition X_i into independent s-sets of Ĥ_i; since Ĥ_i is a cover of H_i, these are also independent s-sets of H_i. So we are left with two problems: either X_i is small or s ∤ |X_i| for some i.

Step 2. In the second step we move vertices from small X_i to large X_j, preserving our ability to partition all but less than s vertices of X_j into independent s-sets. This leaves us with no small parts X_i.

Step 3. Finally in the last step we shift a small number of vertices from one class to the next so that all X_i are divisible by s.

In the next three subsections, we carry out the details.

Step 1. Set a = t.

Definition. An edge xy ∈ L is strong if it is contained in at least a triads of H; otherwise it is weak. A triad vxy ∈ H is strong if it contains a strong edge of L; otherwise it is weak. Let G ⊆ L be the subgraph of L consisting of strong edges. Later we will view G as a digraph with edges oriented in both directions.

We will show that there is a partition X_1, . . . , X_t such that for all vertices v and colors i, two properties, referred to as (A_{v,i}) and (D_{v,i}), hold. Let f : V → [t] be a random coloring obtained as follows: For each vertex v, independently choose f(v) ∈ [t] so that each color in [t] has probability 1/t of being chosen.
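The random coloring used in Step 1 is the simplest possible one, so a small sketch suffices to make it explicit. This is an illustrative Python fragment, not the paper's code; the names are ours.

```python
# Minimal sketch of the Step 1 random coloring: every vertex independently
# receives one of t colors with probability 1/t, and the classes X_1, ..., X_t
# are read off from the color assignment f.
import random

def random_partition(vertices, t, rng=random):
    f = {v: rng.randrange(t) for v in vertices}            # f(v) uniform over [t]
    classes = [[v for v in vertices if f[v] == i] for i in range(t)]
    return f, classes
```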
We will apply the Local Lemma to prove that with positive probability f satisfies (A_{v,i}) and (D_{v,i}) for all vertices v and colors i. For v ∈ V and i ∈ [t], let A_{v,i} be the event that (A_{v,i}) fails and D_{v,i} be the event that (D_{v,i}) fails.

Bound on P(A_{v,i}): We use Kim-Vu concentration, where Υ is the set of edges xy such that vxy is a weak triad, and z_x is the indicator random variable for vertex x receiving color i. Clearly E(z_x) = 1/t for each vertex x. Note that the Kim-Vu setting is consistent with our probability space, as we assign each vertex color i with probability 1/t independently of all other vertices. Moreover, for adjacent edges f, f′ ∈ Υ, the products Π_{v∈f} z_v and Π_{v∈f′} z_v are not independent, and so we do need the Kim-Vu concentration. Let d_v be the number of edges in Υ. For any vertex subset A of size two, Z_A = 1 by definition of the empty product. If Υ had a vertex x of degree at least a, then there would be at least a edges of H containing both v and x. This would imply that vx is a strong edge and hence that all triads of the form vxy where xy ∈ Υ are strong, contradicting the definition of Υ. We conclude that Υ has maximum degree less than a.

Bound on P(D_{v,i}): The crucial observation here is that the graph G has maximum degree at most 2d/a. This is true because every edge of G is contained in at least a edges of H. So if a vertex v were incident to more than 2d/a edges of G, then d_H(v) > (2d/a)(a/2) = d, a contradiction. Consequently, the relevant events are mutually independent as long as no edge containing w or x shares a point with an edge containing v. So we may apply the Local Lemma with dependency degree at most 5d^2. Since e(5d^2)(1/d^9) < 1, the Local Lemma implies that there is a vertex partition with color classes X_i := {x : f(x) = i} as in the lemma.

Viewing G as a digraph, let Ĥ ⊃ G be the digraph formed from G by adding the diedges (v, x) and (v, y) for each weak triad vxy ∈ H with f(x) = f(y). Occasionally we will view Ĥ as a (simple) graph by replacing the diedges (x, y) or (y, x) by the (undirected) edge xy.

Proof. Let xyz be a triad in H[X_i ∪ X_j]. If xyz is strong then it contains a strong edge of G. Otherwise xyz is weak, but has two vertices from the same class, say x and y. Then (z, x), (z, y) ∈ E(Ĥ).

Proof. By property (D_{v,i}), we have |E_G(v, X_i)| < 10 log d. The number of weak triads of H that contribute to E_{Ĥ−G}(v, X_i) is at most 10^4 log^2 d by (A_{v,i}), and each of these triads contributes two out-edges to E_{Ĥ−G}(v, X_i). Thus the out-degree of v into X_i is less than 10 log d + 2·10^4 log^2 d, since any further candidate u has already been counted as an out-neighbor of v.

In this way one obtains an equitable p′-coloring with color classes of size s. To finish the argument we would like to partition each X_i into blocks whose sizes are at least ps and divisible by s, but this may not be possible. So we may need to make some adjustments to the X_i's. This is not too difficult if all the X_i's have size at least 12ps, but first we must arrange that none of the X_i have size less than 12ps. This is done in Step 2 by distributing all vertices in small X_i to big X_j. Doing so may corrupt the nice properties of the remaining big X_i, so we must preserve a reasonably large uncorrupted segment of each big X_i. We may assume that J is the set of indices of the big classes X_i. Recalling that n = rs and dividing by s yields the corresponding bound; this also implies that r − t_1 ≤ 25pt ≤ r/4 and hence t_1 ≥ 3r/4.
Let us rename the Y_{i,j}'s to obtain a vertex partition W_1, . . . , W_{t_1}. For v ∈ S, define the index set I(v) of classes into which v may be placed. As the W_i's were formed by refining the partition given by the X_j's, and |E_Ĥ(v, X_j)| < p by Proposition 7, we obtain from (3) the corresponding bound. Now for each v ∈ S, pick one of the elements i ∈ I(v), where each i has probability 1/|I(v)| of being picked, and assign v the color i, calling this assignment χ(v) = i. In this way each vertex of S is placed into one of the classes; let W_i^+ denote W_i together with the vertices v ∈ S with χ(v) = i.

Lemma 8. There is a choice of χ so that each W_i^+ is an independent set of H.

Proof. For each triad e = vxy ∈ H, with v ∈ S, let B_e be the event that e becomes monochromatic after these random choices have been made, i.e., e ⊂ W_i^+ for some i ∈ [t_1]. By the choice of χ(v) ∈ I(v), {x, y} ⊄ W_i. If v, x ∈ S and y ∈ W_i, then the corresponding probability can be bounded directly; otherwise v, x, y ∈ S, and again the same bound applies. In both cases P(B_e) ≤ 1/(100d). The event B_e is mutually independent of all other events B_f for which e ∩ f = ∅, so the dependency degree in the Local Lemma is at most 3d − 1. Since e(1/(100d))(3d) < 1, the Local Lemma implies that there is a partition W_1^+ ∪ . . . ∪ W_{t_1}^+ of V − Z that is a proper coloring of H.

For each j ∈ [t_1], |W_j^+| ≥ |W_j| = s. Partition each W_j^+ into s-sets and one set R_j (possibly empty) of size less than s, so that R_j ⊂ W_j = Y_{i,h} ⊆ Y_i ⊆ X_i. This is possible because |W_j| = s. Note also that these s-sets have been shown to be independent sets of H in Lemma 8. We have partitioned H[V \ U] into independent s-sets. Moreover, we have a partition of U into large subsets U_i ⊆ X_i, each with size at least 12ps, as U_i ⊃ Z_i. This completes Step 2.

Step 3. We have already colored all of H[V \ U] using classes of size s. The following lemma completes Step 3 and the proof of the theorem. The choice of Q_i must be made with some care; since Q_i ∪ U_{i+1} contains vertices from X_i and X_{i+1}, the degree bound of Proposition 7(b) does not hold for this set. However, the out-degree bound (a) does hold for both X_i and X_{i+1}. So we will be able to choose Q_i ⊂ U_i \ P_i and P_{i+1} ⊆ U_{i+1} so that conditions (i)-(iv) hold. We do this recursively. Initialize by setting Q_0 = ∅ = P_1. Now suppose we have constructed Q_j ⊆ U_j and P_{j+1} ⊆ U_{j+1} for all j < i so that (i)-(iv) hold. Let P be an (8ps)-subset of U_i \ P_i and P′ be an (8ps)-subset of U_{i+1}. By Proposition 7, each vertex v ∈ P has at most p neighbors in P and at most p out-neighbors in P′. Since in Ĥ every vertex of P′ has at most p out-neighbors in P, |E_Ĥ(P′, P)| ≤ 8p^2 s. Since ρ ≤ s ≤ |P|/2, we can choose a ρ-subset Q_i ⊆ P so that every vertex v ∈ Q_i has in-degree from P′ satisfying |E(P′, v)| < 2p; so (i) and (ii) are satisfied. Similarly, we can choose P_{i+1} ⊆ P′ such that |P_{i+1}| = 4sp − ρ and every vertex v ∈ P_{i+1} has in-degree from Q_i satisfying |E(Q_i, v)| < 2p; so (iii) is satisfied. It follows that the maximum degree of Ĥ[Q_i ∪ P_{i+1}] is less than 4p − 1; so (iv) is satisfied. By (i), (iii), (iv) and the Hajnal-Szemerédi Theorem, we can color Ĥ[Q_i ∪ P_{i+1}] so that every class has size s. The procedure terminates when we have constructed P_{t_0} ⊂ U_{t_0}. By the above, we may equitably color Ĥ[Q_{t_0−1} ∪ P_{t_0}] and so it remains to equitably color Ĥ[U_{t_0} \ P_{t_0}]. Since s divides n and the remaining vertices have been partitioned into s-sets, we conclude that s also divides |U_{t_0} \ P_{t_0}|. Also, |U_{t_0} \ P_{t_0}| ≥ 12ps − 4ps = 8ps and by Proposition 7(b), the graph Ĥ[U_{t_0} \ P_{t_0}] has maximum degree less than p. Therefore, we can once again use the Hajnal-Szemerédi Theorem to equitably color Ĥ[U_{t_0} \ P_{t_0}] into s-sets.
Finally, we consider the case that n = qr + b, 0 < b < r, and set s = q + 1. So we need to partition V into b independent s-sets and r − b independent (s − 1)-sets. There is no change in Step 1. In Steps 2 and 3 we begin constructing independent s-sets, but after we have constructed b of them, we build blocks with parts divisible by s − 1. We apply the Hajnal-Szemerédi Theorem in exactly the same way, and it does not even matter if the switch comes in the middle of a block.

A randomized algorithm

In this short section we prove that the r-coloring of the previous section can be found via a randomized algorithm whose expected running time is polynomial in n. The r-coloring is obtained in the following sequence of steps: 1) use the Local Lemma to produce the partition X_1 ∪ . . . ∪ X_t, 2) apply the Hajnal-Szemerédi Theorem to equitably color a large subset Y_i of vertices of the graph Ĥ[X_i], 3) apply the Local Lemma to insert the vertices of S = V \ ⋃_{i∈J} X_i into the sets W_i, and 4) deterministically color U as in Lemma 9 repeatedly using the Hajnal-Szemerédi Theorem.

By the results of Mydlarz-Szemerédi and Kierstead-Kostochka [9,15,12], we have deterministic polynomial time procedures for steps 2) and 4) above. Consequently, it suffices to provide randomized algorithms for Steps 1) and 3), which essentially boils down to derandomizing the Local Lemma in these two instances. This is obtained by applying the recent algorithmic version of the Local Lemma due to Moser and Tardos (Theorem 1.2 in [14]), which requires two assumptions. The first is that one should be able to efficiently evaluate the conditional probabilities of bad events given the values of the random variables on a subset of vertices. The second is that the maximum degree in the dependency graph for the Local Lemma is bounded by a constant. Subsequently, Chandrasekaran, Goyal, and Haeupler [1] have removed the second assumption above, so it suffices to efficiently compute conditional probabilities given partial information on the values of random variables. In the case of 3) above, we are able to do this, as it suffices to determine the size of I(v), which amounts to looking at the diedges (v, w) in Ĥ. In the case of 1), we are also able to do this for the events D_{v,i}, as this amounts to just checking whether a vertex x such that vx ∈ G has color i. However, we are not able to compute this conditional probability for the events A_{v,i} efficiently, due to lack of independence. This is precisely where we needed the Kim-Vu inequalities, and it remains the only bottleneck that prevents us from obtaining a deterministic polynomial time algorithm.

Concluding remarks

• Surprisingly, our results do not extend to k-graphs when k > 3. Here the number of colors we would expect to use is d^(1/(k−1)+o(1)). The technical reason why our proof doesn't seem to work is the following: Assume that k = 4. In the first application of the Local Lemma, we would need at least t = d^(1/3+o(1)) colors in order to guarantee that the X_i were close to being independent sets. On the other hand, in the second application of the Local Lemma (Lemma 8) one would have to consider the case that we have an edge e = vwxy ∈ H where v, w ∈ S and x, y ∈ W_i. In that case the probability of the corresponding bad event is not small enough to apply the Local Lemma. Nevertheless, we conjecture the following:

Conjecture 10. For each k ≥ 3 there exists c_k > 0 such that for every d and r ≥ c_k d^(1/(k−1)) the following holds: Every k-graph with maximum degree d and n vertices has an equitable r-coloring.
Such a coloring can be found in deterministic polynomial time in n.

• With the recent activity [9,10,11,12] on results about equitable colorings, it seems appropriate to study the question of obtaining equitable colorings in other contexts where coloring problems were explored. One active area of research is to obtain good upper bounds for the chromatic number of graphs that have local constraints. In particular, a deep theorem of Johansson [7] that culminated many years of research is that every triangle-free graph with maximum degree d has chromatic number at most O(d/log d). We conjecture the following:

Conjecture 11. For every d > 1 there is a constant c such that every triangle-free graph with maximum degree d has an equitable r-coloring, whenever r > cd/log d.

Very recently, Frieze and Mubayi have proved a hypergraph analogue of Johansson's theorem mentioned above. In particular, they prove in [5] and [6] that every linear (meaning that every two edges share at most one vertex) k-graph with maximum degree d has chromatic number at most O((d/log d)^(1/(k−1))) and this is sharp in order of magnitude. While the proof of this theorem is much more complicated than Johansson's result, the basic method is similar, so an extension of Conjecture 11 seems plausible.

Conjecture 12. For every d > 1 there is a constant c such that every linear k-graph with maximum degree d has an equitable r-coloring, whenever r > c(d/log d)^(1/(k−1)).
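As an illustration of the resampling framework invoked in the randomized-algorithm section above, the following Python fragment sketches the generic Moser-Tardos scheme: while some bad event holds, resample exactly the variables that event depends on. This is not the paper's implementation; the event and sampling callbacks are abstractions with names of our choosing, and the concrete bad events for Steps 1) and 3) would be the ones defined in the proof.

```python
# Hedged sketch of Moser-Tardos resampling.  Under the Local Lemma condition the
# expected number of resampling rounds is polynomial; no such guarantee is
# enforced here.
import random

def moser_tardos(variables, sample, bad_events):
    """variables: list of variable names.
    sample(name) -> a fresh random value for that variable.
    bad_events: list of (depends_on, holds) pairs, where depends_on is the set
    of variable names the event depends on and holds(assignment) -> bool."""
    assignment = {x: sample(x) for x in variables}
    while True:
        violated = next((ev for ev in bad_events if ev[1](assignment)), None)
        if violated is None:
            return assignment                    # no bad event holds any more
        for x in violated[0]:                    # resample only its variables
            assignment[x] = sample(x)
```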
2010-05-21T21:10:50.000Z
2010-05-21T00:00:00.000
{ "year": 2010, "sha1": "6853997c228ce4dd2dc07e21facdf8ea7cb04e57", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6853997c228ce4dd2dc07e21facdf8ea7cb04e57", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
85646908
pes2o/s2orc
v3-fos-license
Molecular studies on transmission of mung bean yellow mosaic virus (MYMV) by Bemisia tabaci Genn. in Mungbean

The whitefly Bemisia tabaci Genn. is an important pest worldwide because of its ability to cause damage by direct feeding and its role as a vector of plant viruses including geminiviruses. Yellow mosaic virus (YMV) causes a serious disease of pulse crops including mungbean, blackgram, frenchbean, pigeonpea and soybean. Yellow mosaic disease is one of the most important viral diseases of mungbean; it is caused by mungbean yellow mosaic virus (MYMV), leads to severe yield reduction and necessitates the development of MYMV-resistant lines for improved crop yield. Basic studies were carried out to elucidate the characteristics of MYMV transmission by its vector, B. tabaci. Artificial transmission experiments with B. tabaci were conducted under greenhouse conditions using cylindrical nylon cages with wire mesh tops. After a 24 h acquisition access period (ASP) on agroinfected mungbean plants, B. tabaci collected from these agroinfected mungbean plants (confirmed via agroinoculation) were considered viruliferous and transferred to a separate cage with healthy mungbean plants. After a 24 h inoculation access period (IAP), B. tabaci were removed and the plants were sprayed with an insecticide and kept for observation of symptom development for 10 to 25 days in insect cages. Studies with mungbean accessions using ten whitefly adults with 24 h of ASP and IAP resulted in transmission of the virus, at rates of up to 70.50 percent, in MYMVR 111 (At VA 221), MYMVR 29 (At VA 239) and MYMVR 29 (At VA 221). Ten viruliferous whitefly adults did not cause MYMV symptoms in KMG 189 (At VA 221), ML818 (At VA 239) and MYMVR 57 (At VA 221). Twenty viruliferous whitefly adults were able to cause MYMV after 48 h ASP and 24 h IAP and resulted in the maximum transmission efficiency in MYMVR 55 (At VA 221) (85.00%) and MYMVR 55 (At VA 239) (83.50%). The virus was shown to persist in viruliferous whitefly adults, with a discrete fragment of 703 bp detected by the polymerase chain reaction method, while no bands were obtained from non-viruliferous B. tabaci adults reared on the CO 2 brinjal host.

INTRODUCTION

The whitefly, Bemisia tabaci (Genn.), is one of the most economically important pests in many tropical and subtropical regions (Bock, 1982). This polyphagous pest can cause extensive damage in more than 500 species of agricultural and horticultural crops (Greathead, 1986) through its direct feeding and its ability to transmit geminiviruses. Mungbean (Vigna radiata L.) is an important pulse crop in developing countries of Asia, Africa and Latin America, where it is consumed as dry seeds or fresh green pods (Karuppanapandian et al., 2006). Mungbean serves as a vital source of vegetable protein (19 to 28%), minerals (0.18 to 0.21%) and vitamins. India is the leading mungbean producer, covering up to 55% of the total world acreage and 45% of total production (Rishi, 2009). Mungbean yellow mosaic virus belongs to the family Geminiviridae (Fauquet et al., 2003). The family Geminiviridae is divided into four genera, Mastrevirus, Curtovirus, Topocuvirus and Begomovirus (Ramos et al., 2008). Begomovirus is the largest genus of the family Geminiviridae (Dhakar et al., 2010) and is characterized by bipartite or monopartite genomes that are transmitted in a circulative, persistent manner by B. tabaci.
Among biotic agents, plant viruses are responsible for a significant proportion of crop diseases (Prajapat et al., 2011). They cause serious economic losses in many major crops by reducing seed yield and quality (Kang et al., 2005). Yellow mosaic disease (YMD), caused by yellow mosaic virus, is reported to be the most destructive viral disease. Mungbean yellow mosaic virus causes severe yield reduction in all mungbean-growing countries in Asia, including India (Biswass et al., 2008). Among the various diseases, MYMV disease has been given special attention because of its severity and ability to cause yield losses of up to 85% (AVRDC, 1998). Conventional methods have been unsuccessful in developing MYMV-resistant mungbean lines due to the lack of a reliable screening technique. Rogers et al. (1986) developed an innovative technique called "agroinfection", which serves as an alternative route for viral infection of plants by using the Ti plasmid and was demonstrated in the case of Tomato golden mosaic virus. A related technique called agroinoculation has been used successfully in screening; agroinoculation was done using viral constructs mobilized in Agrobacterium tumefaciens strains. This paper reports molecular studies on the transmission of mungbean yellow mosaic virus (MYMV) by B. tabaci in mungbean and the development of a polymerase chain reaction-based technique to detect the virus from its insect vector.

Mass culturing of B. tabaci

Field-collected B. tabaci nymphs and adults were reared in insect cages containing thirty-day-old CO 2 brinjal plants to maintain a laboratory culture at 27°C and 70% relative humidity. The culture was maintained for three generations, and adults two days after emergence were used for the experiments. B. tabaci needed for the insect transmission experiments were collected from the culture using an aspirator.

Acquisition access period of B. tabaci on MYMV agroinfected mungbean plants

Adult B. tabaci were collected from the laboratory culture with the help of an aspirator and transferred into a test tube which was then covered with muslin cloth. Adult whiteflies were starved for 2 h under cool conditions (24°C), after which the mouth of the test tube was opened to allow the adults to transfer to MYMV-agroinfected mungbean plants. Groups of 10 and 20 adult whiteflies were then allowed to feed for acquisition access periods of 24 and 48 h.

Agroinoculated mungbean plants

MYMV-resistant agroinoculated mungbean plants were grown in pots and maintained in a greenhouse (26°C and 64.15% relative humidity) for use in the transmission experiment.

Insect (vector) transmission

The insect (vector) transmission protocol developed by Aidawati et al. (2002) was used. MYMV transmission experiments with B. tabaci were conducted using cylindrical nylon cages with mesh tops. Ten B. tabaci adults were introduced into the cage through a hole which was sealed afterwards. After 24 and 48 h acquisition access periods, B. tabaci adults were removed from MYMV-agroinfected mungbean plants and transferred to a separate cage containing healthy agroinoculated (resistant) mungbean plants. After a 24 h inoculation access period, B. tabaci adults were removed and the plants were sprayed with an insecticide (Dimethoate 30 EC at 1 ml/L) and evaluated for MYMV symptom development 10 to 20 days later. Five potted plants were used for each acquisition access period and the percentage of virus infection was calculated from plants showing MYMV symptoms after 10 to 20 days. Resistance levels were assessed by visual scoring of symptoms under greenhouse conditions (26°C and 64.15% relative humidity), following the 1-9 grade scale for visual scoring of mungbean yellow mosaic virus disease by Nene et al. (1981) (Table 1).
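A minimal Python sketch of how per-treatment summaries could be tabulated from the visual scoring described above follows. It is illustrative only: the grades are made-up examples, and treating any grade above 1 as "infected" is our assumption, consistent with grade 1 of Table 1 denoting no visible symptom.

```python
# Hedged sketch: once each test plant has a 1-9 grade (Table 1), percent
# infection and mean severity can be computed per accession/treatment.

def summarize_scores(grades):
    infected = [g for g in grades if g > 1]          # grade 1 = no visible symptom
    percent_infection = 100.0 * len(infected) / len(grades)
    mean_severity = sum(grades) / len(grades)
    return percent_infection, mean_severity

# e.g. five potted plants scored after 10-20 days (placeholder grades)
print(summarize_scores([1, 3, 5, 2, 1]))             # (60.0, 2.4)
```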
DNA extraction

Total nucleic acids were extracted from individual viruliferous and non-viruliferous whiteflies using the CTAB (hexadecyltrimethylammonium bromide) method with necessary modifications. The quality and quantity of the isolated DNA were measured with a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies, USA) and 1.0% agarose gel electrophoresis before it was used as template DNA for all polymerase chain reactions (PCR). The reagents were purchased from Bangalore Genei Ltd., Bangalore, India.

Detection of MYMV in B. tabaci by polymerase chain reaction

Adults of B. tabaci were collected after a 24 h acquisition access period. Sets of 10, 15, 20 and 25 viruliferous B. tabaci and 10 non-viruliferous B. tabaci were subjected to DNA extraction following the CTAB (hexadecyltrimethylammonium bromide) method of Goodwin et al. (1994), while DNA extraction from MYMV-infected mungbean leaves was conducted using the method of Karuppanapandian et al. (2006). The method of Rojas et al. (1993) was used for amplification of viral DNA from B. tabaci extracts by polymerase chain reaction (PCR). The viruliferous nature of these insects was confirmed by PCR products amplified with viral coat protein gene-specific primers. Individual insects of viruliferous B. tabaci in groups of 10, 15, 20 and 25, ten non-viruliferous B. tabaci, and MYMV-infected mungbean leaf samples were taken for DNA extraction, followed by PCR amplification along with the DNA clone of A. tumefaciens. Amplified DNA fragments were electrophoresed in 1% agarose minigels in TBE buffer and detected under UV light after staining with ethidium bromide (Maniatis et al., 1982).

Table 1. Grade scale for visual scoring of MYMV disease.
Grade 1: No visible symptoms on leaves, or very minute yellow specks on leaves.
Grade 2: Small yellow specks with restricted spread, covering 0.1 to 5.0% of leaf area.
Grade 3: Yellow mottling of leaves covering 5.1 to 10% of leaf area.
Grade 4: Yellow mottling of leaves covering 10.1 to 15% of leaf area.
Grade 5: Pronounced yellow mottling and discoloration of leaves and stunting of plants, covering 50.1 to 75% of leaf area.
Grade 8: Severe yellow discoloration of leaves covering 75.1 to 90% of leaf area.
Grade 9: Very severe yellow discoloration of leaves covering 90.1 to 100% of leaf area.

RESULTS

The results revealed that in the transmission experiment, after a 24 h acquisition access period and a 24 h inoculation access period, ten viruliferous B. tabaci adults were able to transmit the virus in up to 70.50% of the test plants. Typical symptoms appeared after a minimum incubation period of 24 h under greenhouse conditions. The control plants inoculated with non-viruliferous whiteflies did not show MYMV symptoms. The characteristic MYMV symptoms observed on naturally infected plants appeared in the form of small irregular yellow specks and spots along the veins, which enlarged until the leaves were completely yellowed; affected plants bore fewer flowers and pods with smaller, occasionally shriveled seeds. In severe cases, MYMV symptoms were observed 15 to 16 days after virus inoculation.

Detection of MYMV in B. tabaci by polymerase chain reaction (PCR)

Polymerase chain reaction (PCR) amplified fragments of the predicted size from the annealing positions of the coat protein (gene-specific) primers were obtained from groups (10, 15, 20 and 25) of viruliferous B. tabaci and from 10 non-viruliferous B. tabaci (Figure 1).
The virus was proven to be persistent; discrete fragments of 703 bp were observed when the polymerase chain reaction method was applied to detect the virus in viruliferous adults of B. tabaci, while no bands were obtained from non-viruliferous B. tabaci adults.

DISCUSSION

Geminiviruses are single-stranded DNA plant viruses with one or two circular genome components of 2.7 to 3.0 kb in size, encapsidated in twinned particles. They are transmitted by whiteflies. The whitefly species B. tabaci is the most efficient vector of members of the genus Begomovirus (1998; Van Regenmortel et al., 2000). Begomoviruses are currently emerging as a major threat in many tropical and subtropical regions worldwide (Varma and Malathi, 2003). The probability of subsequent transmission of circulative viruses by insect vectors generally increases with increasing acquisition access period until all insects that are able to do so have acquired the virus (Swenson, 1967). Virus acquisition by insect vectors may depend on the virus titer in the infected plant, the ability of the insect to ingest the virus, and the passage of the virus through the midgut wall and subsequent survival in the insect vector. In the present study, transmission efficiencies of up to 85.00% were recorded, with values of 83.50, 83.00, 81.00, 80.50, 75.00 and 70.00% in the other accessions, respectively (Table 3). Aidawati et al. (2002) reported that twenty B. tabaci adults caused 100% tobacco leaf curl virus transmission efficiency with a 24 h acquisition access and inoculation access period in tobacco. Transmission of begomoviruses from Indonesia by B. tabaci has been demonstrated earlier (Aidawati et al., 2002; Sudiono et al., 2001; Rusli et al., 1999).

The expected viral DNA fragment of 703 bp was amplified from ten adult viruliferous B. tabaci, while no bands were obtained from non-viruliferous B. tabaci adults (Figure 1). Detection of MYMV from viruliferous B. tabaci using the PCR technique showed that the amount of viral DNA amplified in the polymerase chain reaction became higher, as indicated by the brightness of the DNA fragment in gel electrophoresis. Similarly, Aidawati et al. (2002) reported that tobacco leaf curl viral DNA fragments of 1.6 kb were observed when the polymerase chain reaction method was applied to detect the virus in viruliferous nymphs and individual adults of B. tabaci, while no bands were obtained from non-viruliferous adults. Similar results were shown by Butter and Rataul (1977) with tomato leaf curl virus, as well as by Cohen and Nitzany (1966) and Mehta et al. (1994) with tomato yellow leaf curl virus. The virus persisted inside the host, with symptom development occurring 1 to 16 days after inoculation. As reported by Cohen and Nitzany (1966), the persistence of the virus in the insect body varies, for example, 1-15 days for tomato yellow leaf curl virus and 8-55 days for tomato leaf curl virus (Butter and Rataul, 1977).

NO indicates that the plants did not show any visible symptom during the observation period.

Table 3. Transmission efficiency of mungbean yellow mosaic virus (MYMV) by B. tabaci in mungbean.
NO indicates that the plants did not show any visible symptom during the observation period.
2018-12-30T01:10:13.871Z
2014-09-16T00:00:00.000
{ "year": 2014, "sha1": "6fc19ccb8c84be1bce780635325d386fe5b92784", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/AJAR/article-full-text-pdf/B38EAAF47398.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "6fc19ccb8c84be1bce780635325d386fe5b92784", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
14790035
pes2o/s2orc
v3-fos-license
RNA Editome in Rhesus Macaque Shaped by Purifying Selection Understanding of the RNA editing process has been broadened considerably by the next generation sequencing technology; however, several issues regarding this regulatory step remain unresolved – the strategies to accurately delineate the editome, the mechanism by which its profile is maintained, and its evolutionary and functional relevance. Here we report an accurate and quantitative profile of the RNA editome for rhesus macaque, a close relative of human. By combining genome and transcriptome sequencing of multiple tissues from the same animal, we identified 31,250 editing sites, of which 99.8% are A-to-G transitions. We verified 96.6% of editing sites in coding regions and 97.5% of randomly selected sites in non-coding regions, as well as the corresponding levels of editing by multiple independent means, demonstrating the feasibility of our experimental paradigm. Several lines of evidence supported the notion that the adenosine deamination is associated with the macaque editome – A-to-G editing sites were flanked by sequences with the attributes of ADAR substrates, and both the sequence context and the expression profile of ADARs are relevant factors in determining the quantitative variance of RNA editing across different sites and tissue types. In support of the functional relevance of some of these editing sites, substitution valley of decreased divergence was detected around the editing site, suggesting the evolutionary constraint in maintaining some of these editing substrates with their double-stranded structure. These findings thus complement the “continuous probing” model that postulates tinkering-based origination of a small proportion of functional editing sites. In conclusion, the macaque editome reported here highlights RNA editing as a widespread functional regulation in primate evolution, and provides an informative framework for further understanding RNA editing in human. Introduction Since its discovery in 1986 [1], an increasing number of genes have been found to be subject to RNA editing, a co-transcriptional process that alters hereditary information by introducing differences between RNA and its corresponding DNA sequence [2].The investigation of such regulation accelerated dramatically after the development of next generation sequencing (NGS) technology, which facilitates the genome-wide determination of DNA and RNA sequences at relatively low cost [3][4][5].Several early NGSbased editome studies in human [4][5][6][7] have challenged the traditional view of human genetics, since RNA editing might be a source of variations inaccessible to previous genetic studies. 
Although the identification of RNA-editing sites by discerning sequence discrepancies between RNA and DNA derived from the same specimen seems to be a straightforward approach, it is errorprone when the RNA/DNA sequences are compiled by short reads generated from NGS technology [5].As is being extensively discussed [5,[8][9][10][11][12], widespread RNA-editing sites detected in a recent study might be largely a result of technical errors.It thus remains technically challenging to accurately identify human editing sites using NGS data [5,[8][9][10][11][12].In addition, given the difficulties of obtaining specimens of different tissues from the same human individual as well as accurately quantifying the levels of editing using merely NGS data, the mechanisms by which the editome is maintained and regulated remain unclear.Recent studies with contrasting findings are thus in line with the notion that RNA editome may be governed by complex regulation, despite the fact that a large proportion of non-canonical editing types were identified due to technical errors [5].First, large cross-tissue variations of the RNA editome were detected [4], while the tissue-biased RNA editome could not be directly explained by the expression and activity of known enzymes catalyzing adenosine deamination [13,14].Second, genome-wide editome analysis in human also suggested large intrapopulation variations [3,7], whereas one study on candidate genes demonstrated otherwise [15].Third, as only sporadic functional RNA-editing sites have been reported, it remains controversial whether the editing events detected by NGS represent functional regulation rather than neutral signals [16].The ''continuous probing'' model postulated that most of the editing sites are neutral with low editing levels, acting as a selection pool for a few functional editing sites [17], further challenged the functional significance of those widespread RNA-editing sites detected by NGS. Overall, NGS technology has helped open the Pandora's box of the editome and so has raised more questions than it answers.Key issues, including experimental and computational strategies to accurately identify the editome, the mechanism by which its profile is maintained, and its functional significance, are presently not well addressed [18].Cross-species comparisons with our close evolutionary relatives would provide a framework to clarify these issues.Therefore, we set out to study the editome in rhesus macaque, with the aim of complementing several recent reports on human editome [4][5][6][7].The macaque editome we report here provides an important evolutionary context to understand RNA-editing regulation in human, emphasizing RNA editing as a form of widespread functional regulation shaped by purifying selection. 
Genome-wide identification of RNA-editing sites in rhesus macaque

We performed a genome-wide study to identify RNA-editing sites in rhesus macaque (Figure 1A). Considering the error-prone gene structures in this species, strand-specific poly(A)-positive RNA-Seq was performed (Materials and Methods). A total of 824.8 million 90-bp paired-end reads were obtained for seven macaque tissues (prefrontal cortex, cerebellum, muscle, kidney, heart, testis and lung) derived from the same animal, and mapped to the macaque genome with high quality (Table 1, Figure S1). As a reference, genomic DNA derived from prefrontal cortex of the same animal was sequenced; a total of 1,763.3 million 90-bp paired-end reads were uniquely mapped to the macaque genome, with 92.2% of the genomic regions successfully sequenced and 91.6% covered by at least ten DNA reads (Table 1, Figure S2). Such deep coverage of the genome and transcriptome in multiple tissues of the same animal provided an ideal dataset to profile the RNA editome in rhesus macaque (Table 1).

Stringent computational pipelines were then developed to place the DNA reads and cDNA reads on the macaque genome, especially for the definition of uniquely-mapped cDNA reads (Materials and Methods). Briefly, one cDNA read was considered to be uniquely mapped only if it had no second-best hit or the second-best hit included at least three additional sequence alignment mismatches, when considering both the genome and the transcriptome mapping models. The technical issues raised recently [8][9][10], such as systematic sequencing errors as well as pseudogene- or paralog-related misalignments in short-read processing, were thus adequately addressed. Based on uniquely-mapped reads, candidate RNA-editing sites were identified by distinguishing sequence discrepancies between DNA and cDNA. This initial list was further subjected to stringent inclusion criteria to control for false-positives (Figure 1A). Briefly, following previous large-scale studies in human [3,6,7,12], a standard computational pipeline with multiple filters was introduced to eliminate false-positives due to amplification bias, sequencing errors and mapping errors. To account for the error-prone gene structures in rhesus macaque, we introduced one additional filter to remove editing sites located in previously mis-annotated transcripts, on the basis of in-house revised transcript structures [19,20] (Figures 1A & 2B; Materials and Methods). Particularly, considering the less stringent requirement for accurately calling the widespread editing sites in Alu regions compared with those in non-Alu regions [21], we introduced more relaxed criteria for identifying editing sites in Alu regions (Materials and Methods). Overall, 31,250 macaque editing sites were identified, with 29 in coding regions, 1,198 in untranslated regions, 15,177 in intronic regions and 14,846 in intergenic regions (Figure 1A and Table S1; Materials and Methods). Similar to the reports in human [21], the vast majority (30,699 of 31,250) of these sites are located in Alu repeat elements.
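A minimal sketch of the kind of per-site screening logic described above is given below. Only the requirement of at least five variant-supporting RNA reads is stated elsewhere in the text; the DNA-coverage cutoff and the strict homozygosity rule are illustrative assumptions, and the function name is ours.

```python
# Hedged sketch: a candidate RNA-editing site should be covered deeply enough
# and look homozygous in genomic DNA, while enough RNA-Seq reads support the
# variant (edited) form.  Thresholds other than min_rna_alt=5 are assumptions.

def is_candidate_editing_site(dna_ref_reads, dna_alt_reads, rna_alt_reads,
                              min_dna_cov=10, min_rna_alt=5):
    dna_cov = dna_ref_reads + dna_alt_reads
    if dna_cov < min_dna_cov:        # genome sequenced deeply enough at this position
        return False
    if dna_alt_reads > 0:            # site must look homozygous reference in DNA
        return False
    return rna_alt_reads >= min_rna_alt   # enough RNA reads carry the variant form

print(is_candidate_editing_site(40, 0, 6))   # True
print(is_candidate_editing_site(40, 3, 6))   # False (likely a genomic polymorphism)
```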
Author Summary

RNA editing is a co-transcriptional process that introduces differences between RNA and its corresponding DNA sequence. Currently, next generation sequencing has allowed study of the editome in a comprehensive and efficient manner. However, fundamental issues involving accurate mapping of the editome as well as its regulation and functional outcome remain unresolved. To further unveil the underlying mechanisms from the evolutionary perspective, we report here the editome profile in rhesus macaque, one of our closest evolutionary relatives. We identified a list of 31,250 RNA-editing sites and deciphered an accurate and informative editome across multiple tissues and animals. We found that the adenosine deamination is associated with the macaque editome, in that both the sequence context and the expression profile of ADARs are relevant factors in determining the quantitative variance of RNA editing across different sites and tissue types. Importantly, some of these RNA-editing events represent functional regulation, rather than neutral signals, as suggested by a substitution valley of decreased divergence detected around the editing sites, an indication of selective constraint in maintaining some of these editing substrates with their double-stranded structure. The macaque editome thus provides an informative evolutionary context for an in-depth understanding of RNA editing regulation.

An accurate, quantitative and representative catalog of RNA-editing sites across rhesus macaque transcriptome

We next set out to confirm that these sites represent bona fide RNA-editing events rather than technical artifacts. Twenty-eight of the 29 editing sites (96.6%) in coding regions (Figure S3, Tables 2 & S2), as well as 77 of the 79 randomly selected sites (97.5%) in untranslated, intronic and intergenic regions (Figure S4, Table S2), were experimentally verified by PCR amplification and Sanger sequencing of both DNA (median length of the PCR products, 449 bp) and the corresponding cDNA (median, 449 bp). The validation rates for both coding and non-coding regions suggested that most of the sites identified in this genome-wide study should be verifiable.

In addition to the traditional approach of PCR amplification and Sanger sequencing, we also performed a medium-throughput study to quantify the levels of coding region-associated RNA editing in the seven tissues using a mass array-based genotyping platform (Materials and Methods). The levels of RNA editing were then estimated and compared between the high-throughput, medium-throughput and low-scale assays (Figure 1B). Strikingly, the levels of RNA editing estimated by high-throughput technology were in close agreement with those by the other two independent platforms, particularly for sites with ≥10 supporting reads (the Pearson correlation coefficients were 0.89, 0.96 and 0.89; Figure 1C). This adequacy of the NGS data in estimation of RNA-editing levels thus indicated that quantitative characterization of the RNA editomes, particularly among tissues, individuals, and species, may be based on integrating in-house RNA-Seq data with public transcriptome data (Figure S5).
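The per-site editing level and the cross-platform agreement described above amount to simple read-fraction and correlation computations. The sketch below is illustrative, with placeholder numbers rather than data from the study.

```python
# Hedged sketch: editing level = fraction of RNA-Seq reads carrying the edited
# (G) base at a site; agreement between platforms is summarised by Pearson r.
import numpy as np

def editing_level(edited_reads, total_reads):
    return edited_reads / total_reads if total_reads else float("nan")

ngs    = [editing_level(e, t) for e, t in [(3, 20), (12, 40), (9, 30)]]  # RNA-Seq estimates
sanger = [0.17, 0.28, 0.31]                                              # orthogonal estimates
print(np.corrcoef(ngs, sanger)[0, 1])   # Pearson correlation between the two estimates
```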
As stringent cutoffs for the sequencing depth of genome were instituted to distinguish RNA editing from systematic sequencing errors, allele-specific expression and duplication-related polymorphisms, we evaluated whether such a rigorous approach may have hampered the site-calling sensitivity in this study.Focusing on coding regions, we increased the coverage of genomic DNA sequences to 115-fold through an established whole-exome capturing and sequencing strategy [3] (Table 1).A total of 83.9 million DNA reads were then obtained and mapped to the macaque genome, with 96.9% of the coding regions being sequenced with high coverage (Figure S2).However, only six additional RNA-editing sites were identified using this targeted genomic reference, but were subsequently discarded by Sanger validation.These false-positives might have arisen largely due to biased capture efficiency in the exome sequencing assay favoring the wild-type allele (Figure 2C).Actually, even considering crossspecies differences, the majority (13 out of 14) of those wellcharacterized human RNA editing sites as summarized by Li et al [4] were included in the initial list (Table S3), suggesting the high calling sensitivity of editing sites in coding region. However, considering the coding regions are less repetitive and well-annotated than other genomic regions, it is not straightforward to generalize the high calling sensitivity in coding region to other genomic regions.Notably, the overall number of macaque editing sites we identified is lower than that in human, in which 84,750 editing sites were identified from poly(A)-positive RNA-Seq data (Supplementary Table 1 in Reference [7,21]).Although multiple factors, such as the differences in experimental design and the inherent difference in genetic makeup, may contribute to the human-macaque difference [7,21] (Discussion), it is likely that false-negatives in RNA-editing detection could still result from our stringent criteria (Materials and Methods, Discussion).Nonetheless, such rigorous approach is necessary for controlling false-positives, especially considering the poor genome annotations and error-prone gene structures in rhesus macaque [19,20,22].Importantly, despite the notion that certain degrees of falsenegatives exist, this dataset may still represent a representative list of macaque editing sites for further interrogation of some global attributes of the RNA editome. Association of ADARs-mediated reactions with the macaque editome Having established the feasibility of our experimental design and the authenticity of the macaque editing dataset, we next aimed to characterize the relevant molecular factors underlying the macaque RNA editome.To this end, several global attributes of the editome were first identified as follows. 
First, contrary to the previous study reporting all twelve possible forms for RNA-editing sites in human with a large proportion of transversions (43%) [5], we found that nucleotide transitions Table 1.Statistics of deep sequencing for one rhesus macaque.accounted for 99.9% of the editing sites in the macaque editome.Furthermore, 99.8% of the identified changes converted A to G, which is presumably a consequence of ADAR-mediated enzymatic reactions (Figure S6).We noted that the fraction of A-to-G transitions increased when more stringent filters were incorporated, from 65.6% in the initial list to 99.8% in the final list (Figure S6), suggesting that most of the nucleotide changes of the transversion type may have been due to technical artifacts [5,[8][9][10], rather than unknown mechanisms as proposed previously [5].Second, the identified sites exhibited considerable variance in editing levels, with the median level ranging from 2.9% in muscle to 30.4% in cerebellum (Figure 3A), indicating a differential regulation profile similar to that reported in human [4].Third, tissue profiling also revealed higher levels of RNA editing in the brain than in other tissues (Figure 3A), affirming a layer of regulation underlying the complex brain development in primates [23][24][25]. In addition, when comparing editing levels across tissues and individuals by integrating the in-house RNA-Seq data with public macaque transcriptome data, we further found smaller intrapopulation variations of the editing levels in comparison with cross-tissue variations as revealed by hierarchical clustering analyses (Materials and Methods; Figure 3B, Table S4).As the editing levels were estimated according to RNA-Seq data where the estimation might be less accurate for sites with lower sequencing coverage, we further used a mass array-based genotyping platform to quantify the levels of editing in coding regions of RNA from the seven original macaque tissues and nine additional samples (Materials and Methods).The mass array data further verified the intra-population conservation of the macaque editome (Figure 3C).Besides these qualitative clustering analyses, we further measured the coefficients of variation (CV) of editing levels across different animals, as well as across different tissues from the same animal (Materials and Methods).As expected, for most editing sites (93.4%), the intra-population standard deviation of editing levels was smaller compared to the average editing level (Figure S7A).In addition, the variability across macaque animals is significantly lower than that across tissues, as indicated by the pair-wise CV comparisons (Wilcoxon onetail test, p-value = 4.2e-6; Figure 3D).Our findings therefore demonstrated that, similar to other fundamental gene regulation mechanisms [26,27], there may be a regulatory commonality of RNA editing within populations, in accordance with a previous study on candidate genes [15].We next investigated the relevant molecular factors underlying the macaque editome, and subsequently made the following observations.First, the sequence context of the overwhelminglyrepresented A-to-G editing sites verified the known attributes of ADAR substrates, in that the nucleotide 59 to the editing site significantly disfavored G, while the 39 nucleotide favored G [28] (Figure 4A).In any given tissue, it seems that the local sequence context flanking the editing sites is a relevant factor for the global editing levels -sites with a matched ADAR recognition motif usually showed significantly 
higher editing levels than those with a partially-matched or non-matched recognition motif (Wilcoxon rank test, p-values shown in Table S5, Figure 4B). Particularly, the 5′ nucleotide seemed to be more determinative, as sites with a 5′-matched motif usually showed significantly higher editing levels than those with a 3′-matched motif only, a finding consistent with previous reports [29,30] (Figure 4B, Table S5). However, quantitative analyses with a Triplet model as previously described [29] revealed that only a small proportion of the site-to-site variance could be explained by the nearby sequence motif (Materials and Methods). We suspect that some confounding factors, such as substrate-specific variations and the quantitative accuracy of editing levels estimated by RNA-Seq, might partially contribute to the low prediction power: when investigating one RNA substrate harboring 15 editing sites with the editing levels estimated according to the Sanger sequencing data, where these confounding issues were controlled, 52.4% of the site-to-site variance could be explained by the sequence motif (Figure S4), a proportion comparable to a previous study using a fixed RNA substrate and peak-based editing level estimation [29].

Especially, we noted a quantitative correspondence of the tissue-biased profile of the RNA editome to the tissue expression profile of ADARs, although previous studies in rodents did not detect a significant correlation [13,14]. First, on the basis of a test for Spearman's rank correlation, 70.8% of the A-to-G macaque editing sites showed a tissue distribution of editing levels positively correlated with the expression of ADARs (Spearman's rank correlation coefficient ≥0.5); such an observation represents a statistically significant excess of editing sites with positive correlations (Monte Carlo p-value < 0.0001; Figure 4C; Materials and Methods). Second, to further provide a quantitative estimate, we performed linear regression analysis to illustrate the association of ADAR expression profiles with the editing levels (Table S6; Materials and Methods). To this end, the R² was used as a quantitative indicator for the proportion of the variance of editing level that may be explained by the ADAR expression profile (Materials and Methods). Compared with the distribution of R² values on randomly shuffled profiles neglecting tissue relationships for the tissue expression profile, the detected distribution of the correlations between the cross-tissue variance of editing levels and ADARs expression could hardly be explained by random permutations (Monte Carlo p-value < 0.0001; Figure 4D), indicating that ADAR expression levels are indeed a relevant factor in determining global editing levels (Figure 4D). In addition, according to the regression analyses, we further found that 209 of these sites (10.7%) were significantly correlated with ADAR1 only, 567 sites (29.0%) with ADAR2, and 31 sites (1.6%) with both ADARs (Table S6; Materials and Methods). For these sites, the distributions of editing levels across seven tissues were shown, which were closely commensurate with the tissue expression profiles of ADARs (Figure 4E & F). After multiple testing corrections, 381 sites (19.5%) still showed significant positive correlation in tissue distribution between RNA editing level and the expression of ADARs (Table S6; Materials and Methods).
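The site-by-site association analysis described above can be sketched in a few lines. The seven-tissue profiles below are placeholders, and the permutation check is only an illustration of the Monte Carlo idea, not the study's exact procedure.

```python
# Hedged sketch: Spearman correlation of one site's cross-tissue editing levels
# with an ADAR expression profile, a linear-regression R^2, and a simple
# permutation (Monte Carlo) check of how often shuffled profiles do as well.
import numpy as np
from scipy import stats

editing  = np.array([0.30, 0.28, 0.05, 0.08, 0.07, 0.12, 0.10])  # one site, 7 tissues
adar_exp = np.array([9.1, 8.7, 2.0, 3.1, 2.5, 4.0, 3.3])          # ADAR expression (e.g. FPKM)

rho, _ = stats.spearmanr(editing, adar_exp)
slope, intercept, r, p, se = stats.linregress(adar_exp, editing)
print(rho, r**2, p)

rng = np.random.default_rng(0)
null = [stats.linregress(rng.permutation(adar_exp), editing).rvalue**2
        for _ in range(10000)]
print(np.mean(np.array(null) >= r**2))   # empirical p-value under shuffling
```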
Overall, our qualitative and quantitative data demonstrated that the intra-population variability of editing levels is significantly lower than that across tissues, and that both the ADAR expression profile and the local sequence context are relevant factors in determining global editing levels.Furthermore, these findings are consistent for sites located in different genomic regions, such as Alu vs non-Alu regions (Figures S8 & S9). Evidence of purifying selection on the editome landscape With the spectrum of macaque editing sites, we next performed a comparative analysis to examine whether the editing sites we identified in rhesus macaque could also be detected in human and chimpanzee orthologous regions.To this end, we integrated public available RNA-Seq data in human or chimpanzee to trace the orthologous regions of macaque editing sites (Table S4).For the 1,111 macaque editing sites with homology in both of these transcriptomes with adequate cDNA coverage (Materials and Methods), 599 (or 53.9%) and 590 sites (or 53.1%) could also be detected in human or chimpanzee, respectively, with 476 sites (or 42.8%) detectable in all three species (Figure 5A).Such extent of overlap was significantly higher than the background, which was calculated using the adjacent non-edited sites to indicate the degree of RNA-Seq sequencing errors (Chi-square test, p-value,2.2e-16,Figures 5A & S9).Some macaque editing sites (138 sites, 12.4%) were found with the edited forms encoded in human or chimpanzee genome (Figure 5A), an observation in line with previous studies on several human candidate genes [31,32].Compared with other genomic regions, RNA-editing events in coding regions showed a particularly higher degree of parallels across the three species, in that, all of these macaque editing sites could also be detected in human and chimpanzee homologous regions (Figure 5A). 
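The overlap-versus-background comparison described above reduces to a 2x2 contingency test. In the sketch below, the 599-of-1,111 figure is taken from the text, while the background counts at adjacent non-edited control positions are placeholders.

```python
# Hedged sketch: detection rate of macaque editing sites in another species'
# transcriptome versus the "detection" rate at adjacent non-edited controls
# (a proxy for RNA-Seq error), compared with a chi-square test.
from scipy.stats import chi2_contingency

detected_at_editing_sites     = 599           # of 1,111 testable macaque sites (from text)
not_detected_at_editing_sites = 1111 - 599
detected_at_control_sites     = 20            # placeholder background count
not_detected_at_control_sites = 1111 - 20

table = [[detected_at_editing_sites, not_detected_at_editing_sites],
         [detected_at_control_sites, not_detected_at_control_sites]]
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)
```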
On the basis of these clues, we further tested whether the crossspecies similarities in RNA editome were maintained by purifying selection due to the functional implications of these regulations, or simply due to the relatively lower sequence divergences among these closely related primate species.As shown above, the local sequence context of the editing site was important in RNA-editing regulation, as it may be implicated in the formation of a suitable ADAR substrate structure [28].Therefore, primate-specific purifying selection nearby the editing site would presumably be an evidence for the functional relevance of the RNA-editing events.When examining the distribution of diverged sites between human and rhesus macaque, we discovered substitution valley of decreased divergence flanking the editing sites, as compared with the more distal regions as background (Figures 5B & S11; Materials and Methods).As a control, for macaque editing sites encoded in both human and chimpanzee with other types of nucleotides, no decreased divergence was observed nearby the focal editing sites in sequence comparison between human and chimpanzee (Figure S11C).Further analysis revealed little effect of expression levels of the host genes on the signatures of substitution valley of decreased divergence (Figure S11E, F & G; Materials and Methods).Overall, the divergence rates dropped by 15.2%, 12.3% and 13.0% for RNA-editing sites located in untranslated, intronic and intergenic regions, respectively, in contrast to 74.0% in the coding regions.The stronger selective constraint detected on coding regions recapitulates the particularly higher degree of parallels of editome in coding regions, as compared with other genomic regions (Figure 5).Due to limited number of editing sites in coding regions, we performed Monte Carlo simulation with random sampling of coding regions across the macaque genome to test whether the detected divergence rate drop was an effect of sampling bias on limited observations.The result revealed that such a possibility is rare (Monte Carlo p-value = 0.005). For RNA-editing sites in coding regions, we also examined the distribution of synonymous divergent sites between human and rhesus macaque surrounding the editing sites.Although synonymous sites have been considered to be largely neutral [33], we noted that their presence near the editing sites was actually more selectively constrained than distant synonymous sites [23,34] (Figure 5C).The synonymous substitution rate nearby the editing site was nearly equivalent to the genome-wide substitution rate of nonsynonymous sites that is under strong purifying selection (Figure 5C; Materials and Methods).We further noted that the genes regulated by these recoding RNA-editing events were significantly enriched in the functional category of biological binding [35] (Hypergeometric test, p-value = 1.7e-3).Among these sites, three were located in orthologous proteins in human with solved crystal structures -the nucleotide/codon re-assignment by RNA editing reportedly regulate the activity of the voltage-gated potassium channel [34] or the efficiency of DNA glycosylases in the removal of damaged nucleotides [36].These RNA-editing events shaped by purifying selection may thus represent a form of functional regulation that underlies processes associated with protein, ion and nucleic acid binding. 
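The "substitution valley" analysis described above can be illustrated with a small windowed-divergence profile. The inputs below are simulated placeholders (per-position divergence flags around each site); the window size, flank length and the simulated dip are assumptions for demonstration only.

```python
# Hedged sketch: profile human-macaque divergence in windows at increasing
# distance from each editing site; a dip near offset 0 corresponds to the
# substitution valley, to be compared against the distal background.
import numpy as np

def divergence_profile(diverged, flank=500, window=50):
    """diverged: 0/1 array, rows = editing sites, columns = positions
    from -flank to +flank relative to the site (2*flank+1 columns)."""
    diverged = np.asarray(diverged)
    centers = np.arange(-flank, flank + 1)
    rates = []
    for start in range(0, 2 * flank + 1 - window + 1, window):
        win = diverged[:, start:start + window]
        rates.append((int(centers[start]) + window // 2, float(win.mean())))
    return rates   # (window midpoint offset, substitution rate) pairs

# toy input: 100 sites x 1001 positions, with lower divergence near the site
rng = np.random.default_rng(1)
positions = np.arange(-500, 501)
p = 0.06 - 0.02 * np.exp(-np.abs(positions) / 100)
toy = rng.binomial(1, p, size=(100, 1001))
for offset, rate in divergence_profile(toy)[:3]:
    print(offset, round(rate, 3))
```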
Taken together, the dampened divergence rate or synonymous substitution rate around the editing sites reflects the evolutionary necessity of retaining some of these editing substrates with their double-stranded structure (Figure 5). However, in contrast to the sites in coding regions, it is possible that a smaller proportion of functional RNA editing sites exist in these non-coding genomic regions, considering the weaker selective constraints detected (Figure 5A & B). These findings thus complement the "continuous probing" model postulating a tinkering-based origination of functional editing sites [17,37] (Discussion).

Macaque editome identification: Experimental and computational strategies

Despite extensive efforts and discussion, accurate definition of the human editome by using NGS data alone remains technically challenging [5,[8][9][10][11][12]. The widespread human editing sites detected in a recent study might have been largely a result of technical artifacts, such as systematic sequencing errors and flaws in the subsequent computational analyses [5,[8][9][10][11][12]. Other studies introduced more stringent pipelines to control for the high false-positive rate, such as by sequencing only target regions with putative RNA-editing sites [4], by removing RNA-editing sites in repetitive genomic regions [3], or by rejecting sites corresponding to genomic polymorphisms [12]. These approaches have significantly improved the accuracy of RNA editing site calling, but additional barriers still exist for unbiased and definite identification of the editome on the genome-wide scale [18]. Particularly for species with poor genome annotations and error-prone gene structures such as the rhesus macaque [19,22], successful editome detection is hampered by difficulties in accurately mapping RNA-Seq reads and discerning discrepancies.

In this study, beyond the filters established previously to remove potentially erroneous editing sites with i) low read coverage [6,7], ii) poor base-calling quality or multiple types of variation [6,7], iii) strand-biased cDNA read distributions [7], and iv) location in repeat genomic regions [3,7,12], we installed additional experimental designs and analytical measures with advantages in eliminating false-positives in our pipeline. First, all NGS assays were performed in macaque tissues derived from the same animal, which effectively excluded individual differences in the genome and transcriptome (Materials and Methods). Second, the strand-specific RNA-Seq technique significantly controlled for potentially ambiguous calls due to the widespread cis-natural anti-sense expression [18,38] (Figure 2A). Third, long paired-end reads were generated in our deep sequencing analysis, ensuring accurate mapping with sufficient sequencing depth on more than 18,000 mRNAs [39] and 96.9% of macaque genomic regions. Fourth, a more stringent read mapping strategy was applied to facilitate the definition of uniquely-mapped reads (Materials and Methods), which efficiently diminished false mapping due to processed pseudogenes [3]. In addition, considering the error-prone gene structure annotations for rhesus macaque [19,20], we further introduced inclusion criteria to remove editing sites located in previously mis-annotated macaque transcripts. Taken together, as we demonstrated above, these efforts ensured the identification of an accurate and quantitative catalog of RNA-editing sites.
However, the stringent criteria we used to control for false positives would cause some false negatives in RNA-editing detection, although the calling sensitivity in coding regions was shown to be good (Figures 1 & 2). To evaluate the degree of false negatives of our stringent computational pipeline, we applied the identical pipeline and inclusion criteria to human poly(A)-positive RNA-Seq data reported previously to identify human editing sites [7,21]. Compared with the original poly(A)-positive RNA-Seq study reporting 84,750 human editing sites (see Supplementary Table 1 in References [7,21]), 20,065 editing sites were identified by our pipeline, with A-to-G transitions accounting for 94.3% of the identified editing sites. Considering that the total sequencing depth of this human study is much lower than that of our study in rhesus macaque, we slightly modified our inclusion criteria for RNA-editing sites by decreasing from five to two the minimum number of RNA-Seq reads required to support the variant form (while keeping all other parameters used in sequence alignment and single-nucleotide variation calling), and consequently identified 80,375 editing sites, a number comparable with the original report in human (84,750 sites; Materials and Methods) [21]. It is obvious that more macaque editing sites would be expected, especially when increasing the sequencing depth of the transcriptome. However, with the experimental efforts in minimizing the effects of computational detection sensitivity in our study, such as the significantly elevated transcriptome sequencing depth in rhesus macaque to increase the detection power of variants, and the strand-specific, long paired-end reads designed to increase the proportion of uniquely mapped reads (Materials and Methods), it is likely that some non-technical factors, e.g., the inherent difference in genetic makeup for the inverted Alu pairs, may also contribute to this human-macaque difference [4,7,40,41].

Viewed together, although our rigorous experimental and computational paradigm would cause some false negatives, it is a necessary trade-off for an accurate and quantitative catalog of RNA-editing sites in rhesus macaque, considering the poor genome annotations and error-prone gene structures in this species [19,20,22]. Importantly, the catalog provides a representative account of RNA-editing sites across the rhesus macaque genome for further interrogation of the global attributes of the RNA editome.
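A quick sanity check used in this kind of re-analysis is the mismatch spectrum of the called sites, i.e., the share of A-to-G transitions among all twelve DNA-to-RNA mismatch types. A minimal sketch with made-up calls (the numbers are illustrative, not the study's data):

```python
from collections import Counter

def mismatch_spectrum(sites):
    """sites: iterable of (ref_base, rna_base) pairs for called editing candidates."""
    counts = Counter((ref.upper(), alt.upper()) for ref, alt in sites)
    total = sum(counts.values())
    a_to_g = counts[("A", "G")] / total if total else 0.0
    return counts, a_to_g

# Example: a call set dominated by A-to-G, as expected for ADAR-mediated editing
calls = [("A", "G")] * 94 + [("C", "T")] * 3 + [("T", "C")] * 3
spectrum, frac = mismatch_spectrum(calls)
print(f"A-to-G fraction: {frac:.1%}")   # 94.0%
```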
Characteristics and implications of the macaque editome

Aside from the technical issues, the present work on the macaque RNA editome provides novel insights into several aspects of the RNA-editing process. First, large-scale sequencing on a broad range of tissue samples from the same or different animals allowed for a comparative editome analysis. We subsequently deduced from such a study that, while there is a large degree of variance between sites and tissues (Figure 3A), the intra-population variability of editing levels is significantly lower than that across tissues, suggesting a regulatory commonality of RNA editing within populations similar to other fundamental gene-regulation mechanisms [26,27] (Figures 3B, C, D, S7 & S8). Second, the global attributes of editing were further verified and quantified to show that the occurrence of RNA editing is correlated with the flanking sequence signatures, as well as with the expression levels of ADARs (Figure 4). The macaque editome is thus partially associated with ADARs-mediated enzymatic reactions, and the cis- and trans-directed mechanisms associated with ADARs, such as the chemical affinity of ADAR binding sites and ADARs concentration, are thus likely to be relevant to the regulation of the macaque editome.

RNA-editing regulation: Functional outcome and significance

While hereditary information is modified by RNA editing, evidence for the functional significance of this process has been largely lacking thus far [17]. Although functional RNA-editing sites have been sporadically reported, they may represent only isolated cases rather than a general mode of regulation. In this study, with an accurate and informative editome defined across multiple tissues and animals, we found some intra-population conservation of the macaque editome, as well as some parallels of the editome across multiple primate species (Figures 3, 4 & 5). However, our findings also suggest that the editome is partially associated with ADARs-mediated enzymatic reactions. It is thus possible that sites showing high affinity to ADARs in one macaque animal would also have high affinity to ADARs in other macaque animals, or in humans and chimpanzees, considering the relatively low sequence divergence among these closely related primate species. To this end, we tested whether the cross-species similarities in the RNA editome were maintained by purifying selection due to the functional implications of these regulations, or simply due to such a passive mechanism. Interestingly, in support of the functional relevance of some of these editing sites, a substitution valley of decreased divergence was detected around the editing sites (Figure 5), suggesting the evolutionary necessity of retaining some of these editing substrates with their double-stranded structure. Taken together, the findings on the population-wide and evolutionary conservation of the macaque editome, as well as the contribution of purifying selection to editome shaping, lend support to the functional significance of this co-transcriptional regulation as a whole.
Interestingly, when investigating the dampened divergence rate for editing sites across different genomic regions, a stronger selective constraint was detected on coding regions, while sites in other regions showed weaker degrees of evolutionary constraint (Figure 5A & B). This analysis implies that, in contrast to the sites in coding regions, a smaller proportion of functional RNA-editing sites exists in non-coding genomic regions (Figure 5A & B). The varied proportions of functional editing sites across different genomic regions thus support the ''continuous probing'' model postulating that most of the editing sites are neutral with low editing levels, acting as a selection pool for a few functional editing sites [17]. However, our findings also suggest that the RNA-editing levels are partially associated with the chemical affinity of ADAR binding sites, as well as with ADAR concentration. Thus, the editing levels are not necessarily low even for those potentially neutral editing sites (Figure S12), a notion that complements the ''continuous probing'' model by illustrating a clearer process for the tinkering-based origination of functional RNA-editing sites [17,37].

Ethics statement

Rhesus macaque tissue samples were obtained from the AAALAC-accredited (Association for Assessment and Accreditation of Laboratory Animal Care) animal facility at the Institute of Molecular Medicine in Peking University. Experiments with animals were done in accordance with protocols approved by the Institutional Animal Care and Use Committee of Peking University and followed good practice.

Library preparation, sequencing, and quality control for

All candidate RNA-editing sites in coding regions that passed the above protocol, as well as seventy-nine randomly selected RNA-editing sites in untranslated, intronic and intergenic regions, were further verified by PCR amplification and Sanger sequencing of both DNA and the corresponding RNA (Figures S3 & S4, Tables 2 & S2). The sequence coverage of these sites ranges from 12 to more than 100 RNA-Seq reads, with estimated editing levels from 3% to 100%. For editing sites in coding regions, we also performed mass array-based genotyping on all cDNA and the matched DNA samples on an iPLEX Gold MassARRAY system (Sequenom Inc.) to independently verify the RNA-editing sites and the corresponding editing levels. Primers were designed with MassARRAY assay design software. Amplification reactions, digestion of unincorporated dNTPs and MALDI-TOF mass spectrometry were performed in accordance with the manufacturer's instructions. Signal intensities for the two alleles were automatically assigned, followed by manual confirmation. Briefly, the genotype was assigned as the ratio of the area of the 'G' signal to the area of both the 'G' and 'A' signals if the editing form was A-to-G; ideally, a ratio of 0 represented homozygous A/A while 1 represented homozygous G/G. Considering the noise in the Sequenom mass array platform [44], a candidate RNA-editing site was confirmed when the ratio of the edited form was ≥0.10 in at least one of the seven cDNA samples derived from macaque tissues, and <0.10 in the DNA samples.
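The mass-array confirmation rule just described can be stated in a few lines of code. The sketch below assumes an A-to-G site with per-sample 'G' and 'A' signal areas; the 0.10 cutoff is the one quoted in the text, while the example values are hypothetical.

```python
def editing_ratio(area_g, area_a):
    """Editing level from Sequenom signal areas for an A-to-G site:
    ratio of the 'G' (edited) area to the total 'G' + 'A' area."""
    total = area_g + area_a
    return area_g / total if total > 0 else float("nan")

def confirm_site(cdna_ratios, dna_ratios, threshold=0.10):
    """Confirm a candidate if the edited fraction reaches `threshold` in at least
    one cDNA sample while staying below it in every DNA sample."""
    return (any(r >= threshold for r in cdna_ratios) and
            all(r < threshold for r in dna_ratios))

# Hypothetical site measured in seven tissue cDNAs and the matched genomic DNA
cdna = [0.02, 0.31, 0.12, 0.05, 0.00, 0.04, 0.09]
dna = [0.01, 0.03]
print(confirm_site(cdna, dna))   # True
```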
To further assess the degree of false negatives of this stringent computational pipeline, two evaluations were performed on the basis of the human YH genome and the associated poly(A)-positive RNA-Seq data [7], which were used previously to identify human editing sites [7,21]. First, we applied the identical pipeline and inclusion criteria used in our study to this dataset to identify human editing sites [7]. Second, considering that the total sequencing depth of this human study is much lower than that of our study in rhesus macaque [7], the inclusion criteria for RNA-editing sites were slightly modified by decreasing from five to two the minimum number of RNA-Seq reads required to support the variant form (while keeping all parameters used in sequence alignment and single-nucleotide variation calling).

Characteristics of RNA-editing sites

The levels of RNA editing were estimated separately for high-throughput, medium-throughput and low-scale data on the basis of read numbers [6], signal intensity contrast [44] and peak height ratio [45] between wild-type and edited forms, respectively. The sequence motif was built by Two Sample Logo [46], with the level of preference/depletion shown in height proportional to scale (Figure 4A).

We evaluated the dependence of editing levels on the sequence motif. The RNA-editing sites were divided into four categories according to the nearby sequence preferences (Figure 4A), with a 'matched' motif referring to the consensus sequence of YAS [Y = T/C, S = C/G], a '5′ matched' motif of YAW [W = A/T], a '3′ matched' motif of RAS [R = A/G], and a 'not matched' motif of RAW. We further performed a quantitative study to estimate how much of the site-to-site variance could be explained by the nearby sequence motif. We fitted the relationship between editing level and the local sequence context, controlling for cross-tissue and intra-population variations, using a Triplet model as previously described [29], in which y_i indicates the editing level of the i-th editing site; A, T, C and G are denoted by 1, 2, 3 and 4; U_i and D_i represent the nucleotides 1 bp upstream and downstream of the i-th editing site; 1{A} is the characteristic function (1{A} = 1 when condition A is satisfied, and 0 otherwise); and e_i is the normally distributed error term. The adjusted R^2 value obtained under the regression model was used to indicate the predictive power of the local sequence context on the editing levels [29].

RNA-editing profile across individuals and tissue types

Public high-throughput datasets of multiple tissues from human [27], rhesus macaque [27,47] and chimpanzee [27] were integrated and processed using a pipeline as previously reported [39]. Mass array-based genotyping data (Sequenom) from multiple tissues were also generated to profile the distribution of editing levels for sites in coding regions across animals and tissues, from which RNA-editing sites without reliable genotyping data were excluded. Hierarchical clusters were built using complete-linkage hierarchical clustering with Cluster (v3.0), on the basis of editing levels across different tissues in different individuals, for all editing sites (Figure 3B) or for several subsets of these editing sites (Figures 3C & S8).
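The four local-sequence categories defined above depend only on the nucleotides immediately 5′ and 3′ of the edited adenosine. A small helper, as a sketch (category names are illustrative):

```python
def motif_category(upstream, downstream):
    """Assign an A-to-G site to one of the four local-sequence categories
    ('matched' YAS, '5prime_matched' YAW, '3prime_matched' RAS, 'not_matched' RAW),
    where Y = T/C, R = A/G, S = C/G, W = A/T."""
    y = upstream.upper() in ("T", "C")       # pyrimidine 5' of the edited A
    s = downstream.upper() in ("C", "G")     # strong base 3' of the edited A
    if y and s:
        return "matched"         # YAS
    if y and not s:
        return "5prime_matched"  # YAW
    if not y and s:
        return "3prime_matched"  # RAS
    return "not_matched"         # RAW

print(motif_category("T", "G"))   # matched
print(motif_category("A", "T"))   # not_matched
```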
Besides the qualitative clustering data, we further measured the coefficient of variation (CV) of editing levels across different animals, as well as across tissues. RNA-Seq data from brain samples of seven animals and from seven tissues of the same animal were integrated and analyzed in standard pipelines for estimation of editing levels and CVs (Table S4). Only those editing sites covered by at least 30 RNA-Seq reads and with at least 5 observations in each group were included. A CV score less than one indicates a standard deviation smaller than the mean, and thus a small intra-population variation in RNA-editing levels.

Expression profiles of ADARs were estimated as previously reported [39], and the tissue-specific correlation between RNA-editing level and ADAR expression was analyzed. Only those editing sites covered by at least ten RNA-Seq reads in each of the seven tissues were included. A cutoff for Spearman's rank correlation coefficient of ≥0.5 was used to indicate a positive correlation between the tissue-biased profile of the RNA editome and the ADARs expression profile, and sites correlated with both ADARs were assigned to the one showing the higher correlation coefficient. To further provide a quantitative estimate, we performed linear regression analysis to illustrate the association of the ADARs expression profile with the editing levels: Y_i = m + b_1 X_1i + b_2 X_2i + e_i, where Y_i indicates the editing levels, m the mean editing level, X_1i and X_2i the expression levels of ADAR1 and ADAR2, b_1 and b_2 the corresponding regression coefficients, and e_i the normally distributed error term. The R^2 was used as a quantitative indicator of the proportion of the variance in editing level that could be explained by the ADAR expression profile. 10,000 Monte Carlo simulations were performed to estimate the distribution of R^2 values on permutation datasets neglecting tissue relationships for the tissue expression profile. Finally, according to the significance tests of b_1 and b_2, we further classified sites as significantly correlated with ADAR1 and/or ADAR2 using a cutoff of single-tailed test p-value ≤ 0.05 and coefficient > 0 (Figure 4E & F).

Figure 1. Genome-wide identification and verification of the RNA editome in one rhesus macaque. (A) Overview of the experimental design: genome-wide identification, and medium- or low-throughput verification, of RNA-editing sites. (B) An example showing the genotyping results for the genomic DNA (gDNA) and cDNA of one verified RNA-editing site (chr11:5028364, KCNA1). The levels of RNA editing were estimated from high-throughput, medium-throughput and low-scale data on the basis of read number, signal intensity contrast and peak height ratio between the edited and wild-type alleles, respectively. The primer peak and the genotype peak on the mass spectrum are indicated by dotted lines in red. (C) Comparison of the levels of RNA editing estimated by high-throughput (H), medium-throughput (M) and low-scale (L) platforms. The example in (B) is highlighted in red. Pearson correlation coefficients between the different platforms are shown on the right. doi:10.1371/journal.pgen.1004274.g001
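As a small numerical illustration of the per-site regression Y_i = m + b_1 X_1i + b_2 X_2i + e_i described above, the sketch below uses ordinary least squares on hypothetical editing levels and ADAR expression values across seven tissues; it is not the authors' statistical code.

```python
import numpy as np

def adar_dependence(levels, adar1, adar2):
    """Fit editing level = m + b1*ADAR1 + b2*ADAR2 + e across tissues for one site,
    returning the fitted (m, b1, b2) and the R^2 of the model."""
    X = np.column_stack([np.ones_like(adar1), adar1, adar2])
    coef, *_ = np.linalg.lstsq(X, levels, rcond=None)
    residuals = levels - X @ coef
    r2 = 1.0 - residuals.var() / levels.var()
    return coef, r2

# Hypothetical values for one editing site measured in seven tissues
levels = np.array([0.12, 0.30, 0.25, 0.40, 0.18, 0.22, 0.35])
adar1  = np.array([1.0, 2.5, 2.0, 3.2, 1.4, 1.8, 2.9])
adar2  = np.array([0.5, 0.6, 0.7, 0.6, 0.5, 0.6, 0.7])
(m, b1, b2), r2 = adar_dependence(levels, adar1, adar2)
print(f"m={m:.3f}, b1={b1:.3f}, b2={b2:.3f}, R^2={r2:.2f}")
```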
Figure 2. Experimental and computational strategies for accurate editome identification in rhesus macaque. Potential false-positives in the RNA editing calling workflow were minimized by a more thorough design in our pipeline strategy. (A) Two discrepancies between RNA and genomic-DNA sequences (highlighted by blue boxes) were located in a cis-natural antisense region where both DNA strands could be transcribed. Strand-specific RNA-Seq clearly distinguished the sequence reads transcribed from the two strands and correctly assigned this site as A-to-G editing, as no discrepancy was detected in the plus-strand transcribed gene. (B) Based on the macaque gene structures defined in-house (RhesusBase Structure), one of the exon-intron boundaries of ENSMMUT00000021567 was incorrectly defined by a previous annotation (Ensembl Structure). Two T-to-A DNA-RNA discrepancies highlighted by blue boxes would be incorrectly identified as T-to-A RNA editing with the RNA-Seq reads being aligned to the mis-annotated transcript structure. (C) The genotype of the site highlighted in the blue boxes was incorrectly recognized as homozygous in DNA and heterozygous in RNA, since only 1 out of 28 sequence reads supported the mutant allele T in DNA, leading to incorrect assignment of a C-to-T editing event. Both Sequenom mass array and Sanger sequencing validations excluded such false-positives, which may arise due to low sequencing coverage and biased allele capture efficiency in the exome-Seq assay. doi:10.1371/journal.pgen.1004274.g002

Figure 3. Characteristics of the rhesus macaque editome. (A) For editing sites in each type of tissue, the distribution of the levels of RNA editing was shown in boxplot. (B) Hierarchical clustering of editing levels of all editing sites across multiple macaque tissues and animals. Editing levels were estimated on the basis of RNA-Seq data in this study (Testis, Lung, Kidney, Heart, Muscle, Prefrontal cortex) and other public RNA-Seq data [Brain (1-6), Cerebellum (1-2), Muscle (1-8), Heart (1-5), Kidney (1-3), Lung (1-3), Testis (1-3)], with missing data shown in dark cyan. (C) Hierarchical clustering of editing levels is shown for selected RNA editing sites located in coding regions. Editing levels were estimated on the basis of mass array-based genotyping in seven macaque tissues derived from the same macaque (Testis, Lung, Kidney, Heart, Muscle, Cerebellum, Prefrontal Cortex), as well as five muscle and four brain samples obtained from different macaque animals [Muscles (A-E), Whole Brains (A-D)], with missing data shown in dark cyan. (D) The distribution of pair-wise comparison of intra-population and cross-tissue coefficient of variance (CV) values is shown in boxplot. doi:10.1371/journal.pgen.1004274.g003
Figure 4. ADARs-mediated enzymatic reactions are associated with the macaque editome. (A) The enriched (above the top line) and depleted (below the bottom line) nucleotides near the focal editing sites are displayed in a Two-Sample Logo, with the level of preference/depletion shown in height proportional to the scale. (B) The editing sites were divided into four categories on the basis of the local sequence context near the editing site, as described in Materials and Methods. For each category, levels of RNA editing are shown in boxplots according to the tissue types. (C) Distribution of the percentages of editing sites showing a tissue distribution of editing levels positively correlated with the expression of ADARs (Spearman's rank correlation coefficient ≥0.5), for 10,000 permutation datasets neglecting tissue relationships for the tissue expression profile. The percentage for the real data is indicated by the arrow with the Monte Carlo p-value. (D) Distributions of R^2 values in models assuming association of editing level with ADARs expression are shown as the Real Data, as well as the Background, which corresponds to randomly shuffled profiles. (E, F) The tissue expression profiles of ADAR1 or ADAR2 were ordered based on RNA expression levels, and normalized editing levels of A-to-G sites were aligned accordingly. These A-to-G editing sites showed similar trends in the distribution of editing levels along the ordered tissue expression profile of ADAR1 (E) or ADAR2 (F). doi:10.1371/journal.pgen.1004274.g004

Figure 5. Contribution of purifying selection to the RNA editome in primates. (A) The percentages of macaque editing sites with corresponding editing sites in human and/or chimpanzee (red bars), or genomically encoded in the two species (blue bars), are shown for the total editome (top), or for editing sites in different genomic regions (bottom). (B) The genomic sequences near the macaque editing sites were compiled according to the distances to the editing sites. For each 6-nucleotide window, the proportion of divergent sites between human and rhesus macaque is shown for different genomic categories. (C) Distribution of human-macaque synonymous divergent sites near the A-to-G editing sites. The codons with RNA-editing sites are highlighted in yellow and each synonymous divergent site in purple. The distribution of synonymous divergence (dS) values near the RNA-editing site, calculated using a 6-codon window, is shown in the lower panel, with the genome-wide dN and dS between human and rhesus macaque indicated by the dotted line. doi:10.1371/journal.pgen.1004274.g005

Table 2. 28 verified editing sites in the macaque coding regions.
2016-05-12T22:15:10.714Z
2014-04-01T00:00:00.000
{ "year": 2014, "sha1": "17b05580f45276e20dc0619469fea2d6e2f1baae", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1004274&type=printable", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "17b05580f45276e20dc0619469fea2d6e2f1baae", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
222126932
pes2o/s2orc
v3-fos-license
Adaptive Gaussian Mixture Model-Based Statistical Feature Extraction for Computer-Aided Diagnosis of Micro-Calcification Clusters in Mammograms

In mammography, detection and categorization of micro-calcification clusters (MCCs) using computer-aided diagnosis (CAD) systems are very important tasks because MCCs are important signs of breast cancer at an early stage. However, conventional CAD methods only classify MCCs into benign and malignant types, and no method has been developed for the medical requirement of classifying MCCs into more detailed categories according to their spatial distribution. To provide a cogent second opinion, we specifically focus on analyzing the MCCs' spatial distribution and propose an adaptive Gaussian mixture model-based method to extract the statistical features of the spatial distribution in this study. By mimicking the radiologists' workflow, the proposed method uses the main feature of each spatial distribution to classify the MCCs and then provides a cogent second opinion to increase the confidence level of diagnosis decisions. The experiments were performed on 100 mammographic images with MCCs from a clinical dataset. The experimental results showed that the proposed method was able to detect the MCCs and classify the spatial distribution of the MCCs effectively.

Introduction

Mammography has gained wide acceptance as the most effective, low-cost, and highly sensitive technique for detecting breast cancer at an early stage, resulting in at least a 30% reduction of breast cancer deaths [1]. It is a heavy burden on radiologists to provide an accurate and efficient evaluation for the enormous number of mammograms generated in widespread screening. Indeed, there are some limitations of human observation; 10 to 30% of breast lesions are missed during routine human screening [2]. With the advancement of digital image processing, pattern recognition, and artificial intelligence, radiologists have an opportunity to ease their burden with the second opinion provided by computer-aided diagnosis (CAD) systems. The CAD system has become one of the most important research subjects in medical imaging and diagnostic radiology. In fact, it has been reported that readers' sensitivity can be increased by 10% on average with the assistance of CAD systems [3]. There are several topics in the development of mammographic CAD systems, including detection and classification of micro-calcification clusters (MCCs), masses, and architectural distortions [1]. Among the various types of breast abnormalities visible in mammograms, the presence of MCCs is an important sign for the detection of breast cancer at an early stage [2]. Micro-calcifications (MCs) are small bright spots in mammograms, and their frequency increases with the age of women [4]. As shown in Fig. 1, MCCs are clustered MCs in mammograms. Although the high spatial resolution of mammography enables the detection of micro-calcifications at an early stage, some MCCs are not an indication of possible cancer according to their mammographic anatomy [5]. Therefore, it is important to distinguish between benign MCCs and malignant MCCs. In previous research, most authors enhance the original mammogram to segment and detect the MCCs [6]-[9]. After that, they extract features of the regions of interest (ROI) to classify the MCCs into benign and malignant types [10]-[16].

Figure 2. Five spatial distribution categories of MCCs used in the BI-RADS categorization. (a) Grouped: pleomorphic and linear micro-calcifications distributed in a small area. Grouped MCCs are considered benign or suspicious according to the morphology [4].
(b) Regional: scattered in a larger volume of breast tissue and not in the expected ductal distribution. The malignant probability of regional MCCs is described as about 26% [4]. (c) Diffuse: round micro-calcifications diffusely distributed within the breast. Most diffuse MCCs have a benign aspect [4]. (d) Segmental: calcium deposits in ducts and branches of a segment or lobe. The malignant probability of segmental MCCs is described as about 62% [4]. (e) Linear: pleomorphic MCs following the distribution of a duct. The malignant probability of linear MCCs is described as about 60% [4].

However, for a cogent second opinion, there is a medical requirement to classify the MCCs into more detailed categories of several degrees of malignancy according to mammographic anatomy, such as spatial distributions and shapes. CAD methods for extracting the shape features have been reported [17]. On the other hand, no statistical method has been applied to analyze the MCCs' spatial distribution quantitatively. Thus, we focus on the spatial distribution of MCCs in this study based on a statistical method. Figure 2 illustrates the five different spatial distribution categories of MCCs used in the breast imaging-reporting and data system (BI-RADS) categories. These distributions refer to the arrangement of the calcifications inside the breast and are related to the probability of malignancy [4]. To analyze the spatial distribution of MCCs, we previously proposed an adaptive Gaussian mixture model (GMM) method, which can effectively classify three types of spatial distributions (linear, diffuse, and grouped) but has the shortcoming that it cannot distinguish regional from segmental MCCs [18]. The cause of this shortcoming is that the difference between segmental and regional MCCs not only depends on the spatial distributions but also relates to the duct architecture. According to the BI-RADS categories, the main difference between segmental and regional MCCs is that segmental MCCs develop along the breast ducts while regional MCCs do not. In this study, to complete the analysis and further classify the remaining two types of spatial distribution of MCCs, we propose a novel method that takes the duct feature into account [19]. The new feature is integrated into the adaptive GMM method, and the resulting method can analyze the features of all five types of MCCs' spatial distributions. Experimental evaluation using clinical datasets demonstrates that the proposed method can classify all five types of MCCs into spatial distribution categories.

Proposed Method

The proposed method consists of four major procedures: pre-processing to enhance the MCs, the adaptive GMM method [18] to model the MCCs' spatial distributions, MCC detection, and spatial distribution categorization using the GMM and the duct feature.

Pre-Processing

Since most mammograms have a low intensity contrast between MCs and the surrounding tissue, pre-processing techniques are necessary in order to enhance the MCs in the image [20]. In this study, we use a top-hat transform, a morphological technique that enhances the MC candidates by removing the smooth background and extracting the small bright blobs. The top-hat transform is given by t = f − (f • b), where f is the original image, • denotes the opening operation, and b is a disk structuring element [21].
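A minimal sketch of this enhancement and segmentation step, using OpenCV's morphological top-hat. The disk radius and the threshold form T = mu + k*sigma are illustrative assumptions (the paper sets its threshold empirically), not the exact values used in the study.

```python
import cv2

def enhance_microcalcifications(mammogram, disk_radius=7):
    """White top-hat enhancement t = f - (f opened by b): subtract the morphological
    opening of the image so that only small bright blobs (MC candidates) survive.
    The disk radius is an illustrative choice."""
    b = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                  (2 * disk_radius + 1, 2 * disk_radius + 1))
    return cv2.morphologyEx(mammogram, cv2.MORPH_TOPHAT, b)

def segment_candidates(tophat, breast_mask, k=3.0):
    """Threshold the enhanced image inside the breast region.
    T = mean + k*std is an assumed form of the empirical threshold."""
    vals = tophat[breast_mask > 0]
    T = vals.mean() + k * vals.std()
    return (tophat > T) & (breast_mask > 0)
```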
The bright blobs, most of which are MCs in mammograms, are suppressed by the opening operation, and the subtraction then yields an image in which only the bright blobs fitting the structuring element remain. Figure 1 shows an original mammogram, and Fig. 3 shows the result of the top-hat transform. In Fig. 3, the MCs are enhanced against the breast region. After the MCs are enhanced, individual MC candidates can be segmented by thresholding [22]. In this study, we determined a threshold T, expressed in terms of μ and σ and tuned in experiments, where μ is the mean gray value of t within the breast region and σ is the standard deviation of the gray value of t within the breast region. Figure 4 shows an example of the MC candidate detection result; the bright points are the MC candidates.

MCC Modeling Using an Adaptive GMM

The GMM is a robust and simple model that is capable of providing spatial distribution information. The robustness of the GMM makes the result insensitive to small variations caused by misdetected non-MC candidates. For these reasons, we use the GMM to cluster the MC candidates and then obtain the parameters to extract the spatial features. In this study, the spatial coordinates of the MCs are modeled as a GMM, and the MCCs are subsequently detected and categorized based on the statistical characteristics of the GMM. Let y = [y_1, ..., y_n]^T represent the coordinates of the MCs, where n is the number of MC candidates. The distribution of the MCs' coordinates can be approximated by the probability density function p(y_i | θ) = Σ_{m=1}^{k} α_m p(y_i | θ_m), where k is the number of components, α_1, ..., α_k are the mixture weights, θ_m is the set of the mean μ_m and covariance matrix Σ_m defining the m-th Gaussian component, and θ ≡ {θ_1, ..., θ_k, α_1, ..., α_k} is the complete set of parameters needed to specify the Gaussian mixture. The expectation-maximization (EM) algorithm is an iterative method to estimate the parameters of the GMM. The number of components k needs to be estimated because it is a necessary input of the EM algorithm, but the number of MCCs in a mammogram is generally unknown. We employ the minimum message length (MML) criterion [23] to estimate the number of components. The message length is a quantity used to evaluate model fit. For the message length of model parameters θ and dataset y, Wallace and Dowe (2000) and Baxter and Oliver (2000) gave a formula in which c is the dimension of θ, F(θ) is the expected Fisher information matrix, and |F(θ)| denotes its determinant [24], [25]. The number of components is estimated by finding the minimum of the message length with respect to the parameters.

The adaptive GMM method can be described by the flow chart in Fig. 5. Instead of using model selection criteria to choose one among a set of candidate models, the adaptive GMM method integrates estimation and model selection in a single algorithm. Firstly, we determine a maximum and a minimum number of components. Secondly, we randomly initialize the components' parameters. Thirdly, we calculate the posterior probability ω_m^(i) = α_m p(y_i | θ_m) / Σ_{j=1}^{k} α_j p(y_i | θ_j), which can be interpreted as the posterior probability that point y_i belongs to the m-th component, and then update the mixture parameters over all n MC candidates. The fourth step is to calculate the message length when the updated model is used to fit the data. We repeat the third and fourth steps until the message length converges.
Then we decrease the number of components by 1 and repeat these steps until the number of components equals the minimum number. Finally, we find the minimum message length and output the best model.

MCC Detection

After obtaining the GMM of the MCCs, we can analyze the distribution of the model components. The analysis is based on distribution features that can be calculated from the estimated GMM parameters. The weight α_m describes the number of MC candidates in each component. The mean μ_m of a component is the spatial coordinate of the component's central point, so we can obtain the location of the components. We use singular-value decomposition (SVD) to factorize the covariance matrix Σ_m of the GMM as Σ_m = U E U^T, where U is a unitary matrix and E is a diagonal matrix. U contains the direction information of the component, and the elements of E are positively correlated with the axes of the component: l_1 and l_2 are the semi-major and semi-minor axes of the component, respectively, and φ_1 represents the angle between the component direction and the vertical direction. From these quantities we obtain the component's size and shape information, where S is the area of the component and e is the eccentricity of the component. Then we begin to detect MCCs. Because not all the components correspond to MCCs, we need to remove the non-MCC components. Most non-MCC components have a small weight, and some non-MCC components have a large eccentricity. In view of these observations, we calculate a ratio r for each component. The components with large r are detected as MCC components, and the others are regarded as non-MCC components.

Spatial Distribution Categorization

The final stage is to categorize the MCCs into five categories by using their spatial distributions and the duct feature. According to the BI-RADS categorization [4] shown in Fig. 2, the basic idea of the proposed method to represent the anatomical features of each type of MCC distribution is as follows. Let us define the thresholds T_P1, T_P2, T_e, T_φ and the area of the breast region S_b. As shown in Fig. 6, the breast ducts in mammograms can roughly be simulated as exponential curves [19], and the duct direction can be calculated from the simulated curve, where f(x) is the exponential curve for the simulated duct, B is the parameter of the exponential curve, and φ_2 represents the duct direction. Thus, segmental MCCs can be identified by the difference between the MCCs' direction and the breast ducts' direction (|φ_1 − φ_2| < T_φ). Figure 7 shows the flowchart of the categorization process. The thresholds T_P1, T_P2, T_e, and T_φ were selected in experiments.

Experimental Results

To evaluate the performance of the proposed method, we conducted an experiment on mammograms with MCCs. In the experiment, 100 mammograms with 111 MCCs were selected from an image database acquired from 2006 to 2010 at Tohoku University Hospital. The size of the digital mammograms is 6,880 × 9,480 pixels. Among the 111 MCCs, there were 66 grouped MCCs, 8 diffuse MCCs, 19 segmental MCCs, 6 linear MCCs, and 12 regional MCCs, as diagnosed by radiologists. In order to evaluate the detection results, we computed a free-response receiver operating characteristic (FROC) analysis, of which the test variable is r.
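As a rough sketch of how such an FROC analysis can be traced by sweeping a detection threshold on r (with hypothetical per-component scores and ground-truth match labels; this is not the authors' evaluation code):

```python
import numpy as np

def froc_points(scores, is_true_mcc, n_true_total, n_images):
    """Trace an FROC curve by sweeping a threshold on the component ratio r.

    scores       : r value of every detected component (hypothetical input)
    is_true_mcc  : True if that component matches an annotated MCC
    n_true_total : total number of annotated MCCs in the dataset
    n_images     : number of images in the dataset
    """
    scores = np.asarray(scores, dtype=float)
    is_true_mcc = np.asarray(is_true_mcc, dtype=bool)
    points = []
    for thr in np.unique(scores)[::-1]:            # from strict to permissive
        detected = scores >= thr
        tp_rate = is_true_mcc[detected].sum() / n_true_total
        fps_per_image = (~is_true_mcc[detected]).sum() / n_images
        points.append((fps_per_image, tp_rate))
    return points
```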
The true-positive (TP) rate is defined as the ratio of the number of correctly detected MCCs to the total number of MCCs, and false positives (FPs) are defined as the number of mis-detected (non-MCC) detections in the experiment. The FROC curve is a plot of the TP rate (y-axis in Fig. 8) versus the average number of FPs per image (x-axis in Fig. 8). By counting the TP rate and FPs per image, the proposed method achieves a TP rate of 80% with 1.2 FPs per image and a TP rate of 90% with 1.7 FPs per image. Cohen's kappa value [26], a statistic which measures inter-rater agreement for categorical items, is utilized to evaluate the categorization result. It is generally thought to be a more robust measure than a simple percent agreement calculation, as it takes into account the possibility of the agreement occurring by chance. Figure 9 shows the histogram of the area percentage, Fig. 10 shows the histogram of the eccentricity, and Fig. 11 shows the histogram of the direction difference between the MCCs and the simulated breast ducts, respectively. The manual thresholds T_P1, T_P2, T_e, and T_φ, selected by maximizing the classification accuracy, are shown in the histograms. Table 1 shows the categorization result with the manual thresholds. The accuracy was nearly 70%, with a Cohen's kappa value of 0.52, corresponding to a moderate agreement.

Discussion

To develop a CAD system that can classify MCCs into BI-RADS categories, it is necessary to analyze the MCCs' mammographic anatomy, such as spatial distributions and shapes. Although there are several conventional methods for the evaluation of MC shapes, this study is, to the best of our knowledge, the first attempt at categorizing MCCs' spatial distributions. The experimental results suggest that the semi-automatic categorization with manual thresholding can classify MCCs' spatial distributions according to the BI-RADS categories. However, a clinically useful CAD system needs to provide an appropriate automatic categorization. Taking this problem into account, we used Gaussian mixtures to fit the distributions of each feature, as shown in Figs. 12, 13, and 14. We calculated the parameters of the Gaussian mixtures by using the EM algorithm. Corresponding to the manual thresholding, the histograms of the area percentage, the eccentricity, and the direction difference were fitted by 3, 2, and 2 Gaussian distributions, respectively. From the Gaussian mixture fits and the MCC components' features, we can calculate the likelihood of each MCC component belonging to each category, where S_b is the area of the breast region, f_p1, f_p2, and f_p3 are the Gaussian mixture fits of the area-percentage histogram, f_e1 and f_e2 are the Gaussian mixture fits of the eccentricity histogram, and f_d1 and f_d2 are the Gaussian mixture fits of the direction-difference histogram. Table 3 shows the automatic categorization result according to the maximum likelihood. The accuracy was 68%, and the result showed that the proposed method is robust. Deep learning-based methods may achieve better results in benign/malignant categorization. However, there are several disadvantages of deep learning-based methods for this research. First, we need an explainable system for radiologists in clinical situations, one that can explain why the system makes a decision. Neural networks work in a black-box fashion, so it is difficult to explain a decision made by a deep learning-based system.
In addition, it is very difficult to use a deep learning-based method for MCC detection because the spatial distribution information of very small MCs may be removed in the pooling steps. Last, deep learning-based methods need a large number of labeled mammograms, but the preparation of labeled data is difficult. Therefore, it is difficult to use a deep learning-based method in this research. Moreover, it should be noted that there is great potential for improvement of the categorization accuracy. Firstly, the remaining non-MC candidates may affect the modeled MCC components' shapes or locations; utilizing another method or structuring element for the top-hat transform may yield a better segmentation [27]. Secondly, because of the limitations of the duct simulation method [19], the simulation can make errors, which is the reason for miscategorization in some cases. A more accurate simulation may improve the accuracy of the categorization. Last of all, the accuracy may change with either more cases or better feature selection. The features used to categorize MCCs in this study were selected according to the BI-RADS categorization, but there are other morphological features and texture features [12] not used here, which may be useful to classify MCCs.

Conclusion

Diagnosis of MCCs in mammograms is a difficult task even for expert radiologists. The spatial distributions of MCCs can provide valuable information for radiologists to diagnose MCCs. In this paper, an adaptive GMM method for the diagnosis of MCCs in mammograms has been developed. The proposed method can model the MCCs in mammograms and classify them into spatial distribution categories. Besides the spatial distribution of MCCs, the shape of individual MCs is another important piece of mammographic anatomy information. In future work, we will conduct experiments on more mammograms, including both mammograms with MCCs and normal cases, for detection and spatial distribution categorization, and will classify the MCCs into shape categories. Then, we can classify the MCCs into BI-RADS categories by combining the shape and spatial distribution information.
2020-07-16T09:07:04.234Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "09b4dab49ae572902954895f0f0bfaa9b400b76a", "oa_license": null, "oa_url": "https://www.tandfonline.com/doi/pdf/10.9746/jcmsi.13.183?needAccess=true", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "cf8c04a86a269c5b04e15f9d3fad650a7160e6df", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
270877386
pes2o/s2orc
v3-fos-license
Enhanced brackish water desalination in capacitive deionization with composite Zn-BTC MOF-incorporated electrodes

In this study, composite electrodes with a metal–organic framework (MOF) for brackish water desalination via capacitive deionization (CDI) were developed. The electrodes contained activated carbon (AC), polyvinylidene fluoride (PVDF), and zinc-benzene tricarboxylic acid (Zn-BTC) MOF in varying proportions, improving their electrochemical performance. Among them, the E4 electrode with 6% Zn-BTC MOF exhibited the best performance in the CV and EIS analyses, with a specific capacitance of 88 F g−1 and a low ion charge transfer resistance of 4.9 Ω. The E4 electrode showed a 46.7% increase in specific capacitance compared to the E1 electrode, which did not include the MOF. Physicochemical analyses, including XRD, FTIR, FESEM, BET, EDS, elemental mapping, and contact angle measurements, verified the superior properties of the E4 electrode compared to E1, showing successful MOF synthesis, desirable pore size, suitable elemental and particle-size distributions of the materials, and a marked hydrophilicity enhancement. In evaluating the salt removal capacity (SRC) in various setups using an initially 100.0 mg L−1 NaCl feed solution, the asymmetric arrangement of the E1 and E4 electrodes outperformed the symmetric arrangements, achieving a 21.1% increase in SRC to 6.3 mg g−1. This study demonstrates the potential of MOF-incorporated electrodes for efficient CDI desalination processes.

Zn-BTC MOF synthesis

Zn-BTC MOF was prepared by a simple solvothermal method 49, as depicted in Fig. S1. Initially, 1.8 g of Zn(NO3)2·6H2O and 0.6 g of C9H6O6 were each dissolved in 30 mL of ethanol by constant stirring for 30 min. Subsequently, both solutions were mixed together and continuously stirred for another 60 min. Then, the mixture was transferred to a 75 mL Teflon-lined stainless-steel autoclave, which was heated at a rate of approximately 5 °C per minute. The reaction lasted for 14 h at 130 °C. Afterwards, the autoclave was cooled down to room temperature. The resultant milky crystalline precipitate of Zn-BTC MOF was centrifuged, washed several times with fresh ethanol and deionized water, and dried in a vacuum oven at 80 °C for 12 h. The yield of the Zn-BTC MOF prepared at this stage, relative to the metal salt used, was about 61.1%.

Composite electrode fabrication

Composite electrode fabrication consists of two stages, as shown in Fig. S2: (1) preparation of the electrode ink and (2) coating of the resultant ink on a current collector. Ink preparation is the key step, so it is essential for the resulting ink to be homogeneous. In preparing the ink, six different electrode compositions (i.e., E1, E2, E3, E4, E5, and E6), comprising three components (i.e., AC, PVDF, and Zn-BTC MOF), were investigated, as listed in Table S1. For the preparation of each composite electrode, AC, PVDF, and Zn-BTC MOF were weighed according to the specified composition formula indicated in Table S1. Then NMP solvent was added to each composition, and the mixtures were immersed in an ultrasonic bath for 20 min. A completely homogeneous ink was obtained by placing it on an electromagnetic stirrer at ambient temperature for at least 12 h; subsequently, cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) tests were conducted in a three-electrode cell. The electrode with the best performance in these electrochemical tests was then selected for further characterization and desalination tests.
The specific capacitance and overall electrochemical resistance were measured using CV and EIS tests, respectively 25,54. For this purpose, CVs were recorded in 1.0 M NaCl at a rate of 5.0 mV s−1 over a potential range of −0.5 to +0.5 V (versus Ag/AgCl). EIS was conducted over a frequency range of 700.0 kHz to 1.0 mHz, with an alternating potential amplitude of 10.0 mV around the open circuit potential. Each of the prepared inks was coated on a glassy carbon electrode (2.0 mm diameter) as the working electrode. The counter electrode was a 3.0 cm2 rectangular platinum (Pt) plate, and the reference electrode was an Ag/AgCl electrode in saturated KCl.

The specific capacitance values C (F g−1) were determined from the I-V curve according to Eq. (1) 55, where S is the area surrounded by the CV curve, V is the potential window (V), m is the mass of active material on the electrodes (g), and ϑ is the potential scan rate (V s−1).

After selecting the most suitable electrode composition in terms of electrochemical performance by the CV and EIS tests, it is necessary to test the selected electrode in a CDI cell to evaluate its desalination performance. The composite electrode fabrication process for the desalination tests was as follows. First, for each desalination test, the selected electrode composition (as the active layer of the electrode) was coated with a soft brush onto two circular pieces of carbon cloth (each 4.0 cm in diameter) placed on the surfaces of two graphite sheets (each 8.0 cm in diameter) serving as anode and cathode; the electrodes were then dried completely in three steps in a vacuum oven: at 60 °C for 3 h, at 80 °C for 2 h, and at 100 °C for 1 h. The electrodes were then removed from the oven and allowed to cool to ambient temperature. The electrodes were rinsed with deionized water to remove contaminants. Finally, the electrodes were placed in a vacuum oven at 100 °C for 2 h to dry completely. In the end, the total dried mass of the active layer on each composite electrode was 0.11 g, with a thickness of 260 µm. The CDI tests were conducted in a batch setup at ambient temperature using a 50.0 mL NaCl feed solution with an initial concentration of 100.0 mg L−1 (initial electrical conductivity of 253.4 µS cm−1), which is regarded as brackish water. The NaCl solution conductivity was monitored with a conductivity meter during the test. The relationship between conductivity and NaCl concentration was obtained by preparing a calibration curve before the experiments.

CDI experimental setup

CDI experiments were performed with a batch-mode setup. It contained a feed solution reservoir, a peristaltic pump (Lab 2015, Shenchen Co., China), a galvanostat/potentiostat (SP-150, Bio-Logic Science Instruments SAS, France), a conductivity meter (EC-470 L, ISTEK Co., Korea), a pH meter (P25, ISTEK Co., Korea), and a lab-made CDI unit cell. To investigate the performance of the electrodes, a CDI device (Fig. 1) was constructed. This device consisted of two circular sheets of plexiglass for encasement and two composite electrodes containing active material coated on carbon cloth fixed on circular graphite sheets as current collectors, separated by a nylon mesh spacer. Additionally, several silicone rubber gaskets were used for sealing.
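A sketch of how the specific capacitance can be computed from a recorded CV loop. The prefactor convention C = S / (2 m ϑ ΔV) is an assumption here (a factor of 2 is common when integrating over the full cathodic-anodic sweep); the exact form of the paper's Eq. (1) is not reproduced in the text above.

```python
import numpy as np

def specific_capacitance(voltage, current, mass_g, scan_rate_v_s):
    """Specific capacitance (F/g) from one CV loop.

    S is the area enclosed by the I-V loop (numerical integration of the current
    over the closed voltage path); the assumed convention is
    C = S / (2 * m * scan_rate * dV).
    """
    voltage = np.asarray(voltage, dtype=float)
    current = np.asarray(current, dtype=float)
    S = abs(np.trapz(current, voltage))        # enclosed loop area, A*V
    dV = voltage.max() - voltage.min()         # potential window, V
    return S / (2.0 * mass_g * scan_rate_v_s * dV)
```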
Physicochemical/electrochemical characterizations

To evaluate Zn-BTC MOF crystal formation, an X-ray diffraction (XRD) test was performed over the 2Θ range from 5 to 90 degrees, using a Philips Xpert instrument (Netherlands) with a Cu-Kα radiation source. In order to investigate the chemical structure of H3BTC, Zn-BTC MOF, AC, and the fabricated electrodes, Fourier transform infrared (FTIR) tests were carried out using a Thermo Electron Scientific Instruments LLC device (USA) in the spectral range of 400-4000 cm−1. To study the morphology of the Zn-BTC MOF and the composite active material of the electrodes, field emission scanning electron microscopy (FESEM) images at magnifications of 1, 5, and 20 µm were taken with a MIRA3 TESCAN device (Czech Republic), enabling a comprehensive analysis of their structural characteristics. Energy dispersive X-ray spectroscopy (EDS) and elemental mapping tests were conducted with a MIRA3 TESCAN device (Czech Republic) to further confirm the elemental composition and distribution within the materials. The particle size distribution of the Zn-BTC MOF was also analyzed and estimated using the ImageJ software. The SSA and mean pore diameter of AC and Zn-BTC MOF were calculated based on the adsorption-desorption isotherms of nitrogen gas at liquid nitrogen temperature.

Zn-BTC MOF physicochemical characterization

The XRD test was performed to confirm the construction of the Zn-BTC MOF. It is necessary to match the spectrum obtained from the as-prepared MOF with the spectra obtained from samples reported in previous studies 48,58,59. Figure S3a illustrates the XRD spectrum of the Zn-BTC MOF synthesized in this work and the XRD spectrum of the samples synthesized by Osman et al. 59. According to Fig. S3a, a highly intense peak at 2Θ = 10° and some minor peaks at 2Θ = 15.64°, 17.72°, and 26.16° are observed, confirming the successful construction of the Zn-BTC MOF 48,58,59. FTIR is another test employed for investigating the Zn-BTC MOF structure, considering that the bonds in the Zn-BTC MOF are formed by H3BTC organic ligand molecules 60,61. The FTIR spectrum of the Zn-BTC MOF and the spectrum of the H3BTC organic ligand were measured, as shown in Fig. S3b, and are described in the supplementary information.

The morphology of the Zn-BTC MOF particles was investigated using FESEM images. Figure S4a shows images at 1, 5 and 20 μm magnifications. The Zn-BTC MOF particles are spherical and polyhedral in shape. The existence of two different shapes (spherical and polyhedral) with different sizes for the Zn-BTC MOF particles can be caused by small variations in temperature during the synthesis stage in the autoclave 50,62,63.

The particle size distribution was obtained from the FESEM images using the ImageJ software. Figure S4b shows that particles with a diameter between 30 and 500 nm are the most abundant. The presence of nano- and micro-particles in the composite electrode structure can be influential in two ways. The use of Zn-BTC MOF with nanometer-scale dimensions can improve dispersion and uniformity in the electrode structure and thus enhance the overall stability of the structure. Conversely, coarser particles with micrometer-scale dimensions can afford more space between the AC particles in the electrode, consequently leading to better diffusion and greater access of ions to the active sites within the electrode structure 64,65.
The EDS test is used to identify the type and quantity of elements, and the elemental mapping test is used to assess the quality of the elemental distribution. The EDS result, shown in Fig. S5a, confirms the elemental composition of the Zn-BTC MOF (i.e., C, O, Zn, and N). The additional peak observed belongs to aluminum and is caused by the aluminum surface of the sample holder 63,66. Also, in Fig. S5b, the elemental distribution of the MOF can be seen, which demonstrates the good distribution of all elements in the structure.

The SSA, pore size distribution, and pore volume of the Zn-BTC MOF were assessed using the BET test, as well as nitrogen adsorption and desorption isotherms. These analyses generated the graphs and tables presented in Fig. S6 and Table S2, respectively. According to the adsorption and desorption diagram in Fig. S6a, the adsorption isotherm of this MOF is of the fourth type with a hysteresis loop of the third type 17,67. This indicates non-rigid, plate-like meso- and micropores in its structure 17,63,67. According to Fig. S6 and Table S2, an SSA of 34 m2 g−1, a pore volume of 0.096 cm3 g−1, and a mean pore diameter of 11.54 nm were obtained using the Barrett-Joyner-Halenda (BJH) method. It has been demonstrated that a mesoporous structure performs better than macro- and microporous structures for adsorbing ions from the feed solution and forming the electrical double layer (EDL) 17,68,69. According to Fig. S7, the CA of water on a tablet prepared from Zn-BTC MOF powder is 26.7 degrees, which confirms the high wettability and hydrophilicity of this material 30,69.

Electrodes physicochemical/electrochemical characterization

The electrochemical performance [i.e., specific capacitance (F g−1) and ion charge transfer resistance (Ω)] of the six electrodes with different compositions is indicated in Table 1. The adsorption potential of an electrode strongly hinges on its capacity to hold and retain ions within its structure 19,70. Thus, electrodes exhibiting higher specific capacitance values in the CV test are expected to demonstrate superior adsorption performance 19,71. According to Table 1, the addition of up to 10% of the Zn-BTC MOF results in a maximum increment of 46.7% in the specific capacitance of electrodes E2 to E6 compared to that of E1. A relatively sharp increase in ion charge transfer resistance was observed for MOF loadings greater than 6%, probably due to the decreased overall electrical conductivity and active surface of the electrode 70,72,73. As a result, the highest specific capacitance and the minimum ion charge transfer resistance were obtained with the E4 electrode, which contained 6% of the Zn-BTC MOF. This outcome is likely related to the high hydrophilicity of the Zn-BTC MOF as well as the proper pore size distribution of the electrodes 69,74. The overall behavior of the CV and EIS test results indicates that the addition of a small quantity of Zn-BTC MOF, optimized with the other materials in the electrode composition, synergistically enhances the electrode performance, greatly impacting the specific capacitance and charge transfer kinetics of the composite electrodes 70,74,75.
Therefore, further examination was conducted only on the E1 and E4 electrodes to better reveal the superior performance of E4 as the most appropriate electrode for the CDI process. In the first step, the curves obtained from the CV and EIS characterizations were analyzed for the E1 and E4 electrodes. Figure 2a shows the CV behavior of the E1 and E4 electrodes. The CV curves of the electrodes have a quasi-rectangular shape and do not show peaks caused by Faradaic reactions, which confirms the capacitive behavior of the electrodes due to the formation of the EDL 17,76. As a result, a greater area enclosed by the CV loop corresponds to a higher ion charge adsorption capacity of the electrode 17,25. Also, the shape of the CV curves signifies the reversible nature of the capacitive adsorption behavior of the electrodes 17,25,77. The steeper current slope observed at the initial and final stages of the CV curve for electrode E4, in contrast to E1, reflects the enhanced hydrophilicity and reduced electrical resistance of E4 compared to E1 69,76,77,78.

Figure 2b shows the Nyquist plots obtained from the EIS test of the E1 and E4 electrodes. As mentioned earlier, the EIS test reveals the electrochemical resistances, especially the ion charge transfer resistance, of the electrodes. In the Nyquist plot, the vertical (imaginary) axis is related to the capacitive resistance of the electrode, while the horizontal (real) axis is related to the electrical resistance of the solution, the charge transfer resistance in the electrode, and the ion diffusion resistance in the electrode 17,25,72. The first intersection point of the curve with the horizontal axis indicates the electrical resistance of the electrolyte solution 17,76. Also, the semicircle at high frequencies reflects the contact resistance of the electrode/electrolyte interface, which affects the efficacy of ion transfer 25,74,76. Both electrodes exhibit similar shapes and trends in their respective plots. Upon the introduction of Zn-BTC MOF into the E4 electrode, a smaller semicircle is observed as compared to the E1 electrode, indicative of a lower ion charge transfer resistance within the electrode structure and potentially better diffusion of ions. However, the slope in the low-frequency region of the plot reflects the rate of ion diffusion, which is found to be nearly equivalent for both electrodes 17,25,73,74,76.

FTIR results of AC, Zn-BTC MOF, and the E1 and E4 electrodes are shown in Fig. 3.
The FTIR results of AC, Zn-BTC MOF, and the E1 and E4 electrodes are shown in Fig. 3. According to the figure, the AC spectrum shows peaks at 1060 cm−1, 1635 cm−1, and 2820-3633 cm−1 related to C-O, C=C, and O-H bonds, respectively 17. Additionally, peaks at 474 cm−1, 624 cm−1, and 890 cm−1 are caused by C-C=O, C-C-C, and C-H bonds in the AC structure 79. The spectrum of the E1 electrode is very similar to that of AC; because of the small amount of PVDF and the overlap of several PVDF and activated carbon indicator peaks, no apparent difference is observed between the spectra of the E1 electrode and AC. In general, the peaks at 470 cm−1, 621 cm−1, and 1064 cm−1 in the E1 electrode spectrum, in addition to being related to C-C=O, C-C-C, and C-O bonds, can also indicate the presence of CF2 bonds 79,80. The peaks at 1458 cm−1 and 2970 cm−1 confirm the presence of CH2 bonds 79. In the E4 electrode spectrum, the effect of adding the Zn-BTC MOF to the mixture of AC and PVDF is observed. In this spectrum, the weak peak at 717 cm−1 is due to a Zn-O bond 48,81, and the band ranging from 1480 to 1596 cm−1 is attributed to the carboxyl group on the benzene ring, owing to the presence of a C=O bond 46,48,81.

The FESEM images of the E1 and E4 electrodes are shown in Fig. S8a and b, respectively, at three magnifications (scale bars of 1, 5, and 20 μm). The good pore distribution and particle dispersion of the components in the E4 electrode can be clearly observed, revealing the effect of the MOF on the electrode structure compared to the E1 electrode. The EDS results of the E1 and E4 electrodes are shown in Fig. S9a and b, respectively, and the elemental mapping images of both electrodes are given in Figs. S10 and S11, which are described in the supplementary information.

Enhancing the hydrophilicity and wettability of the electrode surface can promote more efficient diffusion of ions within the electrode matrix 17,30. As a result, more pores participate in the ion adsorption process 17, and more active electrode surface is available to the ions 73. Figure 4 shows the water CA of the E1 and E4 electrodes: 108.3 and 52.4 degrees, respectively. The PVDF binder and AC are both hydrophobic materials that generally make electrodes more hydrophobic 17,29. Given the high hydrophilicity of the Zn-BTC MOF, the observed rise in hydrophilicity of E4 compared to E1 is consistent with prior studies 29,30,73. This increase in hydrophilicity can raise both the total quantity and the rate of ionic diffusion into the porous structure of the electrode 17,69,73. Consequently, a broader and more stable EDL is established on the active surface of the electrode 17,29,73.

According to the adsorption-desorption diagram (Fig. S12a), the adsorption isotherm of AC is of type IV with an H3-type hysteresis loop, indicating the presence of non-rigid, plate-like meso- and macropores in its structure 22,47,54. Additionally, Fig. S12 and Table S3 show an SSA of 723 m2 g−1, a pore volume of 0.364 cm3 g−1, and a mean pore diameter of 3.10 nm, obtained using the BJH method. Materials with mesoporous structures exhibit superior performance compared to those with macro- and microporous structures in terms of ion adsorption and the formation of an EDL within the material, effectively accommodating ions and facilitating ion diffusion 17,68,69.
Electrodes desalination performance
In the CDI cell, the desalination process was carried out using three different electrode arrangements: SymE1 (symmetric arrangement, with E1 used as both anode and cathode), SymE4 (symmetric arrangement, with E4 used as both anode and cathode), and Asym (asymmetric arrangement, with E4 used as the anode and E1 as the cathode). These arrangements were selected for two main reasons: (1) to study and compare, in a symmetric arrangement, the effect of adding Zn-BTC MOF to the electrodes on the increase in SRC, and (2) to investigate, in an asymmetric arrangement, the potential impact of the positive charge density of the Zn-BTC MOF in the anode on both the electric field force induced by the externally applied voltage and the interaction forces between ions and electrodes. Each desalination test was repeated three times with a NaCl feed solution with an initial concentration of 100.0 mg L−1 and an initial electrical conductivity of 253.4 µS cm−1. Determining appropriate operating conditions is important for the CDI process to achieve the best performance, so a suitable applied potential difference was determined first. An insufficient applied voltage leads to the formation of a weaker EDL, causing a decrease in the adsorption capacity of the electrode 17,70, whereas an excessive applied voltage can trigger Faradaic reactions or electrolysis of water, compromising the accuracy and stability of the electrode-electrolyte system 17,82,83. Therefore, determining the optimal voltage for CDI cells is of particular importance.

For the SymE1 arrangement, the results of desalination of the NaCl feed solution at voltages of 1.2 V and 1.6 V and a flow rate of 20 mL min−1 are depicted in Fig. S13a. It should be noted that, owing to the intense pH changes of the solution at a voltage of 2.0 V and the occurrence of Faradaic reactions 17,82,83, the deionization process was stopped at this voltage and its results are therefore not presented. Based on Fig. S13a and pre-tests at different voltages in all three electrode arrangements, the best voltage was determined to be 1.6 V. As can be seen, at 1.6 V the electrical conductivity of the feed solution decreased to a greater extent over the 30 min period, which corresponds to more desalination. The SRC at 1.2 and 1.6 V was 2.4 and 5.2 mg g−1, respectively, while the SRE was 8.4% and 18.1%, respectively. This clearly indicates the direct effect of the electric field force induced by the applied voltage on the amount of salt adsorbed by the CDI cell 25. Furthermore, the absence of gas bubbles and of intense pH changes suggests that Faradaic reactions and water electrolysis did not occur 17,82,83.

In another test to determine the appropriate flow rate of the feed solution at 1.6 V, the SRC at flow rates of 10, 20, and 30 mL min−1 was 4.7, 5.2, and 4.1 mg g−1, respectively, as shown in Fig. S13b. The corresponding SRE values for these flow rates were 16.3%, 18.1%, and 14.3%, respectively. Based on the results in Fig. S13b and pre-tests using different flow rates and electrode arrangements, the optimal flow rate was determined to be 20 mL min−1. This outcome can be attributed to the effective diffusion of ions and the establishment of a stable EDL in the porous electrode structure, facilitated by adequate residence time 84,85. Therefore, the desalination assessments were performed at a voltage of 1.6 V and a flow rate of 20 mL min−1.
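The SRC and SRE values above follow from the drop in feed concentration, which is tracked through the electrical conductivity. A minimal sketch of these two quantities is given below; the conductivity-to-concentration factor is anchored only to the single reported point (100.0 mg L−1 at 253.4 µS cm−1) rather than a full calibration curve, and the solution volume and electrode mass in the example are assumed values, since neither is restated in this excerpt.

```python
import numpy as np

def conductivity_to_concentration(ec_uS_cm, mg_L_per_uS_cm=100.0 / 253.4):
    """Linear conversion of conductivity to NaCl concentration; the default factor is
    only a placeholder anchored to the reported initial point of this study."""
    return np.asarray(ec_uS_cm, dtype=float) * mg_L_per_uS_cm

def src_sre(c0_mg_L, c_end_mg_L, volume_L, electrode_mass_g):
    """Salt removal capacity (mg per gram of electrode) and salt removal efficiency (%)
    for one adsorption step."""
    removed = c0_mg_L - c_end_mg_L
    src = removed * volume_L / electrode_mass_g
    sre = 100.0 * removed / c0_mg_L
    return src, sre

# Hypothetical example: feed drops from 100.0 to 81.9 mg/L (SRE = 18.1 %); with an
# assumed 0.1 L of treated solution and 0.35 g of electrodes this gives SRC ~ 5.2 mg/g.
print(src_sre(100.0, 81.9, 0.1, 0.35))
```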
The results of desalination in all three arrangements (SymE1, SymE4, and Asym) are shown in Fig. S13c. The SRC within 30 min of the desalination process for the SymE1, SymE4, and Asym arrangements is 5.2, 6.0, and 6.3 mg g−1, respectively, equivalent to SRE values of 18.1%, 20.8%, and 21.9%, respectively. As expected, the SymE4 arrangement desalinates more than the SymE1 arrangement. The high hydrophilicity of the Zn-BTC MOF, coupled with the greater specific capacitance and lower ion charge transfer resistance of the E4 electrode compared to E1, results in faster and more efficient ion diffusion and a more stable formation of the EDL within the electrode structure 17,73. Additionally, the Asym arrangement shows a higher desalination efficiency than the SymE4 arrangement. The reason for this lies in the electrostatic interactions arising from the electric field induced by the applied voltage on the electrodes and from the charge of the zinc ions (Zn2+) present within the Zn-BTC MOF structure 29,60. The Zn2+ ions in the MOF structure create positively charged sites with a higher charge density than the -COO- groups in the structure 29,53,86. The anode, being the positively charged electrode, exhibits a distinct behavior because of the incorporation of Zn-BTC MOF together with the electric force of the externally applied voltage field 29. The Zn2+ ions within the MOF structure exert an attractive electrostatic force on the anions, leading to enhanced attraction and separation of a larger number of anions from the passing solution of Na+ and Cl− ions within the CDI cell 85. This in turn leads to the formation of a stable EDL within the porous structure of the electrode 53,60,87,88. The schematic mechanism involved in the Asym arrangement in the CDI system is depicted in Fig. 5.

Furthermore, it can be inferred that the inclusion of Zn-BTC MOF within the cathode material of the SymE4 arrangement improves the electrode specific capacitance and concurrently reduces the ion diffusion resistance 86,89. However, the Zn2+ sites within the cathode structure could potentially decrease its performance because of the resulting repulsion between Na+ and Zn2+ cations. Consequently, a decrease in cation adsorption can lower the performance of SymE4 in comparison to the Asym arrangement 29,53,89. The SRC and SRE results of all three arrangements, with the associated error for each arrangement, are shown in Fig. S13d and e, respectively.

The results of a complete CDI process cycle (adsorption and desorption) are given in Fig. 6a. The desorption stage, which removes ions from the electrodes (electrode regeneration), was carried out at 0.0 V and a flow rate of 20 mL min−1. In the SymE1 arrangement the electrodes are fully regenerated fastest: after 15 min of the desorption stage, the electrical conductivity of the feed solution returns to its initial value. In the SymE4 arrangement, it takes 25 min for the electrodes to be completely regenerated and for the electrical conductivity of the feed solution to return to its initial value. This may be due to the increased adsorption of ions during the adsorption stage, leading to a longer time interval for their subsequent removal 29,60,87. Additionally, this arrangement exhibits a more stable EDL than the SymE1 arrangement 29,86,89.
In the Asym arrangement, even after 30 min of the desorption stage the electrodes are not fully regenerated, and the electrical conductivity of the feed solution has not returned to its initial value. This can be explained by several factors. Firstly, the enhanced ionic adsorption during the adsorption stage can lead to a longer ion removal time 29,60,87. Secondly, the EDL formed in this arrangement is more stable than in both the SymE1 and SymE4 arrangements 29,86,89. Finally, the anode contains the Zn-BTC MOF, in which the positive charge density is higher than the negative charge density 53,86,87, so the anions that accumulate in the EDL of the anode tend to retain their position even after the applied voltage is removed. Since the performance and efficiency of the anode and cathode are interconnected during deionization, the behavior of each electrode significantly affects the other 29,89; a similar effect occurs for the cations adsorbed at the cathode. Consequently, ion release and electrode regeneration in the Asym arrangement are slower than in both SymE1 and SymE4 29,60,86-88.

Figure 6b shows the CDI Ragone plots for the three electrode arrangements. The CDI Ragone plots of both the SymE4 and Asym arrangements shift towards the upper right region compared to SymE1, indicating a higher desalination capacity and desalination rate, possibly due to their increased accessible surface area and mesopores as well as improved hydrophilicity 29,57,90. As mentioned before, because the charge density in the anode and the electrostatic force of Zn2+ in the Asym arrangement favor ion diffusion in the pores of the electrode matrix, this arrangement demonstrates both a higher desalination capacity and a higher desalination rate than the other two arrangements. This indicates the effect of the Zn-BTC MOF on the capacitive behavior and ion charge transfer kinetics 29,57,75,89. Figure 6c shows the cyclic adsorption/desorption experiments of the representative Asym arrangement in a NaCl feed solution with a starting concentration of 100.0 mg L−1. The electrodes showed only about a 2.9% decay in SRC after 50 cycles, demonstrating good cycling stability.

Table 2 provides the details of the desalination process for all three arrangements, for one cycle of the process conducted at a voltage of 1.6 V and a flow rate of 20 mL min−1 with an initial feed solution of 100.0 mg L−1 NaCl. The table demonstrates that incorporating a small quantity of Zn-BTC MOF into the composite electrodes, in combination with an asymmetric electrode arrangement, significantly increases the SRC. This is evident from the 15.3% and 21.1% increases in SRC for the SymE4 and Asym arrangements, respectively, compared with SymE1.
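A CDI Ragone plot such as Fig. 6b places the cumulative salt adsorption capacity against the average salt adsorption rate at each time during the adsorption step. The sketch below shows how such points can be computed from a concentration trajectory; the trajectory, volume, and electrode mass in the example are hypothetical and do not reproduce the measured curves of this work.

```python
import numpy as np

def cdi_ragone_points(time_min, conc_mg_L, volume_L, electrode_mass_g):
    """Salt adsorption capacity (mg/g) and average salt adsorption rate (mg/g/min)
    accumulated over one adsorption step, i.e. the coordinates of a CDI Ragone plot.

    time_min  : sampling times measured from the start of adsorption (min)
    conc_mg_L : feed concentration at each sampling time (mg/L)
    """
    t = np.asarray(time_min, dtype=float)
    c = np.asarray(conc_mg_L, dtype=float)
    sac = (c[0] - c) * volume_L / electrode_mass_g            # cumulative capacity
    asar = np.where(t > 0, sac / np.where(t > 0, t, 1.0), 0.0)  # average rate, 0 at t = 0
    return sac, asar

# Hypothetical trajectory sampled every 5 min over a 30 min adsorption step.
t = [0, 5, 10, 15, 20, 25, 30]
c = [100.0, 95.0, 91.0, 88.0, 85.5, 83.5, 81.9]
sac, asar = cdi_ragone_points(t, c, volume_L=0.1, electrode_mass_g=0.35)
```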
Conclusion
Incorporation of a small amount of Zn-BTC MOF into the carbon electrodes enhanced the electrochemical and desalination performance. Although the SymE4 and particularly the Asym arrangements exhibited weaker performance during the desorption stage, this issue could be resolved either by increasing the flow rate or by applying a reverse voltage for a short duration. Additionally, the electrode mass was measured periodically throughout the experiments, and the absence of any significant mass variation indicated favorable electrode stability and the absence of noticeable Faradaic reactions during the desalination processes. MOFs, like conventional additives to carbon electrodes, enhance the performance, but MOFs typically exhibit a higher SSA and more active sites. The incorporation of MOF particles with sizes ranging from the nanometer to the micrometer scale, coupled with the high hydrophilicity of MOFs, improves MOF particle distribution and uniformity as well as ion diffusion within the electrode structure. This improved dispersion within the AC matrix then promotes better ion accessibility to the porous structure of the electrode. The Zn-BTC MOF exhibits superior desalination performance in an asymmetrical arrangement compared with symmetrical arrangements in CDI systems, owing to its higher density of positive charge relative to negative charge. Hence, this work demonstrated that composite MOF-incorporated electrodes could be of significant interest for future research aiming at enhancing the performance of CDI systems. Given the lack of substantial research on pH changes during adsorption/desorption cycles and their effect on the electrical conductivity of solutions, studying this aspect in future work is highly significant. Future research should also prioritize optimizing the material composition percentages and the selection of metallic nodes and organic ligands for the MOF.

Table 2. Details of the desalination process in all the three arrangements.
Figure 1. Schematic diagram of the CDI cell used in this work.
Figure 5. Schematic mechanism involved in the Asym arrangement in the CDI system.
Figure 6. (a) The results of one cycle of the adsorption and desorption process for all three arrangements of electrodes. (b) CDI Ragone plots of all three arrangements of electrodes. (c) The adsorption and desorption cycling stability test of the Asym arrangement of electrodes for 50 cycles. Experimental conditions: a voltage of 1.6 V and a flow rate of 20 mL min−1.
Table 1. Results of CV and EIS tests of the electrodes.
Numerical Comparison of Three Different Feedback Control Schemes Applied on a Forming Operation

Feedback and process control of metal-forming processes has received increasing attention in the last decade. Basically, there exist four control philosophies: control of process parameters during the punch stroke, iterative learning control (based on historical data), a combination of iterative learning and feedback control, and finally feed-forward control. The present work presents three different control schemes which are all based on a feedback philosophy, i.e. control during the punch stroke or iterative learning control, where process parameters are updated according to process history. The three control schemes are tested using a non-linear finite element model of a square deep-drawing, and finally pros and cons are discussed based on the numerical results.

Introduction
It is generally accepted that deep drawing and stamping operations are non-static over time, i.e. changes in the material parameters, friction and lubrication, tool and press deflection, etc. all influence the process stability. The process noise can generally be divided into two categories:
• Non-repetitive uncertainties - variance in sheet thickness and uneven lubrication.
• Repetitive uncertainties - changes in the material properties, tool wear and tool temperature.
The non-repetitive uncertainties can be categorised as normally distributed noise without any trends or development over time and are generally handled through an open-loop strategy where stability is enforced by conservative and robust tool designs. The repetitive uncertainties can result in process breakdown and typically require manual process adjustment, e.g. adjustment of the blank-holder force, lubrication, shimming, etc.

[Figure 1: (a) The influence of tool temperature on the process window for a cylindrical deep drawing. (b) Tensile test of DC06 at room temperature and elevated temperature; the strain hardening is modelled using a Hollomon-Swift model.]

Due to plastic work the tool temperature will increase during the first half hour of production; the steady-state temperature is typically in the range of 60-80°C [1,7]. Figure 1 shows the limiting blank-holder force for a cylindrical deep drawing at different tool temperatures; in this case the process window (defined as the range between the upper and lower blank-holder force) is reduced from 9 ton at room temperature to less than 1 ton at a tool temperature of 60°C. The correlation between tool temperature and process stability is very complex; however, it is safe to conclude that tool temperature will influence the friction conditions as well as the material properties, see Figure 1. This paper will present three different approaches to feedback control of stamping and deep drawing operations. Many different approaches have been proposed in the literature; however, they can all be divided into four categories, see Figure 2:
• Feedback control: Corrective actions are taken during the punch stroke, e.g. controlling the blank-holder force [5].
• Feedback control and Iterative learning control: A combination of in-process feedback control (corrections during the punch stroke) and an iterative learning control system which transfers process information from part to part [3,1].
• Iterative learning control: A system which updates process parameters based on post-process data - where the control system transfers information from part to part [4,2].
• Feed-forward (control): The process parameters are estimated based on material parameters (see e.g. [6]), tool temperature, sheet thickness, etc.

Feedback control is based on the ability to sample data from the process, and one of the key elements in feedback control is the ability to establish a unique relation between the sampled output and the input parameters (process parameters). We are dealing with plastic deformation, and it seems reasonable to choose an output signal y(k) which reflects plastic deformation. Both material flow into the die cavity [8] and flange draw-in [10] have been proposed in the literature. The first two control algorithms (feedback control and the combined feedback and ILC) are based on the flange draw-in sampled during the punch stroke [5,3,1]. The last ILC algorithm only uses the final flange geometry as input; thus data can be sampled after the tool is opened using e.g. laser scanners or image processing [4,2].

Numerical example
Deep-drawing of a square part enables full control of the flange draw-in using a specially designed shimming system, where the blank-holder is locally deformed using hydraulic pressure. The deflection of the blank-holder is controlled by four cavities located on each side of the square cup, giving a total of five process parameters, including the blank-holder force, which can be individually adjusted, see figure 4. The performance of the three different control schemes is evaluated using a blank material with an uneven thickness distribution (0.95 to 1.05 mm left to right) and changes in the material parameters, see figure 3b. The new material batch will result in uneven flange draw-in and process instability, see figure 7c. A well-performing algorithm should stabilise the process by reproducing the reference flange geometry and the reference thickness distribution, see figure 3a.

Modelling the control system
Lim et al. [7] reviewed advances in feedback control of sheet metal forming processes and concluded that one of the requirements for designing and developing in-process controllers is an accurate model of the forming process. The current control schemes are all designed using a finite element model of the stamping or deep drawing process. The feedback gain factors (first two control systems) are identified by solving a non-linear optimal control problem, where the system is modelled using LS-Dyna and the non-linear optimal control problem is solved iteratively using a non-linear least squares solver. The third ILC algorithm is also based on a non-linear least squares formulation where the Jacobian matrix is calculated via the finite element model. It is important to note that the finite element model is only used during the design phase and is not involved in an experimental set-up.

Feedback Control
One approach to reducing the scrap rate is to introduce in-process feedback control. Polyblank et al. give an extensive review of the area, covering more than two decades of research activities within the field of feedback control applied to various metal-forming processes [9]. In the current paper the plant (model of the control system) is based on a finite element model, and the feedback constants (gain factors) are identified by solving a non-linear optimal control problem. The weighting coefficients Q1 and Q2 are used to control the stability of the system, where Q1 = q1·I controls the impact of the draw-in error and Q2 = q2·I can be interpreted as a damping factor.
Feedback during the punch stroke
The in-process feedback loop, its modelling and the solution strategy have been developed over a series of articles, see [5]. The control scheme is based on the notion that time series can be used to predict the state of a system one step ahead: x_{j+1} is estimated by assuming a linear correlation between the current and previous samples. If the state of the system can be predicted one step ahead, this information can also be used to take corrective actions; the state vector is therefore built from the current and previous draw-in samples. The updated system input ũ(k) combines w(k), the reference blank-holder force and cavity pressure, with u(k), the controller input, which is built up from the controller effort ∆u(k) computed at each step. The control system eliminates the draw-in error during the punch stroke by adjusting both the cavity pressure and the blank-holder force. Figure 6b shows that the draw-in error converges to zero during the punch stroke; however, some oscillation in the draw-in error and the input parameters can be observed. The process instability, figure 7c, is eliminated and the thickness distribution in the final part is in the range of 0.67-1.2 mm, see figure 11d.

Feedback and Iterative Learning Control
The in-process control system does not take into account that stamping and deep-drawing processes are repetitive processes running over a limited time span (process time in the range of 2-3 seconds), i.e. if the process is subject to a repetitive uncertainty (e.g. a new material batch), then the in-process control system will repeatedly correct the same error part after part. As a result, the repetitive uncertainties will eventually use all the available correctable bandwidth, i.e. the in-process control system will be unable to compensate for non-repetitive uncertainties. This problem is addressed by introducing a second control loop, which takes advantage of the repetitive process layout and updates the process parameters based on historical data, i.e. data from the previously produced part is used to update the input w(k), see figure 5. A linear learning algorithm is generally preferred, in which the reference input is updated from part to part using the current error and a gain matrix; here j represents the ILC iteration index, and E_j and W_j are m × n matrices representing the current error and reference input. The gain matrix G is an m × m lower triangular matrix, with ϕ_1 on the diagonal and ϕ_1 + ϕ_2 below the diagonal. If a new material batch is taken into production, the feedback loop will initially try to minimise the error during the manufacturing of the first part; then the ILC algorithm will gradually take over, reducing the error by adjusting the process parameters to the new operational conditions through the reference input trajectory w(k), see figure 9. The draw-in error during the punch stroke is almost eliminated for the 20th part following the change in the material parameters. The main properties of the control system combining feedback and iterative learning control can be summarised as:
• Advantage: The algorithm offers full control over the reference input trajectory w(k), i.e. variable blank-holder and cavity pressures can be prescribed as a function of the punch displacement, see figure 7a.
• Disadvantage: Accurate and robust draw-in samples are required, as a sensor fallout will result in process instability. This is one of the main limitations preventing industrial applications.
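As an illustration of the signal flow in the in-process loop of the section "Feedback during the punch stroke", the sketch below predicts the draw-in error one step ahead from the two most recent samples and accumulates a corrective input on top of the reference trajectory. The constant gain matrix and the simple linear predictor are assumptions made for the sketch; in the paper the gains come from a non-linear optimal control problem solved against the finite element model, and the exact update equations are not reproduced in this excerpt.

```python
import numpy as np

def in_process_feedback(draw_in_meas, draw_in_ref, w, K_gain):
    """One pass of a simplified in-process feedback loop over the punch stroke.

    draw_in_meas : (steps, n) measured flange draw-in at sampled punch positions
    draw_in_ref  : (steps, n) reference draw-in trajectory
    w            : (steps, m) reference inputs (blank-holder force and cavity pressures)
    K_gain       : (m, n) feedback gain matrix (placeholder values)
    Returns the applied input trajectory u_tilde (reference plus correction).
    """
    steps, m = w.shape
    err = draw_in_meas - draw_in_ref
    u = np.zeros((steps, m))
    u_tilde = np.array(w, dtype=float)
    for k in range(2, steps):
        e_pred = 2.0 * err[k - 1] - err[k - 2]   # linear one-step-ahead error prediction
        du = -K_gain @ e_pred                    # corrective controller effort for step k
        u[k] = u[k - 1] + du                     # accumulate the controller input
        u_tilde[k] = w[k] + u[k]                 # input applied at step k
    return u_tilde
```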
Iterative Learning Control
Endelt and Volk [4] identified two major obstacles which need to be addressed before an industrial implementation is possible:
• The proposed control algorithms are often limited by the ability to sample process data with both sufficient accuracy and robustness - this lack of robust sampling technologies is one of the main barriers preventing successful industrial implementation.
• Limitations in current press designs; many of the presses currently used in industry only offer limited opportunities to change the blank-holder force during the punch stroke. Even if the press offers the opportunity to change the blank-holder force, the reaction speed may be insufficient compared with the production rate in an industrial application.

The part-to-part update can be summarised as follows (the full listing, including the step-size bound s_max and the limit α_max, is given in figure 8):
1: Choose the "optimal" process parameters x*, the step-size scalar α and s_max; load or sample the reference flange geometry y_ref. Initialize the counter k = 1, set k_max, and set x_k = x*.
2: Load the Jacobian matrix J(x*) and calculate the Hessian matrix using the Gauss-Newton approximation.
3: Update the flange geometry y_k, the residual vector and the gradient.
4: Calculate the maximum step size; if s_max > 0, limit the step accordingly.
5: Update the process parameters x_{k+1} and repeat until k = k_max.

The third algorithm is based on a classical Gauss-Newton formulation where the control problem is treated as a non-linear curve-fitting problem, the objective being to minimise the least-squares error between a reference flange geometry ŷ and the current flange geometry y. The "optimal" or reference flange geometry, which represents the stable process, was represented by 32 sample points collected along the flange edge, see figure 9a. The non-linear optimization algorithm can be reformulated as an iterative learning control scheme if the flange-fitting problem is assumed to be convex and close to linear in a sufficiently large region surrounding the optimal process parameters x*, which also define the optimal flange geometry. Under these assumptions, it is only necessary to calculate the Jacobian at the point x*. Thus, only one Jacobian matrix J(x*) is defined, and it governs the optimization problem for any set of parameters x_k and any residual vector r_k which are sufficiently close to x*. The Jacobian is an m × n matrix, where n represents the number of process parameters x and m represents the number of sample points y. The j-th column of the Jacobian matrix, ∂r/∂x_j, gives the sensitivity of the flange draw-in error r with respect to the process parameter x_j; the algorithm is summarised in figure 8.

[Figure 9: (a) Reference edge, represented by 32 samples (dimensions in mm).]

The ILC algorithm (based on post-process data) stabilises the process within 5-10 produced parts, see figure 9. Figure 10 shows the thickness distribution for the second and the 20th part produced after the change in material properties; note the close resemblance between the two components, which indicates that only the first part produced following the shift will be non-conforming, see figure 7c.

Conclusions
All three control schemes stabilised the process, producing parts with almost identical thickness distributions, see figure 11.
Furthermore, figure 12 shows the flange geometry produced by the three control schemes; they all produce a flange geometry which is very close to the reference part.
• Feedback: The feedback system is based on flange draw-in samples and stabilises the process by adjusting the blank-holder force and cavity pressure during the punch stroke. The main disadvantage is the dependence on accurate and robust draw-in samples, as a sensor fallout will result in process instability.
• Feedback and ILC: The first part produced is identical to the one produced by the feedback scheme; the following parts gradually update the process parameters according to the process history, i.e. the blank-holder and pressure profiles are updated before the production of the next part. This algorithm also depends on accurate and robust draw-in samples.
• ILC (part-to-part update): The process parameters are constant during the punch stroke, which limits the flexibility. However, the flange geometry is measured post-process using e.g. a laser scanner or image processing technologies.
The selection of control schemes/algorithms is a compromise between the equipment at hand (which process parameters can be adjusted and at which speed), the production rate, which sensor technology is available, etc. Depending on the physical constraints, different options are available, offering different levels of control and reaction speed. Taking into account the present robustness of draw-in sensors, post-process measurement of the flange geometry seems to be the most promising of the three algorithms presented in this paper.
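To make the part-to-part Gauss-Newton update of the "Iterative Learning Control" section concrete, a minimal sketch is given below. It uses a single fixed Jacobian evaluated at the nominal parameters, exactly as described above; the step-size safeguard (s_max / α_max) of the original listing is omitted here, and the function produce_part is a stand-in for the press or the finite element model rather than anything defined in the paper.

```python
import numpy as np

def gauss_newton_ilc(x_star, J_star, y_ref, produce_part, alpha=0.5, k_max=20):
    """Part-to-part Gauss-Newton ILC update with a single, fixed Jacobian.

    x_star       : nominal process parameters (blank-holder force and cavity pressures)
    J_star       : (m, n) Jacobian of the m flange sample points with respect to the
                   n process parameters, evaluated once at x_star (e.g. by finite
                   differences on the finite element model)
    y_ref        : (m,) reference flange geometry (the 32 edge samples)
    produce_part : callable x -> measured flange geometry of the part produced with x
    alpha        : step-size scalar
    """
    H = J_star.T @ J_star                      # Gauss-Newton approximation of the Hessian
    x = np.asarray(x_star, dtype=float)
    for _ in range(k_max):
        r = produce_part(x) - y_ref            # residual between produced and reference flange
        g = J_star.T @ r                       # gradient of the least-squares objective
        x = x - alpha * np.linalg.solve(H, g)  # parameter update for the next part
    return x
```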
Efficient Multi-Unit Auctions for Normal Goods I study multi-unit auction design when bidders have private values, multi-unit demands, and non-quasilinear preferences. Without quasilinearity, the Vickrey auction loses its desired incentive and efficiency properties. I give conditions under which we can design a mechanism that retains the Vickrey auction's desirable incentive and efficiency properties: (1) individual rationality, (2) dominant strategy incentive compatibility, and (3) Pareto efficiency. I show that there is a mechanism that retains the desired properties of the Vickrey auction if there are two bidders who have single-dimensional types. I also present an impossibility theorem that shows that there is no mechanism that satisfies Vickrey's desired properties and weak budget balance when bidders have multi-dimensional types. Motivation Understanding how to design auctions with desirable incentive and efficiency properties is a central question in mechanism design. The Vickrey-Clarke-Groves (hereafter, VCG) mechanism is celebrated as a major achievement in the field because it performs well in both respects -agents have a dominant strategy to truthfully report their private information and the mechanism implements an efficient allocation of resources. However, the VCG mechanism loses its desired incentive and efficiency properties without the quasilinearity restriction. Moreover, there are many well-studied cases where the quasilinearity restriction is violated: bidders may be risk averse, have wealth effects, face financing constraints or be budget constrained. Indeed, observed violations of quasilinearity are frequently cited as reasons for why we do not see multi-unit Vickrey auctions used in practice. For example, Ausubel and Milgrom (2006), Rothkopf (2007), and Nisan et al. (2009) all cite budgets and financing constraints as salient features of real-world auction settings that inhibits the use of the Vickrey auctions. Che and Gale (1998) note that bidders often face increasing marginal costs of expenditures when they have access to imperfect financial markets. In this paper, I study multi-unit auctions for K indivisible homogenous goods when bidders have private values, multi-unit demands, and non-quasilinear preferences. I provide conditions under which we can construct an auction that retains the desired incentive and efficiency properties of the Vickrey auction: (1) ex post individual rationality, (2) dominant strategy incentive compatibility, and (3) ex post Pareto efficiency (hereafter, efficiency). My results hold on a general preference domain. Instead of assuming that bidders have quasilinear preferences, I assume only that bidders have positive wealth effects; i.e. the goods being auctioned are normal goods. My environment nests well-studied cases where bidders are risk averse, have budgets, or face financing constraints. My first main result shows that there is a mechanism that satisfies the desired properties of the Vickrey auction if there are two bidders and bidders have single-dimensional types (Theorem 1). The mechanism implements an (ex post Pareto) efficient outcome -i.e. an outcome where there are no ex post Pareto improving trades amongst bidders. The proof of Theorem 1 differs from proofs of positive implementation results in quasilinear settings. With quasilinearity, an efficient auction can be constructed in two steps. First, we note that there is a generically unique efficient assignment of goods. 
Then, we solve the efficient auction design problem by finding transfers that implement the exogenously determined assignment rule. Without quasilinearity, the space of efficient outcomes is qualitatively different because a particular assignment of units can be associated with an efficient outcome for some levels of payments, but not for others. This is because a bidder's willingness to buy/sell an additional unit to/from her rival depends on her payment. For this reason, I use a fixed point argument to determine the efficient mechanism's payment rule and assignment rule simultaneously. More precisely, I construct a transformation that maps an arbitrary mechanism to a more efficient mechanism. The transformed mechanism specifies the efficient assignment of units in the case when payments are determined according to the arbitrary mechanism's payment rule. The transformed payment rule is the payment rule that implements the transformed assignment rule. I show that a fixed point of the transformation defines an efficient mechanism and I use Schauder's fixed point theorem to show that a fixed point of the transformation exists. Thus, there is a mechanism that retains the Vickrey auction's desirable incentive and efficiency properties in the two-bidder single-dimensional types case. Furthermore, I provide a constructive proof that shows there is also a mechanism with the desired Vickrey properties when many bidders compete to win two units. The positive implementation results for the single-dimensional types case do not carry over to multi-dimensional settings though. I obtain an impossibility result because wealth effects and multi-unit demands combine to inhibit efficient implementation. 1 These two modeling assumptions imply that in an efficient auction, a bidder's demand for later units of the good endogenously depends on her rivals' reported types, even in the private value setting. This is because a bidder's demand for her second unit of the good depends on the price she paid for her first unit, and in an efficient auction, the price a bidder paid for her first unit necessarily varies with her rivals' reported types. Thus, positive wealth effects imply that in an efficient auction, there is endogenous interdependence between a bidder's demand for later units and her rivals' types. Furthermore, the prior literature on efficient multi-unit auction design without quasilinearity has not noted this connection between private value models without quasilinearity and interdependent value models with quasilinearity (like those studied by Dasgupta and Maskin (2000), for example). I use this connection to motivate my proofs of the impossibility theorem for the multi-dimensional types case (Theorem 3). The paper proceeds as follows. The remainder of the Introduction discusses the related literature. Section 2 presents my model for bidders with single-dimensional types. Section 3 presents the results for the single-dimensional case. Section 4 presents the impossibility theorem for bidders with multi-dimensional types. Proofs and additional results are in the appendix. Friedman (1960) proposed the uniform price auction for homogenous goods. If bidders truthfully report their demands, the uniform price auction will allocate goods efficiently. However, Ausubel et al. (2014) show that bidders have an incentive to underreport their demand in the uniform price auction. 
In contrast, the Vickrey-Clarke-Groves mechanism efficiently allocates goods and gives bidders a dominant strategy to truthfully reveal their private information to the mechanism designer. Holmstrom (1979) gives conditions under which VCG is the unique mechanism that satisfies these two objectives. In addition, Ausubel (2004) describes an ascending auction format, called the clinching auction, that implements the VCG allocation and payment rule. Related literature Two crucial assumptions are needed to obtain Vickrey's positive implementation result: (1) agents have private values and (2) agents have quasilinear preferences. There is a long literature that studies how Vickrey's result generalizes without private values. In this literature, Dasgupta and Maskin (2000), Jehiel and Moldovanu (2001), Jehiel et al. (2006) give impossibility results for when agents have multi-dimensional types. 2 In contrast, Bikhchandani (2006) shows that there are non-trivial social choice rules in interdependent value settings where bidders compete to win private goods. He proves the existence of a constrained efficient mechanism in a single unit auction setting where bidders have multi-dimensional types. There is a relatively smaller literature on how Vickrey's positive implementation result generalizes without (2), the quasilinearity restriction, and that is the question I study in this paper. 3 In particular, I study how Vickrey's results extend to a multi-unit auction setting with homogenous goods where bidders have multi-unit demands and non-quasilinear preferences. There is a literature that studies efficient multi-unit auction design in settings where bidders have unit demands and non-quasilinear preferences. Saitoh and Serizawa (2008) and Morimoto and Serizawa (2015) both show that Vickrey's positive implementation result can be extended to such settings. Saitoh and Serizawa study the case where all objects are homogenous. That is the case studied in this paper as well. Morimoto and Serizawa show that Vickrey's positive implementation result can be extended to a heterogeneous good setting 2 Maskin (1992), Krishna (2003), and Reny (2002, 2005) give sufficient conditions for efficient auction design in single-dimensional type settings. 3 Most of the literature on auctions without quasilinearity has studied revenue maximization and bid behavior in commonly used auctions. Maskin and Riley (1984) study revenue maximization when bidders have single-dimensional private information. Baisa (2017) studies revenue-maximizing auction design in a similar setting to this paper where bidders have positive wealth effects. There is also a literature that studies the performance of standard auction formats in certain non-quasilinear settings. Matthews (1987), Hu, Matthews, and Zou (2015), and Che and Gale (1996, 1998, 2006) study standard auctions when bidders have budgets, face financial constraints, and are risk averse. where bidders have non-quasilinear preferences. In particular, they show that Demange and Gale's (1985) minimum price Walrasian rule is dominant strategy implementable and efficient when bidders have unit demands. Their positive implementation result holds in cases where bidders have multi-dimensional private information. My paper is different from this line of research because I study the case where goods are homogenous and bidders have multi-unit demands. 
My results show that the combination of multi-dimensional private information and multi-unit demands yield an impossibility result in a homogenous good setting. Most other work on efficient multi-unit auction design without quasilinearity focuses on a particular violation of quasilinearity -bidders with hard budgets. Dobzinski, Lavi, and Nisan (2012) study efficient multi-unit auction design where bidders have multi-unit demands, constant and private marginal values for additional units, and hard budgets. They show that the clinching auction (see Ausubel (2004)) is an efficient auction if bidders have public budgets. If bidders have private budgets, then they show that there is no efficient auction. Subsequent work by Lavi and May (2012) and Goel, Mirrokni, and Paes Leme (2015, Theorem 5.11) also provide impossibility results for the case where bidders have hard budgets. In Lavi and May, bidders have two-dimensional types and a public budget; and in Goel, Mirrokni, and Paes Leme, bidders have an infinite-dimensional type and a public budget. Kazumura and Serizawa (2016) also study efficient design with multi-unit demand. Like this paper, they move beyond the hard budgets case and consider a general non-quasilinear setting. In their setting, there are heterogeneous goods and one buyer has multi-unit demands. Their setting assumes that bidders have infinite-dimensional private information because their impossibility theorem allows bidders to have any rational preference. My paper expands on this line of research by similarly studying the efficient auction design problem. Like the papers cited in the prior paragraph, I use the taxation principle to form necessary conditions for efficient auction design that yield a proof by contradiction. My proof approach also differs. The prior work has obtained impossibility results by finding an efficient mechanism in a more restricted setting and then showing that the efficient mechanism loses incentive compatibility in their more general settings. I instead get a proof by contradiction by noting the connection between this design problem and the efficient design problem with interdependent values -the proof of my impossibility theorem (Theorem 3) does not rely on the results I used to prove the existence of an efficient mechanism (Theorems 1 and 2). In addition, my impossibility theorem for bidders with multi-dimensional types (Theorem 3) holds on a coarser type space and nests the cases where bidders have two-dimensional and infinite-dimensional types. Maskin (2000) and Pai and Vohra (2014) study a related question of expected surplus maximizing auctions in the budget case under the weaker solution concept of Bayesian im-plementation for bidders with i.i.d types. In contrast, this paper studies Vickrey's problem of efficient auction design in dominant strategies. My results are also related to recent literature on value maximizing bidders (see Fadaei and Bichler (2016)). Value maximization is a limiting case of my model where a bidder gets arbitrarily small disutility from spending money up to their budget. Similar to this paper, Kazumura and Serizawa (2016) study efficient design with multiunit demand in a general non-quasilinear setting. In their setting, there are heterogeneous goods and buyers with non-quasilinear preferences, and one buyer with multi-unit demands. Their setting assumes that bidders have infinite-dimensional private information because their impossibility theorem allows bidders to have any rational preference. 
Outside of the auction literature, there is some work on the scope of implementation without quasilinearity. Kazumura, Mishra, and Serizawa (2017) provide results on the scope of dominant strategy implementation in a general mechanism design setting where agents are not restricted to have quasilinear preferences. Garratt and Pycia (2014) investigate how positive wealth effects influence the possibility of efficient bilateral trade in a Myerson and Satterthwaite (1983) setting. In contrast to this paper, Garratt and Pycia show that the presence of wealth effects may help induce efficient trade when there is two-sided private information. Nöldeke and Samuelson (2018) also study implementation in principal-agent problems and two-sided matching problems without quasilinearity. They extend positive implementation results from the quasilinear domain to the non-quasilinear domain by establishing a duality between the two settings. Bidder preferences -the single-dimensional types case A seller has K 2 units of an indivisible homogenous good. There are N 2 bidders who have private values and multi-unit demands. Bidder i's preferences are described by her type If bidder i wins q 2 {0, 1, . . . , K} := K units and receives m 2 R in monetary transfers, her utility is u(q, m, ✓ i ) 2 R. We assume that u is commonly known and ✓ i 2 ⇥ is bidder i's private information. A bidder's utility function is continuous in her type ✓ i and continuous and strictly increasing in monetary transfers m. 4 4 It is without loss of generality to assume that a bidder has an initial wealth of 0, or measure wealth in terms of deviation from initial wealth. A bidder with utility u and initial wealth w 0 , has the same preferences over units and transfers as a bidder with initial wealth 0 and utilityû where we defineû aŝ u(q, m, ✓ i ) = u(q, m + w 0 , ✓ i ) 8q 2 {0, 1, . . . , K}, m 2 R, ✓ i 2 ⇥. I study deviations from initial wealth because this allows a more flexible interpretation of the model where we can also include wealth as an element of bidders' private information. For example, in Section 3.1, I provide an example of an efficient If ✓ i = 0, then bidder i has no demand for units, If ✓ i 2 (0, ✓], then bidder i has positive demand for units, Without loss of generality, I assume that u(0, 0, ✓ i ) = 0 8✓ i 2 ⇥. Bidders have bounded demand for units of the good. Thus, I assume that there exists a h > 0 such that I make three additional assumptions on bidders' preferences. First, I assume that bidders have declining demand for additional units. Therefore, if a bidder is unwilling to pay p for her q th unit, then she is unwilling to pay p for her (q +1) st unit. This generalizes the declining marginal values assumption imposed in the benchmark quasilinear setting. Second, I assume that bidders have positive wealth effects. This means a bidder's demand does not decrease as her wealth increases. To be more concrete, suppose that bidder i was faced with the choice between two bundles of goods. The first bundle provides q h units of goods for a total price of p h , and the second bundle provides q`units of goods for a total price of p`, where q h , q`2 K and p h , p`2 R are such that q h > q`and p h > p`. If bidder i prefers the first bundle with more goods, then positive wealth effects state that she also prefers the first bundle with more goods if we increased her wealth prior to her purchasing decision. This is a multi-unit generalization of Cook and Graham's (1977) definition of an indivisible, normal good. 
I define two versions of positive wealth effects, weak and strict. I assume that bidder preferences satisfy the weak version, which nests quasilinearity, when I present the positive implementation result. When I present the impossibility theorems, I assume the strict version of positive wealth effects, because the strict version rules out the mechanism where a bidder's wealth (her soft budget) varies with her private type. quasilinear setting where the benchmark Vickrey auction solves the efficient auction design problem. Assumption 2. (Positive Wealth Effects) Consider any q h , q`, p h , p`where, q h > q`, p h > p`, q h , q`2 K, and p h , p`2 R. Bidders have weakly positive wealth effects if and strictly positive wealth effects if Finally, I assume that bidders with higher types have greater demands. Assumption 3. (Single Crossing) Suppose q h > q`and p h > p`where q h , q`2 K, and p h , p`2 R. Then, bidder preferences are such that be the amount that bidder i is willing to pay for her first unit of the good. Thus, b 1 (✓ i ) implicitly solves for all ✓ i 2 ⇥. It is without loss of generality to assume types are such that b 1 (✓) = ✓ 8✓ 2 ⇥. 5 Thus, ✓ i parameterizes the intercept of bidder i's demand curve (assuming bidder i pays no entry fee). I similarly define b k (✓ i , x) where b k : ⇥ ⇥ R ! R + as bidder i's willingness to pay for her k th unit, conditional on winning her first k 1 units for a cost of x 2 R. More precisely, b k (✓ i , x) is implicitly defined as solving for all k 2 {2, . . . , K}, ✓ i 2 ⇥ and x 2 R. I analogously define s k (✓ i , x) as bidder i's willingness to sell her k th unit, conditional on having paid x in total. Thus, a bidder's willingness to sell her k th unit s k (✓ i , x) is implicitly defined as solving for all k 2 {1, . . . , K}, ✓ i 2 ⇥ and x 2 R. Note that by construction, In words, this means that bidder i is indifferent between buying/selling her k th unit at price b k (✓ i , x), given that she paid x to win her first k 1 units. Assumptions 1, 2, and 3 imply: 2. b k (✓, x) and s k (✓, x) are continuous and decreasing in the second argument x for all 3. b k (✓, x) and s k (✓, x) are continuous and strictly increasing in the first argument ✓ for The first point is implied by declining demand. The second point is implied by positive wealth effects. The final point is implied by single crossing. Mechanisms By the revelation principle, it is without loss of generality to consider direct revelation mechanisms. I restrict attention to deterministic direct revelation mechanisms. A direct revelation mechanism maps the profile of reported types to an outcome. An outcome specifies a feasible assignment of goods and payments. An assignment of goods y 2 K N is feasible if P N i=1 y i  K. I let Y be the set of all feasible assignment. A direct revelation mechanism consists of an assignment rule q and a payment rule x. An assignment rule q maps the profile of reported types to a feasible assignment q : ⇥ N ! Y . I let q i (✓ i , ✓ i ) denote the number of units won by bidder i when she reports type ✓ i 2 ⇥ and her rivals report types ✓ i 2 ⇥ N 1 . The payment rule maps the profile of reported types to payments x : ⇥ N ! R N . I let x i (✓ i , ✓ i ) denote the payment of bidder i in mechanism when she reports type ✓ i 2 ⇥ and her rivals report types ✓ i 2 ⇥ N 1 . I study direct revelation mechanisms that satisfy the following properties. Definition 1. 
(Ex-post Individual Rationality) A mechanism is ex-post individually rational if Thus, a mechanism is ex-post individually rational (hereafter, individually rational) if a bidder's utility never decreases from participating in the mechanism. I study mechanisms that are dominant strategy incentive compatible (hereafter, incentive compatible). Thus, we say that is incentive compatible, then bidder i's payoff from reporting her true type ✓ i 2 ⇥ weakly exceeds her payoff from reporting any ✓ 0 i 2 ⇥, for any report by her rivals ✓ i 2 ⇥ N 1 . This is stated in Definition 2. I look at mechanisms that satisfy ex-post Pareto efficiency. This is the same efficiency notion studied by Dobzinski, Lavi, and Nisan (2012) and Morimoto and Serizawa (2015). for some i 2 {1, . . . , N}, then either Thus, an outcome is ex-post Pareto efficient, if any reallocation of resources that makes bidder i strictly better off necessarily makes her rival strictly worse off, or strictly decreases revenue. I say that the mechanism is an ex-post Pareto efficient mechanism (hereafter, The weak budget balance condition is an individual rationality constraint on the auctioneer. Definition 4. (Weak Budget Balance) A mechanism satisfies weak budget balance if A mechanism that satisfies weak budget balance always yields weakly positive revenue. When I study the single-dimensional types setting with N 3 bidders, I impose a stronger but related requirement -no subsidies. A mechanism provides no subsidies if it never pays a bidder a positive amount to participate. Morimoto and Serizawa (2015) impose the same condition when studying efficient auctions in a setting where bidders have unit demand. 3 Efficient auctions for bidders with single-dimensional types In this section, I prove that there is a mechanism that has the Vickrey auction's desirable incentive and efficiency properties when there (1) are two bidders and K units, and (2) N bidders and two units. I consider the two cases separately. In addition, I discuss the challenges associated with extending my positive implementation results to the general N ⇥ K singledimensional types setting. The two bidder K object case In this subsection, I prove that there is a mechanism that has the Vickrey auction's desirable incentive and efficiency properties when there are two bidders and K units. More precisely, I assume that bidder i's private information is described by a single-dimensional parameter and ✓ i parameterizes bidder i's commonly known utility function u, where u satisfies the conditions described in Section 2.1. I show that when there are two bidders, there is a symmetric mechanism that satisfies (1) individual rationality, (2) incentive compatibility, (3) efficiency, and (4) no subsidies. I use a fixed point proof to characterize the efficient mechanism. In particular, I form a transformation that maps an arbitrary mechanism to a more efficient mechanism, and I show that the fixed point of the transformation corresponds to a mechanism that retains the Vickrey auction's desirable properties. I describe an arbitrary symmetric mechanism by cut-off rule d : ⇥ ! ⇥ K . The k th dimension of the cut-off rule d k (✓ j ) gives the lowest type that bidder i must report to win at least k units when her rival reports type ✓ j . 7 Hence, a mechanism has cut-off rule d if expressions 1 and 2 hold for all k 2 {1, . . . , K}, where Incentive compatibility implies that q i (✓ i , ✓ j ) is weakly increasing in ✓ i for all ✓ i , ✓ j 2 ⇥. 
Incentive compatibility and efficiency imply that q i (✓ i , ✓ j ) weakly decreasing in ✓ j for all ✓ i , ✓ j 2 ⇥. 8 Thus, the cut-off rule d k (✓) is weakly increasing in ✓ and weakly increasing in k for all ✓ 2 ⇥ and k 2 {1, . . . , K}. I let D ⇢ {d|d : ⇥ ! ⇥ K } be the set of all cut-off rules that are weakly increasing in ✓ and k. Note that a cut-off rule d 2 D does not necessarily correspond to a feasible mechanism. I use the taxation principle (see Rochét (1985)) to find a pricing rule that implements a mechanism described by cut-off rule d 2 D. The pricing rule p is a mapping p : ⇥⇥D ! R K+1 that states the price a bidder pays to win each additional unit of the good is conditional on her rival's type. We say that a pricing rule p implements a (symmetric) cut-off rule d 2 D if bidder i demands at least k units where k 2 {1, . . . , K} if and only if her type ✓ i 2 ⇥ exceeds the k th unit cut-off d k (✓ j ). The pricing rule p(·, d) is such that bidder i demands at least one unit (✓ i > p 1 (✓ j , d)) if and only if her type exceeds the first unit cut-off (✓ i > d 1 (✓ j )). Thus, the pricing rule is such that p 1 (✓ j , d) = d 1 (✓ j ) 8✓ j 2 ⇥. I proceed inductively to find the price a bidder pays to win a k th unit. The pricing rule is such that bidder i demands at least k units of the good (b k (✓ i , P k 1 n=1 p n (✓ j , d)) > p k (✓ j , d)) if and only if her type exceeds the k th unit cut-off is bidder i's demand for her k th unit conditional on having paid P k 1 n=1 p n (✓ j , d) for her first k 1 units. Therefore, the price of the k th unit is This inductive construction shows that a symmetric cut-off rule d 2 D is implemented by the pricing rule p(·, d) described above. Lemma 1 shows bidder i pays a higher total price for k units when bidder j has a higher type. To condense notation, I let be the total amount a bidder spends to win k units when she faces cut-off rule d. I construct a transformation that maps an arbitrary mechanism to a more efficient mechanism. The transformed mechanism's assignment rule is such that a bidder wins at least k units of the good where k 2 {1, . . . , K} if and only if her willingness to pay for her k th unit ranks among the top K willingness to pay of both bidders. However, the ranking of bidders' willingness to pay for additional units depends on the pricing rule because wealth effects imply that a bidder's willingness to pay for her k th unit varies with the amount she paid for her first k 1 units. 9 I obtain the ranking by calculating bidders' willingness to pay for additional units under the pricing rule that corresponds to the arbitrary mechanism. This ranking of bidders' willingness to pay determines my transformed mechanism's assignment rule. In other words, the transformed assignment rule is the efficient assignment rule if prices were determined by the untransformed mechanism's pricing rule. The transformed pricing rule is the pricing rule that implements the transformed assignment rule. I argue that a fixed point of this transformation corresponds to an efficient mechanism and I use Schauder's fixed point theorem to show that such a fixed point exists. In order to formalize the above argument, I calculate a bidder's willingness to pay for her k th unit conditional on her payment for her first k 1 units under the untransformed pricing rule that implements cut-off rule d 2 D. 
This amount is Similarly, bidder j's willingness to pay for her K k + 1 st unit conditional on her payment for her first K k units is b K k+1 (✓ j , P K k (✓ i , d)). I construct the transformed assignment rule by defining a function that compares the above two quantities. In particular, I define a function f : represents the amount that bidder i's willingness to pay for her k th unit exceeds her rival's willingness to pay for her K k +1 st unit, when we evaluate bidders' willingness to pay under the pricing rule implementing the cut-off rule d 2 D. Bidder i's willingness to pay for her k th unit ranks among the top K willingness to pay of both bidders when f (k, ✓ i , ✓ j , d) is positive. I define the transformed cut-off rule to be such that bidder i's type exceeds k th cut-off if and only if her willingness to pay for her k th unit ranks among the top K willingness to pays. Formally, bidder i's transformed cut-off rule is such that Note that when f (k, ✓, ✓ j , d) > 0, then Lemma 2 implies that bidder i's willingness to pay for her k th unit exceeds her rival's willingness to pay for her K k + 1 st unit when ✓ i is sufficiently large. In this case, the transformed cut-off rule T k (d)(✓ j ) states the lowest type for which bidder i's willingness to pay for her k th unit exceeds her rival's willingness to pay for her K k + 1 st unit. If f (k, ✓, ✓ j , d) < 0, then bidder i's willingness to pay for her k th unit is always less than her rival's willingness to pay for her K k + 1 st unit. In this case, the transformed assignment rule gives bidder i wins fewer than k units for any reported type. I calculate a bidder's willingness to pay for her k th unit by assuming that the price she paid for her first k 1 units was determined by the pricing rule corresponding to the (untransformed) cut-off rule d. This is stated in Remark 1 below. My transformed cut-off rule is related to the assignment rule used by Reny (2002, 2005). The papers by Perry and Reny study efficient auction design in an interdependent value setting where there are two bidders and bidders have single-dimensional types and quasilinear preferences (see Section 3 of 2002 paper, or Section 4 of the 2005 paper). In their papers, bidder i's cut-off for her k th unit is the lowest signal such that her marginal value for her k th unit exceeds her rival's marginal value for her K k + 1 st unit. In my private value non-quasilinear setting, a bidder's willingness to pay for her k th unit conditional on the amount she paid for her first k 1 units, b k (✓ i , P k 1 (✓ j , d)), takes the place of a bidder's marginal value in interdependent value settings studied by Perry and Reny. Note that when preferences are quasilinear, there is a (generically) unique efficient assignment of goods. Thus, in Perry and Reny's quasilinear setting the efficient auction design problem is solved by finding a pricing rule that implements the exogenously-determined efficient assignment rule. However, there is not a unique efficient assignment of goods in my non-quasilinear setting. That is because without quasilinearity a bidder's willingness to pay/sell for a unit of the good depends on her level of transfers. My transformed cut-off rule is the efficient assignment rule for the case where prices are determined by the pricing rule that implements the untransformed cut-off rule. Yet the transformed assignment rule T (d) is implemented by the transformed pricing rule p(·, T (d)). 
Thus, if d is not a fixed point, then the assignment rule that sorts units efficiently when prices are determined by the untransformed pricing rule p(·, d) is not the assignment rule that sorts units efficiently when prices are determined by the transformed pricing rule p(·, T(d)). In other words, we construct T so that a mechanism with assignment rule T(d) and pricing rule p(·, d) gives an efficient outcome. But if d is not a fixed point of T, then the mechanism with allocation rule T(d) and pricing rule p(·, d) would not be incentive compatible. Instead, we can construct a mechanism that implements that same assignment rule T(d) by using the pricing rule p(·, T(d)). However, the outcome associated with the assignment described by T(d) is no longer efficient when we change the pricing rule from p(·, d) to p(·, T(d)). For this reason, I use a fixed point theorem to prove the existence of an efficient (and implementable) mechanism in my non-quasilinear setting. The above argument implies that a fixed point of the transformation T defines an efficient mechanism. To see this, suppose that cut-off rule d* ∈ D is a fixed point of T. The corresponding pricing rule p(·, d*) is such that (1) bidder i demands k units if and only if her rival demands K − k units, and (2) bidder i wins her k-th unit if and only if her willingness to pay for her k-th unit exceeds her rival's willingness to pay for her (K − k + 1)-st unit. Both points follow from the implications of Remark 1 above. Thus, there are no Pareto improving trades where bidder i sells units to bidder j, and the auction outcome is efficient.
Theorem 1A. If d* ∈ D is a fixed point of the mapping T, then d* corresponds to a feasible mechanism that satisfies (1) individual rationality, (2) incentive compatibility, (3) efficiency, and (4) no subsidies.
Thus, Theorem 1 shows that in the 2 × K setting, there is a mechanism that retains the desirable properties of the Vickrey auction. Furthermore, this efficient mechanism can be implemented by a multi-unit Vickrey auction with a restricted bid space. To see this, we use d* to construct a multi-unit Vickrey auction in which a bidder selects from a single-dimensional family of bid curves. The bid curves are such that if bidder i bids θ_i for her first unit, then her bids for later units are pinned down by the curve. Note that if bidder i submits the bid curve indexed by θ_i and bidder j submits the bid curve indexed by θ_j, then by construction bidder i wins at least k units in the Vickrey auction only if her bid for a k-th unit exceeds her rival's bid for a (K − k + 1)-st unit, and bidder i wins strictly fewer than k units only if the reverse strict inequality holds. Moreover, if bidder i wins k units in the Vickrey auction with restricted bid space, she pays P_k(θ_j, d*). Thus, the multi-unit Vickrey auction with restricted bid space implements the outcome of the direct revelation mechanism that corresponds to cut-off rule d*.
Corollary 1. The Vickrey auction with restricted bid space satisfies (1) individual rationality, (2) incentive compatibility, (3) efficiency, and (4) no subsidies.
Note that the Vickrey auction without any restrictions on the bid space does not satisfy the four aforementioned properties. Baisa (2016) shows that bidders with positive wealth effects misreport their demand for later units in the multi-unit Vickrey auction. I give an explicit description of the efficient mechanism for a setting where bidders have soft budgets in Section 3.4.
10 Schauder's fixed point theorem guarantees that a fixed point exists, but it does not guarantee a unique fixed point. Thus, we can say that there is a mechanism that implements an efficient outcome (Theorem 1), but we cannot say whether there is a unique efficient mechanism and assignment rule.
11 See Chapter 12 of Krishna (2010) for a formal description of the standard multi-unit Vickrey auction for homogenous goods.
12 See Section 3.4 for an example of the efficient mechanism for the soft budgets case.
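The construction above can also be explored numerically. The sketch below iterates the transformation T on a grid for two bidders and two units, using soft-budget preferences in the spirit of the example in Section 3.4; the closed form of the conditional willingness to pay, the parameter values, and the plain fixed-point iteration are my own illustrative choices. The paper establishes existence of a fixed point via Schauder's theorem and does not claim that naive iteration converges, although it settles down for these primitives, and the computed first-unit cut-off comes out approximately linear in the rival's type.

```python
import numpy as np

# Illustrative primitives in the spirit of the soft-budget example (my assumptions):
K, R, THETA_BAR = 2, 1.0, 1.0                 # two units; 100% interest on spending above the budget
GRID = np.linspace(0.0, THETA_BAR, 201)

def b(theta, k, paid):
    """Willingness to pay for a k-th unit after having already paid `paid`."""
    m = (0.9 ** (k - 1)) * theta              # declining marginal utility: theta, 0.9*theta, ...
    if paid >= theta:                         # already past the budget: each extra dollar costs (1 + R)
        return m / (1.0 + R)
    slack = theta - paid                      # budget left before interest kicks in
    return m if m <= slack else slack + (m - slack) / (1.0 + R)

def prices(gj, d):
    """Per-unit prices implementing cut-off rule d against rival type GRID[gj]
    (taxation principle: the k-th price is the k-th cut-off type's conditional WTP)."""
    p, paid = [], 0.0
    for k in range(1, K + 1):
        pk = b(d[k - 1][gj], k, paid)
        p.append(pk)
        paid += pk
    return p

def T(d):
    """Re-sort units efficiently at the prices implied by the untransformed rule d."""
    price_tab = [prices(g, d) for g in range(len(GRID))]
    new_d = np.full((K, len(GRID)), THETA_BAR)        # "never wins unit k" until shown otherwise
    for gj, theta_j in enumerate(GRID):
        for k in range(1, K + 1):
            paid_i = sum(price_tab[gj][:k - 1])       # P_{k-1}(theta_j, d)
            for gi, theta_i in enumerate(GRID):
                paid_j = sum(price_tab[gi][:K - k])   # P_{K-k}(theta_i, d)
                if b(theta_i, k, paid_i) > b(theta_j, K - k + 1, paid_j):
                    new_d[k - 1][gj] = theta_i        # lowest grid type whose k-th unit WTP wins
                    break
    return new_d

d = np.array([[0.5 * t for t in GRID] for _ in range(K)])   # arbitrary starting cut-off rule
for _ in range(50):
    d_new = T(d)
    done = np.max(np.abs(d_new - d)) < 1e-3
    d = d_new
    if done:
        break

# For these primitives the computed first-unit cut-off is approximately linear in the
# rival's type; at r = 1 its slope comes out near 0.70, inside the g(r) range reported
# for the soft-budget example.
print(round(d[0][200] / GRID[200], 3))
```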
Finally, note that we assumed that bidders are ex-ante symmetric when we proved Theorem 1. Thus, it is natural to ask whether the positive result from Theorem 1 extends to a setting without symmetry of bidder preferences. In Section A3 of the appendix I show that it is a straightforward exercise to extend Theorem 1 to a setting where bidders are ex-ante asymmetric. In particular, I show that we can construct a nearly identical transformation that maps an arbitrary cut-off rule for bidder 1 to a more efficient cut-off rule for bidder 1. In addition, I show that any feasible mechanism that allocates all units and is described by its cut-off rule for bidder 1 could also be described by its corresponding cut-off rule for bidder 2. Thus, my transformation that maps bidder 1's arbitrary cut-off rule to a transformed cut-off rule also implicitly maps the corresponding cut-off rule for bidder 2 to a transformed cut-off rule for bidder 2. The fixed point of this mapping defines an efficient auction in this asymmetric setting as well.
Note on restriction to deterministic mechanisms
Like the prior literature on efficient multi-unit auctions without quasilinearity, I restrict attention to deterministic mechanisms. Alternatively, one could also consider the problem of designing an efficient and dominant strategy implementable auction when we can use randomization. Garratt (1999) shows that randomization can increase the gains from trade associated with exchanging an indivisible good. In addition, Baisa (2017) shows that the efficient stochastic mechanism sells the indivisible good like a perfectly divisible normal good in net supply one. Therefore, the problem of designing an efficient stochastic auction in a single unit setting is equivalent to studying a divisible good version of the problem studied in this subsection. We can then use Theorem 1 to show that there is an approximately efficient stochastic mechanism that sells a single indivisible good, and the approximately efficient stochastic mechanism is dominant strategy implementable. In particular, Theorem 1 shows that there is an efficient auction for K ∈ ℕ indivisible goods when there are two bidders who have single-dimensional types. Let the K indivisible homogenous goods each be a lottery ticket that provides a 1/K probability of winning the indivisible good. When K is large, the mechanism is dominant strategy implementable and also approximately efficient. Thus, selling the good as discrete lottery tickets, each with an arbitrarily small probability of winning, allows us to construct a mechanism that implements an approximately efficient outcome.
The N bidder two units case
In this subsection, I show that we can characterize an efficient mechanism when there are two units. The proof is constructive. The mechanism is symmetric and its first-unit cut-off rule implicitly defines the assignment rule, because we construct the mechanism to be such that a bidder wins both units (i.e. her type exceeds her second unit cut-off) if and only if all other bidders have a type below their first unit cut-off.
13 At such an outcome, a bidder must have the same marginal utility of money in the win state and in the loss state. If this did not hold, the auctioneer could offer the bidder a Pareto improving insurance contract that allows her to equalize her marginal utility of income across the win state and the loss state.
That simplification allows us to characterize the efficient mechanism using a single equation which describes the first unit cut-off. This differs from the prior subsection, where we needed to construct multiple cut-off rules simultaneously in order to define an efficient mechanism. I let q_i : Θ^N → {0, 1, 2} be the assignment rule for bidder i in the mechanism. Incentive compatibility implies that q_i is weakly increasing in bidder i's own type. If bidder i's type is below her first unit cut-off, then she wins no units. Bidder i wins at least one unit if her type exceeds the first unit cut-off, and bidder i wins both units if her type exceeds the first unit cut-off and none of her rivals have a type that exceeds their first unit cut-off. The mechanism has a pricing rule p : Θ^{N−1} → R². The pricing rule states the price a bidder pays for each unit of the good given her rivals' reported types. The pricing rule p implements assignment rule q. The pricing rule is such that bidder i pays nothing if she does not win any units. In addition, bidder i wins at least one unit if and only if her type exceeds her first unit cut-off; thus, we set the price of bidder i's first unit to be her first unit cut-off. And finally, the pricing rule is such that bidder i wins both units if and only if her willingness to pay for her second unit exceeds her highest demand rival's willingness to pay for her first unit. This pins down p(θ_{−i}) for any θ_{−i} ∈ Θ^{N−1}. It is without loss of generality to define the first unit cut-off for bidder 1 and to assume that θ̄ ≥ θ_2 ≥ θ_3 ≥ θ_j ≥ 0 for all j ∈ {4, . . . , N}, because the mechanism is symmetric. The mechanism's cut-off rule d is implicitly defined by Equation 3, which states that bidder 1 wins her first unit if and only if her demand for her first unit exceeds both her highest rival's demand for her second unit and her second highest rival's demand for her first unit. The first term on the right hand side of Equation 3 is bidder 3's willingness to pay for her first unit; bidder 3 is the second highest demand rival of bidder 1. The second term is bidder 2's willingness to pay for her second unit conditional on the amount she pays for her first unit, which is the price bidder 2 pays to win her first unit when bidder 1's type is exactly at the first unit cut-off. Equation 3 is the analog of the demand reduction term in Section 4 of Perry and Reny's (2005) quasilinear interdependent value multi-unit auction model. In the two-unit version of their model, they find a cut-off rule by fixing a bidder's rival's type and finding the signal where the bidder's value for her first unit equals her rival's value for her second unit. My cut-off rule similarly finds the cut-off by finding the type of bidder 1 where her willingness to pay for her first unit equals the second highest willingness to pay of her rivals. In my case, the second highest willingness to pay of bidder 1's rivals is the maximum of bidder 2's willingness to pay for her second unit and bidder 3's willingness to pay for her first unit. Lemma 3 shows that Equation 3 implicitly defines a unique and continuous cut-off rule.
Lemma 3. There is a unique function d : Θ^{N−1} → Θ that is continuous and satisfies Equation 3.
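Given a solution d to Equation 3, the assignment and payments just described are mechanical to compute. The sketch below is a minimal Python rendering of that description; the toy cut-off function passed in at the end is a quasilinear-limit stand-in of mine, not the actual solution to Equation 3, and reading the second-unit price as the highest rival's first-unit willingness to pay (equal to her type under the paper's normalization) follows the construction described in the text.

```python
from typing import Callable, List, Sequence, Tuple

def two_unit_mechanism(types: Sequence[float],
                       d: Callable[[Sequence[float]], float]) -> Tuple[List[int], List[float]]:
    """Assignment and payments of the Theorem 2 mechanism, given a first-unit
    cut-off rule d (the solution to Equation 3, supplied here as a black box).

    Feasibility relies on d(rivals) weakly exceeding the second-highest rival type,
    as Equation 3 implies, so that at most two bidders can clear their cut-offs.
    Knife-edge ties are ignored in this sketch.
    """
    types = list(types)
    n = len(types)
    rivals = [types[:i] + types[i + 1:] for i in range(n)]
    clears = [types[i] > d(rivals[i]) for i in range(n)]       # above own first-unit cut-off?

    q, x = [0] * n, [0.0] * n
    for i in range(n):
        if not clears[i]:
            continue                                           # wins nothing, pays nothing
        p1 = d(rivals[i])                                      # first-unit price = own cut-off
        if any(clears[j] for j in range(n) if j != i):
            q[i], x[i] = 1, p1                                 # another bidder clears too: one unit each
        else:
            p2 = max(rivals[i])                                # highest rival's first-unit WTP (= her type)
            q[i], x[i] = 2, p1 + p2                            # no rival clears: she takes both units
    return q, x

# A quasilinear-limit toy cut-off (no wealth effects, second-unit marginal utility 0.9*theta):
toy_d = lambda rivals: max(sorted(rivals)[-2], 0.9 * max(rivals))
print(two_unit_mechanism([1.0, 0.7, 0.2], toy_d))   # -> ([2, 0, 0], [~1.33, 0.0, 0.0])
```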
Lemma 3 shows that we can use the cut-off rule d to construct a mechanism that satisfies Properties (1)-(4). The mechanism satisfies individual rationality and no subsidies by construction. Incentive compatibility is satisfied because the mechanism is such that a bidder does not misreport her type because she wins a unit if and only if her demand for the unit exceeds the price of a unit. The mechanism is efficient because it only assigns to a bidder if she has one of the two highest types. Moreover, one bidder wins both units if and only if her demand for both units exceeds her highest rival's demand for her first unit. Thus, the mechanism's outcome is such that there are no ex post Pareto improving trades among bidders. Theorem 2. There exists a mechanism that satisfies (1) individual rationality, (2) incentive compatibility, (3) efficiency, and (4) no subsidies. The mechanism has first unit cut-off rule d that is the unique solution to Equation 3 and pricing rule p, where for any ✓ i 2 ⇥ N 1 , While it is straightforward to extend the implications of Theorem 1 to a setting where bidders are asymmetric, that is not true when we consider Theorem 2. To see why note that we construct the efficient auction in the N bidder two object setting by characterizing a bidder's first unit cut-off using Equation 3. We are able to use Equation 3 to implicitly characterize the cut-off rule by comparing a bidder's willingness to pay for her first unit with her highest demand rivals willingness to pay for her second unit, among other competing bids. We calculate the highest demand rival's willingness to pay for her second unit, conditional on the price she paid for her first unit, by assuming that the price her rival paid to win her first unit was determined using the same cut-off rule. If bidders were asymmetric, then naturally, the efficient auction should be asymmetric and have different first unit cut-offs for each bidder. In this case, we would be unable to use a single expression, like Equation 3, to characterize the efficient mechanism's cut-off rule. In the next section, I discuss similar challenges associated with extending my two positive results to the general N ⇥ K setting. The N bidder K object case I use different approaches to prove the existence of a mechanism with Vickrey's desired properties in the two-bidder K 2 object case, and the N 3 bidder 2 object case. In the latter case, I give a constructive argument. I form a single equation (Equation 3) that implicitly describes a bidder's first unit cut-off rule. I form the equation by noting that bidder i wins a unit if and only if her willingness to pay for her first unit exceeds: (1) her second-highest rival's demand for her first unit, and (2) her highest rival's demand for her second unit, when bidder i's type equals her first unit cut-off. The efficient auction design problem simplifies down to finding bidder i's first unit cut-off, because we know her second unit cut-off will be defined to be such that she wins both units if and only if her willingness to pay for her second unit, conditional on buying the first, exceeds her highest rival's willingness to pay for her first unit. We cannot use that same constructive approach and form a single equation that characterizes an efficient auction in the three (or more) unit case. To see this consider the three unit case. 
We need two cut-off rules for each bidder to define an efficient mechanism -a bidder's first unit cut-off, and her second unit cut-off; the third is implicitly defined by the case where only one bidder exceeds her first unit cut-off. However, with three or more units, there is a simultaneity problem when we determine the cut-off rules. This is because we would need to know the bidders' second unit cut-off rule to determine their first unit cut-off rule and vice versa. In particular, we determine a bidder's first unit cut-off by comparing her willingness to pay for her first unit with her highest demand rival's willingness to pay for her third unit, among other competing bids. However, bidder i's highest demand rival's willingness to pay for her third unit depends on how much that rival paid to win her first and second units. We determine those two prices using the first and second unit cut-off rules. Similarly, when we determine bidder i's second unit cut-off, we compare her conditional willingness to pay for her second unit with higher highest demand rival's willingness to pay for her second unit. And her highest rival's conditional willingness to pay for her second unit depends on the first unit cut-off. Thus, we need to simultaneously determine the first and second unit cut-off rules that each satisfy a version of Equation 3. In subsection 3.1, we assumed there were only two bidders and we faced an equivalent simultaneity problem. In our efficient auction, bidder i's first unit cut-off depends on the amount her rival pays to win her first K 1 units. And the amount her rival pays to win her earlier units depends on the earlier unit's cut-off rules. We were able to overcome this simultaneity issue in the two-bidder case by using a fixed point approach. However, the fixed point argument used in subsection 3.1 cannot be applied to a setting with N 3 bidders because we can not construct an analog to our cut-off rule d(·) that is monotone (coordinatewise) when there are at least three bidders. Recall, in the two-bidder case, we use the monotonicity d to show that the space of all cut-off rules D is compact. The compactness of D is a necessary condition to use Schauder's fixed point theorem. However, when there are at least three bidders, there is no mechanism that satisfies Vickrey's desired properties and also has a cut-off rule that is monotone in the coordinatewise sense. More precisely, there is no Vickrey-like mechanism where a bidder wins a weakly greater number of units if her demand increases and her rivals' reported demands decrease (in the coordinate-wise sense). We define this notion of strong monotonicity below. We then present an impossibility result (Proposition 1) that illustrates why no such mechanism exists, and consequently, why we are unable to reuse the fixed point proof from subsection 3.1 for the general N ⇥ K case. Definition 6. (Strong Monotonicity) A mechanism satisfies strong monotonicity if bidder i's assignment rule q i : Strong monotonicity is related to other practical constraints that have been studied in mechanism design. For example, any mechanism that is non-bossy in the sense of Satterthwaite and Sonnenschein's (1981) and assigns all units also satisfies strong monotonicity. 14 To illustrate the impossibility result lets again consider the single-dimensional types setting and suppose that N 3 bidders compete to win two units. 
I show that in any such setting there is no mechanism that satisfies (1) individual rationality, (2) incentive compatibility, (3) efficiency, (4) no subsidies, and (5) strong monotonicity. I prove the impossibility result by contradiction. I show that if there is a mechanism that satisfies the five properties, then there is endogenous interdependence in bidders' demands, even in my private value model. I note two important features of the implied endogenous interdependence in my proof. First, the interdependence in bidder demands is only present in bidders' demands for later units. A bidder's willingness to pay for her first unit is her private type θ_i, and this quantity does not vary with her rivals' types. Second, the interdependence is negative. When a bidder's rivals increase their demands, the price a bidder pays for her first unit increases. Positive wealth effects then imply that the bidder has a lower demand for later units. The presence of negative interdependence leads to the violation of strong monotonicity. There is an identical tension between efficiency and strong monotonicity in a quasilinear setting where bidders' demands for later units are negatively interdependent on rival types. To be more concrete, consider a modified version of Perry and Reny's (2005) quasilinear multi-unit auction setting. However, suppose that a bidder's marginal value for her first unit is independent of her rivals' types and her marginal value for later units is decreasing in her rivals' single-dimensional types. We can show that efficient auction design is incompatible with strong monotonicity in this setting as well. To see this, suppose that there are two units and three bidders, where bidders 1, 2, and 3 have the highest, second highest, and lowest demands, respectively. If bidder 1's marginal value for her second unit, conditional on her rivals' types, exceeds bidder 2's demand for her first unit by a small amount, then efficiency implies that bidder 1 wins both units. However, an increase in bidder 3's type decreases bidder 1's demand for her second unit and flips the ranking of bidder 1's marginal value for her second unit and bidder 2's marginal value for her first unit. Hence, if bidder 3's type increases, efficiency implies that bidders 1 and 2 now each win a single unit, and that violates strong monotonicity (bidder 2 wins more units even though her rival reports a higher type). The difference between my non-quasilinear private value setting and the interdependent value quasilinear setting is that in my setting, the negative interdependence in bidder demands for later units arises endogenously in the efficient auction design problem. The mechanism constructed in subsection 3.2 shows that we are able to construct a mechanism that satisfies the first four properties, but the mechanism necessarily violates strong monotonicity.
14 Recall that non-bossiness requires that a change in bidder i's type changes one of her rivals' assignments only if it changes her own assignment. Borgs et al. (2005) also study an auction design problem with a similar property that they call independence of irrelevant alternatives. In a quasilinear setting with private values, strong monotonicity is implied by efficiency. Strong monotonicity is implied by efficiency and incentive compatibility in the two-bidder case.
15 In my notation, this would be a case where θ_i ∈ R₊, and u_i(q, m, …
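The wealth-effect channel behind this violation can be seen with a few lines of arithmetic. In the sketch below, the preferences are the illustrative soft-budget ones used earlier and the first-unit prices are stylized placeholders that merely rise with the low bidder's report; none of this is the equilibrium pricing of the Section 3.2 mechanism.

```python
def b2(theta, paid, r=1.0):
    """Conditional WTP for a second unit: soft budget theta, marginal utility 0.9*theta,
    interest r on spending beyond the budget (an illustrative assumption, as above)."""
    m, slack = 0.9 * theta, max(theta - paid, 0.0)
    return m if m <= slack else slack + (m - slack) / (1.0 + r)

theta1, theta2 = 1.0, 0.75                 # highest and second-highest demands
for p_first in (0.30, 0.50):               # stylized first-unit price, rising with the low bidder's report
    wtp2 = b2(theta1, p_first)
    print(p_first, round(wtp2, 2),
          "bidder 1 keeps both units" if wtp2 > theta2 else "bidder 2 takes the second unit")
```

A higher first-unit price pushes bidder 1's conditional second-unit willingness to pay from 0.80 down to 0.70, below bidder 2's first-unit willingness to pay of 0.75, so bidder 2 ends up with more units after her rival's report increased, exactly the pattern strong monotonicity rules out.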
In addition, in the Section A2 of the appendix, I construct a mechanism that gives bidders upfront subsidies and satisfies (1) individual rationality, (2) incentive compatibility, (3) efficiency and (4) strong monotonicity. The efficient auction with subsidies illustrates an important distinction between efficient auction design problems with quasilinearity and without quasilinearity. With quasilinearity, an upfront subsidy does not expand the scope of implementable social choice rules. Without quasilinearity, the designer can expand the scope of implementable social choice rules because a subsidy can change bidders' preferences endogenously. At the same time, we rarely see auctions that use subsidies in practice. As Morimoto and Serizawa (2015) state, imposing no subsidies is a useful practical constraint for the mechanism designer because "this property prevents agents who do not need objects from flocking to auctions only to sponge subsidies." Lastly, note that when we compare that results from Sections 3.2 and A2, we see that there is not a unique assignment rule that corresponds to the efficient mechanism. This is different from the benchmark quasilinear setting where the efficient assignment rule is uniquely determined by bidder preferences. The assignment rules of the mechanisms described in Sections 3.2 and A2 differ, but both are efficient. As discussed above, this is because we can use upfront subsidies to expand the scope of dominant strategy and efficient implementation when bidders have wealth effects, because upfront subsidies endogenously shift a bidder's demand curve. Numerical example for bidders with soft budgets In this example, I study a setting with two bidders, where bidders have soft budgets and single-dimensional types. I characterize an efficient mechanism. I explicitly characterize an efficient mechanism for the soft budget setting by using a guess and verify approach. There are two bidders 1 and 2 who compete for two homogenous units. Each bidder is described by her single-dimensional type ✓ i 2 ⇥. I assume that a bidder with type ✓ i has a soft budget of ✓ i , and she gets utility ✓ i for her first unit of the good. In addition, each bidder has declining demand for additional units. In particular, bidder i gets utility of ✓ i from her first unit and marginal utility of .9✓ i from her second unit. If a bidder spends an amount p > 0 that exceeds her budget ✓ i , then the bidder must also pay interest r 0 on her debt of p ✓ i . Thus, the bidder i gets disutility r(p ✓ i ) + p from spending p. Thus, I write bidder i's utility function as By construction, bidder i is willing to pay ✓ i 2 ⇥ for her first unit of the good. I use the above expressions to compute bidder i's willingness to pay for her second unit, conditional on paying p for her first unit, which is A (first unit) cut-off rule d : ⇥ ! ⇥ is a fixed point of the transformation T in the 2 ⇥ 2 setting if For a given r 0, I guess and verify that there is a linear cut-off rule d that satisfies the above expression. Moreover, I show that there the linear cut-off rule that satisfies the above expression is such that d(✓) = g(r)✓ 8✓ 2 ⇥, where g : R + ! ( p .1, 1) is states the constant slope of the symmetric first unit cut-off rule for a bidder given the interest rate r 0. We let, g(r) be such that where the final equality follows, because I assume that g 2 (r)✓ 2 (.1✓, ✓). By simplifying the above expression we get that . We can easily confirm that g(r) 2 [ p 5 1 2 , .9] ⇢ (.1, 1) 8r 0. 
Therefore, the cut-off rule d(θ) = g(r)θ, with g(r) defined above, is a fixed point of the transformation T for any given interest rate r ≥ 0. Theorem 1A then implies that d is the cut-off rule for an efficient mechanism in which q_1(θ_1, θ_2) = 0 if θ_1 < g(r)θ_2; q_1(θ_1, θ_2) = 1 if θ_1 ≥ g(r)θ_2 and θ_2 ≥ g(r)θ_1; and q_1(θ_1, θ_2) = 2 if θ_1 > g(r)θ_2 and g(r)θ_1 > θ_2; with q_2(θ_1, θ_2) = 2 − q_1(θ_1, θ_2). The mechanism is implemented by the pricing rule p : Θ → R_+^2 where p_1(θ) = g(r)θ and p_2(θ) = θ. When bidders pay 100 percent interest (r = 1), the efficient mechanism has first unit cut-off rule d(θ) = g(1)θ. [Figure: the allocation rule of the efficient mechanism for r = 1; omitted.]
In this section, I argue that the positive result from Theorem 1 does not extend to any setting where bidders have multi-dimensional types. I study a setting where there are two bidders and two homogenous goods.
Bidder preferences
A bidder's private information is described by a two-dimensional type (θ_i, t_i) ∈ Θ × {s, f}, where the maintained assumptions on u hold for all θ_i ∈ (0, θ̄], q, q′ ∈ {0, 1, 2} such that q > q′, x ∈ R, and t_i ∈ {s, f}. The second dimension of bidder i's type, t_i ∈ {s, f}, represents the steepness of her demand curve: it can either be steep (s) or flat (f). Bidders with steeper demand curves have relatively lower demand for their second unit. Thus, if b_2((θ_i, t_i), x) is bidder i's willingness to pay for her second unit when she has type (θ_i, t_i) ∈ Θ × {s, f} and paid x ∈ R for her first unit, then b_2 is such that, holding the intercept and the amount paid fixed, a flat type is willing to pay strictly more for her second unit than a steep type. I assume bidder preferences satisfy (1) declining demand, (2) strictly positive wealth effects, and (3) single-crossing in θ (Assumptions 1-3 from Section 2). Again, it is without loss of generality to assume that θ_i represents bidder i's willingness to pay for her first unit of the good. I refer to θ_i as bidder i's intercept. Thus, b_2((θ_i, t_i), x) (1) lies below the intercept θ_i, (2) is continuous and strictly decreasing in the amount a bidder has paid, x, for all x ∈ R and (θ_i, t_i) ∈ Θ × {s, f}, and (3) is strictly increasing in the intercept θ_i. Points 1, 2, and 3 above are direct implications of Assumptions 1, 2, and 3, respectively. A mechanism satisfies incentive compatibility in the multi-dimensional type setting, where bidder preferences are described by the utility function u, if truthfully reporting both dimensions of her type is a dominant strategy for each bidder.
An impossibility theorem for the multi-dimensional type case
I prove that there is no mechanism that has the Vickrey auction's desirable incentive and efficiency properties, as well as weak budget balance, in any setting where bidders have multi-dimensional types. More precisely, I assume that bidder i's private information is described by the multi-dimensional parameter (θ_i, t_i) ∈ Θ × {s, f} and I assume that (θ_i, t_i) parameterizes bidder i's commonly known utility function u, where u satisfies the conditions described in Section 4.1. Theorem 3 shows that in any such case, there is no mechanism that satisfies (1) individual rationality, (2) incentive compatibility, (3) efficiency, and (4) no subsidies. In other words, there is no mechanism that satisfies these four properties for any multi-dimensional type space and for any choice of utility function that satisfies the conditions described in Section 4.1. Theorem 3 also implies that efficient auction design is impossible on any richer type space, because the increase in the dimensionality of bidder private information only increases the number of incentive constraints we must satisfy to solve the efficient auction design problem. It is relevant to note that the prior impossibility results in this literature assume richer type spaces relative to the one studied here and also make specific functional form restrictions on bidder preferences.
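Before turning to the proof, a toy instance of the two-dimensional types just described may help fix ideas. The functional form below is my own illustration, not the paper's utility function; it is chosen so that both steepness types share the same first-unit willingness to pay θ, the flat type demands more for a second unit, and paying more for the first unit lowers second-unit demand.

```python
def b2(theta, steepness, paid, r=1.0):
    """Illustrative conditional WTP for a second unit with a two-dimensional type
    (intercept theta, steepness in {'s', 'f'}); a toy functional form of mine."""
    frac = 0.6 if steepness == 's' else 0.9        # steep demand drops off faster
    m, slack = frac * theta, max(theta - paid, 0.0)
    return m if m <= slack else slack + (m - slack) / (1.0 + r)

# Same intercept and same payment for the first unit: the flat type demands more for a second unit.
print(b2(1.0, 's', paid=0.4), b2(1.0, 'f', paid=0.4))   # 0.6 vs 0.75
```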
In the proof of Theorem 3, I show that if there is an efficient auction, then there is endogenous interdependence between a bidder's demand for later units and her rival's multi-dimensional type. This is because in an efficient auction, the price a bidder pays for her first unit depends on her rival's type, and positive wealth effects imply a bidder's willingness to pay for her second unit varies with the amount she paid to win her first unit. Thus, the endogenous interdependence is caused by the combination of multi-unit demands and wealth effects.
Theorem 3. There is no mechanism that satisfies (1) individual rationality, (2) incentive compatibility, (3) efficiency, and (4) weak budget balance when bidders have multi-dimensional types.
My proof shows that efficiency implies that a bidder wins at least one unit if and only if her willingness to pay for her first unit exceeds her rival's willingness to pay for her second unit. This necessary condition for efficiency forms a contradiction with incentive compatibility. The contradiction emerges because bidder j pays more for her first unit when bidder i reports a flat demand instead of a steep demand. Thus, bidder i lowers her rival's willingness to pay for her second unit, and hence the price she pays for her first unit, by reporting a flat demand curve instead of a steep demand curve. This violates incentive compatibility because bidder i's report changes the amount she pays for her first unit.
16 See the discussion of Dobzinski, Lavi, and Nisan (2012), Lavi and May (2012), and Goel et al. (2015) in the related literature section. Relatedly, Kazumura and Serizawa's (2016) impossibility theorem requires that only one bidder has multi-item demand, but their type space is again rich relative to the type space studied here.
17 If we considered a model where one of these features is absent, we would again obtain a positive implementation result, even for cases where bidder types are multi-dimensional. In an analogous model where bidders have no wealth effects, the Vickrey auction is efficient and dominant strategy implementable. Similarly, in the unit demand case, Demange and Gale (1985) construct an efficient auction for non-quasilinear bidders.
The formal proof of Theorem 3 follows from Lemma 4, Proposition 2, and Corollary 2, which are explained below. The proof is by contradiction. Suppose there exists a mechanism that satisfies (1) individual rationality, (2) incentive compatibility, (3) efficiency, and (4) weak budget balance. The mechanism has assignment rule q and payment rule x. The taxation principle states that a change in bidder i's reported type only changes her payment if it changes her assignment.
Remark 2 (Taxation principle). If a mechanism satisfies (1) individual rationality, (2) incentive compatibility, (3) efficiency, and (4) weak budget balance, then there exist pricing rules p_{i,0}, p_{i,1}, and p_{i,2} such that a bidder who wins k units pays an amount p_{i,k} that depends only on her rival's reported type.
Lemma 4 further simplifies the proof. It shows that mechanisms that satisfy Properties (1)-(4) must also satisfy the no subsidy condition. The proof of Lemma 4 shows that we violate weak budget balance if a bidder is paid a positive amount to participate in the auction, that is, if p_{i,0}(θ_j, t_j) < 0. Moreover, individual rationality ensures that p_{i,0}(θ_j, t_j) ≤ 0, because a bidder never regrets participating in the mechanism, even if she wins zero units. Thus, it is the case that p_{i,0}(θ_j, t_j) = 0 for all (θ_j, t_j) ∈ Θ × {s, f}. I derive a contradiction by placing necessary conditions on a mechanism's assignment rule, and consequently on the mechanism's pricing rule.
It is useful to describe a mechanism's assignment rule by cut-off rules. I let d t i i,k : ⇥ ⇥ {s, f} ! ⇥ be the intercept cut-off for bidder i's to win unit k 2 {1, 2} when she has steepness t i 2 {s, f}. Bidder i's n th unit cut-off is then . Remark 3. If satisfies (1) individual rationality, (2) incentive compatibility, (3) efficiency, and (4) weak budget balance, then The first point states that a bidder wins both units if she reports positive demand and her rival reports no demand. The second point states that a bidder faces a greater intercept cutoff when her rival reports greater demand. The final point states that the cut-off intercept for winning both units is weakly greater than the cut-off intercept for winning a single unit. The first point follows from efficiency, and the latter two points follow from incentive compatibility. Proposition 2 places further restrictions on the cut-off rules associated with a mechanism that satisfies Properties (1)-(4). The first point states that bidder i's first unit cut-off intercept is continuous and strictly increasing in her rival's intercept. The second point states that bidder i has a strictly greater cut-off intercept for her second unit than she does for her first unit. This follows from efficiency and declining demand. The third point states that a bidder i's first unit cut-off intercept is independent of her reported steepness. This is because bidder i wins her first unit if and only if her demand for her first unit exceeds the price she pays for her first unit ✓ i > p i,1 ( j ). Thus, bidder i's first unit cut-off is independent of her reported steepness as p i,1 ( j ) = d s i,1 ( j ) = d f i,1 ( j ). Given this result, I drop the superscript on a bidder's first unit cut-off for the remainder of the section. That is, I let d i,1 ( j ) = d s i,1 ( j ) = d f i,1 ( j ). The final point of Proposition 2 states that a bidder's first unit cut-off is greater when her rival has flat demand. This is an intuitive consequence of incentive compatibility and efficiency. If bidder j has a flat demand, then bidder j has a relatively higher demand for her second unit. Incentive compatibility thus implies that bidder j has a lower second unit cut-off when her type is flat because the infimum intercept types where b 2 ((✓ j , t j ), p i,1 ( i )) > p i,2 ( i ) is lower when t j = f versus when t j = s. A direct consequence of this observation is that bidder i faces a higher first unit cut-off when her rival, bidder j, reports a flat demand type versus steep demand type. Corollary 2. If satisfies (1) individual rationality, (2) incentive compatibility, (3) efficiency, and (4) weak budget balance, then if i , j 2 ⇥ ⇥ {s, f} are such that ✓ i = d i,1 ( j ), then Corollary 2 shows that if bidder i's is indifferent between winning 0 and 1 units (✓ i = d i,1 ( j )), then bidder i's willingness to pay for her first unit must equal her rival's (conditional) willingness to pay for her second unit. If the two quantities were unequal, then there would be a Pareto improving trade where the bidder with the higher respective willingness to pay buys a unit from the bidder with the lower willingness to pay. I use Corollary 2 to obtain the contradiction that proves the impossibility theorem. To see the contradiction, fix bidder i's intercept type ✓ i and suppose again that bidder i's is indifferent between winning 0 and 1 units (i.e. ✓ i is such that ✓ i = d i,1 ( j ); see point a. in Figure 1 below). 
Let's compare the case where bidder i reports a steep demand type (t i = s) with a case where bidder i reports a flat demand curve (t i = f ). Proposition 2 shows that bidder j pays more for her first unit of the good in the latter case relative to the former case (i.e. d j,1 (✓ i , f) = p j,1 (✓ i , f) > p j,1 (✓ i , s) = d j,1 (✓ i , s); see points b. and c. and Figure 1 below). This is intuitive, because bidder j pays more for her first unit when bidder i has higher demand for her second unit. Positive wealth effects then imply that bidder j is willing to pay less for her second unit when bidder i has a flat demand versus a steep demand, However, the above inequality contradicts the implication of Corollary 2 because The contradiction between expressions (3) and (4) proves the impossibility theorem. Thus, there is no mechanism that retains the Vickrey auction's desirable incentive and efficiency properties on any type space that satisfies the conditions given in Section 4.1. Moreover, there is no mechanism that retains the Vickrey auction's desirable incentive and efficiency properties on any richer type space -the increase in dimensionality only increases the number of incentive constraints that our mechanism must satisfy. The proof of Theorem 3 illustrates how the combination of wealth effects and multi-unit demands inhibits efficient auction design. In contrast, in the quasilinear setting, there are no wealth effects and the Vickrey auction is the unique auction that satisfies Properties (1)-(4) (see Holmstrom (1979)). In a 2 ⇥ 2 quasilinear setting, a Vickrey auction is such that the price a bidder pays for her first unit equals her rival's willingness to pay for her second unit. Corollary 2 shows that this is also a necessary condition for efficient auction design in the non-quasilinear setting. Yet, in the non-quasilinear setting, the presence of wealth effects implies that the price a bidder pays for her first unit influences her demand for her second unit. By stating a high demand for her second unit, a bidder forces her rival to pay more for her first unit. This deviation can benefit a bidder in a non-quasilinear setting because when the bidder's rival pays more for her first unit, the rival has lower demand for her second unit. Moreover, a bidder pays less to win her first unit when her rival has lower demand for her second unit. Thus, no mechanism can simultaneously satisfy Properties (1)-(4) when we introduce wealth effects and multi-dimensional heterogeneity. Finally, note we can extend the above proof to where only one bidder has multi-dimensional private information. In particular, Proposition 3 and Corollary 2 would be unchanged if we instead assume that t j was common knowledge, and thus bidder j's private information is one-dimensional. Thus, we get the same contradiction between Expressions (4) and (5) when t j is common knowledge, and bidder i is the only bidder with multi-dimensional private information. 18 Proof. The proof is by induction. When k = 1, p 1 (✓ j , d) is weakly increasing in ✓ j because Before showing the inductive step, it is useful to note that This is because The final inequality implies that because u is increasing in the second argument. Returning to the proof, suppose that P k 1 (✓ j , d) is weakly increasing in ✓ j 8✓ j 2 ⇥, d 2 D and some k 2 {1, . . . , K}. I show that this implies that P k (✓ j , d) is weakly increasing in Let ✓ ✓ h j > ✓j 0. 
Then, where the equality follows from the definition of p k , and the inequality follows because b k is increasing in the first argument and d k (✓ h j ) d k (✓j). Then, where the inequality is implied by Equation 6 where we let z = P k 1 (✓ h j , d) y = P k 1 (✓j, d) increasing in ✓ j 8✓ j 2 ⇥, then P k (✓ j , d) is weakly increasing in ✓ j 8✓ j 2 ⇥. Proof of Lemma 2 Proof is weakly decreasing in the second argument and Lemma 1 shows Proof of Theorem 1A. Proof. I construct a mechanism ⇤ that follows from the symmetric cut-off rule d ⇤ 2 D. I assume ties (in terms of willingness to pays for additional units) are broken in favor of bidder 1. Thus, the mechanism ⇤ has an assignment rule for bidder 1 where and q 2 (✓ 1 , ✓ 2 ) = K q 1 (✓ 1 , ✓ 2 ) for all ✓ 1 , ✓ 2 2 ⇥. The mechanism has transfer rule x i (✓ 1 , . By construction, the mechanism is feasible, satisfies no subsidies, and individual rationality. In the remainder of the proof I show that the mechanism satisfies incentive compatibility and efficiency. Incentive Compatibility: I show that mechanism ⇤ is incentive compatible by studying two cases. Case 1: Suppose that ✓ 1 , ✓ 2 2 ⇥ are such that q i (✓ i , ✓ j ) k for some number k 2 {1, . . . , K}. Then the construction of mechanism ⇤ implies that where the implication follows from Remark 1. And since b k is increasing in the first argument, In other words, the price of bidder i's k th unit is below her willingness to pay for her k th unit. Thus, bidder i has no incentive to deviate by reporting a lower type and winning fewer units. . . , K}. Then, the construction of mechanism ⇤ implies that Thus, the price of winning an k th unit where k > q i (✓ i , ✓ j ) exceeds bidder i's willingness to pay for her k th unit, conditional on having won k 1 units under pricing rule p(✓ j , d ⇤ ). Therefore, bidder i does not increase her utility by reporting a type ✓ 0 i that allows her to win more units. Thus, the two cases show that the mechanism is incentive compatible. Efficiency: Lastly, I show that mechanism ⇤ is efficient. Fix ✓ 1 , ✓ 2 2 ⇥. In addition, I show that there is no feasible outcome that Pareto dominates the outcome as the 'star' bundle. I prove this by showing that there is no bundle of the form . I show that the mechanism satisfies efficiency by again considering two cases. Case 1: First, suppose thatq i  q ⇤ i for i = 1, 2. Then if the tilde bundle Pareto dominates the star bundle, it must be the case thatx i  x ⇤ i for i = 1, 2, because no bidder is made strictly worse by consuming the tilde bundle. No bidder is strictly better off unless she makes a strictly lower payment. In addition, if any bidder makes a strictly lower payment, the auctioneer gets strictly lower revenue. Thus, the outcome i for all i = 1, 2. Case 2: Next, suppose that the tilde bundle is such that q ⇤ i <q i for some i = 1, 2. Then feasibility implies that q ⇤ j <q j where j = 1, 2 and j 6 = i. In addition, it must be the case that bidder i is made no worse off by consuming the tilde bundle outcome (q i ,x i ). Note that bidder i's willingness to pay for an additional unit when she consumes the star bundle where the first inequality holds because ⇢ b q ⇤ i +1 (✓ i , x ⇤ i ) and the second inequality holds because bidders have declining demand and positive wealth effects. Hence, Thus, if bidder i is made no worse off by the reallocation, we must have that In other words, bidder i pays less than ⇢ for each additional unit when we move from allocation . 
Moreover, if bidder i is made strictly better off under the latter outcome, then the above expression holds with a strict inequality. When we assume that q ⇤ i <q i for some i = 1, 2, then feasibility implies that q ⇤ j  q j (q i q ⇤ i ) where j = 1, 2 and j 6 = i. In addition, it must be the case that bidder j is made no worse off by consuming the quantity and payment outcome of (q j ,x j ). Note that, for all k 2 {1, . . . , K}, ✓ j 2 ⇥,x, p > 0. Furthermore, recall that bidder j wins q ⇤ j units in mechanism with cut-off rule This implies that . We then have that The final inequality holds because declining demand and positive wealth effects combine to imply that In other words, bidder j's utility does not increase if she sells a unit at price ⇢. This implies that . Therefore, if bidder j is made no worse off by winningq j units and payingx j , theñ where the above inequality is strict if bidder i is made strictly better off under the tilde outcome. Thus, we have thatx where the above holds with a strict inequality if at least one bidder is made strictly better off under the tilde outcome. Thus, when the tilde bundle is such that q ⇤ i <q i for some i = 1, 2. Therefore, our analysis of Case 1 and Case 2 shows that there is no outcome that Pareto , and hence the outcome of mechanism ⇤ is an efficient outcome for all (✓ 1 , ✓ 2 ) 2 ⇥ 2 . because Lemma 2 shows f is strictly increasing in the first argument and thus ✓ h j > ✓j implies that Thus, Next, I show that In addition f is strictly increasing in the first argument. Therefore, (2) Next, I show that T is a continuous mapping. Since D is a metric space (under the uniform norm), it suffices to show that if {d n } 1 n=1 is such that d n 2 D 8n 2 N and lim n!1 d n = d, then lim n!1 T (d n ) = T (d) (see Aliprantis and Border (2006), pg. 36). More formally, assume there is a sequence {d n } 1 n=1 such that d n 2 D, 8n 2 N and lim n!1 d n (✓ j ) = d(✓ j ) 8✓ j 2 ⇥ where d 2 D. I show that this implies that T satisfies lim n!1 T (d n First, I show that The proof is by induction. The above equality is true if k = 1 because For the inductive step of the proof, suppose that there is k 2 {1, . . . , K} is such that I show that this implies that the above expression holds when k 1 is replaced by k. Note that for all ✓ j 2 ⇥. Thus, we have proven that Recall that Since lim n!1 P k (✓ j , d n ) = P k (✓ j , d) 8k 2 {1, . . . , K}, ✓ j 2 ⇥ and b k is continuous in the second argument, it follows that I use the above expression to show that it is also the case that lim n!1 T (d n )(✓ j ) = T (d)(✓ j ) 8✓ j . I separate the remainder of the proof that T is continuous into two cases. First, I show that if ✓ j 2 ⇥ and k 2 {1, . . . , K} are such that lim n!1 f (k, ✓, ✓ j , d n ) = f (k, ✓, ✓ j , d)  0, Then I show that if ✓ j 2 ⇥ and k 2 {1, . . . , K} are such that lim n!1 f (k, For the first case, if ✓ j 2 ⇥ and k 2 {1, . . . , K} are such that lim n!1 f (k, ✓, ✓ j , d n ) = f (k, ✓, ✓ j , d)  0, then for any ✏ > 0, there exists an n ⇤ 2 N such that for all n > n ⇤ , where the first inequality holds because f is strictly decreasing in the second argument. Since where the second equality holds because (1) f is strictly increasing in the second argument and (2) f (k, ✓, ✓ j , d n ) ! f (k, ✓, ✓ j , d) 8✓ j 2 ⇥. Thus, we conclude that T is a continuous mapping over the domain of D because (3) Finally, I show that D is compact. Or equivalently, I show that D is complete and totally bounded. 
The set D is complete because every Cauchy sequence {d n } 1 n=1 converges to an element d 2 D when I use the L 1 norm as our metric. In addition, the set D is totally bounded. This is because under the L 1 norm any weakly increasing and bounded function can be approximated by a sequence of simple functions and D is a subset of the set of all weakly increasing and bounded functions. Thus, for any ✏ > 0, I can construct a finite set of simple functions {d 1 , . . . , d n }, where for any d 2 D, there is an i such that |d d i | < ✏ according to the L 1 norm. Thus the set of admissible cut-off rules D is covered by a finite number of ✏ measure balls. Thus, D is compact (see Theorem 3.28 in Aliprantis and Border (2006)). Thus, I have shown that T : D ! D is a continuous mapping from a compact space D into itself. Schauder's fixed point theorem then states that the mapping T has a fixed point Proof of Lemma 3 Proof. Recall we consider the decision problem of bidder 1 and suppose that ✓ ✓ 2 ✓ 3 ✓ j 0 8j 6 = 1, 2, 3. We show there is a unique d(·) where Thus, there is a✓ 2 2 ⇥ where✓ 2 > ✓ 3 and a unique d(✓ 2 , ✓ 1,2 ) that solves Equation 7 . Let (1) I show that the cut-off rule d that is defined by Equation 7 is weakly increasing in ✓ 2 for all ✓ 2 2 [✓ 3 , ✓ ⇤ ). I prove this by contradiction. Suppose d was not weakly increasing in ✓ 2 when Thus, for any ✏ > 0 there exists a ✓`, In addition, d(✓`, Since b 2 is increasing in the first argument and ✓ h > ✓`, then it must be the case that However, the above inequality can not hold because where the final inequality holds because d is weakly increasing when ✓ <✓. Thus, we have a contradiction that shows d is weakly increasing. (2) A similar proof by contradiction shows that d is continuous in If d is not continuous over this interval, then there is a✓ 2 (✓ 3 , ✓ ⇤ ) that is the first discontinuity in d. By construction d is continuous when ✓ is such that b 2 (✓, 0) < ✓ 3 . Thus, , which contradicts our assumption that d is discontinuous at✓. ( Yet this contradicts our assumption that ✓ ⇤ < ✓. Thus, it must be the case that In addition✓ b 2 (✓, d(✓, ✓ 1,2 )) > 0 when✓ = ✓ ⇤ ✏ where ✏ > 0 is sufficiently small. This is because by construction✓ ✓ ⇤ < 2✏ and when ✏ is sufficiently small, ✓ 1,2 )). Proof of Theorem 2 Proof. Because mechanism is symmetric, it is without loss of generality to assume that ✓ ✓ 2 ✓ 3 ✓ j 0 8j 6 = 1, 2, 3 and I study the problem from the perspective of bidder 1. By construction, mechanism satisfies (1) IR and (2) no subsidies. Next, I show that the mechanism is incentive compatible. If (✓ 1 , . . . , ✓ N ) 2 ⇥ N are such that d(✓ 1 ) > ✓ 1 , then q 1 (✓ 1 , ✓ 1 ) = x 1 (✓ 1 , ✓ 1 ) = 0. Bidder 1 does not have a profitable deviation in reporting her type because the price of one unit exceeds bidder 1's demand for her first unit p 1 (✓ 1 ) > ✓ 1 . Moreover, the price of the second unit exceeds the price of the first unit. Incentive compatible: I consider two cases to prove incentive compatibility. Case 1: If (✓ 1 , . . . , ✓ N ) 2 ⇥ N are such that ✓ 1 > d(✓ 1 ) and ✓ 2 > d(✓ 2 ), then q 1 (✓ 1 , ✓ 1 ) = 1. Bidder 1 has no incentive to report a lower type that does not win any units because her willingness to pay for the first unit ✓ 1 weakly exceeds the price she pays for the first unit d(✓ 1 ) = p 1 (✓ 1 ). 
In addition, θ_2 > d(θ_2) implies … , where the first equality holds from the definition of p_2, the first inequality holds by assumption, the second inequality holds by the construction of d, the third inequality follows because d is weakly increasing in the first argument and b_2 is decreasing in the second argument, and the final equality holds by the construction of p_1. Thus, we see that bidder 1's willingness to pay for her second unit is below the price she must pay to win a second unit. Thus, bidder 1 does not gain by over-reporting her type and winning an additional unit. Moreover, we can see that the mechanism satisfies feasibility because bidder 1 and bidder 2 each win and demand exactly one unit under the mechanism's pricing rule. All other bidders win no units and demand no units. Case 2: If (θ_1, . . . , θ_N) ∈ Θ^N are such that θ_1 > d(θ_1) and d(θ_2) > θ_2, then q_1(θ_1, θ_{−1}) = 2. Bidder 1 has no incentive to report a lower type that does not win any units, because her willingness to pay for the first unit θ_1 weakly exceeds the price she pays for the first unit, d(θ_1) = p_1(θ_1). Bidder 1 has no incentive to report a lower type that wins only one unit: … , where the first equality holds from the definition of p_1. The first inequality holds because d is weakly increasing in the first argument and b_2 is decreasing in the second argument. The second equality holds because d(…). Thus, bidder 1's conditional willingness to pay for her second unit exceeds the price of her second unit, and therefore bidder 1 does not want to deviate and report a type that ensures that she only wins one unit. In addition, q_j(θ_1, …) … This holds for bidder 2 by assumption. This holds for bidders j ≠ 1, 2 because d(θ_j) ≥ θ_3 ≥ θ_j. Thus, the above construction specifies a mechanism that is feasible and incentive compatible. Efficiency: I consider two cases to prove efficiency. Case 1: Consider an outcome such that two bidders each win one unit. … Again, it is without loss of generality to assume the two bidders are bidders 1 and 2 and that θ_1, θ_2 ≥ θ_3 ≥ θ_j ∀ j ≠ 1, 2, 3. There are no Pareto improving trades between a winning bidder (without loss of generality, bidder 1) and a losing bidder (without loss of generality, bidder 3), because … , where the first inequality holds because θ_1 ≥ d(θ_1) = p_1(θ_1) and a bidder's willingness to sell her first unit s_1 is decreasing in the second argument (her payment) by positive wealth effects. The first equality holds from the definition of s_1. Thus, there are no ex post Pareto improving trades between a winning bidder and a losing bidder, because the winning bidder's willingness to sell exceeds the losing bidder's willingness to pay. There are no ex post Pareto improving trades where bidder 2 buys a unit from bidder 1, because … , where the first inequality was shown above, and the second inequality holds because the mechanism is incentive compatible, and hence the price bidder 2 pays for her second unit exceeds her willingness to pay for her second unit when she wins one unit. Thus, bidder 1's willingness to sell a unit exceeds bidder 2's willingness to pay for a unit, and there are no ex post Pareto improving trades where bidder 1 sells a unit to bidder 2. A symmetric argument shows there are no ex post Pareto improving trades where bidder 2 sells a unit to bidder 1. Thus, there are no ex post Pareto improving trades when two bidders each win one unit. 
Case 2: Consider an outcome where one bidder (without loss of generality, bidder 1) wins both units. That is, (θ_1, . . . , θ_N) ∈ Θ^N is such that q_1(θ_1, θ_{−1}) = 2. I show that there are no ex post Pareto improving trades where bidder 1 sells a unit to a losing bidder. Incentive compatibility implies that bidder 1's conditional willingness to pay for her second unit exceeds the price of her second unit. If we continue to assume that θ̄ ≥ θ_2 ≥ θ_j ≥ 0 ∀ j ≠ 1, 2, this implies that … From the above expression, we then have that … , where the first inequality holds because positive wealth effects imply that s_2 is decreasing in the second argument. The first equality holds from the definition of s_2 and b_2. The second inequality holds from incentive compatibility. Thus, there are no ex post Pareto improving trades between bidder 1 and bidder j ≠ 1, because bidder 1's willingness to sell her second unit exceeds any of her rivals' willingness to pay for a single unit. Proof of Proposition 1. Without loss of generality, I construct the proof by placing necessary restrictions on the assignment rule of bidder 1 when her rivals have types θ_{−1} ∈ Θ^{N−1}, where θ_{−1} is such that θ̄ ≥ θ_2 ≥ θ_3 ≥ θ_j ≥ 0 ∀ j ≠ 1, 2, 3. The proof of Proposition 1 is by contradiction. I assume that there exists a mechanism that satisfies Properties (1)-(5), and then obtain a contradiction. I obtain Lemmas A1-A3 under this assumption. I then use these three lemmas to draw a contradiction. Lemma A1 shows that bidder 1 wins a unit only if her demand is among the two highest demands reported. Now suppose bidder 3 increases her report to … Again, the same argument shows that … and bidders have strictly positive wealth effects. Yet θ_{−2} ≤ θ'_{−2} in the coordinate-wise sense. This contradicts strong monotonicity, because strong monotonicity implies that d_{2,1}(θ_{−2}) is weakly increasing in θ_i. Thus, there is no mechanism that satisfies Properties (1)-(5). Proof of Lemma 4. Proof. Individual rationality implies that if the bidder types (θ_i, t_i), (θ_j, t_j) ∈ Θ × {s, f} are such that q_i((θ_i, t_i), (θ_j, t_j)) = 0, then … The above expression gives us that … The first inequality in Equation 8 holds because q_i((θ_i, t_i), (θ_j, t_j)) = 0 ⟹ q_j((θ_i, t_i), (θ_j, t_j)) ≤ 2, and hence individual rationality gives us that … The second inequality in Equation 8 holds because of declining demand and positive wealth effects. If (θ_i, t_i) = (0, t_i) ∈ Θ × {s, f} and (θ_j, t_j) ∈ Θ × {s, f} is such that θ_j > 0, then efficiency requires that q_j((θ_i, t_i), (θ_j, t_j)) = 2. In addition, Equation 8 shows that … for all (θ_j, t_j) ∈ Θ × {s, f} s.t. θ_j > 0. Since the above expression must hold for arbitrarily small θ_j > 0, we have that … Thus, if (θ_i, t_i) = (0, t_i) and (θ_j, t_j) is such that θ_j > 0, then weak budget balance implies … However, I have already shown that p_{j,0}((θ_i, t_i)) + p_{j,1}((θ_i, t_i)) + p_{j,2}((θ_i, t_i)) ≤ 0 and p_{i,0}((θ_j, t_j)) ≤ 0. Thus, … Thus, the price bidder i pays to win no units is zero, because p_{i,0}((θ_j, t_j)) = 0 ∀ (θ_j, t_j) ∈ Θ × {s, f} s.t. θ_j > 0. We combine this with the taxation principle to get the result. If (θ_i, t_i), (θ_j, t_j) ∈ Θ × {s, f} are such that q_i((θ_i, t_i), (θ_j, t_j)) = 0, then the taxation principle implies x_i((θ_i, t_i), (θ_j, t_j)) = p_{i,0}((θ_j, t_j)) = 0, where the final equality follows by the above argument. Proof of Proposition 2. When I prove the first two bullet points of Proposition 2, I proceed with an abuse of notation by dropping t_i and t_j from the description of bidder types. I study the incentives that bidders have to truthfully report their intercept, given that the mechanism provides the bidders with an incentive to truthfully report their steepness type. 
Thus, I fix t_i, t_j ∈ {s, f} and suppose that a bidder truthfully reports her steepness type. I then find necessary conditions on the mechanism that ensure that a bidder truthfully reports her intercept type under the assumption that she truthfully reports her steepness type. Thus, for simplicity, when I prove the first two bullet points of Proposition 2, the domain of bidder i's assignment rule q_i is Θ^2, because I only study bidder incentives to report their intercept type. Thus, q_i(θ_i, θ_j) is bidder i's assignment in a mechanism that satisfies Properties (1)-(4) when we take as given that bidders i and j truthfully reported their steepness type. I similarly write the cut-off rules d^{t_i}_{i,1} and d^{t_i}_{i,2} as d_{i,1} and d_{i,2} to condense notation. Remark A1 below gives necessary conditions that a mechanism must satisfy if it satisfies Properties (1)-(4). Remark A1. Suppose that the mechanism satisfies Properties (1)-(4). Then, … I consider two cases. First suppose that the mechanism is such that θ*_2 = θ*_1. Then bidder i gets utility u(2, x*_2, θ_i) in the mechanism if her intercept type θ_i ∈ Θ is such that θ_i > θ*_2. If bidder i's intercept type is instead such that θ*_2 > θ_i, then bidder i has utility u(0, 0, θ_i). Incentive compatibility implies that bidder i's utility is continuous and increasing in her type θ_i, because u is continuous in the third argument. Thus, … For the second case, suppose that the mechanism is such that θ*_2 > θ*_1. Recall that incentive compatibility implies that a bidder's utility is continuous in her type. Thus, … Remark A2 follows from combining the implications of Remark A1 and Lemma A4. Lemma A5. Suppose that the mechanism satisfies Properties (1)-(4). Then, … Proof. The proof of Lemma A5 is by contradiction. Suppose that there is a mechanism … Let θ*_j ∈ (0, θ̄) be such that θ*_j := inf{θ_j : …}. Thus, … Hence we get that d_{j,…} … Recall that Equation 9 implies that … , where the final inequality follows from the definition of θ*_j. Combining the above two expressions gives … , where the second inequality holds because d_{j,…} …, and the third inequality holds because of declining demand. Thus, … Thus, if the mechanism satisfies Properties (1)-(4), then d_{i,1}(θ_j) is continuous in θ_j ∀ θ_j ∈ Θ. Strictly increasing: I prove that d_{i,1}(θ_j) is strictly increasing in θ_j ∀ θ_j ∈ Θ. Again, the proof is by contradiction. Incentive compatibility requires that d_{i,1}(θ_j) is weakly increasing; if it is not strictly increasing, there exists an interval (θ_j^l, θ_j^h) such that … , where the second equality and the inequality hold from Remark A1, and the final equality holds because we showed that d_{j,1} is continuous. Using the above expression we see that … In addition, if θ_j > θ_j^l, then … Recall that d_{j,1}(θ_i) is continuous and d_{j,2}(θ_i) ≤ θ_j^l ∀ θ_i < θ̂_i. As such, I combine this with Equation 13 to show that … However, this contradicts the fact that θ_j^h > θ_j^l. Thus, if the mechanism satisfies Properties (1)-(4), then … The first two implications of Proposition 2 relate to a bidder's incentive to truthfully report the intercept dimension of her type, given that the bidder truthfully reports her steepness. The final two points of Proposition 2 relate to a bidder's incentive to truthfully report her steepness dimension in a mechanism that satisfies Properties (1)-(4). Now that I study a bidder's incentive to report the second dimension of her type, I again write bidder i's multi-dimensional type as (θ_i, t_i), where (θ_i, t_i) ∈ Θ × {s, f}. The final two implications of Proposition 2 follow as corollaries of the first two implications proven above. 
Corollary A1. If the mechanism satisfies Properties (1)-(4), then … The taxation principle states that for all (θ_i, t_i), (θ_j, t_j) ∈ Θ × {s, f}, … Similarly, … Thus, bidder i wins at least one unit if (θ_i, t_i) and (θ_j, t_j) are such that θ_i > p_{i,1}((θ_j, t_j)), and only if θ_i ≥ p_{i,1}((θ_j, t_j)). This implies that bidder i's first unit cut-off equals p_{i,1}((θ_j, t_j)) ∀ (θ_j, t_j) ∈ Θ × {s, f}. Corollary A2 shows the final implication of Proposition 2. A2: An efficient mechanism with subsidies. In this section, I consider a setting where there are two homogeneous goods and N ≥ 3 bidders with single-dimensional types. I present a mechanism (referred to below as the subsidy mechanism) that satisfies (1) IR, (2) incentive compatibility, (3) efficiency, and (4) strong monotonicity (for the remainder of this subsection, Properties (1)-(4)). The example shows that we can derive a mechanism that satisfies Properties (1)-(4), but the mechanism violates the no-subsidies condition. Recall that in Subsection 3.2 we constructed a mechanism that satisfied (1) IR, (2) IC, (3) efficiency, and (4) no subsidies. As Proposition 1 implies, that mechanism violates strong monotonicity. To see one example of a strong monotonicity violation, consider the mechanism and suppose that θ̄ ≥ θ_1 > θ_2 > θ_3 ≥ θ_j ≥ 0 ∀ j ∈ {4, . . . , N}. In addition, suppose that d(θ_2) = θ_2 + ε/2 = θ_3 + 2ε, where ε > 0 is sufficiently small. Thus, we are considering an example where bidder 2's type is ε/2 below her first unit cut-off. Moreover, bidder 2's rival, bidder 3, has a type that is just below her type. Bidder 1 wins both units because bidder 2 wins no units when bidder 2's type is below her first unit cut-off. Thus q_1(θ_1, θ_{−1}) = 2. We construct the mechanism to be such that bidder 1 pays p_1(θ_{−1}) = θ_3 for her first unit and p_2(θ_{−1}) = θ_2 for her second unit. Thus, if θ_3 increases by a small amount ε > 0, then bidder 1 pays more to win her first unit and is therefore willing to pay less for her second unit because of positive wealth effects. Thus, the increase in bidder 3's type implies that bidder 2 now wins one unit. This is because bidder 2's willingness to pay for her first unit is greater than bidder 1's (now lower) willingness to pay for her second unit. Therefore, we see that the mechanism violates strong monotonicity, because bidder 2 wins strictly more units even though her rival bidder 3 increased her type. The violation of strong monotonicity occurs in this mechanism because there is interdependence between bidder 1's willingness to pay for her second unit and bidder 3's type. The increase in bidder 3's type causes a drop in bidder 1's willingness to pay for her second unit, but not in bidder 2's willingness to pay for her first unit. Thus, the two quantities can reverse in rank, and this reversal means that bidder 2 wins more units (she goes from winning zero units to winning one unit) when bidder 3 increases her type. In this section, I show that we can remedy the above violation of strong monotonicity by giving bidders upfront subsidies that depend on their rivals' types. The upfront subsidies are constructed to be such that a bidder's willingness to pay for her second unit conditional on winning her first unit depends only on her demand and her highest rival's demand. In the context of the above example, this would imply that the increase in bidder 3's demand would increase the subsidy given to bidder 1. The increase in bidder 3's demand increases the price bidder 1 pays to win her first unit. The increase in the price of bidder 1's first unit is offset by an increase in her subsidy. 
In other words, the subsidy is constructed to be such that bidder 1's demand for her second unit is unchanged by the change in bidder 3's demand. This avoids the violation of strong monotonicity described above. The subsidy mechanism is symmetric. The assignment rule is such that a bidder wins a unit only if her demand type is one of the top two demands of all bidders. The top two bidders are given the same assignment that they are given in the two-bidder mechanism, i.e., the two-bidder version of the mechanism that was defined in Subsection 3.2. Note that we show that the mechanism violates strong monotonicity if and only if N ≥ 3. Because the subsidy mechanism is symmetric, it is without loss of generality to present the mechanism from the perspective of bidder 1. Furthermore, it is without loss of generality to assume that θ̄ ≥ θ_2 ≥ θ_3 ≥ θ_j ≥ 0 ∀ j ∈ {4, . . . , N}. I let d_1 and d_2 be the first and second unit cut-offs in the two-bidder mechanism, where d_1, d_2 : Θ → Θ. In other words, d_1(θ_2) and d_2(θ_2) would be the first and second unit cut-offs for bidder 1 if she competed in an auction with only one rival, bidder 2. This implies that the assignment rule for bidder 1, q_1(θ_1, θ_{−1}), is defined case by case. If θ_3 > d_1(θ_2), then we find that p_0(θ_2, θ_3) is the p_0 that solves p_0 = d_1(θ_2) − b_1(θ_3, p_0), i.e., p_0 + b_1(θ_3, p_0) = d_1(θ_2). … Each of the above four expressions follows from the construction of the subsidy mechanism. Strong Monotonicity: The mechanism satisfies strong monotonicity because the construction is such that bidder 1's first and second unit cut-off types are weakly increasing in θ_2 and θ_3. Efficiency: I consider two cases. Case 1: Suppose that bidders 1 and 2 each win one unit. First, I show that there are no Pareto improving trades between bidders 1 and 2. Recall that the outcome of the subsidy mechanism is such that bidder 1 wins one unit and pays p_0(θ_{−1}) + p_1(θ_{−1}) = d_1(θ_2) in total. The proof of Theorem 1 outlined in Section 3 can be extended to an asymmetric bidder setting in a straightforward way. In the asymmetric setting, I can similarly start with an arbitrary cut-off rule d_i ∈ D_i for bidder i. The set D_i is the set of all weakly increasing mappings of the form d_i : … Given bidder 1's cut-off rule d_1 ∈ D_1, there is a corresponding cut-off rule for bidder 2, which I call d_2(d_1), and d_2(d_1) ∈ D_2 ∀ d_1 ∈ D_1. More formally, if a mechanism has cut-off rule d_1 ∈ D_1 for bidder 1, then we say that d_1 corresponds to an assignment rule that is such that … Note that it is without loss of generality to break ties in favor of bidder 1. Thus, bidder 1's cut-off rule d_1 ∈ D_1 determines the assignment rule for bidder 1, q_1(θ_1, θ_2) ∀ (θ_1, θ_2) ∈ Θ_1 × Θ_2. We then define bidder 2's assignment rule as the rule that assigns all remaining units to bidder 2, q_2(θ_1, θ_2) = k − q_1(θ_1, θ_2) ∀ (θ_1, θ_2) ∈ Θ_1 × Θ_2. I then let d_2(d_1) be the cut-off rule that corresponds to the allocation rule that corresponds to cut-off rule d_1 ∈ D_1. Note that d_1 ∈ D_1 ⟹ d_2(d_1) ∈ D_2. I can also analogously define the pricing rule p^i : Θ_j × D_i → R^{k+1} that implements a given cut-off rule for bidder i. In keeping with the argument given in Section 3, I let p^i_0 be such that p^i_0(θ_j, d_i) = 0 ∀ θ_j ∈ Θ_j, d_i ∈ D_i. 
The price bidder i pays to win her first unit equals the first unit cut-off … I define the price that bidder i pays for her m-th unit given her rival's type and the cut-off rule … I also define an analogous function f that states the difference between bidder 1's willingness to pay for her m-th unit and bidder 2's willingness to pay for her (k − m + 1)-st unit, f(m, θ_1, θ_2, d_1), for all m ∈ {1, . . . , k}, (θ_1, θ_2) ∈ Θ_1 × Θ_2, and d_1 ∈ D_1, where the definition involves the cumulative payment Σ_{a=1}^{k−m} p^2_a(θ_1, d_2(d_1)) made by bidder 2. An identical argument to the one presented in Lemma 2 shows that the function f(m, θ_i, θ_j, d_1) is (1) strictly decreasing in m ∀ m ∈ {1, . . . , k}, (2) strictly increasing in θ_1 ∀ θ_1 ∈ Θ_1, and (3) strictly decreasing in θ_2 ∀ θ_2 ∈ Θ_2. The remainder of the proof then proceeds identically. I use the function f to define my mapping T. The domain and range of T is now D_1 instead of D, as it was in the symmetric case. The fixed point of T corresponds to a mechanism that has an efficient assignment rule, and we can use Schauder's fixed point theorem to show that the mapping has a fixed point.
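To make the role of the upfront payment concrete, the following minimal sketch solves p_0 + b_1(θ_3, p_0) = d_1(θ_2) numerically for p_0. The parametric form of the bid function b_1 (exhibiting positive wealth effects) and the toy cut-off rule d_1 are assumptions for illustration only, not expressions from the paper; when θ_3 > d_1(θ_2), the solution comes out negative, which is consistent with the upfront subsidies described above.

```python
from scipy.optimize import brentq

def b1(theta3, paid, wealth_slope=0.3):
    """Assumed willingness to pay for the next unit: falls as the amount already paid rises
    (positive wealth effects). Purely illustrative functional form."""
    return max(theta3 - wealth_slope * paid, 0.0)

def upfront_price(theta2, theta3, d1):
    """Solve p0 + b1(theta3, p0) = d1(theta2) for the upfront payment p0."""
    target = d1(theta2)
    f = lambda p0: p0 + b1(theta3, p0) - target
    return brentq(f, -target, target)  # bracket chosen to cover this example

d1 = lambda theta2: 1.1 * theta2      # toy monotone two-bidder cut-off rule (assumed)

p0 = upfront_price(theta2=5.0, theta3=6.0, d1=d1)   # theta3 > d1(theta2) = 5.5
print(p0)                    # negative, i.e. an upfront subsidy (about -0.71 here)
print(p0 + b1(6.0, p0))      # equals d1(5.0) = 5.5 by construction
```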
2017-08-11T23:18:24.078Z
2019-05-08T00:00:00.000
{ "year": 2019, "sha1": "5e1b89dfd55cd80d947a02cacfcf6872d8044483", "oa_license": "CCBYNC", "oa_url": "https://www.econstor.eu/bitstream/10419/217103/1/3430-26197-1-PB.pdf", "oa_status": "GREEN", "pdf_src": "ElsevierPush", "pdf_hash": "288302a02f5ac9102bc210363d897ea98d2df831", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
59219872
pes2o/s2orc
v3-fos-license
Solitary Lesions on Bone Scintigraphy in Patients with Breast Cancer : King Abdulaziz University Medical Centre Experience Citation Batawil NA. Solitary lesions on bone scintigraphy in patients with breast cancer King Abdulaziz University Medical Centre experience. JKAU Med Sci 2015; 22(1): 9-16. DOI: 10.4197/ Med. 22.1.2 Abstract The discrepancy between bone scintigraphy and computed tomography scanning for solitary bony lesion in patients who have breast cancer is challenging to the referral physician. The purpose of this study was to evaluate the risk of malignancy in solitary lesions on bone scintigraphy in patients who had breast cancer diagnosed at King Abdulaziz University Medical Centre, and to compare the results between bone scintigraphy and computed tomography scanning. There were 89 patients who had a solitary bone lesion noted on bone scintigraphy and computed tomography performed within 3 months of bone scintigraphy. The solitary bone lesions were benign in 56 (63%) patients and malignant in 33 patients (37%). There were 15 (17%) malignant lesions in bone scan that had initial computed tomography fi ndings that were negative or equivocal for bone metastasis, but all these lesions had destructive changes on follow-up computed tomography scan. In summary, at this medical center the frequency of malignancy is high (37%) in solitary bone lesions in patients who have breast cancer, regardless of appearance of the lesion on an initial computed tomography scan. Prospective study with a larger group of patients is recommended. INTRODUCTION T he skeleton is the most common site for distant metastasis from breast cancer.The risk of skeletal metastasis as assessed by bone scintigraphy is 0.8% to 2.6% for stage 1 and 2 and 16.8% to 40% for stage 3 and 4 [1,2] .Staging of breast cancer is used to direct therapeutic decisions and provide important prognostic information.When bone metastasis is suspected, bone scintigraphy and computed tomography (CT) are commonly used for proper staging.Bone metastases usually are multiple, and solitary skeletal metastasis is observed in 21% to 41% of patients [3,4] .Radiographic correlation of the site of solitary uptake of radioisotope is necessary and enables precise anatomic and morphologic characterization. 
Bone scintigraphy is sensitive in detecting skeletal metastatic lesions, but the specifi city is low because benign conditions may cause increased radioisotope uptake including trauma, infection, and arthritis [5,6] .Chest and abdominal CT scans are usually performed to assess visceral and skeletal metastasis.The discrepancy in the metastatic workup for solitary bony lesion between negative CT scan and positive bone scan is challenging, particularly with the advanced CT resolution and low bone scan sensitivity.One study suggests that CT scan of chest and abdomen with the extension of the scan to the pelvis may increase CT scan sensitivity (98%) and specifi city (100%) for skeletal metastasis, and CT scanning may replace the bone scan during the initial evaluation [7] .However, in a systematic review of 16 studies that compared the accuracy of imaging tests for bone metastasis in women who had breast cancer, there was little evidence that CT, magnetic resonance imaging (MRI), or fl uorodeoxyglucose positron emission tomography (FDG-PET) scans may replace bone scan in the initial imaging for bone metastasis [8] .The MRI scans had higher sensitivity (97%) and specifi city (97%) than FDG-PET (sensitivity 83%, specifi city 94.5%) or bone scintigraphy (sensitivity 87%, specifi city 88%) in diagnosis of bone metastasis in patients who had breast cancer [9] .Nevertheless, MRI is not commonly used for initial or follow-up evaluation in patients who have breast cancer because of restricted availability, high cost, and the long duration of whole -body MRI scanning. The purpose of the present study was to evaluate the frequency of malignancy in solitary lesions on bone scintigraphy in patients who had breast cancer, and to compare the results between bone scintigraphy and CT scanning of the chest, abdomen and pelvis for evaluation of bone metastases.This study reviews the clinical history and follow-up CT scan and bone scintigraphy studies to determine the incidence of cancer in solitary bony lesions, the eff ect of clinical factors on the course of solitary metastatic bone lesions, and to compare the accuracy of bone scintigraphy and CT scan for imaging solitary bone metastasis in patients who had breast cancer. Subjects This study retrospectively reviews all bone scintigraphy studies performed for staging in patients who had the clinical diagnosis of breast cancer at the Nuclear Medicine Department, King Abdulaziz University Hospital, between January 2007 and December 2011.Patients were included in the study when they had: (1) A single area of radioisotope uptake on bone scintigraphy that had the appearance of a bone metastasis; (2) A CT scan performed within 3 months of the bone scintigraphy study that evaluated the site of abnormal radioisotope uptake; and (3) ≥ 1 follow-up bone scintigraphy study and CT scan.There were 720 bone scintigraphy studies performed among 434 patients who had breast cancer to screen for metastasis: No metastasis in 215 (50%) patients; solitary bone lesions in 132 (30%) patients; multiple skeletal metastases in 87 (20%) patients.Of the 132 patients who had solitary bony lesions, 43 patients (33%) were excluded (CT correlation not available for 23 patients; 20 patients lost to follow-up), leaving 89 patients included in the study who had solitary bone lesions on bone scintigraphy and an available initial CT scan performed within 3 months of bone scintigraphy. 
Evaluation Medical records were reviewed for age, tumor markers, follow-up, extraosseous metastasis, and progression to multiple skeletal metastasis. The incidence of malignancy in the solitary bone scintigraphic abnormality was assessed from the follow-up bone scintigraphy study and CT scan.Any available plain radiographs and MRI scans were reviewed for the detected solitary lesion. Bone Scintigraphy and Computed Tomography Whole body bone scintigraphy was performed at 3 to 4 hours after intravenous injection of technetium Tc 99 methyldiphosphonate (740-925 MBq [20-25 mCi]).Anterior and posterior total body, static lateral skull and oblique chest images were obtained.Selected spot views with or without single photon emission CT/CT of the abnormal site were acquired as needed.Images were acquired with a dual head gamma camera (Skylight, Philips, Best, The Netherlands) or hybrid single photon emission CT/CT dual head gamma camera (Symbia T6, Siemens, Munich, Germany).Images were acquired with a low-energy, high-resolution collimator (photopeak 140 kev, 20% symmetrical window, matrix size 256 × 1024). The CT scans were performed after oral and intravenous contrast administration (Somatom, Siemens 64 slices and Defi nition AS, Siemens 128 slices).Images were acquired with multislice technique (slice thickness, 5 mm) and reconstruction protocol for the chest (1.5 mm) and abdomen and pelvis (2 mm).The diagnostic criteria for solitary skeletal metastasis included a single area of increased uptake of radioisotope on bone scintigraphy and a CT abnormality at the same anatomic location that was either (1) a destructive lesion on the CT scan performed within 3 months of bone scintigraphy or (2) a normal or equivocal initial CT scan and a destructive lesion on the follow-up CT scan consistent with metastasis.The diagnostic criteria for a benign solitary lesion included a single area of increased uptake of radioisotope on bone scintigraphy and either (1) corresponding CT changes consistent with benign disease such as arthritis or trauma or (2) improved or stable uptake on follow-up bone scintigraphy and normal CT scan appearance throughout follow-up. Data Analysis Pearson chi-square (χ²) test and Fischer exact test were used to evaluate the relation between the incidence of solitary metastatic lesions and other clinical factors such as age, tumor markers, stage, and visceral and extra skeletal metastasis.The relation between a solitary area of increased uptake on bone scintigraphy and CT fi ndings (initial CT normal or abnormal) was evaluated.Statistical signifi cance was defi ned as P ≤ .05. Results The 89 patients who had abnormal solitary lesions on bone scintigraphy had a total 193 bone scintigraphy studies and 222 CT scans performed during followup (mean 3.2 yrs., range 1-5 yrs.).Most lesions were benign (Table 1).Malignant lesions were associated more frequently with elevated levels of tumor markers, visceral metastases, and progression to multiple skeletal metastases than benign lesions (Table 1).Mean age was similar between patients who had benign or malignant lesions (Table 1). 
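A minimal sketch of the stated statistical comparison (Pearson chi-square and Fisher exact test, significance at P ≤ .05) using scipy. The 2 × 2 counts below are illustrative placeholders, not the study data; the grouping (tumor-marker status versus lesion outcome) merely mirrors one of the comparisons reported in Table 1.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical cross-tab: rows = benign / malignant lesion, cols = normal / elevated markers
table = np.array([[40, 16],
                  [10, 23]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)   # useful when expected counts are small

print(f"chi-square = {chi2:.2f}, p = {p_chi2:.3f}")
print(f"Fisher exact p = {p_fisher:.3f}")
print("significant at P <= .05" if min(p_chi2, p_fisher) <= 0.05 else "not significant")
```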
Most solitary lesions were located in the spine, ribs, or sternum, and diff erences in frequency between benign and malignant lesions in these locations were not signifi cant (Table 1).Most metastatic lesions were located in the spine, ribs, or sternum (Table 2).38 (43%) patients had solitary lesions with equivocal fi ndings for metastasis in the initial CT scan.At follow-up 15 (39%) lesions were malignant and 23 (61%) lesions were benign (not signifi cant).The follow-up CT scan showed destructive changes consistent with metastasis in all 33 malignant lesions, with high overall sensitivity (100%) and specifi city (100%) for bone metastasis.In 15 (45%) malignant lesions the initial CT fi ndings were equivocal for bone metastasis, but follow-up CT scan showed destructive changes and bone scintigraphy showed persistent increased scintigraphic uptake (Figs. 1 and 2).Progression to multiple skeletal metastasis was observed in 15 (45%) patients, including 9 patients who had initial negative or equivocal CT and 6 patients who had a positive initial CT scan for metastasis (diff erence not signifi cant) (Table 2).There was no signifi cant diff erence in the frequency of elevated tumor marker levels or the frequency of visceral metastasis between patients who had positive or negative initial CT scan (Table 2).There were 31 patients (35%) who had single photon emission CT/CT and 11 (12%) patients who had MRI to analyze the solitary area of increased uptake on bone scintigraphy. Discussion The present study showed that the frequency of solitary skeletal metastasis in breast cancer patients was 37% (Table 1), which is comparable to data from one previous study (41%) [3] and higher than data from another study (21%) [4] .The diff erence in the frequency between studies may be attributed to the short followup period between bone scintigraphy and CT scanning in the present study (3-6 months).Clinical analysis alone is not adequate to diagnose skeletal metastasis because 30% to 50% of patients do not have symptoms of metastatic disease when they present with breast cancer [10] .The tumor markers are simple, commonly available, cost-eff ective tests for follow-up in patients who have breast cancer metastasis, but data vary about the correlation between tumor markers and bone metastasis [11,12] .In the present study, patients who had a malignant solitary lesion had elevated levels of tumor markers (cancer antigen 15 or carcinoembryonic antigen) over patients who had benign lesions (Table 1).Patients who initially had a malignant solitary lesion more frequently developed multiple skeletal metastases and visceral metastasis than patients who initially had a benign lesion (Table 1). Most bone metastases begin as intramedullary lesions, and most red marrow is located in the axial skeleton (vertebrae, pelvis, and proximal femurs).Breast cancer may metastasize preferentially to these regions because the spine is highly vascularized and contains 75% body bone marrow.Metastatic tumor cells may infi ltrate the vertebral body bone marrow without destroying the cortical bone [13] .The spine also may develop facet joint arthritis, fractures, and Solitary Lesions on Bone Scintigraphy in Patients with Breast Cancer: King Abdulaziz University Medical Centre Experience N.A. 
Batawil degenerative disk disease.In the present study 58% of vertebral lesions were benign and 42% of vertebral lesions were malignant (Table 1).The anatomic distribution of the solitary bone lesion throughout the skeleton was similar between benign and malignant lesions (Table 1).Metastatic vertebral lesions were observed in most malignant solitary lesions (Table 2), and spine metastases frequently originated from the lumbar spine (50%).One previous study of 289 solitary metastatic lesions in breast cancer patients showed that 32% of lesions were in the vertebrae [3] and another study showed that 50% of vertebral lesions were metastatic [10,14] . The CT scan is a powerful method to delineate bone metastasis with high sensitivity and specifi city [7,8] .With a high CT resolution, it was suggested that routine bone scans are not required in staging patients with advanced breast cancer, as the bone scan did not identify any metastatic lesion that was missed by CT scan [7] .However, a CT scan may not show early bone metastasis or may show equivocal changes that can be missed by the interpreter.In the present study 38 patients (43%) presented with initially negative or equivocal CT correlation, corresponding with the solitary scintigraphic fi nding; 15 of those patients (39%) had metastatic disease in follow-up bone scintigraphy and CT scans.Both CT-negative and CT-positive groups had destructive changes consistent with metastasis in the follow-up CT scan and had comparable frequency of visceral metastasis and conversion from single metastasis to multiple skeletal metastasis (Table 2).Bone metastasis may start as an intramedullary lesion, and the CT scan may be normal in patients who have early bone metastasis and may not detect structural changes [13] .Early detection of skeletal metastasis may enable early treatment and possibly longer survival time, but the early CT scan may be negative.Therefore in patients who have early metastasis, more aggressive therapy may be recommended solely on the basis of positive scintigraphy [10] . Limitations of the present study include the retrospective design and small number of patients that may have limited statistical power.Nevertheless, this study confi rms that the frequency of metastasis at this medical center is high in patients who have breast cancer and a solitary bone lesion detected with bone scintigraphy (37%), and the spine is the most common site of metastases (61%).Bone metastasis with initial negative or equivocal CT scan and positive scintigraphy occurred in 15 of 89 patients (17%).A prospective study with a larger group of patients is recommended. Figure 1 . Figure 1.Bone scintigraphy (BS) in a patient who had breast cancer for initial staging.(a) Base line BS demonstrated a solitary uptake at the left 9th rib posteriorly (arrow).(b) Follow up BS at 6 months demonstrated stable skeletal metastasis at the left 9th rib (arrow).(c) Follow up BS at 22 months showed multiple skeletal metastasis at axial and appendicular skeleton. Figure 2 . Figure 2. 
Correlative computed tomography of the chest in a patient who had breast cancer (Fig. 1). (a) The initial computed tomography scan for staging (8 weeks after BS) had a faint cortical irregularity at the left 9th rib that was interpreted as likely traumatic. (b) Follow-up computed tomography scan at 6 months showed stable appearance of the left 9th rib. (c, d) Follow-up computed tomography scan at 24 months showed a stable appearance of the left 9th rib (c) and a lytic skeletal metastasis at the lumbar spine (d). Table 1. Clinical characteristics of patients who had breast cancer and solitary uptake of radioisotope on bone scintigraphy*. Table 2. Clinical and computed tomography characteristics of patients who had breast cancer and solitary malignant bone lesions on bone scintigraphy*.
2019-01-25T15:59:22.506Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "3224173b41fad4fe3203066e803ee482aeaf50a8", "oa_license": "CCBYNC", "oa_url": "http://jkaumedsci.org.sa/index.php/jkaumedsci/article/download/487/487", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3224173b41fad4fe3203066e803ee482aeaf50a8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
130189556
pes2o/s2orc
v3-fos-license
Bearing capacity of embedded piles in rocks considering their contribution to friction This article has the objective to show a study of different existing theories that consider the friction contribution of embedded piles in rock. It initially summarizes the adopted criteria for such theories and the coefficient ranges considered for the application of their basic expressions. The results of these theoretical analyses were applied in the solution of the foundations of a hotel, and its marine, built on the beach in the city of Varadero, Cuba. The study area presented a high geologic and engineering complexity, with highly variable stratification characteristics together with distinct soil and rock properties, where the presence of a calcareous stratum stands out in the deposit. The calcareous layer is found in a variable depth within the strata and has a quality ranging from very poor to good, where all the pile tips were founded. In order to obtain the bearing capacities from the designed piles, it was necessary to take into account their friction contribution within the rock, which was done by validating the existing theories through the execution of in situ pile load tests, combined with the use of theoretical models. This exercise allowed the establishment of practical coefficient values that were required by the theories in such particular site conditions. It was finally possible to yield design solutions for the deep foundations of this case history, which comprised over 2000 driven piles.. Introduction As known classical theories of Soil and Foundation Mechanics state, when a pile manages to become embedded in rock at least one time its diameter, that pile will work with support only at the tip and its friction contribution will not be taken into account, on the basis that the displacements at the tip of the pile will be negligible and thus pile-rock friction will not be generated.The above statement is true when the piles rest on high quality rocks.However, for cases where the piles are supported by low quality rocks, where they can be embedded in rock several times their diameter, the above statement is not true, and it is necessary to take the friction contribution within the rock into account when determining the ultimate bearing capacity of the pile. In international literature, there are several works studying this issue, including among the most recent: Serrano (2008), Perez Carballo (2010) and Olmo (2011).All these studies agree that there are two groups of theories to establish ultimate resistance to friction in piles in rock τ ult • Theories that consider the τ ult as a linear function of the compressive strength of rock R c. • Theories that consider the τ ult as a quadratic function of the compressive strength of rock R c. 
In this research, all these theories were studied, establishing their basic expressions and the intervals of the coefficients proposed therein. This study was necessary for the solution of the pile foundations of a hotel on the beach in Varadero, Cuba. The engineering geological conditions of the area were very complex and variable, together with the existing soil and rock properties. This can be simplified into a stratification consisting of a series of soft soils, with variable thicknesses of fill, peat, silt and other very loose soils, none of which contribute to the bearing capacity of the piles, and a stratum of calcarenite, which appears at different depths and with very variable quality, from good to very poor. There, the piles achieve embedded lengths (EL) from only 1.5 times their diameter to more than 14 times their diameter, depending on the quality of the rock. To determine the bearing capacity of the piles, determinations from existing models, including the analyzed theories that consider the friction contribution within the rock where necessary, were combined with in situ pile load tests, which served to calibrate the models and to set the interval within which the coefficients used by the theories could vary under the conditions of the case study. A very good engineering calibration of the models was achieved, and from it the resistant capacities of the piles, for various conditions of support, were established. This allowed for a significant reduction in the length of the piles, resulting in cost savings for construction. Characterization of the engineering geology of the study area. In order to characterize the engineering geological conditions of the study area, which was necessary for the determination of the bearing capacity of the piles, several in situ studies were carried out by the companies GeoCuba (2008) and ENIA (2010, 2011). This research included surveys, description of strata, geophysical studies, and laboratory studies in order to obtain the main physical and mechanical properties of the soils and rocks. The following types of soil and rock, of varying thickness, position, and engineering properties, were found in the general zone. The tested piles were always embedded, with different embedded lengths EL, in calcarenite of one of the above qualities. The result was that, when a pile was embedded in poor or very poor quality calcarenite, it was necessary to consider the contribution to friction, whereas when it was embedded in good quality calcarenite, only the contribution at the tip was taken into consideration. Analysis of different theories to take into account the contribution to friction of piles embedded in rock. As was outlined in the introduction of the article, in the international literature there are several works studying this issue, including among the most recent: Serrano (2008), Perez Carballo (2010) and Olmo (2011). All these studies agree that there are two groups of theories to establish the ultimate resistance to friction of piles in rock, τ_ult: theories that consider τ_ult as a linear function of the compressive strength of the rock R_c, and theories that consider τ_ult as a quadratic function of R_c. The general expression posed by the linear theories to obtain the ultimate friction resistance of a pile in rock is of the form τ_ult = α·R_c. According to Pérez Carballo (2010), the expressions proposed by the main authors, among them Hooley and Lefroy, differ in the value adopted for the coefficient α. 
For the theories that consider τ_ult as a quadratic function of the compressive strength of the rock R_c, the general expression to obtain the ultimate friction resistance of a pile in rock is defined by Formula 7. All the authors of these studies agree that k = 0.5, so what varies in each one is the range of values of α that is considered. According to Pérez Carballo (2010), the expressions proposed by the main authors, among them Hooley and Lefroy, can all be expressed by Formula 7, considering the corresponding variation intervals for the variables k and α. Determination of the bearing capacity of the piles. The procedure for determining the bearing capacity of the piles, given the complexity of the study area, is set forth in Figure 2. A very general overview of the first 3 steps of this methodology has already been given in Sections 1 and 2 of this article. While we will not analyze those related to steps 4 and 5, due to the length of the article, we note that 36 controlled pile driving tests were initially conducted, for various conditions of pile length and stratification characteristics. From the results of these tests, it was possible to obtain an initial approximation of the bearing capacity of the piles, with the application of different formulas of the dynamic methods. Together with the initial geological engineering characterization of the area, this made it possible to determine the piles that needed to be tested in situ. Conducting in situ load tests on chosen piles. From the analysis of the results obtained from the first 5 steps of the methodology defined in Figure 2, it was determined that 8 in situ load tests were needed, for piles of different lengths embedded with different embedded lengths EL in calcarenite of different qualities. In this article we will discuss the results of 5 load tests: one on a pile embedded in very poor quality calcarenite, two on piles embedded in poor quality calcarenite, and two on piles embedded in good quality calcarenite. The load tests were performed on individual piles, applying three load cells of 1000 kN capacity each and placing a reaction pile above them, with a reaction always greater than the load applied to the pile. Figure 3 shows two photos of the characteristics of the load tests. The first of these load tests was carried out on Pile 026, 11 m long, 9 m of which were driven, in a stratification whose Zone 1, composed of soft ground, had a thickness of 4 m, with an embedment of 5 m in the poor quality calcarenite. Figure 4 shows the load-deformation curve obtained from the load test. As shown in Figure 4, the pile failed at an ultimate load of 1305 kN and with a final deformation of 41 mm. After unloading, a residual deformation of 25.8 mm remained, corroborating the failure that occurred. The second load test was carried out on Pile 006, 9 m long, 7.2 m of which were driven, in a stratification whose Zone 1 had 6 m, embedding the pile 1.2 m in good quality rock. Figure 5 shows the load-deformation curve obtained from the load test. 
As shown in Figure 5, the pile was able to carry a load of up to 1902 kN, but up to that load it had an almost linear behavior, with a final deformation of less than 10 mm. After unloading, it had a residual deformation of only 2 mm, which indicates that it did not reach failure. To obtain the ultimate failure load of Pile 006, it was necessary to apply the graphic-analytical method defined in the Proposed International Standard for the design of pile foundations (Ibañez, Quevedo 2011). The result was an estimated ultimate load capacity of 2300 kN, as shown in Figure 6. Determination of the bearing capacity of the piles applying theoretical models. To determine the bearing capacity of the piles, the expressions defined in the Proposed International Standard for the design of pile foundations (Ibañez, Quevedo 2011) are utilized. As what is being determined is the ultimate vertical bearing capacity of the pile, safety factors equal to one are used in all cases. Therefore, the ultimate bearing capacity of the pile is defined by Formula 14 as the sum of the ultimate vertical load capacity at the tip and the ultimate vertical load capacity for friction. For the contribution at the tip, Formula 15 is applied, considering this contribution only when the pile is embedded in good quality rock. In the case that it is embedded in poor quality or very poor quality rock, the friction contribution within the rock is considered. The variables involved are: the area of the pile; the average simple compressive strength of the rock R_c; a factor that takes the quality of the rock into account, determined according to the RQD; and a factor that takes the embedded length EL of the pile in the rock into account, determined by Formula 16 as a function of the pile diameter D. In the case of the friction contribution in the rock (Formula 17), everything analyzed in Section 3 of this article is included when obtaining τ_ult, together with the perimeter of the pile. Applying these expressions and determining the deformations of the pile foundations with the classic expressions over the same load intervals as in the load tests, the theoretical models can be calibrated against the values of the load tests. Figures 7 and 8 show the results of such adjustments for the load tests on Pile 026 and Pile 006, respectively. As shown in Figures 7 and 8, the adjustments obtained between the theoretical models and the load tests were very good from an engineering standpoint. Table 1 shows the numerical results of adjusting the values of α for both methods, the one considering the linear relationship with R_c and the one considering the quadratic relationship, for the determination of the ultimate friction capacity. Table 2 shows the numerical results obtained in the determination of the contribution at the tip for all of the analyzed piles. Table 3 shows the numerical results obtained in determining the ultimate bearing capacity by the theoretical models, considering the linear and quadratic methods when determining the friction contribution, together with the values of the ultimate bearing capacity obtained from the load tests. Table 3 shows the good calibration achieved between the values obtained from the theoretical models and those obtained from the load tests. In the case of piles 010, 026, and 037, where the friction contribution in the rock was considered, it was not possible to establish which method is best suited to the analyzed case, the one that considers the linear function or the one that considers the quadratic function. However, in general, both are very well suited to the cases. 
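As a rough illustration of how the calibrated coefficients translate into a pile capacity, the sketch below combines the two τ_ult families (linear α·R_c and quadratic α·√R_c) with the tip-plus-friction decomposition described above. Formulas 14-17 are not reproduced in the paper text, so the tip term, the RQD quality factor, the embedment factor and all numerical inputs (pile diameter, R_c, socket length, α values) are assumptions for illustration only, not the authors' expressions.

```python
import math

def tau_ult(Rc_MPa, alpha, law="linear"):
    """Ultimate unit shaft friction in rock (MPa): alpha*Rc or alpha*sqrt(Rc) (k = 0.5)."""
    return alpha * Rc_MPa if law == "linear" else alpha * math.sqrt(Rc_MPa)

def pile_capacity_kN(diameter_m, socket_m, Rc_MPa, alpha, law="linear",
                     tip_counts=False, quality_factor=0.5, embed_factor=1.0):
    """Simplified ultimate capacity = optional tip term + shaft friction in the rock socket.
    quality_factor (RQD-based) and embed_factor are placeholders for Formulas 15-16."""
    area = math.pi * diameter_m ** 2 / 4.0
    perimeter = math.pi * diameter_m
    tip_kN = quality_factor * embed_factor * Rc_MPa * 1e3 * area if tip_counts else 0.0
    shaft_kN = tau_ult(Rc_MPa, alpha, law) * 1e3 * perimeter * socket_m  # 1 MPa*m^2 = 1000 kN
    return tip_kN + shaft_kN

# Example: assumed 0.45 m diameter pile, 5 m socket in poor-quality calcarenite (Rc ~ 2 MPa),
# tip ignored for poor rock as the text prescribes; alpha taken inside the calibrated ranges.
print(pile_capacity_kN(0.45, 5.0, 2.0, alpha=0.10, law="linear"))      # ~1.4 MN
print(pile_capacity_kN(0.45, 5.0, 2.0, alpha=0.12, law="quadratic"))   # ~1.2 MN
```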
Table 4 shows the intervals of values of α obtained in the adjustments made when determining the ultimate friction capacity for both methods, as well as the intervals obtained from the analysis of the international literature. As shown there, the α intervals obtained in the investigation lie within those mentioned in the literature, adjusting to their lower limits. Definition of the bearing capacity and pile length. Following the methodology presented in Figure 2 and performing analyses similar to those set forth in this article for all the pile conditions in the building, combining the results of the application of the calibrated theoretical models, the dynamic formulas applied to the results of the pile driving tests, and the results obtained from all the load tests that were carried out, and applying the appropriate safety coefficients, it was possible to determine the bearing capacity of the piles for all the defined zones, the pile length, and the number of blows in the last foot that must be secured during pile driving to ensure said bearing capacity. Figure 9 shows a plan of the building and the different zones defined as typical for the piles from their geological engineering conditions, the bearing capacity, and the length of the piles. Similarly, Table 5 summarizes the values of the bearing capacity, the length of the piles, and the number of blows in the last foot that must be secured during pile driving for each of the established zones. The above results brought rationality to the solution of the pile foundations of the analyzed building, with consequent cost savings. Pile driving allowed verification that the obtained results were valid, with a very low loss of piles (less than 3%) and with a behavior in each zone as established in the investigation. Conclusions. A general methodology is established to address the solution of pile foundations in highly variable and complex geological engineering conditions, using a combination of a correct characterization of the different existing strata and of the physical and mechanical properties of the soils and rocks that compose them. Using theoretical methods to determine the ultimate bearing capacity, including those that consider the friction contribution in the rock, together with dynamic formulas applied to the pile driving test results and in situ load tests, the ultimate bearing capacity can be brought as close to reality as possible. • Through studying the literature on methods for taking into account the friction contribution of piles embedded in rock, it was concluded that there are two groups of methods. The first considers a linear relationship between τ_ult and R_c through the coefficient α, and this value, according to the different authors consulted, takes values in the range 0.05 ≤ α ≤ 0.4. The other considers a quadratic relationship between τ_ult and R_c through the coefficient α, and this value, according to the different authors consulted, takes values in the range 0.1 ≤ α ≤ 0.4. • From the results of the load tests, it was possible to calibrate the theories and coefficients used to consider the friction contribution in the rock, so that for the methods that consider the linear relationship between τ_ult and R_c the interval of variation of α was 0.09 ≤ α ≤ 0.12, while for the methods that consider the quadratic relationship the interval was 0.1 ≤ α ≤ 0.15, both being within the values defined in the literature and always tending toward their lower values. 
• From determining the ultimate friction capacity, including therein the friction contribution in the rock by the analyzed methods, and the ultimate tip capacity by classic methods, it was possible to obtain a satisfactory engineering adjustment between the load-deformation curves obtained from the load tests and those proposed by the theoretical methods, demonstrating the validity of the latter. • With the application of the general methodology proposed in the solution of the pile foundations of the analyzed building, the following was achieved: rational results for the foundation and a proposed zoning depending on the geological engineering characteristics, allowing the definition of the bearing capacity, the length of the piles, and the minimum number of blows in the last foot that must be secured during pile driving for each of the zones. All of this was corroborated satisfactorily during pile driving. Figure 2. Proposed methodology to determine the bearing capacity of piles. Figure 7. Results of the load tests and modeling of Pile 026. Figure 8. Results of the load test and modeling of Pile 006. Table 1. Values of α and of the ultimate friction capacity obtained for all the analyzed piles. Table 2. Values of the ultimate tip capacity obtained for all the analyzed piles. Table 5. Values of the bearing capacity, length of piles and the number of blows in the last foot that must be secured during the pile driving of each of the established zones.
2018-12-12T06:27:13.745Z
2015-12-30T00:00:00.000
{ "year": 2015, "sha1": "386270bc59fe6b52c422981b6d5cdcc87e062a91", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4067/s0718-50732015000300004", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "aea9e19627f337099498ddcf924770284abdbf81", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
119196772
pes2o/s2orc
v3-fos-license
Pion-nucleon scattering in the Roper channel from lattice QCD We present a lattice QCD study of $N\pi$ scattering in the positive-parity nucleon channel, where the puzzling Roper resonance $N^*(1440)$ resides in experiment. The study is based on the PACS-CS ensemble of gauge configurations with $N_f=2+1$ Wilson-clover dynamical fermions, $m_\pi \simeq 156~$MeV and $L\simeq 2.9~$fm. In addition to a number of $qqq$ interpolating fields, we implement operators for $N\pi$ in $p$-wave and $N\sigma$ in $s$-wave. In the center-of-momentum frame we find three eigenstates below 1.65 GeV. They are dominated by $N(0)$, $N(0)\pi(0)\pi(0)$ (mixed with $N(0)\sigma(0)$) and $N(p)\pi(-p)$ with $p\simeq 2\pi/L$, where momenta are given in parentheses. This is the first simulation where the expected multi-hadron states are found in this channel. The experimental $N\pi$ phase-shift would -- in the approximation of purely elastic $N\pi$ scattering -- imply an additional eigenstate near the Roper mass $m_R\simeq 1.43~$GeV for our lattice size. We do not observe any such additional eigenstate, which indicates that $N\pi$ elastic scattering alone does not render a low-lying Roper. Coupling with other channels, most notably with $N\pi\pi$, seems to be important for generating the Roper resonance, reinforcing the notion that this state could be a dynamically generated resonance. Our results are in line with most of previous lattice studies based just on $qqq$ interpolators, that did not find a Roper eigenstate below $1.65~$GeV. The study of the coupled-channel scattering including a three-particle decay $N\pi\pi$ remains a challenge. I. INTRODUCTION Pion-nucleon scattering in the J P = 1/2 + channel captures the information on the excitations of the nucleon (N = p, n). The N π scattering in p-wave is elastic only below the inelastic threshold m N + 2m π for N ππ. The main feature in this channel at low energies is the socalled Roper resonance with m R = (1.41 − 1.45) GeV and Γ R = (0.25 − 0.45) GeV [1] that was first introduced by L.D. Roper [2] to describe the experimental N π scattering. The resonance decays to N π in p-wave with a branching ratio Br 55 − 75% and to N ππ with Br 30 − 40% (including N (ππ) I=0 s−wave , ∆π and N ρ), while isospin-breaking and electromagnetic decays lead to a Br well below one percent. Phenomenological approaches that considered the N * (1440) resonance as dominantly qqq state, for example quark models [3][4][5], gave a mass that is too high and a width that is too small in comparison to experiment. This led to several suggestions on its nature and a large number of phenomenological studies. One possibility is a dynamically generated Roper resonance where the coupled-channel scattering N π/N σ/∆π de-scribes the N π experimental scattering data without any excited qqq core [6][7][8][9]. The scenarios with significant qqqqq Fock components [10,11] and hybrids qqqG with gluon-excitations [12,13] were also explored. The excited qqq core, where the interaction of quarks is supplemented by the pion exchange, brings the mass closer to experiment [14,15]. A similar effect is found as a result of some other mechanisms that accompany the qqq core, for example a vibrating πσ contribution [16] or coupling to all allowed channels [17]. These models are not directly based on QCD, while the effective field theories contain a large number of low-energy-constants that need to be determined by other means. 
The rigorous Roy-Steiner approach is based on phase shift data and dispersion relations implementing unitarity, analyticity and crossing symmetry; it leads to N π scattering amplitudes at energies E ≤ 1.38 GeV that do not cover the whole region of the Roper resonance [18]. The implications of the present simulation on various scenarios are discussed in Section IV. All previous lattice QCD simulations, except for [19], addressed excited states in this channel using three-quark operators; this has conceptual issues for a strongly decaying resonance where coupling to multi-hadron states is essential. In principle multi-hadron eigenstates can also arise from the qqq interpolators in a dynamical lattice QCD simulation but in practical calculations the coupling to qqq was too weak for an effect. Another assumption of the simple operator approach is that the energy of the first excited eigenstate is identified with the mass of N * (1440), which is a drastic approximation for a wide resonance. The more rigorous Lüscher approach [20,21] assuming elastic scattering predicts an eigenstate in the energy region within the resonance width (see Fig. 7). The masses of the Roper obtained in the recent dynamical lattice simulations [22][23][24][25][26][27][28] using the qqq approach are summarized in [29]. Extrapolating these to physical quark masses, where m u/d m phys u/d , the Roper mass was found above 1.65 GeV by all dynamical studies except [22], so most of the studies disfavour a low-lying Roper qqq core. The only dynamical study that observes a mass around 1.4 GeV was done by the χQCD collaboration [22]; it was based on the fermions with good chiral properties (domain-wall sea quarks and overlap valence quarks) and employed a Sequential Empirical Bayesian (SEB) method to extract eigenenergies from a single correlator. It is not yet finally settled [22,[29][30][31] whether the discrepancy of [22] with other results is related to the chiral properties of quarks, use of SEB or poor variety of interpolator spatial-widths in some studies 1 . Linear combinations of operators with different spatial widths allow to form the radially-excited eigenstate with a node in the radial wave function, which was found at r 0.8 fm in [22,28,32]. An earlier quenched simulation [33] based on qqq interpolators used overlap fermions and the SEB method to extract eigenenergies. The authors find a crossover between first excited 1/2 + state and ground 1/2 − state as a function of the quark mass, approaching the experimental situation. A more recent quenched calculation [34] using FLIC fermions with improved chiral properties and variational approach also reported a similar observation. In continuum the N * (1440) is not an asymptotic state but a strongly decaying resonance that manifests itself in the continuum of N π and N ππ states. The spectrum of those states becomes discrete on the finite lattice of size L. For non-interacting N and π the periodic boundary conditions in space constrain the momenta to multiples of 2π/L. The interactions modify the energies of these discrete multi-hadron states and possibly render additional eigenstates. The multi-hadron states have never been established in the previous lattice simulations of the Roper channel, although they should inevitably appear as eigenstates in dynamical lattice QCD. In addition to being important representatives of the N π and N ππ continuum, their energies and number in principle provide phase shifts for the scattering of nucleons and pions. 
These, in turn, provide information on the Roper resonance that resides in this channel. In the approximation when N π is decoupled from other channels the N π phase shift and the 1 The χQCD collaboration [30] recently verified that SEB and variational approach with wide smeared sources (r 0.8 fm) lead to compatible E 1.9 GeV for Wilson-clover fermions and mπ 400 MeV. scattering matrix are directly related to eigenenergies via the Lüscher method [20,21]. The determination of the scattering matrix for coupled two-hadron channels has been proposed in [35,36] and was recently extracted from a lattice QCD simulation [37,38] for other cases. The presence of the three-particle decay mode N ππ in the Roper channel, however, poses a significant challenge to the rigorous treatment, as the scattering matrix for three-hadron decay has not been extracted from the lattice yet, although impressive progress on the analytic side has been made [39]. The purpose of the present paper is to determine the complete discrete spectrum for the interacting system with J P = 1/2 + , including multi-hadron eigenstates. Zero total momentum is considered since parity is a good quantum number in this case. In addition to qqq interpolating fields, we incorporate for the first time N π in p-wave in order to address their scattering. The N σ in s-wave is also employed to account for N (ππ) I=0 s−wave . We aim at the energy region below 1.65 GeV, where the Roper resonance is observed in experiment. In absence of meson-meson and meson-baryon interactions one expects eigenstates dominated by N (0), N (0)π(0)π(0), N (0)σ(0) and N (1)π(−1), in our N f = 2 + 1 dynamical simulation for m π 156 MeV and L 2.9 fm. The momenta in units of 2π/L are given in parenthesis. N and π in N π need at least momentum 2π/L to form the p-wave. The PACS-CS configurations [40] have favourable parameters since the non-interacting energy m 2 π + (2π/L) 2 + m 2 N + (2π/L) 2 1.5 GeV of N (1)π(−1) falls in the Roper region. The number of observed eigenstates and their energies will lead to certain implications concerning the Roper resonance. In the approximation of elastic N π scattering, decoupled from N ππ, the experimentally measured N π phase shift predicts four eigenstates below 1.65 GeV, as argued in Section IV A and Figure 7. Further analytic guidance for this channel was recently presented in [8], where the expected discrete lattice spectrum (for our L and m π ) was calculated using a Hamiltonian Effective Field Theory (HEFT) approach for three hypotheses concerning the Roper state (Fig. 8). All scenarios involve channels N π/N σ/∆π (assuming stable σ and ∆) and are apt to reproduce the experimental N π phase shifts. The scenario which involves also a bare Roper qqq core predicts four eigenstates in the region E < 1.7 GeV of our interest, while the scenario without Roper qqq core predicts three eigenstates [8]. 2 The Roper resonance in the second case is dynamically generated purely from the N π/N σ/∆π channels, possibly accompanied by the ground state nucleon qqq core. As already mentioned, our aim is to establish the expected low-lying multi-particle states in the positiveparity nucleon channel. This has been already accomplished in the negative-parity channel, where N π scattering in s-wave was simulated in [41]. An exploratory study [42] was done in a moving frame, where both parities contribute to the same irreducible representation. 
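For orientation, the quoted non-interacting energies can be reproduced directly from the parameters given in the text (L ≈ 2.9 fm, m_π ≈ 156 MeV); the nucleon mass below is set to an approximate physical value as a placeholder for the value measured on the ensemble.

```python
import math

HBARC = 197.327   # MeV fm
L_fm = 2.9        # spatial extent quoted in the text
m_pi = 156.0      # MeV, quoted
m_N = 940.0       # MeV, placeholder for the measured lattice nucleon mass

def E(m, n):
    """Relativistic energy of a hadron carrying momentum n*(2*pi/L)."""
    p = n * 2.0 * math.pi / L_fm * HBARC   # MeV
    return math.sqrt(m ** 2 + p ** 2)

# non-interacting levels expected below ~1.65 GeV in the center-of-momentum frame
print("N(0)pi(0)pi(0):", round(E(m_N, 0) + 2 * E(m_pi, 0)), "MeV")
print("N(1)pi(-1)    :", round(E(m_N, 1) + E(m_pi, 1)), "MeV")   # ~1.5 GeV, as quoted
```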
The only lattice simulation in the positive-parity channel that included (local) qqqqq interpolators in addition to qqq was recently presented in [19]. No energy levels were found between m N and 2 GeV for m π 411 MeV. The levels related to N (1)π(−1) and N (0)σ(0) were not observed, although they are expected below 2 GeV according to [8]. This is possibly due to the local nature of the employed qqqqq interpolators [19], which seem to couple too weakly to multi-hadron states in practice. This paper is organized as follows. Section II presents the ensemble, methodology, interpolators and other technical details to determine the eigenenergies. The resulting eigenenergies and overlaps are presented in Section III, together with a discussion on the extraction of the N π phase shift. The physics implications are drawn in Section IV and an outlook is given in the conclusions. A. Gauge configurations We perform a dynamical calculation on 197 gauge configurations generated by the PACS-CS collaboration with N f = 2 + 1, lattice spacing a = 0.0907(13) fm, lattice extension V = 32 3 ×64, physical volume L 3 (2.9 fm) 3 and κ u/d = 0.13781 [40]. The quark masses, m u = m d , are nearly physical and correspond to m π = 156(7)(2) MeV as estimated by PACS-CS [40]. Our own estimate leads to somewhat larger m π as detailed below (we still refer to it as an ensemble with m π 156 MeV). The quarks are non-perturbatively improved Wilson-clover fermions, which do not respect exact chiral symmetry (i.e., the Ginsparg-Wilson relation [43]) at non-zero lattice spacing a. Most of the previous simulations of the Roper channel also employed Wilson-clover fermions, for example [23,24,[26][27][28]. Closer inspection of this ensemble reveals that there are a few configurations responsible for a strong fluctuation of the pion mass, which is listed in Table I. Removing one or four of the "bad" configurations changes the pion mass by more than two standard deviations. The configuration-set "all" indicates the full set of 197 gauge configurations, while "all-1" ("all-4") indicate a subset with 196 (193) configurations where one (four) configuration(s) leading to the strong fluctuations in m π are removed 3 . I. The single hadron masses obtained for the full ("all") set of configurations and for the sets with one ("all-1") or four ("all-4") configurations omitted. Interpolators, fit type and fit range are like in Table II. As discussed in the text our final results are based on set "all-4". We tested these three configuration-sets for a variety of hadron energies, and we find that only m π varies outside the statistical error, while variations of masses for other hadrons (mesons with light and/or heavy quarks and nucleon) are smaller than the statistical errors. This also applies for the nucleon mass listed in Table I. The energies of the pions and other hadrons with non-zero momentum also do not vary significantly with this choice. The Roper resonance is known to be challenging as far as statistical errors are concerned, especially for nearly physical quark masses. The error on the masses and energies is somewhat bigger for the full set than on the reduced sets in some cases, for example m π and m N in Table I. Throughout this paper, we will present results for the reduced configuration-set "all-4", unless specified differently. The final spectrum was studied for all three configuration-sets, and we arrive at the same conclusions for all of them. B. 
B. Determining eigenenergies. We aim to determine the eigenenergies in the Roper channel, and we will also need the energies of a single π or N. The lattice computation of eigenenergies E_n proceeds by calculating the correlation matrix C_ij(t) = ⟨Ω|O_i(t) Ō_j(0)|Ω⟩ = Σ_n Z^n_i (Z^n_j)* e^{−E_n t} (1) for a set of interpolating fields O_i (Ō_i) that annihilate (create) the physical system of interest, with overlaps Z^n_i = ⟨Ω|O_i|n⟩. All our results are averaged over all source time slices t_src = 1, .., 64. The E_n and Z^n_j are extracted from C(t) via the generalized eigenvalue method (GEVP) [44–47], and we apply t_0 = 2 in all cases except for the single-pion correlation, where we choose t_0 = 3. The large-time behavior of the eigenvalues λ^(n)(t) (2) provides E_n, where the specific fit forms will be mentioned case by case. The corresponding eigenvectors provide the overlap factors in the plateau region. For fitting E_n from λ^(n)(t) we usually employ a sum of two exponentials, where the second exponential helps to parameterize the residual contamination from higher-energy states at small t values. For the single-pion ground state we have a large range of t-values to fit, and there we combine cosh[E_n(t − N_T/2)] with such an exponential. Correlated fits are used throughout. Single-elimination jackknife is used for the statistical analysis. C. Quark smearing width and distillation. The interpolating fields are built from the quark fields, and we employ these with two smearing widths, illustrated in Fig. 1. Linear combinations of operators with different smearing widths provide more freedom to form eigenstates with nodes in the radial wave function. This is favourable for the Roper resonance [22,28,32], which is a radial excitation within a quark model. Quark smearing is implemented using the so-called distillation method [48]. The method is versatile and enables us to compute all necessary Wick contractions, including terms with quark annihilation. This is made possible by pre-calculating the quark propagation from specific quark sources. The sources are the lowest k = 1, .., N_v eigenvectors v^k_{xc} of the spatial lattice Laplacian, where c is the color index. Smeared quarks are obtained as q_c(x) ≡ Σ_{x′c′} □_{xc,x′c′} q^{point}_{c′}(x′) [48], with the smearing operator □ built from the Laplacian eigenvectors, □ = Σ_{k=1}^{N_v} v^k v^{k†}. In previous work we used stochastic distillation [49] on this ensemble, which is less costly but renders noisier results. For the present project we implemented the distillation (sometimes referred to as the full distillation) with narrower (n) smearing N_v = 48 and wider (w) smearing N_v = 24, illustrated in Fig. 1. (Figure 1 caption: The profile Ψ(r) of the "narrower" (N_v = 48) and the "wider" (N_v = 24) smeared quark.) Two smearings are employed to enhance the freedom in forming eigenstates with nodes. Most of the interpolators and results below are based on the narrower smearing, which gives better signals in practice, although the two widths are not very different. The details of our implementation of the distillation method are collected in [50] for another ensemble. D. Interpolators and energies of π and N. Single-particle energies are needed to determine the reference energies of the non-interacting (i.e., disregarding interactions between the mesons and baryons) system, and also to examine phase shifts (see Subsection III B). The following π and N annihilation interpolators are used to extract the energies of the single hadrons with momenta n·2π/L (these are also used as building blocks for the interpolators in the Roper channel). Three standard choices for Γ_{1,2} are used; the third quark is q = u for the proton and q = d for the neutron.
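To make the variational procedure of Sec. II B concrete before turning to the Roper-channel interpolators, here is a minimal sketch of the GEVP step: it solves C(t) v = λ(t) C(t0) v for a mock correlation matrix and converts the eigenvalues into effective energies. The basis size, time extent and overlap values are invented placeholders, not the correlators of this study, and a real analysis would run this inside the jackknife resampling and fit the plateaus with the two-exponential forms described above.

```python
# Minimal GEVP sketch: extract eigenenergies from a correlation matrix
# C_ij(t) = sum_n Z_i^n Z_j^n exp(-E_n t) via the generalized eigenvalue
# problem C(t) v = lambda(t) C(t0) v, here with t0 = 2 as in the text.
import numpy as np
from scipy.linalg import eigh  # symmetric generalized eigensolver

def gevp_effective_energies(C, t0=2):
    """Return effective energies E_eff^(n)(t) = log[lambda^(n)(t)/lambda^(n)(t+1)]."""
    nt, nbasis, _ = C.shape
    lam = np.full((nt, nbasis), np.nan)
    for t in range(t0 + 1, nt):
        evals = eigh(C[t], C[t0], eigvals_only=True)
        lam[t] = np.sort(evals)[::-1]          # largest eigenvalue = lowest level
    return np.log(lam[:-1] / lam[1:])          # effective energies in lattice units

# --- mock data: three levels with invented energies and overlaps ---
E_true = np.array([0.5, 0.7, 0.9])             # in lattice units
Z = np.array([[1.0, 0.6, 0.2],
              [0.3, 1.0, 0.5],
              [0.1, 0.4, 1.0]])                # fake overlap factors Z_i^n
decay = np.exp(-np.outer(np.arange(16), E_true))        # exp(-E_n t)
C = np.einsum('in,jn,tn->tij', Z, Z, decay)

E_eff = gevp_effective_energies(C, t0=2)
print(np.round(E_eff[6], 3))   # plateaus at the three input energies
```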
Equation (5) is in Dirac basis and the upper two components N µ=1,2 of the Dirac four spinor N µ are the ones with positive parity at zero momentum. The spin component m s in N ms is a good quantum number for p = 0 or p ∝ e z , which is employed to determine energies in Table II. It is not a good quantum number for general p and it denotes the spin component m s of the corresponding field at rest. The "non-canonical" fields N ms (n) (5) built only from upper-components have the desired transformation properties under rotation R and inversion I, which are necessary to build two-hadron operators [51]: Interpolators with narrower quark sources are used for the determination of the masses and energies of π and N . Those are collected in Table II, where they are compared to energies E c expected in the continuum limit a → 0. E. Interpolating fields for the Roper channel Our central task is to calculate the energies of the eigenstates E n with J P = 1/2 + and total momentum zero, including multi-particle states. We want to cover the energy range up to approximately 1.65 GeV, which is relevant for the Roper region. The operators with these quantum numbers have to be carefully constructed. Although qqq interpolators in principle couple also to multihadron intermediate states in dynamical QCD, the multihadron eigenstates are often not established in practice unless the multi-hadron interpolators are also employed in the correlation matrix. We apply 10 interpolators O i=1,...,10 with P = +, S = 1/2, (I, I 3 ) = (1/2, 1/2) and total momentum zero [51] (P and m s are good continuum quantum numbers in this case). For m s = 1/2, we have where these are the annihilation fields and The momenta of fields in units of 2π/L are given in parenthesis with e x , e y , and e z denoting the unit vectors in x, y, and z directions, while the lower index on N = p, n is m s . All quarks have the same smearing width (narrower or wider in Fig. 1) within one interpolator. The O N π was constructed in [51], while factors with squareroot are Clebsch-Gordan coefficients related to isospin. We restrict our calculations to zero total momentum since parity is a good quantum number in this case. The positive parity states with J = 1/2 as well as J ≥ 7/2 appear in the relevant irreducible representation G + 1 of O 2 h . The observed baryons with J ≥ 7/2 lie above 1.9 GeV, therefore this does not present a complication for the energy region of our interest. We do not consider the system with non-zero total momenta since 1/2 + as well as 1/2 − (and others) appear in the same irreducible representation [53], which would be a significant complication especially due to the negative parity states N (1535) and N (1650). The 10 × 10 correlation function C ij (t) (1) for the Roper channel is obtained after evaluating the Wick contractions for any pair of sourceŌ j and sink O i . The number of Wick contractions involved in computing the correlation functions between our interpolators (eqn. 7) are tabulated in Table III. The O N ↔ O N contractions have been widely used in the past. 6 The 19 Wick-contractions O N π ↔ O N π and 4 Wick contractions O N ↔ O N π are the same as in the Appendix of [41], where the negative-parity channel was studied. The inclusion of O N σ introduces additional 2 · 7 + 2 · 19 + 33 Wick contractions, while the inclusion of three hadron interpolators like N ππ would require many more. We evaluate all necessary contractions in Table III using the distillation method [48] discussed in Section II C. 
Appendix A illustrates how to handle the spin components in evaluating C(t), where one example of the Wick contraction ⟨Ω|O_{Nπ} Ō_N|Ω⟩ is considered. A. Energies and overlaps. Our main results are the energies of the eigenstates in the J^P = 1/2^+ channel, shown in Fig. 2a. These are based on the 5 × 5 correlation matrix (1) for the subset of interpolators (7) denoted the complete interpolator set (11), which we refer to as the "complete set" since it contains all types of interpolators. (Footnote 6, added after publication: the N π contribution to correlators O_N ↔ O_N with local operators has been determined via ChPT in [60].) Adding other interpolators to this basis, notably O_{2,4,7,10}, which include the N_{i=2} interpolator, makes the eigenenergies noisier. The eigenenergies E_n are obtained from the fits of the eigenvalues λ^(n)(t) (2), with fit details in Table IV. The horizontal dashed lines represent the energies of the expected multi-hadron states m_N + 2m_π and E_N(1) + E_π(−1) in the non-interacting limit (the individual hadron energies measured on our lattice and given in Table II are used for this purpose throughout this work). The study of this channel with almost physical pion mass is challenging as far as statistical errors are concerned. This can be seen from the effective energies in Fig. 3, which give the eigenenergies in the plateau region. The ground state (n = 1) in Fig. 2a represents the nucleon. The first excited eigenstate (n = 2) lies near m_N + 2m_π and appears to be close to N(0)π(0)π(0) in the non-interacting limit. The next eigenstate, n = 3, lies near the non-interacting energy E_N(1) + E_π(−1). It dominantly couples to O_{Nπ} and we relate it to N(1)π(−1) in the non-interacting limit. Further support in favor of this identification for levels n = 2, 3 will be given in the discussion of Figs. 4 and 5. The most striking feature of the spectrum is that there are only three eigenstates below 1.65 GeV, while the other eigenstates appear at higher energy. The overlaps of these eigenstates with various operators are presented in Fig. 2b. The nucleon ground state n = 1 couples well with all interpolators that contain N_1. The operator O_{Nπ} couples well with eigenstate n = 3, which gives further support that this state is related to N(1)π(−1). The operator O_{Nσ} couples best with the nucleon ground state, which is not surprising due to the presence of the Wick contraction where the isosinglet σ (8) annihilates and the remaining N_1 couples to the nucleon. Interestingly, O_{Nσ} has similar couplings to the eigenstates n = 2 and n = 3, which are related to N(0)π(0)π(0) and N(1)π(−1) in the non-interacting limit. One would expect |⟨Ω|O_{Nσ}|n = 2⟩| ≫ |⟨Ω|O_{Nσ}|n = 3⟩| if the channel N π were decoupled from N σ/N ππ. Our overlaps Z^{n=2,3}_{i=9} suggest that the channels are significantly coupled. The scenario where coupled-channel scattering might be crucial for the Roper resonance will be discussed in Section IV. The features of the spectrum for various choices of the interpolator basis are investigated in Fig. 4. The complete set (11) with all types of interpolators is highlighted as choice 1. If the operator O_{Nπ} is removed (choice 3), the eigenstate with energy ≈ E_N(1) + E_π(−1) disappears, so the N π Fock component is important for this eigenstate. The eigenstate with energy ≈ m_N + 2m_π disappears if O_{Nσ} is removed (choice 4), which suggests that this eigenstate is dominated by N(0)π(0)π(0), possibly mixed with N(0)σ(0).
Any interpolator individually renders the nucleon as the ground state (choices 5, 6, 7). All previous lattice simulations, except for [19], used just qqq interpolators. This is represented by choice 5, which renders the nucleon, while the next state is above 1.65 GeV; this result is in agreement with most of the previous lattice results based on qqq operators, discussed in the Introduction. No interpolator basis renders more than three eigenstates below 1.65 GeV. The most striking feature of the spectra in Figs. 2 and 4 is the absence of any additional eigenstate in the energy region where the Roper resonance resides in experiment. The eigenstates n = 2, 3 lie in this energy region, but two eigenstates related to N(0)π(0)π(0) and N(1)π(−1) are inevitably expected there in dynamical QCD, even in the absence of interactions between hadrons. A further indication that eigenstate n = 2 is dominated by N(0)π(0)π(0) is presented in Fig. 5, where the spectrum from all configurations is compared to the spectrum based on the configuration sets "all-4" (shown in other figures) and "all-1". The horizontal dashed lines indicate the non-interacting energies obtained from the corresponding sets. Only the central values of E_2 and m_N + 2m_π visibly depend on the configuration set. The variation of m_N + 2m_π is due to the variations of m_π pointed out in Section II A. The eigenstate n = 2 appears to track the threshold m_N + 2m_π, which suggests that its Fock component N(0)π(0)π(0) is important. (Figure 3 caption: as in Fig. 2a and Table IV; it is based on the complete interpolator set (11) and configuration set "all-4"; the fits of λ^(n)(t) that render E_n are also presented; non-interacting energies of N(0)π(0)π(0) and N(1)π(−1) are shown with dashed lines.) (Figure 5 caption fragment: ... which gives smaller statistical errors than set (11) for "all" and "all-1"; the horizontal lines present the non-interacting energies of N(0)π(0)π(0) and N(1)π(−1) for the corresponding configuration sets.) Note that the full configuration set gives larger statistical errors, as illustrated via effective masses in Fig. 9 of Appendix B. B. Scattering phase shift. In order to discuss the N π phase shift, we consider the elastic approximation where N π scattering is decoupled from the N ππ channel. In this case, the N π phase shift δ can be determined from the eigenenergy E of the interacting N π state via Lüscher's relation [20,21], δ(p) = atan[ π^{3/2} q / Z_00(1; q²) ] with q = pL/(2π), (12) where p is obtained from E = E_N(p) + E_π(p) and E_H(p) = √(m_H² + p²) applies in the continuum limit. The eigenenergy E (E_3 from the basis O_{Nπ,N,Nσ} or E_2 from O_{Nπ,N}) has a sizable error for this ensemble with close-to-physical pion mass. It lies close to the non-interacting energy E_N(1) + E_π(1), as can be seen in Figs. 2, 3 and 9. We find that the resulting energy shift ∆E = E − E_N(1) − E_π(1) is consistent with zero within the errors. This implies that the phase shift δ is zero (modulo π) within a large statistical error. We verified this using a number of choices to extract ∆E and δ. The interpolator set O_{Nπ,N} (rightmost column of Fig. 9) that imitates elastic N π scattering served as the main choice, while it was also compared to other sets. Correlated and uncorrelated fits of E as well as of E_N(1) + E_π(1) were explored for various fit ranges.
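As an illustration of how Eq. (12) is used in practice, the sketch below maps a hypothetical eigenenergy near the non-interacting N(1)π(−1) level to a scattering momentum and a phase shift. The hadron masses, box size and input energy are illustrative numbers rather than the measured values, and the Lüscher zeta function Z_00 is evaluated with a crude truncated-sum regularization that is only roughly accurate, which suffices for a sketch.

```python
# Sketch of the elastic Luescher analysis of Eq. (12): from an assumed
# eigenenergy E of the interacting N(1)pi(-1) state, solve for the momentum p
# and evaluate delta(p) = atan[ pi^{3/2} q / Z_00(1; q^2) ], q = p L / (2 pi).
import math
import numpy as np
from scipy.optimize import brentq

HBARC = 0.1973269804                  # GeV*fm
m_pi, m_N, L_fm = 0.156, 0.94, 2.9    # illustrative inputs
L = L_fm / HBARC                      # box size in GeV^-1

def pair_energy(p):
    """Continuum dispersion: E_N(p) + E_pi(p) for back-to-back momenta."""
    return math.sqrt(m_N**2 + p**2) + math.sqrt(m_pi**2 + p**2)

def zeta00(qsq, cutoff=50):
    """Crude regularization of Z_00(1; q^2): truncated sum over integer
    three-vectors minus the corresponding integral 4*pi*cutoff."""
    r = np.arange(-cutoff, cutoff + 1)
    nx, ny, nz = np.meshgrid(r, r, r, indexing="ij")
    nsq = (nx**2 + ny**2 + nz**2).astype(float)
    inside = nsq < cutoff**2
    s = np.sum(1.0 / (nsq[inside] - qsq)) - 4.0 * math.pi * cutoff
    return s / math.sqrt(4.0 * math.pi)

E = 1.48                                             # hypothetical eigenenergy, GeV
p = brentq(lambda x: pair_energy(x) - E, 1e-6, 1.0)  # solve E = E_N(p) + E_pi(p)
q = p * L / (2.0 * math.pi)
delta = math.atan2(math.pi**1.5 * q, zeta00(q * q))
print(f"p = {p:.3f} GeV, q^2 = {q*q:.3f}, delta = {math.degrees(delta):.1f} deg")
# An energy close to the non-interacting one maps onto a phase shift close to
# zero (mod pi), in line with the finding quoted in the text.
```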
Further choices of the dispersion relations E_π(p) and E_N(p) that match the lattice energies at p = 0, 1 in Table II (e.g., an interpolation of E² linear in p²) were investigated within the Lüscher analysis and lead to the same conclusions. IV. DISCUSSION AND INTERPRETATION. Here we discuss the implications of our results, in particular that only three eigenstates are found below 1.65 GeV. These appear to be associated with N(0), N(0)π(0)π(0) and N(1)π(−1) in the non-interacting limit. The experimental N π scattering data for the amplitude T = (η e^{2iδ} − 1)/(2i) in this (P_11) channel are shown in Fig. 6 [55] (the experimental data come from the GWU homepage gwdac.phys.gwu.edu). (Figure 6 caption: The experimental phase shift δ and inelasticity 1 − η² as extracted by the GWU group [55] (solution WI08); the dot-dashed line is a smooth interpolation that is used in Section IV A.) The channel is complicated by the fact that N π scattering is not elastic above the N ππ threshold and the inelasticity is sizable already in the energy region of the Roper resonance. The presence of the N ππ channel prevents a rigorous investigation on the lattice at the moment. While three-body channels have been treated analytically, see for example [39,56], the scattering parameters have not been determined in any channel within lattice QCD up to now. For this reason we consider implications for the lattice spectrum based on various simplified scenarios. By comparing our lattice spectra to the predictions of these scenarios, certain conclusions on the Roper resonance are drawn. A. N π scattering in elastic approximation. Let us examine what the lattice spectrum would be assuming the experimental N π phase shift in the approximation where N π is decoupled from the N ππ channel. In addition we consider no interactions in the N ππ channel. The elastic phase shift δ in Figure 6 allows one to obtain the discrete energies E as a function of the spatial lattice size L via Lüscher's equation (12). Figure 7a shows the non-interacting levels for N(0) (black), N(0)π(0)π(0) (blue), and N(1)π(−1) (red). These are shifted by the interaction. Also plotted are the eigenstates (orange) in the interacting N π channel derived from the experimental elastic phase shift with the help of Eq. (12). The elastic scenario should therefore render four eigenstates below 1.65 GeV at our L ≈ 2.9 fm, indicated by the violet circles in Figures 7a and 7b. Three non-interacting levels below 1.65 GeV (these are the three intercepts of the dashed curves with the vertical green line at L = 2.9 fm) turn into four interacting levels (violet circles) at L ≈ 2.9 fm. The Roper resonance phase shift passing π/2 is responsible for the extra level. Our actual lattice data features only three eigenstates below 1.65 GeV, and no extra low-lying eigenstate is found. The comparison in Figure 7b indicates that the lattice data is qualitatively different from the prediction of the resonating N π phase shift for the low-lying Roper resonance, assuming it is decoupled from N ππ. (Figure 7 caption: (a) Analytic prediction for the eigenenergies E as a function of the lattice size L, according to (12); N π and N ππ are assumed to be decoupled, and N ππ is non-interacting. The curves show: non-interacting N π (red dashed), interacting N π based on the experimental phase shift [55] (orange dotted), the N ππ threshold (blue dashed), the proton mass (black), and the Roper mass (cyan band); the experimental masses of the hadrons are used. (b) Left: energy values from our simulation. Right: the full violet circles show the analytic predictions for the energies at our L = 2.9 fm based on the experimental phase shift data and the elastic approximation (same as the violet circles in the upper panel); only the energy region E < 1.7 GeV is shown, where we aim to extract the complete spectrum (there are additional multi-hadron states in the shaded region, for which we did not incorporate interpolator fields).) B. Scenarios with coupled N π − N σ − ∆π scattering. Our analysis does not show the resonance-related level. One reason could be that the Roper resonance is a truly coupled-channel phenomenon and one has to include further interpolators like ∆π, Nρ and an explicit N ππ three-hadron interpolator. The scattering of N π − N σ − ∆π in the Roper channel was studied recently using Hamiltonian Effective Field Theory (HEFT) [8]. The σ and ∆ were assumed to be stable under the strong decay, which is a (possibly serious) simplification.
The free parameters were always fit to the experimental N π phase shift and describe the data well. Three models were discussed: (I) the three channels are coupled with a low-lying bare Roper operator of type qqq; (II) no bare baryon, and the N π phase shift is reproduced solely via the coupled channels; (III) the three channels are coupled only to a bare nucleon. The resulting Hamiltonian was considered in a finite volume, leading to discrete eigenenergies for all three cases, plotted in Fig. 8 for our parameters L = 2.9 fm and m_π = 156 MeV [8]. In Fig. 8 we compare our lattice spectra with the predictions for the energies of J^P = 1/2^+ states in the three scenarios. The stars mark the high-lying eigenstates N(1)σ(−1), ∆(1)π(−1) and N(2)π(−2) [8], which are not expected to be found in our study since we did not incorporate the corresponding interpolators in (7). The squares denote predictions from the three scenarios that can be qualitatively compared with our lattice spectra. Our lattice levels below 1.7 GeV disagree with model I, based on a bare Roper qqq core, but are consistent with II and (preferred) III with no bare Roper qqq core. In those scenarios the Roper resonance is dynamically generated from the N π/N σ/∆π channels, coupled also to a bare nucleon core in case III. Preference for interpretations II and III was reached also in other phenomenological studies [6–9] and on the lattice [19], for example. (Figure 8 caption: Analytic predictions for the lattice spectra at m_π = 156 MeV and L = 2.9 fm from the Hamiltonian Effective Field Theory, based on three scenarios concerning the Roper resonance [8]; our lattice spectrum is shown with circles on the left; a qualitative comparison between the energies represented by squares and circles can be made, as discussed in the main text.) C. Hybrid baryon scenario. Several authors, for example [12,13], have proposed that the Roper resonance might be a hybrid baryon qqqG with an excited gluon field. This scenario predicts the longitudinal helicity amplitude S_1/2 to vanish [57], which is not supported by the measurement [58]. Our lattice simulation cannot provide any conclusion regarding this scenario since we have not incorporated interpolating fields of the hybrid type. Let us discuss other possible reasons for the missing resonance level in our results, beyond the coupled-channel interpretation offered above. We could be missing the eigenstate because we might have missed important coupling operators. One such candidate might be a genuine pentaquark operator. A local five-quark interpolator (with baryon-meson color structure) was used in [19], where, however, no Roper signal was found either.
The local pentaquark operator with color structure ε_abc q̄_a [qq]_b [qq]_c (with the diquark [qq]_c = ε_cde q_d q_e) can be rewritten as a linear combination of local baryon-meson operators BM = (ε_abc q_a q_b q_c)(q̄_e q_e) by using ε_abc ε_ade = δ_bd δ_ce − δ_be δ_cd. Furthermore, the local baryon-meson operators are linear combinations of B(p)M(−p). Among the various terms, N(1)π(−1) and N(0)σ(0) are the essential ones for the explored energy region, and those were incorporated in our basis (7). So we expect that our simulation does incorporate the most essential operators in the linear combination representing the genuine localized pentaquark operator. It remains to be seen whether structures with a significantly separated diquark (such as proposed in [59] for P_c) could also be probed by baryon-meson operators like (7). It could also be that, contrary to our expectation, using operators with different quark smearing widths is not sufficient to scan the qqq radial excitations. One might have to expand the interpolator set to include non-local interpolators [26] so as to have good overlap with radial excitations with non-trivial nodal structures. There has been no study that involved the use of such operators along with the baryon-meson operators, and within the single-hadron approach such operators do not produce low-lying levels in the Roper energy range [26]. Finally, our results are obtained using fermions that do not obey exact chiral symmetry at finite lattice spacing a, like in most of the previous simulations. It would be desirable to verify our results using fermions that respect chiral symmetry at finite a. V. CONCLUSION AND OUTLOOK. We have determined the spectrum of the J^P = 1/2^+ and I = 1/2 channel below 1.65 GeV, where the Roper resonance appears in experiment. This lattice simulation has been performed on the PACS-CS ensemble with N_f = 2 + 1, m_π ≈ 156 MeV and L = 2.9 fm. Several interpolating fields of type qqq (N) and qqqqq (N σ in s-wave and N π in p-wave) were incorporated, and three eigenstates below 1.65 GeV are found. The energies, their overlaps with the interpolating fields, and additional arguments presented in the paper indicate that these are related to the states that correspond to N(0), N(0)π(0)π(0) and N(1)π(−1) in the non-interacting limit (momenta in units of 2π/L are given in parentheses). This is the first simulation that finds the expected multi-hadron states in this channel. However, the uncertainties on the extracted energies are sizable and the extracted N π phase shift is consistent with zero within a large error. One of our main results is that only three eigenstates lie below 1.65 GeV, while the fourth one lies already at about 1.8(1) GeV or higher. In contrast, the experimental N π phase shift implies four lattice energy levels below 1.65 GeV in the elastic approximation, where N π is decoupled from N ππ and the latter channel is non-interacting. Our results indicate that the low-lying Roper resonance does not arise on the lattice within the elastic approximation of N π scattering. This points to the possibility of a dynamically generated resonance, where the coupling of N π with N ππ or other channels is essential for the existence of this resonance. This is supported by the comparable overlaps of the operator O_{Nσ} with the second and third eigenstates. We come to a similar conclusion if we compare our lattice spectrum to the HEFT predictions for N π/N σ/∆π scattering in three scenarios [8].
The case where these three channels are coupled with a low-lying bare Roper qqq core is disfavored. Our results favor the scenario where the Roper resonance arises solely as a coupled-channel phenomenon, without the Roper qqq core. Future steps towards a better understanding of this channel include simulations at larger m_π L, decreasing the statistical error, and employing qqq or qqqqq operators with a greater variety of spatially extended structures. Simulating the system at non-zero total momentum will give further information but will introduce additional challenges: states of positive as well as negative parity contribute to the relevant irreducible representations in this case. It would also be important to investigate the spectrum based on fermions with exact chiral symmetry at finite lattice spacing. Our results point towards the possibility that the Roper resonance is a coupled-channel phenomenon. If this is the case, the rigorous treatment of this channel on the lattice will be challenging. This is due to the three-hadron decay channel N ππ and the fact that the three-hadron scattering matrix has not yet been extracted from lattice QCD calculations. The simplified two-body approach to the coupled channels N σ/∆π (based on stable σ and ∆) cannot be compared quantitatively to the lattice data at light m_π, where σ and ∆ are broad unstable resonances. This is manifested also in our simulation, where the O_{Nσ} operator renders an eigenstate with E ≈ m_N + 2m_π and not E ≈ m_N + m_σ. Pion-nucleon scattering has been the prime source of our present-day knowledge on hadrons. After decades of lattice QCD calculations we are now approaching the possibility to study that scattering process from first principles. This has turned out to be quite challenging, and our contribution is only one step, with more to follow.
2017-01-31T11:36:38.000Z
2016-10-05T00:00:00.000
{ "year": 2016, "sha1": "1a2b8a478d592fd97b0a94a6f7067d0a60147323", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://link.aps.org/accepted/10.1103/PhysRevD.95.014510", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "1a2b8a478d592fd97b0a94a6f7067d0a60147323", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
271309205
pes2o/s2orc
v3-fos-license
Exploring career choices of specialist nurse students: Their decision‐making motives. A qualitative study Abstract Aims To explore Registered Nurses' motives to undergo specialist training and to choose a particular speciality. Design A descriptive qualitative interview study. Methods Semi‐structured interviews were conducted during 2021 with 20 Swedish specialist nurse students from different specialisation areas. Qualitative content analysis was used. The COREQ checklist was used to report the study. Results Specialist nurse students' motivations for further training were divided into three main categories with two sub‐categories each. The main categories were ‘toward new challenges and conditions in work life’, ‘contributions to the development and higher competencies in health care’ and ‘personal work and life experiences as ground for choice’. Conclusion Our study demonstrates the importance of motivating factors in the career choices of Specialist nurse students, such as personal challenges, desirable working conditions, career growth opportunities and personal experiences in the career choices. Creating a supportive work environment that helps to prioritise work‐life balance and offers the development of new skills might help retain nurses. No Patient or Public Contribution No patient or public contribution was used. However, if more nurses would choose to undergo specialist training, especially in areas facing significant shortages, it would most likely lead to improved health‐related outcomes for patients or populations. management, better critical thinking skills and the ability to contribute to evidence-based practice by integrating results from research in clinical work (Chang et al., 2011;Dahlberg et al., 2022;Rudman et al., 2020).Consequently, the worsened shortage of SNs worldwide threatens the quality of health care (Buchan & Aiken, 2008;Knap et al., 2020;WHO, 2020).Therefore, this study aims to explore RNs' motives for undertaking specialist training and choosing a particular speciality. | BACKG ROU N D Although almost 70% of the countries in the world offer continuous development programmes for RNs, the definition and educational background of SNs, the number of specialist areas and the length of the specialist training vary (Knap et al., 2020;WHO, 2020). 
The definition of SNs also varies in different organisations and countries.The International Council of Nurses (ICN) defines an SN as someone more prepared to work in a specialist area than an RN with a licence.This preparation is typically acquired through an advanced training programme (Affara, 2009).Additionally, The European Society of Specialist Nurses (ESNO) includes Advanced Nurse Practitioners (ANP) and Nurse Practitioners under the term SN (Knap et al., 2020).However, in Sweden, where this study was conducted, the term SN is used for RNs with a 1-year master's degree, while ANP programmes require a 2-year master's degree and are therefore considered a higher level (Swedish Higher Education Ordinance, 1993:100, Annex 2).The SN's educational background varies between countries worldwide and even in Europe (Dury et al., 2014;Ranchal et al., 2015;WHO, 2020).Most of the countries in Europe have similar educational levels for SN training as Sweden, where SN training is offered at a postgraduate level for RNs who hold a bachelor's degree in nursing (Dury et al., 2014;Ranchal et al., 2015;Swedish Higher Education Ordinance, 1993:100, Annex 2).However, in some countries, SN training is part of a bachelor's degree, while in other countries, the SN title can be obtained by working in a specific field without the need for formal training.In addition, some countries offer different levels and types of training within their countries (Dury et al., 2014;Ranchal et al., 2015). In Europe, RNs can choose from various specialisation fields; the most common fields are psychiatric and mental health care, paediatrics, critical care and community health care (Dury et al., 2014;Ranchal et al., 2015).In Sweden, 11 defined specialisation areas are offered for SNs, including Anaesthetic nursing, Elderly care, Intensive care nursing, Medical nursing, Oncology, Operating room nursing, Paediatric nursing, Prehospital care, Psychiatric care, Public health care and Surgical nursing.Also, nursing colleges can introduce other specialisation areas depending on the community's needs (Swedish Higher Education Ordinance, 1993:100, appendix 2;Governmental inquiry, 2018). Further confusion is added in becoming an SN with different study durations.Although the terms for specialisation fields are similar in other countries, the length of the training can vary from a few days to 2 years for the same field (Dury et al., 2014;Ranchal et al., 2015).In Sweden, SN training lasts 1-1.5 years (60-75 credits) (Swedish Higher Education Ordinance, 1993:100, appendix 2;Governmental inquiry, 2018). Due to the lack of coordination of the training and definition of SNs, labour mobility within Europe is complex, and SNs are mainly limited to the labour market in their own country.Since SNs do not have a protected title in most European countries, they compete for the same positions as RNs.This means that SNs can be outcompeted by RNs, as they expect higher salaries (Dury et al., 2014;Ranchal et al., 2015).Even in Sweden, most RNs can work in almost all specialist areas except anaesthetics, intensive care and operating room nursing, where formal specialist skills are required (Governmental Inquiry, 2018). Adding to the shortage of SNs, many RNs choose not to start further training.Whereas many RN students initially intend to pursue additional training, this number tends to decrease after a few years of working as RNs (Rognstad & Aasland, 2007). 
The cost of pursuing further training is a significant barrier for many individuals, especially regarding financial considerations.In several countries, RNs face financial constraints that hinder their ability to continue with further training and education (Morgenthaler, 2009).In Sweden, SN students do not pay tuition fees but must deal with the other costs of SN training and living expenses.The Swedish government has recently allocated additional funds to provide wage benefits in some specialisation areas to reduce these economic barriers.The government intends to ensure RNs maintain their salaries during specialist training (Governmental Inquiry, 2018).However, not all RNs can take advantage of the wage benefit and, therefore, must cover their living expenses with loans, study grants or savings.Despite these efforts, the Swedish nurses' union reports that the government's initiatives have not yet resulted in a significant increase in the number of SNs (Andersson, 2021). Overall, the proportion of RNs resigning exceeds that of new RNs entering the health sector, with an average of 2% of RSs leaving annually (Abrahamsen, 2019).More RNs and SNs might be available, but they are unwilling to work in health care (Buchan & Aiken, 2008). Moreover, although more SNs are trained yearly, the number of SNs does not increase (The National Board of Health and Welfare, 2016).Also, the SN population is ageing, and many are expected to retire in the coming decade (WHO, 2020).In Sweden, 10% of nurses no longer work in nursing due to poor working conditions, and almost half of the employed SNs are 55 years or older (Governmental Inquiry, 2018). To increase the number of SNs, we need to know why they choose to enter specialist training and why they choose a particular speciality.As there is a lack of consensus across specialist nursing training in Europe, most of the research about speciality choices has been conducted based on nursing students, not SN students.Hence, more research is needed regarding SN students. | Aims This study aimed to explore specialist nurse students' motives to undergo specialist training and choose a particular speciality. | Design The study has a descriptive qualitative design, and qualitative content analysis was used.The study followed the Consolidated Criteria for Reporting Qualitative Research (COREQ) checklist Tong et al. (2007). | Sampling and recruitment From January to April 2021, all first-year SN students in postgraduate SN training programmes at four nursing colleges in Stockholm, Sweden, were invited to participate in an interview study.These colleges provide 15 different areas of specialisation. All SN specialisation areas included in this study are listed in Table 1. Midwife students were excluded from the study as they are not considered SNs in Sweden. An invitation to participate in the interview and a study description was posted as an open link on the SN students' learning platform or sent to their student email by a contact person at the nursing colleges.Researchers and SN students did not have any prior relations.In total, 29 SN students expressed interest in participating.Of these, 20 SN students were interviewed, two SN students withdrew their request to participate and seven did not respond to the contact attempts.Most participants were female (N = 15), with an age range of 27 to 59 years (mean 43.6) and had been working as RNs for an average of 10.1 years (range 1-35). 
| Data collection The research group developed the semi-structured interview guide before the data collection.After the first interview was conducted and transcribed, the research group reviewed the interview to ensure that the questions were appropriate for achieving the study's purpose.Some questions were rephrased, the order of the questions was slightly changed and an additional question was added.The revised interview guide (Table 2) was used for all remaining interviews, with other clari- | Data analysis All interviews were transcribed verbatim shortly after they were conducted.The data were analysed using QSR International NVivo version 12, and a five-step qualitative content analysis inspired by Graneheim and Lundman (2004) was used.First, all the interviews were transcribed word-for-word and read to gain an overall sense of the data.Then, the first author identified, condensed and coded meaning units.Similar codes were grouped into more comprehensive subcategories and categories based on their similarities and differences.Categories and subcategories were reviewed and compared to detect inconsistencies, similarities and differences.The data analysis focused on the manifest content of the interviews, and the presented findings closely reflect the text.The analysis was discussed within the research group until consensus was reached. | Ethical considerations This study was approved by The Swedish Ethical Board (2020-05194).It was also in accordance with the World Medical Association Declaration of Helsinki (2013). TA B L E 1 SN student's specialisation areas. Total N = 20 Anaesthetic nursing 2 Emergency care nursing 1 Intensive care nursing 1 Oncology 1 Operating room nursing 3 Paediatric nursing 2 Palliative care nursing 3 Prehospital care 1 Psychiatric care 3 Public health care 3 TA B L E 2 Interview guide. Why did you decide to pursue specialist nursing training at this time? What were your considerations when selecting this specialisation area? What factors influenced your choice? What would have led you to choose a different specialisation area instead? Can you recall any other factors that influenced your decision that we have not discussed? | Trustworthiness To increase trustworthiness, the recommendations given by Graneheim et al. (2017) were followed. To ensure credibility, a deliberate selection of SN students as participants was made to achieve diversity in participants and specialisation areas, as shown in Table 1 and under the heading 'Sampling and Recruitment'.Data collection and analysis were conducted systematically to ensure dependability.The interview questions were predetermined to ensure all participants were asked the same questions.All authors read the first interview to determine whether the guide contained relevant questions aligned with the study's aim. Afterwards, all subsequent interviews followed the guide, as shown in Table 2. To ensure the authenticity of the findings, continuous crossreferencing was performed between the data and generated codes, | FINDING S The data analysis resulted in three categories and six subcategories (see Table 3) describing RNs' motives to undergo specialist training and choose a particular speciality. 
| Toward new challenges and conditions in work life The SN student described the reason for specialist training as a need to move toward new challenges and conditions in their work life.[P3] Some SN students felt motivated to seek new nursing opportunities due to their age.They wanted to explore new areas before it was too late and were concerned about their current job's physically demanding and inconvenient working hours as they aged. | A desire to do something new I started thinking that you're getting on in years, uh how long are you going to be able to work clinically, get up at night and work 24 hours a day and so on. [P11] | New working conditions and opportunities Within this subcategory, SN students discussed the new conditions and opportunities they anticipated after completing their specialist training. Some SN Students wanted to leave high-stress jobs due to a decline in the working environment and a lack of opportunities beyond [P16] Several SN students recognised the benefits of having dual competencies that could be combined effectively in their current workplace or undergraduate teaching.One student mentioned that they became aware of the advantages of dual competencies after meeting an SN who possessed them. When I started as a nurse, I worked in home health care, and there actually worked a nurse who was 60 years old, and he was actually both a psychiatric nurse and anaesthesia nurse…and then I thought that I would be like him when I grow up…. [P5] SN students pursued specific specialisations to change their working hours.Irregular shifts, including evenings and weekends, were physically and mentally exhausting, while a structured routine from Monday to Friday would allow for more time with family and friends.Some students believed that regular sleep at night could contribute to a healthier lifestyle. I work a week in a row, around the clock, so it will be tiring for the body too, I think.You don't feel so good mentally or physically with the erratic working hours. Get a bit more, a weekday with normal working hours, perhaps.Eventually.Better health. [P11] Some SN students desired a more manageable work environment where they could control their work, manage their own time and have more time for patient conversations.Another student mentioned the need for a break from heavy daily work. That you hope perhaps with a specialisation that you can get a more tolerable working environment, and that is also one of the reasons why I chose public health care, it is to be able to be responsible for my own time. [P10] Some students emphasised the importance of obtaining new and advanced tasks when choosing a specialisation.They felt that selecting a specialisation without variation in tasks or unchanged job descriptions was not compelling. When I even read job advertisements, they are looking for district nurses or nurses; well, if there is no difference, then why should you study to become a nurse in public health care?There are certainly very competent nurses who have, who have gained very high competence in public health care, without having gone further training. [P3] SN students desired greater career flexibility, including the ability to work in various settings, public or private, and choosing a specialisation close to home.One student aimed to establish a public health care nursing clinic near their home, while others were interested in working abroad. 
I want good further training that I can take with me when I go to, or my husband and I are planning to work in Zambia in the future.We have a, a, the family has a children's village there that we support and work with, and so I want something that I can contribute, a little bit more than just a nurse. [P12] Some SN students were motivated by future wages when pursuing specialist training, while others were not.Higher education leading to higher wages would help repay study loans and cover living expenses for those motivated by wages.However, some SN students did not consider wages a significant factor and were inspired by other reasons to study, such as a better working environment and colleagues.Despite this, some SN students acknowledged that completing their training would result in a higher wage in their workplace. So I think it goes without saying that if you improve your skills, it should be reflected in your pay envelope too. [P13] Overall, I can say that wages have strangely never been a strong motivator for me…never ever. [P1] | Contribute to the development and higher competencies in health care The SN students described that they had started their specialist training either because they wanted to contribute to the development and acquire higher competencies in health care or because their employers had encouraged them to do so. | A desire to contribute to the development of health care In this subcategory, SN students described that they pursued specialist training to enhance their knowledge and skills, improve patient care and contribute to the development of health care. SN students wanted to continuously improve themselves and expand their knowledge to do their jobs to the best of their abilities, follow the latest research and contribute to the highest level of competence in their workplace.Pursuing specialist training was deemed necessary for further developing expertise beyond the level of RNs.SN students believed that this was not only beneficial for their personal growth but also for the safety and well-being of their patients. I'm a bit at a basic level, even though I've been working for so long, it's kind of that stayed at a basic level of knowledge, which is just required.This will be a, clearly a deepening, that's how I feel. [P11] SN students noticed differences in the quality of care and wanted to improve the health care system.They aimed to contribute to health care development by suggesting solutions to minor issues, running projects at a higher level or conducting research. What I might want to contribute is the improvement of wound dressings in primary care.It um…in some places, it doesn't work so well, that's when they come to us with wound infections. [P10] Some SN students mentioned their desire to use specialist training to educate and care for students.They believe their skills can be improved by caring for students in a new way, and they want to educate students at different levels of training, including RNs and SNs. I will be better suited to take on undergraduate students; I will be able to take specialist students.I think that's great.I think it's fun to have students. [P17] In addition, the SN students wanted to spread knowledge about their specialist area to other specialist areas. 
I'm passionate about palliative care, and I want it to be much better and to be able to spread that approach to care much more/…/it is well that we have seen how little knowledge there is about palliative care; there is so much you can do for families, for the patient in palliative care. [P20] SN students emphasised that their work experience and specialist training provide a valuable foundation for future development. They believed that ongoing training offers the opportunity to exchange experiences and gain a broader perspective, leading to quality improvements.The specialist training also equipped them with tools to understand and apply current research findings, which is essential in a constantly evolving field such as nursing. You get so much exchange on how to work and do palliative care in so many ways, which is very interesting.You also get a lot of input, plus you can then give back where we work, so it's also an important part. [P18] | The employer encourages or requires specialist training In this second subcategory, SN students described being influenced by their employers' encouragement and demand for higher skills and receiving support for their studies. Some SN students had supportive managers who offered study opportunities and encouraged their staff to develop their skills to the best of their ability.These managers did not require specific specialist training, leaving the choice more open for the employees. There was no offer.It was rather the opposite; it is my chief who is very, yes, active.I didn't know about it; he was the one who told me…. [P15] SN students in some workplaces said that specific specialist training was necessary to continue working in their current position or to be hired for more attractive positions within the organisation. This motivated them to pursue the training required to advance their careers and remain competitive in the job market. And it is now standard practice to have this training if you are going to work in the ambulance service.If nothing else, it will probably be compulsory in the future. [P11] Some SN students had support from their employers for their specialised training through adjusted work schedules.This included concentrating work in a certain period or involving more inconvenient working hours with higher pay, creating more extended periods of continuous leave that could be used for studying.Some employers also allowed a certain percentage of working time for studies while maintaining the wages, possibly increasing the time in connection with an exam. I have a dialogue with my boss, that when it's exam time I might need more time off than when it's not exam time but only lectures. [P19] | Personal work and life experiences as a ground for choice The SN students described that different work and life experiences influenced their motivation to start specialist training and their choice of specialisation area.These experiences created either interest, disinterest or barriers for the students.It was also just because I liked that working environment so much and got an excellent impression when I worked there. [P7] NEVER, NEVER WILL NOT…in and out with patients, overcrowding in the corridors, eeh no one knows what to do, everyone running around like mad chickens. [P1] SN students' interest in a particular speciality area varied in timing but was a significant factor in their choice of specialisation.It could stem from before nursing training, during clinical placements, or after working as an RN. 
I started working as an occasional caretaker in psychiatry when I was 18, attending high school in Gothenburg.Then I've had periods where I've always been drawn to psychiatry. [P17] | Barriers to starting specialist training in a particular speciality In the second subcategory, SN students discussed various challenges that hindered their motivation to choose a specialisation for further studies. SN students faced age-related challenges while specialising.Some felt too old to learn new things, while others believed their age gave them an advantage, such as better self-awareness. Negative comments about age, particularly for advanced fields like anaesthesia, made some students feel guilty about pursuing further training. When you study a specialist competence, you must do something with it afterwards.Will I have time to do it, will I have the energy to do it, or am I selfish taking place just for my high pleasure, because I find it fun with the knowledge. [P13] Several SN students stated that they opted to remain in their current working area and specialise there due to their fear of the complexity and difficulty of other areas.The concerns expressed by the students included fear of the unknown, fear of causing harm to patients and fear of inadequacy. I have to admit that the intensive care specialist, even though I actually work with the same things with the little ones, I find it a bit scary. [P14] Time management was another significant factor in SN students' pursuit of further training.Some students struggled to balance their busy work schedules with studying, and some had to discontinue their studies due to family responsibilities.However, one student's situation changed as her children grew older, allowing her to resume her studies. It's been ten years.The kids are almost out of the house, so it's less of a struggle.No, absolutely, now you have all the time in the world, it feels like…. [P3] SN students considered maintaining part or total wages a crucial factor for pursuing further training to fund their studies and avoid additional loans.One student mentioned that she would not have been able to complete her studies without the option of maintaining her wages during her studies. I have the opportunity to work half-time and with fulltime wages, study half-time, so it also felt like an offer that was almost a bit stupid to turn down, I think. [P15] SN students encountered various barriers when pursuing specialised training.Among the challenges were the lack of a bachelor's degree, fear of the final essay, difficulty in the application process and unsuccessful multiple applications.Overcoming these obstacles required more compelling motivations. I have thought for many years that I had to get a bachelor's degree, and it's also been like because I haven't had it, I haven't been able to do any further training. [P9] I've applied for anaesthesia and was denied, that I haven't come in, and maybe I did a year ago, and I got frustrated that I didn't come in, so I've started to think like, should I do something else? [P4] | DISCUSS ION The findings of this study indicate that SN students strongly desire personal growth, seeking out new tasks and challenges while considering their experiences and preferences when making career decisions. Like the current study, Rahimi et al. (2019) andTiliander et al. 
(2023) have demonstrated a shared recognition among RNs of the importance of career advancement and professional development.In the current study, some SN students expressed aspirations to hold higher positions, such as leaders or managers within health care organisations, while others aimed for academic careers in higher education.Similarly, Rahimi et al. (2019) found that RNs believe having more organisational influence is crucial for their career development.Furthermore, the study by Cleaver (2003) showed negative experiences discouraged them from selecting that specialisation (Alexander et al., 2015;Kloster et al., 2007;McKenna et al., 2010). It is important to note that individual preferences vary and that some SN students preferred a slower pace and a greater focus on patient care.In comparison, others desired a faster pace and advanced tasks.These findings underscore the influence of personal experiences on career decisions within the nursing profession. Furthermore, the passion for a specific field can develop before and during studies, as Alexander et al. (2015) highlighted.The preferred working areas may also change over time, as observed by Kloster et al. (2007).These findings suggest that individuals may already possess an interest in a particular speciality before embarking on their educational journey, and this interest can further grow and develop throughout their academic pursuits. | Strengths and limitations of the work The participants in the current study were intentionally selected to ensure diversity regarding the characteristics of the SN students and their chosen specialisation areas.This deliberate selection allowed us to include a broad range of participants and gather insights from various perspectives within the nursing field.Although the nursing colleges involved were from the same region, the distance training format allowed students from other geographical areas to participate in the study. A significant challenge in the current study was the constraints imposed by the COVID-19 pandemic, making it impossible to conduct face-to-face interviews.As a result, we had to adapt and use the online platform Zoom to conduct the interviews.This transition to remote interviews presented a new experience for researchers and participants.Also, the pandemic may have affected participants' views and statements.At the same time, the pandemic was a major worldwide event, and we were all likely affected in one way or another. | CON CLUS ION The current study provides valuable insights into the factors influencing the career choices of SN students.We found that personal challenges, desirable working conditions, career growth opportunities and the ability to contribute to health care development were significant factors in SN students' decision-making process. Additionally, personal experiences played a crucial role in affecting SN students' choices and maintaining their passion for a particular specialisation area. Creating a supportive work environment prioritising work-life balance, offering skill development and acknowledging nurses' contributions is essential.Such environments may have the potential to improve nurse retention and job satisfaction. 
In summary, the current study reveals the importance of the work environment, career advancement prospects, personal motivations and the desire to make a meaningful contribution to health care in the career choices of SNs.By recognising and addressing these factors, health care organisations can create an environment where nurses can thrive personally and professionally. AUTH O R CO NTR I B UTI O N S All authors have made substantial contributions to the conception and design, analysis and interpretation of the data; they have been involved in drafting the manuscript or reviewing it critically for important intellectual content; they have approved the final version and agreed to be accountable for all aspects of the manuscript. fying and redirecting questions used as needed.All interviews (N = 20) were conducted and recorded via Zoom Video Communications Inc. due to the ongoing epidemic of COVID-19, with each interview lasting approximately 20-40 min (mean 27 min).The first author conducted the interviews individually with each participant. subcategories and categories.Quotes from the SN students were used to indicate that the content was derived from the data.All research team members had varying degrees of academic or non-academic preunderstanding of the topic, whether through their training, work experience or personal life experience.The team has a diverse educational background, including one member with SN training.All team members are female and have worked in various health care and academic contexts.They gained new knowledge and a deeper understanding of the phenomenon during the study, further developing their preunderstanding.According to Alvesson and Sandberg (2022), preunderstanding is not only a source of bias but also a significant asset in knowledge production.The team drew inspiration from their prior topic knowledge when choosing the research design, formulating the aim and interview questions.However, they ensured that the analysis remained transparent and close to the text to prevent preunderstanding from affecting the results. The SN students were willing to change and try new things in this subcategory.They worked as RNs in different areas but wanted to move forward and avoid stagnation.Some were restless and wanted to explore other options, including changing specialist areas in the future.I'm pretty fickle about what I do…I work in the same place for a while, then I want to do something new.[Participant (P) 1] SN students pursued specialist training immediately after their RN training or after gaining work experience.They chose to specialise after finding a stable workplace or due to life circumstances.Some were placed in different specialised areas during the COVID-19 pandemic and wanted to continue working there, while others had to decide their future after completing managerial contracts.I was working at Covid-intermediate care unit in the spring, I was loaned out, and then I got the urge again, and now I'm going to do something about it. 5. 
5.3.1 | Interest or disinterest in a particular speciality due to previous experience

In this subcategory, SN students describe how their experiences in different speciality areas have created a personal interest in a particular area or positively or negatively impacted their choice of speciality training. SN students chose their speciality based on their interest in the area or a particular aspect of the work. The exciting elements varied depending on the SN student's preference, such as deeper conversations with patients, maintaining distance, technology, structure and control. Some SN students found certain areas slow-paced, while others found them too stressful.

I like, uh, like speed, and it should be cool, and that it should be; in the emergency room/…/it was very slow, and there was a lot of waiting and a lot of talking, and I like to have a list in my head of these things I need to do, so I prioritise accordingly, and that list was often very short in psychiatry. [P4]

I enjoy creating relationships and but like getting to know the patient more/…/If you work in, for example, the maternity ward or in a medical ward, or like in a hospital, you don't even have time to talk to the patients. You never have time to put yourself in the patient's shoes and get to know them. [P6]

SN students' positive experiences in a particular area, including enjoyable work, exciting tasks and a good working environment, increased their likelihood of pursuing specialist training in similar areas. Conversely, negative experiences, such as threats and violence, high workload, poor professional relationships and inadequate skill development, deterred SN students from pursuing training in those areas.

For RNs, after 1 year in a new position, work often becomes routine with a lack of opportunities for further skill and knowledge development. Maintaining RNs' engagement and job satisfaction requires ongoing professional development and growth opportunities. To improve health care, the current study and Rahimi et al. (2019) emphasise the value of having a wide range of skills and combining different areas of expertise, including involvement in research and teaching. Rahimi et al. (2019) also acknowledge the impact of integrating different roles and skills, which aligns with the current study's findings. Also, the current study and Rahimi et al. (2019) emphasise the importance of personal growth and continuous professional development, underscoring the RNs' understanding of the significance of constant learning and improvement in nursing. Some SN students in the current study actively chose to undertake specialist training to improve their working conditions and work-life balance in a different workplace. Our results confirm the research by Tamata and Mohammadnezhad (2023), which showed that RNs with high workloads and unattractive working conditions, combined with poor job satisfaction, are more likely to explore alternative employment opportunities. Conversely, Gunn (2015) reports that RNs who experience reasonable job satisfaction are more likely to remain in their positions, especially when they find their work fulfilling, exciting and free from monotony. Our findings demonstrate the significance of the work-life balance for SN students when making career decisions. Some SN students specifically chose fields that allow them to escape physically and mentally demanding work conditions, enabling them to prioritise quality time with their loved ones. These findings correspond to the research conducted by Price et al.
(2018), which underscores RN students' strong desire to achieve a healthy work-life balance. In contrast, the study by Rognstad and Aasland (2007) reveals a shift in RNs' priorities over time. Initially, during their student years, flexible working hours, part-time work options and creative job opportunities were of greater importance. However, as RNs gained experience in the field, their priorities shifted toward wages and job security, with a decreased interest in flexibility and promotion opportunities. Furthermore, work and life experiences significantly influence the choice of a specialist area. Consistent with the current study, Alexander et al. (2015) and McKenna et al. (2010) identified specific factors that influenced RNs' decisions regarding their speciality, which were shaped by their prior personal experiences and preferences. Clinical placements during the study period allowed SN students to become familiar with and reflect on working conditions and the nature of the work, as McKenna et al. (2010) observed. Similarly, in the current study, SN students had the chance to explore different nursing fields through clinical placements or subsequent work experiences as RNs. The SN students' perceptions of a positive or negative working environment played a significant role in their choice of specialisation. Positive experiences motivated students to stay or return to a particular field.

Future research could investigate how work environment and work-life balance impact RNs' career choices and workplace preferences. It would be valuable to explore specific factors contributing to a positive work environment and its effects on RNs' well-being, motivation and engagement. In addition, future research would provide essential knowledge about SNs' experiences after completing their specialist training, to see whether the expectations SN students held before the training have remained important and have been met. Furthermore, it would be helpful to further investigate how training programmes and external support can attract and retain nurses in areas with labour shortages. These research directions can contribute to a better understanding of nurses' career choices, work environment and factors influencing their professional development, ultimately enhancing recruitment, retention and job satisfaction and thus contributing to the overall quality of care within the nursing profession.

TABLE 3 Categories and subcategories.

It was my goal when I started in the emergency room to want to be a nurse in charge, and I didn't want to be a charge nurse until I was a specialist/…/there are so many elements that I think are very important to have before becoming a charge nurse. I think that the specialist training helps me in all these areas.
2024-07-22T05:08:52.908Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "b7b9c7102eadb9862af25a6fc8418cf48fcc8abb", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "d36dcdbb6d2b9f3aee8acb0e91d3489736fac9f5", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
260351228
pes2o/s2orc
v3-fos-license
CONSTRUCT: A Program Synthesis Approach for Reconstructing Control Algorithms from Embedded System Binaries in Cyber-Physical Systems

We introduce a novel approach to automatically synthesize a mathematical representation of the control algorithms implemented in industrial cyber-physical systems (CPS), given the embedded system binary. The output model can be used by subject matter experts to assess the system's compliance with the expected behavior and for a variety of forensic applications. Our approach first performs static analysis on decompiled binary files of the controller to create a sketch of the mathematical representation. Then, we perform an evolutionary-based search to find the correct semantics for the created representation, i.e., the control law. We demonstrate the effectiveness of the introduced approach in practice via three case studies conducted on two real-life industrial CPS.

I. BACKGROUND AND THE PROBLEM

Cyber-physical systems (CPS) consist of heterogeneous physical and computational components which combine to enable critical functionalities, e.g., providing energy to a city or flying an autonomous plane for delivering packages [1]. A key part of a CPS is the embedded software that implements the control algorithms and comes in the form of binaries. For forensic investigations, especially in mission- and safety-critical industrial domains (e.g., military, energy, medical, transportation, and agriculture), it is crucial to reverse-engineer these algorithms into representation models to better understand their behavior and thus investigate the cause of failure in industrial accidents, detect counterfeit products, or even identify infringement of intellectual property. The current techniques for constructing such representations rely heavily on experts (e.g., professional programmers) who manually review the decompiled binaries and create an abstraction of the software understandable by the code inspectors [2]. This is a non-trivial, time-consuming, and error-prone task. To address these challenges, we introduce a novel approach for automatically synthesizing a mathematical representation of the controller software in a CPS. This representation enables subject matter experts to easily analyze the behavior, identify deviations from expected behavior, and avoid damaging threats. The synthesized representation follows the syntax and semantics of the Modelica modeling language [3], which is widely used in industry and is easily understandable by subject matter experts.

II. APPROACH

Figure 1 shows an overview of our approach, called Code-based Model Synthesis Platform for re-Constructing Control Algorithms (CONSTRUCT), which automatically constructs mathematical representations of controller algorithms from CPS binaries. The input to CONSTRUCT is a Functional Mock-up Unit (FMU), which is a file produced by CPS designers and contains the controller binaries as well as a textual description of the controller's I/O variables. The output of CONSTRUCT is a mathematical representation of the CPS controller in the Modelica language. The main idea is to decompile the given binary, locate controller-related instructions in the decompiled code, construct the structure of the mathematical representation, and finally, add the semantics to the created structure. Below, we provide more details on each step.

1) Decompile Binaries: From the description files in the FMU, CONSTRUCT identifies variable names and their attributes (e.g., type and initial value) that should be used in the mathematical model. It also decompiles the binary files inside the FMU using the popular Ghidra decompiler. The decompiled code contains symbolic (not actual) variable names.
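As a rough illustration of this first step, the sketch below (Python, written for this note and not taken from CONSTRUCT) shows how the controller's I/O variable names and attributes might be pulled from the FMU description file, assuming an FMI 2.0-style package in which the FMU is a zip archive containing a modelDescription.xml with ScalarVariable entries; the file name "controller.fmu" is a placeholder.

import zipfile
import xml.etree.ElementTree as ET

def read_io_variables(fmu_path):
    # An FMU is a zip archive; its modelDescription.xml lists the model variables.
    with zipfile.ZipFile(fmu_path) as archive:
        with archive.open("modelDescription.xml") as handle:
            root = ET.parse(handle).getroot()
    variables = []
    for sv in root.iter("ScalarVariable"):
        real = sv.find("Real")          # keep only Real-valued variables in this sketch
        if real is None:
            continue
        variables.append({
            "name": sv.get("name"),
            "causality": sv.get("causality"),  # e.g., "input" or "output"
            "start": real.get("start"),        # declared initial value, if any
        })
    return variables

# Example use: io_vars = read_io_variables("controller.fmu")

The list returned by such a routine would play the role of the pool of original variable names that the later synthesis step maps onto the symbolic names; in FMI 2.0-style FMUs the binaries themselves sit under the binaries/ directory of the same archive, which is what the Ghidra-based decompilation operates on.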
2) Isolate Mathematical Primitives: CONSTRUCT performs static program analysis and localizes the mathematical primitives that are used in the controller parts of the CPS. Some mathematical primitives differ in the decompiled version of the code compared to the original source code. CONSTRUCT incorporates a rule-based engine that is able to identify these complicated mathematical primitives in the decompiled code.

3) Code-level AST to Model-level AST: Next, CONSTRUCT creates ASTs of the isolated mathematical primitives in the decompiled binaries (C files) and translates them into algebraic equations in the form of Modelica ASTs. Note that, due to the decompilation of the binaries, the translated ASTs contain symbolic names, not the actual variable names used in the original source code of the controller.

4) Modelica Model Synthesis: CONSTRUCT uses a Genetic Algorithm (GA) as an evolutionary search method to find the correct mapping between the symbolic names in the Modelica AST and the original I/O variable names retrieved from the description file inside the FMU. In this setting, the ith gene in a chromosome represents an I/O variable name that should be assigned to the ith symbol in the created Modelica AST. The baseline GA approach stochastically manipulates chromosomes and then tests whether the altered chromosome results in a syntactically and semantically correct Modelica AST (i.e., correct-by-testing (CbT)). In contrast, we follow a correct-by-construction (CbC) paradigm where we carefully design the GA operators (i.e., first population generation, mutation, and cross-over) such that every generated chromosome complies with Modelica syntax and semantics. This not only prunes the search space and reduces the number of trial-and-error attempts (and hence makes the approach scalable), but also results in significantly more accurate mathematical representations. To find the error of the generated mathematical representation, we provide the same time-series input to the given binary as well as to the synthesized mathematical model, find their output distance, and compute the mean squared error (MSE).

III. EVALUATION

To showcase the performance of our approach, we present the results of our experimental studies on synthesizing mathematical representations from the binaries of three different built-in controllers shipped with Modelica, namely PI, PID, and LimPID, that are used in two real-world industrial CPS: (1) a Turtlebot Waffle Pi, which is a four-wheeled robot used in ground-mission applications, e.g., surveying the area and building maps, and (2) a PX4 Quadcopter, which is an autonomous flying robot used in different industrial applications, e.g., package deliveries. The computational complexity of the embedded controllers (and thus, of their to-be-synthesized mathematical representations) increases from the PI controller, which has only proportional (P) and integral (I) blocks, to PID, which has proportional (P), integral (I), and derivative (D) blocks, and finally to LimPID, which is more complex than the two previous controllers, e.g., it has multiple inputs. In our experiments, the population size for the GA was set to 400 and the maximum number of generations was 10.
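The following sketch (again Python and purely illustrative; all function and parameter names are invented here, and selection, cross-over, and simulation of the candidate Modelica model are omitted) is one way to read the chromosome encoding, the correct-by-construction operators, and the MSE-based fitness described above: a chromosome assigns a distinct, type-compatible I/O variable to each symbolic name, mutation only swaps assignments between symbols of the same type, and fitness compares output traces of the binary and of the candidate model on the same input.

import random

def make_chromosome(symbols, variables, symbol_type, variable_type):
    # Build one valid chromosome: each symbolic name in the Modelica AST gets a
    # distinct I/O variable of a compatible type (assumes such a variable exists).
    chromosome, used = {}, set()
    for sym in symbols:
        candidates = [v for v in variables
                      if v not in used and variable_type[v] == symbol_type[sym]]
        pick = random.choice(candidates)
        chromosome[sym] = pick
        used.add(pick)
    return chromosome

def mutate(chromosome, symbol_type):
    # CbC-style mutation: swap the assignments of two symbols that expect the same
    # type, so the child still satisfies the type constraints by construction.
    symbols = list(chromosome)
    a = random.choice(symbols)
    b = random.choice([s for s in symbols if symbol_type[s] == symbol_type[a]])
    child = dict(chromosome)
    child[a], child[b] = child[b], child[a]
    return child

def fitness_mse(model_trace, binary_trace):
    # Mean squared error between the candidate model's output trace and the trace
    # recorded from the controller binary for the same time-series input.
    pairs = list(zip(model_trace, binary_trace))
    return sum((m - b) ** 2 for m, b in pairs) / len(pairs)

Because every chromosome produced this way already respects the structural constraints, no post-hoc syntactic or semantic validity test of the kind used in the correct-by-testing baseline is needed; the search budget is spent only on behaviorally plausible candidates.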
Figure 2 summarizes the results of our comparison study of the CbC approach against the baseline CbT approach. In this figure, the X-axis denotes the generation number when we run the GA to find the correct symbol-to-variable-name mapping. The Y-axis shows the MSE calculated from the distance between the output of each of the CbT and CbC approaches and the output of the actual binary, given the same input. Even for the simplest controller (PI), our CbC approach is able to find more accurate mathematical representations than the baseline (CbT) approach. Interestingly, while in the complex cases (PID and LimPID) the CbT approach is not able to generate even one representation that complies with the Modelica language, our CbC approach can not only generate models that comply with Modelica syntax and semantics, but the synthesized representations also have relatively low error rates. The results show that the CbC approach outperforms the commonly used baseline approach, is able to generate accurate results to support subject matter experts in their investigations in industrial cases where the baseline CbT does not work, and is scalable to more complex controllers.

IV. CONCLUSION AND FUTURE WORK

In this paper, we introduced CONSTRUCT, a novel program synthesis approach that automatically creates mathematical representations of control algorithms in cyber-physical systems. The approach leverages a genetic algorithm for injecting the semantics into the created representation. In future work, we are interested in evaluating the utility of constraint-based solvers, such as SMT solvers, for making the model synthesis more efficient.
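To make the future-work idea above concrete, the following hypothetical sketch (not part of CONSTRUCT) shows how the symbol-to-variable assignment could be phrased as constraints for the Z3 SMT solver via its Python API: each symbol gets an integer slot restricted to the indices of type-compatible variables, and all slots must be distinct. Such an encoding only enforces structural constraints; behavioral correctness (low MSE against the binary's output) would still have to be checked on the returned assignments.

from z3 import Int, Solver, Distinct, Or, sat

def assign_with_smt(symbols, variables, compatible):
    # symbols: list of symbolic-name strings; variables: list of I/O variable names;
    # compatible[s]: non-empty list of indices into `variables` allowed for symbol s.
    slot = {s: Int(f"slot_{s}") for s in symbols}
    solver = Solver()
    for s in symbols:
        solver.add(Or([slot[s] == i for i in compatible[s]]))
    solver.add(Distinct(*slot.values()))      # no two symbols share a variable
    if solver.check() != sat:
        return None                           # no type-consistent assignment exists
    model = solver.model()
    return {s: variables[model[slot[s]].as_long()] for s in symbols}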
2023-08-02T06:42:36.528Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "895c76a04056ec9d3c814b26a2c702dc32442509", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "895c76a04056ec9d3c814b26a2c702dc32442509", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
253760289
pes2o/s2orc
v3-fos-license
Focused Ultrasound Stimulation of Microbubbles in Combination With Radiotherapy for Acute Damage of Breast Cancer Xenograft Model

Objective: Several studies have focused on the use of ultrasound-stimulated microbubbles (USMB) to induce vascular damage in order to enhance tumor response to radiation. Methods: In this study, power Doppler imaging was used along with immunohistochemistry to investigate the effects of combining radiation therapy (XRT) and USMB using an ultrasound-guided focused ultrasound (FUS) therapy system in a breast cancer xenograft model. Specifically, MDA-MB-231 breast cancer xenograft tumors were induced in severe combined immuno-deficient female mice. The mice were treated with FUS alone, ultrasound and microbubbles (FUS + MB) alone, 8 Gy XRT alone, or a combined treatment consisting of ultrasound, microbubbles, and XRT (FUS + MB + XRT). Power Doppler imaging was conducted before and 24 h after treatment, at which time mice were sacrificed and tumors assessed histologically. The immunohistochemical analysis included terminal deoxynucleotidyl transferase dUTP nick end labeling, hematoxylin and eosin, cluster of differentiation-31 (CD31), Ki-67, carbonic anhydrase (CA-9), and ceramide labeling. Results: Tumors receiving the combined FUS + MB and XRT treatment demonstrated a significant increase in cell death (p = 0.0006) compared to the control group. Furthermore, CD31 and power Doppler analyses revealed reduced tumor vascularization with the combined treatment compared to the control group (P < .0001 and P = .0001, respectively). Additionally, a lower number of proliferating cells, together with enhanced tumor hypoxia and ceramide content, was also observed in the group receiving the FUS + MB + XRT treatment. Conclusion: The study results demonstrate that the combination of USMB with XRT enhances treatment outcomes.

Introduction

Tumor blood vessels play an important role in providing rapidly dividing tumor cells with oxygen and nutrients. 1 Hence, damage to tumor vasculature can greatly affect tumor growth. [1][2][3] Several studies have investigated the use of ultrasound-stimulated microbubbles (USMB) to induce vascular damage, which can directly impact tumor growth or sensitize tumor cells to certain cancer treatment modalities such as chemotherapy and radiation therapy (XRT). [4][5][6][7][8] Ultrasound imaging often uses gas-filled microbubbles as a contrast agent due to their high echogenicity. 9 When exposed to ultrasound, microbubbles oscillate in response to the mechanical pressure exerted on them; this process is known as acoustic cavitation. There are 2 types of acoustic cavitation: stable and inertial cavitation. 10 Stable cavitation occurs at low ultrasound pressures and can be linear or nonlinear depending on ultrasound frequency and pressure amplitude, while inertial cavitation occurs at higher pressures and results in microbubble implosion. 10 The cavitation of the microbubble can induce shear stress, affecting the surrounding tissue. This has been suggested to have potential therapeutic applications both in vitro and in vivo. 11 The shear stress induced by USMB within the tumor microvasculature can damage endothelial cells lining the blood vessels. These effects may lead to increased vascular permeability, decreased vascular integrity, and vasoconstriction, subsequently causing vascular shutdown.
12 In addition, exposure to USMB can result in a decrease in cell viability and an increase in endothelial cell membrane permeability through a process known as sonoporation. 13 These effects were found to be dependent on treatment parameters such as ultrasound pressure, frequency, exposure time, and microbubble concentration. 14 The bioeffects of USMB open the door to a range of potential therapeutic applications including targeted drug and gene delivery into cancer cells, 15,16 induction of vascular damage or vasoconstriction to starve tumor cells, and the sensitization of tumors to anticancer treatments such as chemotherapy and XRT. 17,18 Recent studies on the radiosensitizing effects of USMB have suggested that the disruption of microvascular endothelial cells results in the activation of cell-death signaling pathways. 5 The membrane-perturbation caused by USMB can lead to an increase in acid sphingomyelinase (ASMase) activity in endothelial cells, which results in ceramide accumulation and leads to increased cell death through apoptosis. This increase in ASMase-mediated ceramide production is believed to increase the sensitivity of tumors to XRT. 4,5,20 In the study here, power Doppler imaging was used in combination with immunohistochemical analysis to investigate the effects of combining XRT and USMB using an ultrasound-imaging guided focused ultrasound (FUS) therapy system in a breast cancer xenograft model. The motivation for this work is to build on previous studies suggesting that the stimulation of microbubbles within tumor microvasculature can induce endothelial cell damage that enhances the effects of XRT. 18 Image-guided ultrasound therapy has previously been used to enhance the Spatio-temporal control of ultrasound therapy. 21 The treatment system used here improves spatial specificity by using a FUS transducer that allows for concentrating ultrasound energy in a small focal area and delivering a well-characterized ultrasound therapy beam that is precisely focused at a treatment target with the guidance of a low-frequency ultrasound imaging transducer. The main hypothesis guiding this study is that the local stimulation of microbubbles within the tumor microvasculature using FUS can enhance the effects of XRT in a breast cancer model. Tumor response assessed at 24 h following treatment demonstrated that the combination of FUS + MB with XRT improved the outcome of treatment by reducing tumor vascularization, blood flow, tumor oxygenation, and tumor cell proliferation. Furthermore, increased ceramide labeling and cell death levels were also observed in the combined FUS + MB + XRT treated group. Materials and Methods The reporting of this study confirms to ARRIVE 2.0 guidelines. 22 All experimental procedures were conducted in compliance with protocols approved by the Sunnybrook Research Institute Institutional Animal Care and Use Committee (SRI ACC, protocol 447). Animal Model Adequate care of the animals was taken following guidelines. 23 A total number of 25 animals with 5 mice were used per treatment condition. Four-week to 6-week-old female severe combined immunodeficiency mice (Charles River Canada, Saint-Constant, QC, Canada) received an injection of 100 μL of the MDA-MB-231 cell suspension in the right hind leg using a 27-gauge needle. Tumors were allowed to grow and reach an approximate diameter of 7-9 mm with a maximum diameter of 10 mm. 
Oxygen ventilated isoflurane (2%) was used to anesthetize mice during tail vein cannulation with 25-gauge catheters for microbubble injection. The animals were then injected subcutaneously with 100 µL of a ketamine and xylazine mixture (150 mg/kg ketamine mixed with 10 mg/kg xylazine in saline) prior to ultrasound imaging and treatment. Post-treatment imaging was performed at 24 h. The animals were subsequently sacrificed and tumors were excised for histology and immunohistochemistry. Animals received either no treatment or one of the following treatments: focused ultrasound (FUS) only, radiation (XRT) only, ultrasound and microbubbles (FUS + MB), or a combination of ultrasound, microbubbles, and radiation (FUS + MB + XRT). Throughout the experiments, mice were visually monitored. To maintain regular body temperature and limit vasoconstriction due to hypothermia during treatment, animals were placed under heat lamps or kept over warmed pads. Oxygen was administered if irregular respiratory rates were noticed in animals. Microbubble Preparation Definity microbubbles (Lantheus Medical Imaging, Billerica, MA, USA) were used in this study. The microbubbles were left at ambient room temperature for 30 min before being activated using a Vialmix (Lantheus Medical Imaging) for 45 s. Subsequently, the microbubbles were diluted with saline to a concentration of 1% (v/v) of mean mouse blood volume, which corresponds to 1 mL/kg. A volume of 100 µL of the diluted microbubble solution was injected into each animal immediately prior to sonication via a tail vein catheter, followed by a 150 µL saline (supplemented with 0.2% heparin) flush. Ultrasound Treatment An RK100 FUS therapy system (FUS Instruments, Toronto, Ontario, Canada) was used in this study. The device consists of a waveform generator, an electronics box containing a power meter, and an amplifier that is connected to a spherically focused piezoelectric therapy transducer. The therapy transducer has a 1.18-inch diameter, a 2.36-inch radius of curvature, a 488 kHz center frequency, and delivered pulses with 570 kPa peak negative pressure. The therapy transducer was positioned facing upwards next to a 10 MHz L14-5/8 imaging transducer connected to an Ultrasonix (BK Ultrasound, MA, USA) imaging system ( Figure 1). The imaging transducer was used to locate the center of the tumor where a treatment target was selected. The therapy transducer was then electronically guided by a computer-controlled, 3-axis motorized, positioning system, such that the transducer focus was placed at the center of the selected treatment target. Pulses, each lasting 32 µs, and with a 3 kHz pulse repetition frequency were sent in 50 ms tone bursts followed by a 1.95 s delay. This pulsing sequence was repeated for a total treatment time of 5-min. Within the 5-min treatment duration, a total of 150 bursts were sent, which resulted in a total insonation time of 750 ms and a 0.25% duty cycle. During treatment, the mouse was secured in an upright position with the tumor submerged in water. Once the therapy transducer was focused at the center of the treatment target, microbubbles were administered through the tail-vein catheter followed by a heparinsupplemented saline flush. Immediately upon microbubble injection, the tumors were exposed to ultrasound for 5 min. Radiation Therapy Tumors were exposed to 160-kVp X-rays for a dose of 8 Gy at 200 cGy/min dose rate using a cabinet irradiator (Faxitron Xray, IL, USA) immediately after the FUS + MB treatment. 
During irradiation, the animal's body was covered with a 3 mm-thick lead sheet, with the tumor exposed through a circular cut-out.

Micro-Ultrasound Doppler Imaging

In this study, power Doppler imaging was used to detect blood flow in tumor vasculature pretreatment and at 24 h post-treatment. Data were acquired using a VEVO-770 system (VisualSonics, Toronto, Canada) with a VEVO RMV 710B transducer with a central frequency of 25 MHz. Three-dimensional (3D) power Doppler imaging was carried out with a step size of 0.2 mm, a wall filter of 2.5 mm∕s, a scan speed of 2 mm∕s, medium velocity, and a 20-dB gain setting. In-house software developed in MATLAB (Mathworks Inc, MA, USA) was used to analyze power Doppler data and calculate a vascularization index (VI). The VI is defined as the fraction of tumor volume that is occupied by the Doppler signal (a minimal illustrative sketch of this computation is given at the end of this article). The animals were anesthetized with the ketamine and xylazine mixture during tumor imaging, and body temperature was maintained by resting the animal on a heating pad. The tumor-bearing leg was stretched through an opening on the side of a weighing boat and secured with surgical tape, while deionized water was used as a coupling medium for ultrasound propagation. The water was heated to 37°C to ensure normal blood flow.

Histology Preparation

Twenty-four hours after treatment administration, mice were sacrificed by cervical dislocation, and tumors were excised and fixed in 10% neutral-buffered formalin for 24 h at room temperature. The fixed tissue samples were then embedded in paraffin and sectioned into 5 µm slices for staining. Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) was used to mark regions of apoptotic cell death by labeling fragmented DNA. Hematoxylin and eosin (H&E) staining was used to evaluate gross tumor destruction. Cluster of differentiation 31 (CD31) staining was used to assess tumor vascularization by marking endothelial cells lining the blood vessels within the tumor. In addition, Ki-67 labeling, which marks a nuclear protein present only in actively dividing cells, 24 was used to identify the fraction of proliferating cells in tumors. Furthermore, carbonic anhydrase 9 (CA-9) labeling, which marks a protein expressed in the acidic environment associated with hypoxia, 25 was used to identify regions of hypoxia within tumors under different treatment conditions. Finally, to investigate the mechanism of enhanced cell death, ceramide labeling was performed.

Statistical Analysis

Statistical significance was determined using Prism (GraphPad Software Inc., La Jolla, CA, USA) one-way analysis of variance followed by the Šidák comparison test. A P-value of *P < .05, **P < .01, ***P < .001, ****P < .0001 was considered to be statistically significant. Each treatment condition was compared to the control (untreated) group. The statistical results for power Doppler and immunohistochemistry comparing each group are presented in supplementary data (S1-S6 Tables).

Results

In this study, the effect of combining localized ultrasound and microbubble treatment with XRT was assessed using power Doppler imaging and immunohistochemistry. A schematic of the experimental setup is depicted in Figure 1. Tumors treated with a combination of FUS-stimulated microbubbles (FUS + MB) or XRT demonstrated a significant increase in TUNEL staining compared to the untreated control, indicating an increase in apoptotic cell death 24 h after treatment administration. This effect was also observed in H&E sections (Figure 2A).
The tumor area with positive TUNEL staining was quantified and is presented in Figure 2B. In comparison to control, tumors treated with a single 8 Gy doses of XRT alone or a combined treatment of (FUS + MB + XRT) demonstrated significant increase in cell death by 1.84 and 2.57 fold, respectively. In order to estimate the proliferation activity of tumor cells, Ki-67 immunolabeling was conducted (Figure 3). Tumors that received no treatment showed a Ki-67 labeling index of 23 ± 4% (mean ± SE), while tumors treated with FUS alone or a single dose of 8 Gy yielded a labeling index of 15 ± 3% and 14 ± 2%, respectively. Furthermore, tumors treated with FUS + MB yielded a Ki-67 labeling index of 21 ± 3% whereas those receiving the combined FUS + MB + XRT treatment demonstrated a significant decrease in Ki-67 labeling index to 13 ± 3% compared to control. CD-31 immunohistochemical analysis was used to mark endothelial cells and the degree of tumor vascularization. Intact appearing endothelial cells were counted in 5 randomly selected regions of interest per tumor section. The results demonstrated a decrease in intact, normal-appearing vascularization in treated groups compared to the untreated control group (Figure 4). The normalized vascular index decreased from a value of 1.00 ± 0.14 in the untreated control group to 0.5 ± 0.1 (P = .0006), 0.34 ± 0.03 (P < .0001), and 0.6 ± 0.1 (P = .003) in the FUS only, XRT only, and FUS + MB groups, respectively. The vascular index in the samples that received a combination of FUS + MB + XRT significantly decreased to a value of 0.2 ± 0.02 (P < .0001). The relative change in the power Doppler vascular index before and at 24 h after treatment was assessed in this study. The results demonstrated a reduction in the vascular index in treated samples compared to the untreated control. The vascular index in the untreated control was 22 ± 5%. The FUS-only group demonstrated a vascular index of 17 ± 5%. Treatment with 8 Gy single-dose XRT resulted in a −12 ± 8% decrease in power Doppler vascular index (P = .0029) while a FUS + MB treatment alone yielded a −13 ± 10% (P = .0023) decrease in vascular index. The combination of FUS + MB + XRT resulted in a −25 ± 8% decrease at 24 h post-treatment (P = .0001) ( Figure 5). In order to investigate regions of hypoxia in the tumor section, CA-9 labeling was performed. The group that were exposed to FUS only, XRT only and FUS + MB demonstrated a CA-9 labeling index of 17 ± 4%, 17 ± 7% and 7 ± 3.2%, respectively ( Figure 6). The highest level of hypoxic areas (28 ± 7%) occurred in tumors treated with FUS + MB + XRT. This was 5.6-fold higher than the percentage of hypoxia resulting from control group (5 ± 3%). Ceramide labeling was conducted in order to evaluate the production of ceramide in tumor sections (Figure 7). Compared to the control group, tumors treated with FUS + MB + XRT exhibited a significant increase in ceramide labeling index by 2.5-fold. The ceramide labeling indices of tumors exposed to FUS alone, XRT alone, or FUS + MB remained at 17 ± 3%, 14 ± 5% or 17 ± 7%, respectively. Discussion This study investigated the effects of using acoustically driven microbubbles in combination with XRT and tested the hypothesis that a combination of FUS-stimulated microbubbles and XRT treatment can enhance the therapeutic outcomes in breast cancer xenografts in vivo. It is believed that the main mechanism of FUS-stimulated microbubble-enhanced XRT is the mechanical disruption of tumor microvasculature through acoustic cavitation. 
12 In previous work, exposure to USMB combined with chemotherapy or radiotherapy caused the highest amount of tumor cell death and vascular damage at 24 h, indicating greater tumor response. 26,27 Therefore, in our present study, we anticipated that monitoring tumor response at 24 h would capture the maximum effect of the treatment. In the study here, treatment with FUS-stimulated microbubbles combined with radiation demonstrated significantly increased tumor cell death. These results were consistent with previous studies conducted with prostate, bladder, and breast cancer xenografts. [4][5][6]19 The observed increase in cell death (Figure 2) was also accompanied by a decrease in the proliferative fraction of tumor cells, demonstrated by a decrease in Ki-67 labeling in the treated tumors (Figure 3). This was expected based on previous work and consistent with the TUNEL results, where the combined treatment resulted in the highest cell death index and correspondingly had the lowest proliferative fraction. Comparable results were reported in previous studies on prostate cancer xenografts that were treated under similar treatment conditions and assessed 24 h after treatment. 4 This decrease in Ki-67 labeling also supports the observation made by Lai et al., 19 in long-term studies of breast cancer xenografts, where tumors treated with a combination of USMB and radiation had slower tumor growth rates compared to untreated tumors. 19

Several studies have demonstrated the feasibility of using high-frequency power Doppler imaging to assess tumor vascular response. 18,28 A reduction in power Doppler signal indicates a reduction in blood flow within the tumor. 18 In the work here, a significant decrease in blood flow compared to the untreated control was observed in tumors that were treated with FUS and microbubbles only or radiation only (Figure 5). However, the most significant decrease was observed in the tumors that received a combination of the 2 treatments. These results were consistent with previous studies conducted on bladder cancer xenografts. The decrease in blood flow was consistent with the CD31 immunohistochemical analysis, where a significant reduction in tumor vascularization was observed (Figure 4). The decreases in tumor vascularization and blood flow observed here were accompanied by an increase in CA-9 labeling compared to the untreated control (Figure 6). The CA-9 protein is over-expressed in hypoxic cells. The increased hypoxia could be a direct result of vascular disruption and reduced perfusion in tumor vasculature. 29

One of the hypothesized mechanisms of FUS-stimulated microbubble-enhanced XRT is that, when injected intravenously and stimulated by ultrasound, microbubbles can exert shear stress on neighboring endothelial cells lining blood vessels, causing membrane damage. This disruption of tumor vascular endothelial cells can lead to the activation of a ceramide-mediated cell signaling pathway that triggers apoptosis, hence enhancing tumor cell killing in response to XRT. 30 To verify this mechanism, ceramide labeling was conducted in this study. The results indicated an increase in ceramide levels in treated samples compared to the untreated control (Figure 7). However, the increase in ceramide production was only statistically significant in the group that received the combined treatment.
This is consistent with the general results obtained from TUNEL staining linked to cell death and suggests that the increased ceramide levels are a potential cause of the increased cell death. It has been demonstrated by previous studies that ceramide production can increase significantly in cancer cells as well as in endothelial cells in response to XRT and USMB exposure. 20,30 Combining USMB treatment with XRT can result in vascular disruption by damaging endothelial cells lining blood vessels and decreasing blood flow to the tumor. This, in turn, can result in decreased tumor oxygenation and tumor cell proliferation, and increased cell death. In addition, endothelial cell damage induced by both USMB exposure and radiation increases ceramide production and enhances the ceramide-mediated apoptosis pathway, leading to further increases in cell death. The involvement of ceramide in vascular disruption subsequently accompanied by cell death has been extensively investigated. 31 A study conducted by Al-Mahrouki et al. 4 explored the signaling pathway activated in response to ceramide activation/production, which causes substantial damage to the vasculature following USMBs and XRT. In other work, a gene responsible for membrane biogenesis and repair that is involved in the transfer of galactose to ceramide, UDP glycosyltransferase 8 (UGT8), was experimentally upregulated or downregulated in prostate cancer xenografts. Results demonstrated that xenografts with a down-regulated UGT8 gene exhibited a higher accumulation of ceramide, followed by significant cell death, leading to a reduction in blood flow and oxygen saturation level compared to the untreated control. On the contrary, the reverse phenomenon was observed in xenografts with upregulated UGT8 levels. 31

Another explored mechanism for enhancing radiation-induced cancer cell death is overcoming tumor hypoxia. The treatment outcome with radiotherapy is known to be greatly influenced by hypoxia. [32][33][34] Preclinical data suggest that radiation activates and upregulates hypoxia-inducible factor 1 (HIF-1) levels, promoting radioresistance. The activation and accumulation of HIF-1 is known to be caused by reoxygenation after irradiation. 35 Several attempts have been made to restore the oxygen content in tumor cells, one of which includes the delivery of microbubbles carrying oxygen. An increase of 20 mmHg in oxygen content in a breast tumor model has been documented using the delivery of ultrasound-triggered oxygen-filled microbubbles. Defeating hypoxia using this technique prior to radiotherapy demonstrated greater radiosensitivity. 36,37 It is still unclear whether oxygen-carrying microbubbles have any influence on radiation-induced ceramide production. It would be interesting to see whether the improved response to XRT with oxygen-carrying microbubbles is associated with ceramide production.

The results obtained from the current study are consistent with the findings of previous studies done using simpler ultrasound therapy on breast, prostate, and bladder cancer. [4][5][6]19 However, the current study improves the spatial specificity of the treatment by using image guidance and FUS therapy, which allows for concentrating ultrasound energy in a small focal area and improves the penetration of the ultrasound beam. This represents initial work toward a framework to treat deeper targets.
It has to be pointed out that our previous studies 5,19 have shown synergistic effects following USMB and radiation in an in vivo xenograft model; however, in this study, no synergistic effect of FUS-stimulated microbubbles and radiation was detected. The reason for not observing synergy here could be treatment dependent. In those studies, different xenograft types and different concentrations of microbubbles were used. However, this needs to be validated in future work. Thus, overall, the study here demonstrates an enhanced tumor response with the combined treatment of FUS-stimulated microbubbles and XRT.

Even though the outcomes hold promise for clinical settings, the limitations of this study cannot be overlooked. Several limitations of this study are outlined in the following points. In the present work, the impact of FUS + MB and XRT on tumor blood vessels was detected using CD31 immunohistochemistry and power Doppler ultrasound. However, both these techniques are unable to differentiate perfused vessels from nonperfused ones. It is therefore essential to consider perfusion assays or perfusion imaging techniques to understand the tumor vascular architecture more precisely. Another limitation of this study is the use of the xenograft model, which does not completely recapitulate human tumor biology. Instead, using more clinically relevant orthotopic models, specifically patient-derived cell xenografts, might help mimic human tumor vasculature more closely and can help predict clinical outcomes more accurately. Another limitation of the current work is the assessment of treatment response acutely (at 24 h). Even though an enhanced tumor response, with increased tumor cell death and vascular damage following treatment, has been validated in our study, this does not directly translate into clinical settings. A longitudinal study including multiple treatment regimens to examine the treatment effects and their potential risk factors should be included in future work. In addition, monitoring tumor growth over a longitudinal period might help predict treatment response and allow treatments to be switched as early as required.

Conclusion

This study demonstrated that combining a single dose of XRT with USMBs improved treatment effects in a breast cancer xenograft model. The results indicate the possibility that lower doses of radiation, when combined with USMBs, may have the same effect as higher doses of radiation in breast tumors. Targeted stimulation of microbubbles at the tumor site can be achieved using FUS, and the precision of treatment targeting can be further improved using image guidance. The research presented in this paper is the foundation for future research that examines the use of image-guided FUS and microbubble treatment in combination with XRT in larger tumors grown in more complex animals.
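The minimal sketch referred to in the Methods above (Python with NumPy, not the authors' in-house MATLAB code) illustrates one way a power Doppler vascularization index of the kind used in this study could be computed: the fraction of tumor voxels in the 3D power Doppler volume whose signal exceeds a noise threshold. The array names and the threshold are assumptions.

import numpy as np

def vascularization_index(doppler_power, tumor_mask, noise_threshold):
    # doppler_power: 3D array of power Doppler values; tumor_mask: boolean 3D array
    # outlining the tumor; noise_threshold: minimum power counted as flow signal.
    flow_voxels = np.logical_and(tumor_mask, doppler_power > noise_threshold)
    return flow_voxels.sum() / tumor_mask.sum()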
2022-11-23T06:17:32.939Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "2ff5e04481a30c0fd5134a8f171179dff9adc152", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1177/15330338221132925", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fc67ef806c00848276e8979b3bb41c8753adbcd2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248392334
pes2o/s2orc
v3-fos-license
On a class of interpolation inequalities on the 2D sphere We prove estimates for the $L^p$-norms of systems of functions and divergence free vector functions that are orthonormal in the Sobolev space $H^1$ on the 2D sphere. As a corollary, order sharp constants in the embedding $H^1\hookrightarrow L^q$, $q<\infty$, are obtained in the Gagliardo--Nirenberg interpolation inequalities. Introduction The following interpolation inequality holds on the sphere S d (see [1] and also [2]): Here dµ is the normalized Lebesgue measure on S d : , so that µ(S d ) = 1 (the gradient is calculated with respect to the natural metric). Next, q ∈ [2, ∞) for d = 1, 2, and q ∈ [2, 2d/(d − 2)] for d ≥ 3. The remarkable fact about (1.1) is that the constant (q − 2)/d is sharp for all admissible q. The inequality clearly degenerates and turns into equality on constants. The fact that the constant (q − 2)/d is sharp is verified by means of the sequence ϕ ε (s) = 1 + εv(s) as ε → 0, where v(s) is an eigenfunction of the Laplacian on S d corresponding to the first positive eigenvalue d, see [3] and the references therein. However, in applications (for instance, for the Navier-Stokes equations on the 2D sphere) the functions ϕ usually play the role of stream functions of a divergence free vector functions u, u = ∇ ⊥ ϕ, and therefore without loss of generality ϕ can be chosen to be orthogonal to constants. In this work we consider the two-dimensional sphere S 2 only and are interested in writing the Sobolev embedding H 1 (S 2 ) ֒→ L q (S 2 ) as a multiplicative inequality of Gagliardo-Nirenberg type involving the L 2 -norms of ϕ and ∇ϕ on right-hand side: ϕ L 2 (S 2 ) =: ϕ and ∇ϕ L 2 (S 2 ) =: ∇ϕ . It is also well known that in the case of R d interpolation inequalities in the additive form and in the multiplicative form are equivalent and the passage from the former to the latter is realized by the introduction of the parameter m in the inequality (by scaling x → mx) and subsequent minimization with respect to m. To go other way round one can use Young's inequality (with parameter) for products to obtain the interpolation inequality in the additive form. This scheme obviously does not work on a manifold due to the lack of scaling. One possible way to introduce a parameter in the Sobolev inequality is to consider the Sobolev space H 1 with norm and scalar product depending on a parameter m > 0, and then to trace down the explicit dependence of the embedding constant on m. In this work this is done in much more general framework of the inequalities for H 1 -orthonormal families proved in [4]. We can now state and discuss our main result. Theorem 1.1. Let a family of zero mean functions {ϕ j } n j=1 ∈Ḣ 1 (S 2 ) be orthonormal with respect to the scalar product (1. 2) Then for 1 ≤ p < ∞ the function These inequalities were proved in the case of R d in [4] for p = ∞ (d = 1), 1 ≤ p < ∞ (d = 2), and for the critical p = d/(d−2) (d ≥ 3). No expressions for the constants were given, the dependence on m is again uniquely defined by scaling, and the main interest there was in the dependence of the right hand side on n. For p = 2 this inequality has played an essential role in finding explicit optimal bounds for the attractor dimension for the damped regularized Euler-Bardina-Voight system for various boundary conditions both in the two and three dimensional cases, see [5,6,7]. 
More precisely, it was shown in [5,7] that B 2 ≤ (4π) −1/2 for T 2 , S 2 , and R 2 based on the following two inequalities for the lattice sum over Z 2 0 = Z 2 \ {0, 0} and the series with respect to the spectrum of the Laplacian on S 2 that were proved there for the special case, when p = 2 The case p = 2 is not at all specific in the general scheme of the proof of Theorem 1.1 and the general case in the theorem both for T 2 and S 2 immediately follows once we have inequality (1.5), (1.6) for all 1 < p < ∞. Inequality (1.5) and therefore Theorem 1.1 for the torus T 2 has recently been proved in [8], and the main result of this work is the proof of (1.6) and Theorem 1.1 for the sphere. We point out that in the case of R 2 , instead of (1.5) and (1.6) we simply have the equality (1.7) For one function (n = 1) Theorem 1.1 is equivalent to the Sobolev inequality with parameter H 1 ֒→ L q , q = 2p ∈ [2, ∞), which can equivalently be written as a Gagliardo-Nirenberg inequality which holds for R 2 , T 2 and S 2 , see Corollary 2.1. For the torus T 2 inequality (1.8) can be proved in a direct way [8] by using the Hausdorff-Young inequality for the discrete Fourier series and again estimate (1.5). In the case of R 2 this approach is well known and with the additional use of the Babenko-Beckner inequality [9,10] for the Fourier transform (and equality (1.7)) gives the following improvement of inequality (1.8) for R 2 with the best to date closed form estimate for the constant [11]: see also [12,Theorem 8.5] where the equivalent result is obtained for the inequality in the additive form. Of course, inequality (1.9) for R 2 and inequality (2.4) for T 2 both are a special case of Gagliardo-Nirenberg inequality. For R 2 the best constant is known for every q ≥ 2 and is expressed in terms of a norm of the ground state solution of the corresponding nonlinear Euler-Lagrange equation [13]. However, not in the explicit form. As mentioned above, inequality (1.9) was known before, while inequality (2.4) (more precisely, the estimate for the constant in it) for the torus T 2 was recently obtained in [8]. As far as the case of the sphere S 2 is concerned we do not know how to prove (1.8) in a way other than the one function corollary of the general Theorem 1.1. The main difference from the case of T 2 is that the orthonormal spherical functions are not uniformly bounded in L ∞ . Our approach makes it possible to prove similar inequalities in the vector case. Namely, we show that for u ∈ H 1 0 (Ω) ∩ {div u = 0} it holds Here Ω ⊆ S 2 is an arbitrary domain on S 2 . This inequality looks very similar to (1.8), the important difference being that, unlike the scalar case, the vector Laplacian on S 2 is positive definite, and we can freely use the extension by zero. Finally, it is natural to compare inequalities (1.1) with d = 2 and (1.8) for functions with mean value zero. To do so we go over to the natural measure on S 2 in (1.1) and then use the Poincare inequality ϕ 2 ≤ 2 −1 ∇ϕ 2 to obtain: The constant here is marginally smaller, since Since inequality (1.1) turns into equality on constants, this inequality may not be sharp on the subspace of zero mean functions on S 2 , and the constant in (1.8) is not sharp. However, looking at (1.8) and (1.9) for T 2 , S 2 and for R 2 , respectively, one can suggest that that the sharp constant here is The expression on the right-hand side here curiously coincides with sharp constant in the Sobolev inequality for the limiting exponent, see [14,15]: if we formally set d = 2. 
Of course, this inequality does hold in R 2 , since d ≥ 3 in it. Theorem 1.1 and the similar result in the vector case are proved in the next Section 2, and the key estimate for the series (1.6) is proved in Section 3. Proof of the main result Proof of Theorem 1.1. We first recall the basic facts concerning the spectrum of the scalar Laplace operator ∆ = div ∇ on the sphere S 2 (see, for instance, [16]): Here the Y k n are the orthonormal real-valued spherical harmonics and each eigenvalue Λ n := n(n + 1) has multiplicity 2n + 1. The following identity is essential in what follows: for any Since inequality (1.3) with (1.4) clearly holds for p = 1 we assume below that 1 < p < ∞. Let us define two operators where V ∈ L p , is a non-negative scalar function and Π is the projection onto the space of functions with mean value zero: Then K = H * H is a compact self-adjoint operator in L 2 (S 2 ) and for r = where we used the Araki-Lieb-Thirring inequality for traces [17,18,19]: and the cyclicity property of the trace together with the facts that Π commutes with the Laplacian and that Π is a projection: Π 2 = Π. Using the basis of orthonormal eigenfunctions of the Laplacian (2.1) and identity (2.2), in view of the key estimate (3.1) proved below we find that We can now argue as in [4]. We observe that where ψ j = (m 2 − ∆) 1/2 ϕ j , j = 1, . . . , n. Next, in view of (1.2) the ψ j 's are orthonormal in L 2 and in view of the variational principle where λ i > 0 are the eigenvalues of the operator K. Therefore Corollary 2.1. The following interpolation inequality holds for ϕ ∈Ḣ 1 (S 2 ): Proof. For n = 1 inequality (1.3) goes over to Minimizing with respect m we obtain The inequality for H 1 -orthonormal divergence free vector functions on S 2 and the corresponding one function interpolation inequality are similar to the scalar case. Proof. The case p = 2 was treated in [7]. Once we now have (3.1) for all 1 < p < ∞ the proof of the theorem is completely analogous. To make the paper self contained we provide some details. Next, by the vector Laplace operator acting on (tangent) vector fields on S 2 we mean the Laplace-de Rham operator −dδ − δd identifying 1-forms and vectors. Then for a two-dimensional manifold we have [23] ∆u = ∇ div u − rot rot u, where the operators ∇ = grad and div have the conventional meaning. The operator rot of a vector u is a scalar and for a scalar ψ, rot ψ is a vector: rot u := div(u ⊥ ), rot ψ := ∇ ⊥ ψ, where in the local frame u ⊥ = (u 2 , −u 1 ), that is, π/2 clockwise rotation of u in the local tangent plane. Integrating by parts we obtain (−∆u, u) = rot u 2 L 2 + div u 2 L 2 . Corresponding to the eigenvalue Λ n = n(n + 1), where n = 1, 2, . . . , there is a family of 2n + 1 orthonormal vector-valued eigenfunctions w k n (s) of the vector Laplacian on the invariant space of divergence free vector-functions, that is, the Stokes operator on S 2 w k n (s) = (n(n + 1)) −1/2 ∇ ⊥ Y k n (s), −∆w k n = n(n + 1)w k n , div w k n = 0; where k = 1, . . . , 2n + 1, and (2.5) implies the following identity: We finally observe that −∆ is strictly positive −∆ ≥ Λ 1 I = 2I. Turning to the proof we first consider the whole sphere Ω = S 2 , and as in (2.3) define two operators where Π is the orthogonal Helmholtz-Leray projection onto the subspace {u ∈ L 2 (S 2 ), div u = 0}. From this point, using (2.6), we can complete the proof as in the scalar case. 
Finally, if Ω S 2 is a proper domain on S 2 , we extend by zero u j outside Ω and denote the results by u j , so that u j ∈ H 1 (S 2 ) and div u j = 0. We further set ρ(x) := n j=1 | u j (x)| 2 . Then setting ψ i := (m 2 − ∆) 1/2 u i , we see that the system { ψ j } n j=1 is orthonormal in L 2 (S 2 ) and div ψ j = 0. Since clearly ρ L 2 (S 2 ) = ρ L 2 (Ω) , the proof reduces to the case of the whole sphere and therefore is complete. and gives the estimate of the constant c Lad ≤ 1/π. However, a recent estimate of it in [20] in the terms of the Lieb-Thirring inequality is slightly better: c Lad ≤ 3π/32. On the other hand, (2.4) works for all q ≥ 2 and provides a simple expression for the constant. Remark 2.2. The rate of growth as q → ∞ of the constant both in (2.4) and (1.9), namely q 1/2 , is optimal in the power scale. If we had not imposed the zero mean condition for the sphere, it would have immediately followed from (1.1) with d = 2. In the general case, if in (2.4) and (1.9) the rate of growth was less than 1/2, then the Sobolev space H 1 in two dimensions would have been embedded into the Orlicz space with Orlicz function e t 2+ε − 1, ε > 0, which is impossible [21]. Furthermore, while for every fixed q < ∞ the constant in (2.4) and (1.9) is not sharp, we think, as mentioned before, that the sharp constant c q behaves like Proof of estimate (1.6) Proposition 3.1. The following inequality holds for p > 1 and m ≥ 0 2n + 1 (m 2 + n 2 + n) p < 1. (3.1) Proof. A general argument shows that inequality (1.6) holds for all sufficiently large m. In fact, we observe that we can write I p (m) in the form The following asymptotic expansion as m → ∞ holds for this type of series (see [25,Lemma 3.5]) Therefore for a fixed p > 1 there exists a sufficiently large m = m p such that inequality (3.1) holds for all m ≥ m p . The proof that it holds for all p > 1 and m ≥ 0 requires some specific work. We will use the Euler-Maclaurin summation formula (see, for example, [24]). Namely, we use the formula with remainder term where B 4 (x) is the periodic Bernoulli polynomial. The remainder term R 4 in this formula can be estimated as where ζ(4) = π 4 90 and ζ(s) is the Riemann zeta function. We will use this formula for relatively big m and We now change the sign of the second term in the above expression and set Then, obviously, |f ′′′′ m (x)| ≤ g(x) for all x. On the other hand, the integral of g(x) can be computed explicitly (since g(x) contains odd powers of (x−1/2) in the numerators, hence the corresponding antiderivatives are expressed in elementary functions): Thus, the Euler-Maclaurin formula (3.2) gives us the estimate We now consider two cases: p ∈ (1, 2] and p > 2. So let p ∈ (1, 2]. The maximum value of m 0 (p) on p ∈ (1, 2] is attained at p = 2, so we have proved the desired inequality Thus, we only need to verify the desired inequality for m < m 0 . We single out the first term in the series and drop the the dependence on m in the remaining terms. We obtain We again apply the Euler-Maclaurin formula to the series R(p) (taking into account that the summation now starts with n = 2). Setting and f ′′′′ (n) = (2n + 1)p(p + 1) (n 2 + n) p+4 (16p 2 − 4)n 4 + (32p 2 − 8)n 3 + +(24p 2 + 20p + 4)n 2 + (8p 2 + 20p + 8)n + p 2 + 5p + 6 . This completes the proof of inequality (3.1) for p ∈ (1, 2]. We are now ready to verify inequality (3.1) for p > 2 as well. The key idea here is to use the fact that I p (m) is monotone decreasing with respect to p for 0 ≤ m ≤ m 1 (p), where m 1 (p) is given below. 
Indeed, let Then and we see that the derivative is negative for all n ∈ N if Let now p > 2 be fixed. Two cases are possible: m_0(p) < m_1(p) and m_1(p) ≤ m_0(p). In the first case inequality (3.1) holds for all m: if m > m_0(p), it holds in view of (3.4) and (3.5), while if m < m_0(p) < m_1(p), it holds in view of the established monotonicity with respect to p and the fact that (3.1) holds for p = 2. In the second case we first find the interval of p on which the inequality m_1(p) ≤ m_0(p) actually holds. Namely, it holds for p ∈ [2, p_*], p_* = 2.10915 . . . , see Fig. 1, where the unique p_* is found numerically. Thus, inequality (3.1) holds for p > p_*, and we only need to look at the interval p ∈ [2, p_*]. Furthermore, since m_0(p) in (3.5) is monotone increasing, we only need to check where z = m^2/2 ≤ z_* = m_*^2/2 = 0.6832 < 1 and m_* = 1.169 < √2. Inequality (3.1) is now proved for the whole range of parameters and the proof is complete. Remark 3.1. The case p = 2, which is important for applications, was treated by more elementary means in [7]. Remark 3.2. Calculations show that for each p tested the function I_p(m) is monotone increasing with respect to m. We are not able to prove this rigorously at the moment. However, it was shown in [8] that the lattice sum J_p(m) in (1.5) is monotone increasing in m, which obviously implies inequality (1.5), since J_p(∞) = 1.
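As a quick numerical sanity check of the series bound in Proposition 3.1 and of the monotonicity behaviour mentioned in Remark 3.2, the sum over n ≥ 1 of (2n+1)/(m^2 + n^2 + n)^p can be bracketed by a partial sum plus an exact integral bound for its tail. The Python sketch below is only an illustration: the displayed inequality (3.1) appears to have lost its summation sign, and possibly a normalizing prefactor, in extraction, so the function name, the truncation point and the bare (unnormalized) form of the sum are choices of this example rather than a restatement of the proposition.

```python
# Numerical bracketing of S_p(m) = sum_{n>=1} (2n+1) / (m^2 + n^2 + n)^p for p > 1, m >= 0.
# Illustration only: the quantity in (3.1) may carry a normalizing prefactor that was lost
# in extraction; here the bare sum is evaluated.

def series_with_tail(p: float, m: float, N: int = 100_000) -> tuple[float, float]:
    """Return (lower, upper) bounds for S_p(m): the partial sum up to n = N, and the
    partial sum plus an integral bound for the tail. The summand is decreasing for
    large n, so the tail is bounded by the exact integral of (2x+1)(m^2+x^2+x)^(-p)."""
    partial = sum((2 * n + 1) / (m * m + n * n + n) ** p for n in range(1, N + 1))
    # int_N^inf (2x+1)(m^2+x^2+x)^(-p) dx = (m^2 + N^2 + N)^(1-p) / (p - 1),  p > 1
    tail = (m * m + N * N + N) ** (1 - p) / (p - 1)
    return partial, partial + tail

if __name__ == "__main__":
    for p in (1.5, 2.0, 2.2, 3.0):
        for m in (0.0, 0.5, 1.0, 2.0):
            lo, hi = series_with_tail(p, m)
            print(f"p={p:<4} m={m:<4} sum in [{lo:.6f}, {hi:.6f}]")
```

The integral tail bound, rather than plain truncation, is used because for p close to 1 the series converges slowly and truncation alone would understate the sum.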
Comparative analysis of HPV16 gene expression profiles in cervical and in oropharyngeal squamous cell carcinoma Human papillomavirus type 16 (HPV16) is the major cause of cervical cancer and of a fraction of oropharyngeal carcinoma. Few studies compared the viral expression profiles in the two types of tumor. We analyzed HPV genotypes and viral load as well as early (E2/E4, E5, E6, E6*I, E6*II, E7) and late (L1 and L2) gene expression of HPV16 in cervical and oropharyngeal cancer biopsies. The study included 28 cervical squamous cell carcinoma (SCC) and ten oropharyngeal SCC, along with pair-matched non-tumor tissues, as well as four oropharynx dysplastic tissues and 112 cervical intraepithelial neoplasia biopsies. Viral load was found higher in cervical SCC (<1 to 694 copies/cell) and CIN (<1 to 43 copies/cell) compared to oropharyngeal SCC (<1 to 4 copies/cell). HPV16 E2/E4 and E5 as well as L1 and L2 mRNA levels were low in cervical SCC and CIN and undetectable in oropharynx cases. The HPV16 E6 and E7 mRNAs were consistently high in cervical SCC and low in oropharyngeal SCC. The analysis of HPV16 E6 mRNA expression pattern showed statistically significant higher levels of E6*I versus E6*II isoform in cervical SCC (p = 0.002) and a slightly higher expression of E6*I versus E6*II in oropharyngeal cases. In conclusion, the HPV16 E5, E6, E6*I, E6*II and E7 mRNA levels were more abundant in cervical SCC compared to oropharyngeal SCC suggesting different carcinogenic mechanisms in the two types of HPV-related cancers. INTRODUCTION Cancers of the cervix and of the head and neck region are among the most common tumors in the world accounting for approximately 528,000 and 686,000 new cases in 2012, respectively [1].Although cervical cancer incidence has decreased over the last decades in many high-resource countries, due to the introduction of cervical screening programs, a stable or increasing incidence has been reported in low-income countries [2][3][4].In parallel, the incidence of head and neck tumors, including larynx, hypopharynx and oral cavity carcinoma, has decreased as a consequence of smoking and alcohol use decline [5,6].However, during the same time period an increased incidence of oropharyngeal cancer has been observed in European countries, in Australia, Canada, and in the United States suggesting a new and emerging risk factor particularly among young men [7,8]. Twelve high risk human papillomaviruses (HPV) have been recognized as the necessary cause of cervical cancer and of a subgroup of head and neck tumors.Indeed, the viral DNA has been identified in more than Research Paper: Pathology 99% of cervical carcinoma and in approximately 30% of head and neck tumors with a greater prevalence in the oropharyngeal SCC (45.8%) [9][10][11][12].The HPV16 is the predominant cause of infection in both cervical and oropharyngeal SCC accounting for 65-70% and 82-87% of all HPV-positive cases, respectively, across the world [12][13][14]. 
The pathogenesis of oncogenic HPVs has been well studied in cervical neoplasia.The HPV infection initiates at the basal layer of the epithelium with the expression of early proteins E5, E6 and E7 which stimulate cell growth and viral DNA replication [15][16][17][18][19].During the productive phase of HPV infection the viral genomes markedly replicate in the spinous layers and are encapsidated in the upper terminally differentiated epithelia where the late viral genes L1 and L2 are highly expressed [15,20].In cervical intraepithelial neoplasia grade 3 (CIN 3) and cervical carcinoma the virus undergoes abortive infection in which early genes E6 and E7 are expressed in all epithelial layers and cause the alteration of a number of cellular pathways involved in cell cycle regulation and apoptosis [15,20,21].Specifically, the E6 protein binds to p53 oncosuppressor causing its ubiquitination and proteasomal degradation [22][23][24], and E7 oncoprotein abrogates pRB activity [25][26][27].The HPV16 E5 protein induces dimerization of the epidermal growth factor receptor (EGFR) on the cell membrane concurring to the activation of mitogenic signals [18,28]. The abortive infections are often accompanied by the integration of viral genome into human chromosomes determining the over expression of E6 and E7 oncoproteins either by the interruption of E2-mediated transcriptional repression or by functional alterations in the long control region [29][30][31][32]. Transcription of HPV16 E6 and E7 as well as E1, E2, E4 and E5 early viral genes, driven by enhancer elements located in the long control region and early promoter P97, produces multiple polyadenylated mRNAs containing exonic and intronic sequences which are alternatively spliced [33].The alternative splicing of full-length polycistronic E6/E7 mRNAs, through the differential use of the splice donor at nucleotide 226 and splice acceptors at nucleotides 409 and 526, produces the E6*I and E6*II isoforms, respectively [33][34][35].The E6*I and E6*II transcripts have shown to be more abundant in high grade neoplasia and invasive cancers than low grade cervical lesions [36].The post-transcription regulation of HPV16 genes is controlled by several cellular splicing factors, including serine-arginine-rich (SR) proteins, heterogeneous nuclear ribonucleoproteins (hnRNPs), cleavage stimulation factor 64 kDa subunit (CSTF64) and CUG triplet repeat RNA-binding protein1 (CUGBP1), which are highly expressed in basal and middle layers cells of the cervical epithelium [37,38].No study has compared the HPV16 gene expression profiles in cervical and oropharyngeal SCC. The aim of our study was to analyze the viral load and expression levels of HPV16 early genes, particularly E5, E6, E6*I, E6*II and E7 genes as well as L1 and L2 late genes in cervical carcinoma and in oropharyngeal cancer.We sought to identify similarities and differences in viral related transformation mechanisms between the two types of tumor. 
HPV16 gene expression analysis Viral gene expression profiles were analyzed in all HPV16-positive samples by real time PCR using SiHa cell line cDNA as a positive control.The GAPDH cDNA was amplified in all samples to normalize the viral gene expression levels.The RNA quality was suitable for the analysis in 10 cervical SCC, six CIN, seven oropharyngeal SCC and one oropharyngeal dysplasia, along with three cervical and five oropharyngeal paired non-tumor tissues and one paired non-dysplastic tissue.Oligonucleotide pairs encompassing the untranslated region upstream the HPV16 promoter P97 were used to exclude the presence of HPV16 DNA in the cDNA samples (Figure 2). The HPV16 E6 full length mRNA was detected in all cervical SCC, in 14.3% (1 out of 7) of oropharyngeal SCC and 17% (1 out of 6) of CIN.The full E6 transcripts were also detected in 33.3% (1 out of 3) and in 20% (1 out of 5) of cervical and oropharyngeal paired nontumor tissues, respectively (Figure 3).The E6*I and E6*II isoforms were both expressed in all cervical SCC, in 50% (3 out of 6) of CIN, in 14.3% (1 out of 7) of oropharyngeal SCC, in 33.3% (1 out of 3) and 20% (1 out of 5) of cervical and oropharyngeal paired non-tumor tissues, respectively (Figure 3).The analysis of E6 mRNA patterns showed that E6*I was more expressed than fulllength E6 and E6*II in all samples, but the difference between E6*I and E6*II levels was statistically significant only in cervical SCC (p = 0.002).The oncogene E7 was highly expressed in all cervical SCC and paired non-tumor tissues, in 83% (5 out of 6) of CIN, in 14.3% (1 out of 7) of oropharyngeal SCC and in 40% (2 out of 5) of paired non-tumor tissues (Figure 3).The E6, E6*I, E6*II and E7 levels were significantly higher in cervical SCC than in paired non-tumor tissues, CIN and oropharyngeal SCC (p < 0.02).A positive linear correlation was observed in cervical SCC between E6*I and E7 expression levels (R = 0.68, p = 0.04) and between E6*II and E7 (R = 0.78, p = 0.01), (Figure 4).In oropharyngeal dysplasia only E6*I and E7 were found expressed, while none of viral mRNAs were detected in paired non-dysplastic tissues. The HPV16 E2/E4 transcripts were detected at low levels in 80% (8 out of 10) of cervical SCC and in 17% (1 out of 6) of CIN, but not detected in oropharyngeal SCC.In one CIN only E4 mRNA and not E2 was revealed.The levels of E2/E4 and E5 mRNAs were higher in cervical SCC than in CIN, but the difference was statistical significant only for E2/E4 levels (p < 0.01).The E5 mRNA was identified in 80% (8 out of 10) of cervical SCC, in 33.3% (2 out of 6) of CIN, and in none of oropharyngeal SCC and paired non-tumor tissues (Figure 3). L1 and L2 bicistronic transcripts were expressed at low levels in all cervical SCC and in 83% (5 out of 6) of CIN, however no expression was observed in oropharyngeal SCC and in paired non-tumor tissues. There was no statistically significant correlation between viral load and oncogene expression in all analyzed samples. 
DISCUSSION Persistent infection with high risk HPVs, mainly HPV16, is strictly associated with cervical SCC [39] and is becoming an emerging etiological factor for oropharyngeal SCC [40].In this study the viral load and the expression pattern of HPV16 early (E2, E4, E5, E6, E6*I, E6*II, E7) and late (L1 and L2) genes were analyzed in cervical SCC and paired tissues, in CIN, in oropharyngeal SCC and paired non-tumor tissues, as well as in oropharyngeal dysplasia to uncover differences in HPV-related transformation mechanisms in different types of HPV-related tumors.In agreement with previous studies, we found that HPV16 was the most frequent genotype in cervical and oropharyngeal cancer [12,14,41]. Studies on the positive correlation between HPV16 viral load and cervical or oropharyngeal SCC are controversial [42][43][44][45], with some reporting that viral copy number increases with CIN severity and is highest in cervical SCC [46].We found that HPV16 viral load was broadly variable especially in cervical SCC with a viral copy number significantly higher than in cervical paired non-tumor tissues.In oropharynx we found a slightly higher viral load in non-tumor tissues than in carcinoma.Until now no studies have compared the viral load in oropharyngeal SCC versus autologous non-tumor tissues.A recent study by Dang and Feng (2016) showed that oral and oropharyngeal cancer samples contained a higher HPV16 DNA load than normal tissues from non-cancer patients [47], however these data are not informative regarding the difference of viral copy number between non-tumor tissues and oropharyngeal carcinoma in the same patient. Several studies showed that HPV gene expression pattern could be useful as viral molecular marker of tumor progression [48][49][50].Analysis of E6 and E7 mRNA levels, together with other clinical data, seems useful to assess the risk of progression of cervical SCC and oropharyngeal SCC cases [43,51,52].In our study, the E6 and E7 expression in oropharyngeal SCC and paired non-tumor tissues was much lower than in cervical SCC.These results are consistent with previous studies reporting that while HPV is always transcriptionally active in cervical SCC the viral oncogenes may not be expressed in HPV16 DNA positive oropharyngeal cancer suggesting that the virus in some HPV-positive head and neck cancers is not the main carcinogenic factor [53,54].Oropharyngeal SCC patients without biologically active HPV are considered at risk for cancer progression similarly to HPV-negative smoker patients [55][56][57]. Holzinger et al (2012) reported that both high viral load and increased expression of E6 and E7 mRNAs could define oropharyngeal SCC with active HPV16 involvement.However, we observed that there was no significant correlation between viral copy number and E6 and E7 viral mRNA levels in both cervical and oropharyngeal SCC [50].Our observation is in agreement with previous studies reporting no correlation between DNA copy number and the E6 and E7 mRNA expression in cervical cancer, implying that not all viral genomes are transcriptionally active in tumors and cancer derived cell lines [58].Paradigmatic is the highly divergent HPV16 copy number in CaSki (above 400 viral copies per cell) and SiHa cells (1 copy per cell) and the comparable levels of E7 transcripts (41.6 ± 1.5 and 14.8 ± 2.6 transcript copies per cell in CaSki and SiHa cells, respectively) in the two cervical cancer derived cell lines [59]. 
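A back-of-the-envelope calculation with the figures quoted above makes this point concrete: normalizing E7 transcript levels by genome copy number shows that the two cell lines differ by roughly two orders of magnitude in transcripts produced per viral genome. The short Python sketch below simply re-expresses the published numbers; the figure of 400 genomes per cell for CaSki is used as a round lower-bound value ("above 400 copies").

```python
# Re-expressing the quoted CaSki/SiHa figures as E7 transcripts per viral genome copy.
# CaSki: >400 HPV16 genomes/cell, 41.6 E7 transcripts/cell; SiHa: 1 genome/cell, 14.8 transcripts/cell.

cell_lines = {
    "CaSki": {"genomes_per_cell": 400, "e7_transcripts_per_cell": 41.6},
    "SiHa": {"genomes_per_cell": 1, "e7_transcripts_per_cell": 14.8},
}

for name, d in cell_lines.items():
    per_genome = d["e7_transcripts_per_cell"] / d["genomes_per_cell"]
    print(f"{name}: ~{per_genome:.2f} E7 transcripts per viral genome copy")
# CaSki: ~0.10 transcripts per genome; SiHa: ~14.80 -- consistent with only a fraction of
# the genomes being transcriptionally active when the copy number is high.
```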
The over expression of HPV16 E6*I and E6*II isoforms has been also shown to modify the levels of many proteins involved in mitochondrial dysfunction and oxidative phosphorylation in C33A cervical cells, and the β-integrin signaling pathway in HPV16-positive SiHa cells [66].In high grade cervical intraepithelial neoplasia and in cervical cancer the E6*I is more expressed than E6*II [36,68] and E6*I/E6*II ratio seems to be a predictive marker of clinical outcome in HPV-related oropharyngeal cancer [69]. In the present study the expression levels of HPV16 E6*I and E6*II were found high in all cervical SCC but not in oropharyngeal SCC.In agreement with other studies, we observed a linear correlation between E6*I and E7 mRNA as well as between E6*II and E7 mRNA [33,70]. Interestingly, in the analyzed oropharyngeal dysplasia the E7 oncogene and the E6*I isoform were both detected leading to the hypothesis that HPV oncogenic activity may be important for the early phases of oropharyngeal neoplasia [53,71].It has been suggested that during the early stages of oropharyngeal carcinogenesis the viral oncoproteins may synergize with other carcinogens, such as smoke and alcohol, and may increase the risk of tumor progression [53,71].In our study the HPV16 E7 oncogene was much more expressed in cervical SCC than oropharyngeal SCC, suggesting that the expression of E7 is the main driver of tumor cell proliferation in cervical cancer but not in oropharyngeal SCC [72]. Several studies showed that E5 expression synergizes with E6 and E7 causing a more severe cancer phenotype [17,73].In our study the E5 expression was variable in cervical SCC, low or absent in CIN (except one sample showing also high E6 and E7 mRNA levels) and in cervical paired non-tumor tissues.Variable levels of E5 in cervical SCC could be due to the presence of both episomal and integrated forms of HPV16, as reported by Das et al. (2015) [74].In fact, integration of HPV16 in the host genome could disrupt E5 ORF together with E2 ORF [75]. In oropharyngeal SCC and respective paired nontumor tissues the E5 mRNA was found not expressed, in accordance to previous studies reporting variable expressions of E5 in head and neck cancer [73,76].The E2/E4 as well as L1 and L2 gene expression was generally low in cervical SCC and CIN and undetectable in oropharyngeal SCC [58,77]. The main limitations of our study are the limited number of patients and the inability to evaluate long term However, this is the first study comparing the HPV16 traits between the cervical and oropharyngeal SCC along with autologous non-tumor tissues. In conclusion, this study confirmed that HPV16 is highly prevalent in cervical and oropharyngeal SCC.However the viral load is very low in oropharyngeal SCC compared to cervical SCC and importantly the viral oncogene mRNA levels and expression profiles are very different between cervical SCC and oropharyngeal SCC.Indeed, E5, E6, E6*I, E6*II and E7 mRNA were significantly more abundant in cervical SCC than in oropharyngeal SCC suggesting the presence of different carcinogenic mechanisms in the two different virus-related tumors. 
Patients and samples Twenty-eight cervical SCC biopsies along with paired non-tumor tissues and 112 cervical biopsies, comprising 40 cases of borderline to mild dyscaryosis (BMD) cytology and normal histology, 66 cervical intraepithelial neoplasia (CIN) grade 1, four CIN2 and two CIN3 biopsies, obtained from patients attending the Gynecology Unit of Istituto Nazionale Tumori "Fond Pascale" from November 2013 to December 2015.Ten oropharyngeal SCC and paired non-tumor tissues, as well as four oropharyngeal dysplastic biopsies with paired nondysplastic tissues, were obtained from patients referred to the Head and Neck Surgery Unit of the Istituto Nazionale Tumori "Fond Pascale" from January 2012 to December 2015 (Table 1).Each biopsy was divided in two sections: the first section was stored in RNA Later (Ambion, Austin, Texas) at -80°C, the second was subjected to histopathologic examination.Similarly, paired non-tumor biopsies were divided in two sections and processed for molecular analysis and histopathologic examination.This study was approved by the Institutional Scientific Board and by the Ethical Committee of the Istituto Nazionale Tumori "Fond Pascale", and it is in accordance with the principles of the Declaration of Helsinki. HPV detection was carried out by nested PCR amplifying 300 ng of genomic DNA with MY09/MY11 primer pairs [79] for the outer reaction and MGP primer system for the inner reaction in 50μl reaction mixture containing 5μl of outer reaction, as previously described [80].HPV genotypes were identified by direct automated DNA sequencing analysis of MGP amplified products using the primer GP5+ [81] at Eurofins Laboratories (Milan, Italy).HPV type identification was performed by alignments of HPV sequences with those present in the GenBank database using the BLASTn software (http:// www.ncbi.nlm.nih.gov/blast/html).HPV16 positive samples were selected for gene expression and viral load analysis. HPV16 viral load quantization was performed in the Bio-Rad CFX96 Real-time PCR Detection System using 300 ng of template DNA, 12.5 µl of 1x iQ™ SYBR® Green supermix (Bio-Rad, Hercules, California) and 10 pmol each of E7 forward and reverse primers (Supplementary table) in a final volume of 25 µL.Thermal cycling consisted of a denaturation step at 95°C for 3 min, followed by 40 cycles of annealing at 54.3°C for 30 s, extension at 72°C for 30 s and denaturation at 95°C for 30 s. Exon 7 of TP53 human gene was also amplified with primers targeting the exon 7 to normalize the viral load in each sample, as previously described [82].Two replicates were performed for each sample and real-time PCR data were analyzed using Bio-Rad CFX manager software.Two standard curves were constructed to calculate absolute numbers of HPV16 E7 and TP53 copies, respectively, by amplifying serial dilutions (10 5 to 1 cell) of SiHa cell genomic DNA, containing 1 copy per cell of integrated HPV16 genome.The viral load per cell in each sample was calculated by normalizing the E7 copy number against the amount of cellular DNA (TP53) according to the formula: HPV copies/cell = Number of E7 copies/(number of TP53 copies/2). 
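The normalization step described above can be written out as a small calculation: absolute E7 and TP53 copy numbers are read off their respective standard curves, and the E7 count is divided by the estimated number of cells (TP53 copies/2, since TP53 is present in two copies per diploid genome). The Python sketch below illustrates this; the Ct values and the standard-curve slopes and intercepts are hypothetical placeholders, not values from the study.

```python
# Illustrative HPV16 viral-load calculation following the normalization described above:
# HPV copies/cell = E7 copies / (TP53 copies / 2).
# Standard-curve parameters and Ct values below are placeholders; in the study the curves
# were generated from serial dilutions of SiHa genomic DNA.

def copies_from_ct(ct: float, slope: float, intercept: float) -> float:
    """Convert a Ct value to an absolute copy number using a standard curve of the
    form Ct = slope * log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

def hpv_copies_per_cell(e7_copies: float, tp53_copies: float) -> float:
    """Normalize the E7 copy number to cell number; TP53 is diploid (2 copies/cell)."""
    return e7_copies / (tp53_copies / 2)

if __name__ == "__main__":
    # Hypothetical Ct values and curve parameters, for illustration only.
    e7 = copies_from_ct(ct=24.5, slope=-3.32, intercept=38.0)
    tp53 = copies_from_ct(ct=27.0, slope=-3.35, intercept=39.0)
    print(f"E7 copies: {e7:.0f}, TP53 copies: {tp53:.0f}, "
          f"HPV copies/cell: {hpv_copies_per_cell(e7, tp53):.2f}")
```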
HPV16 gene expression analysis Total RNA was extracted from all samples using RNeasy MiniKit (Qiagen, Hilden, Germany) according to manufacturer procedure.The quality and quantity of isolated RNA was determined using the Nanodrop 2000c and calculating the ratio of absorbance at 260 nm and 280 nm.All RNA samples with a ratio in the range of 1.8-2.0 were included in further analyses.For each sample 250 ng of total RNA were reverse transcribed in 20 μL volume containing 4 μL of iScript reaction mix (Bio-Rad), 1 μL of iScript reverse transcriptase (Bio-Rad) and nuclease-free water.The reaction was incubated at 25°C for 5 min and at 42°C for 30 min, finally the enzyme was inactivated at 85°C for 5 min in the Gene Amp PCR System 2400 (Applied Biosystems, Foster City, California). The cDNA samples were amplified for HPV16 early (E2/E4, E5, E6, E6*I, E6*II, E7) and late (L1 and L2) transcripts by real time PCR using specific primer pairs described in Supplementary table.The reverse primers specific to each E6 isoform encompassed the respective splicing acceptor nucleotides ensuring the specific amplification of each spliced isoform, as confirmed by nucleotide sequence analysis of PCR amplimers (Figure 2).The reaction mixture included 12.5 μL of 1x iQ™ SYBR® Green supermix (Bio-Rad), 10 pmol of each primer, 1 μL of cDNA and nuclease-free water in a final volume of 25 μL.All reactions were performed in duplicate.The amplifications were carried out on the Bio-Rad CFX96 real time PCR Detection System following the protocols described in Supplementary table.The expression of each viral gene was analyzed with the 2 -ΔCt method using GAPDH as a reference gene.The ΔCt values for each amplified transcript were calculated by subtracting the respective Ct value from the corresponding GAPDH Ct (ΔCt = Ct x -Ct GAPDH ).The Ct values were corrected for primer pairs efficiency to compare the expression levels of the different genes.Primer efficiency was calculated generating curves of SiHa cDNA serial dilutions for each analyzed gene. Statistical analysis Statistical analysis was performed with GraphPad version 6 (Prism).Spearman's rank correlation coefficient (r) was calculated to evaluate correlation between viral load and oncogenes expression levels, Pearson's coefficient (R) was calculated to evaluate linear correlation between gene expression levels, while ANOVA Kruskal-Wallis test and U Mann-Whitney test were used to evaluate differences in gene expression levels and viral load.All variables with p < 0.05 were considered statistically significant. carcinoma; b borderline to mild dyscaryosis; c cervical intraepithelial neoplasia.
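The relative-expression calculation described above amounts to a one-line formula per gene. The Python sketch below shows the plain 2^-ΔCt computation with GAPDH as the reference, plus one common way of folding in a per-gene amplification factor derived from standard-curve efficiency; the study does not spell out the exact efficiency correction it applied, so that variant, together with all Ct values and amplification factors shown, should be read as hypothetical.

```python
# Sketch of the relative-expression calculation: the 2^-dCt method with GAPDH as the
# reference gene (dCt = Ct_gene - Ct_GAPDH), plus a generic efficiency-aware variant.
# All numeric values are hypothetical placeholders.

def delta_ct(ct_gene: float, ct_gapdh: float) -> float:
    return ct_gene - ct_gapdh

def expression_2_delta_ct(ct_gene: float, ct_gapdh: float) -> float:
    """Relative expression by the plain 2^-dCt method (assumes ~100% efficiency)."""
    return 2.0 ** (-delta_ct(ct_gene, ct_gapdh))

def expression_efficiency_corrected(ct_gene: float, ct_gapdh: float,
                                    amp_factor_gene: float = 1.95,
                                    amp_factor_ref: float = 1.98) -> float:
    """One common efficiency-aware variant: each gene's Ct is weighted by its own
    amplification factor (2.0 would correspond to perfect doubling per cycle)."""
    return (amp_factor_gene ** -ct_gene) / (amp_factor_ref ** -ct_gapdh)

if __name__ == "__main__":
    # Hypothetical Ct values for a viral amplicon and GAPDH in the same sample.
    ct_gene, ct_gapdh = 26.8, 22.4
    print("2^-dCt:", expression_2_delta_ct(ct_gene, ct_gapdh))
    print("efficiency-corrected:", expression_efficiency_corrected(ct_gene, ct_gapdh))
```

With identical amplification factors of 2.0 the second function reduces exactly to 2^-ΔCt, which is why per-gene efficiencies estimated from serial-dilution curves are needed before expression levels of different genes can be compared.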
What Does CATS Have to Do With Cancer? The Cognitive Activation Theory of Stress (CATS) Forms the SURGE Model of Chronic Post-surgical Pain in Women With Breast Cancer Chronic post-surgical pain (CPSP) represents a highly prevalent and significant clinical problem. Both major and minor surgeries entail risks of developing CPSP, and cancer-related surgery is no exception. As an example, more than 40% of women undergoing breast cancer surgery struggle with CPSP years after surgery. While we do not fully understand the pathophysiology of CPSP, we know it is multifaceted with biological, social, and psychological factors contributing. The aim of this review is to advocate for the role of response outcome expectancies in the development of CPSP following breast cancer surgery. We propose the Cognitive Activation Theory of Stress (CATS) as an applicable theoretical framework detailing the potential role of cortisol regulation, inflammation, and inflammatory-induced sickness behavior in CPSP. Drawing on learning theory and activation theory, CATS offers psychobiological explanations for the relationship between stress and health, where acquired expectancies are crucial in determining the stress response and health outcomes. Based on existing knowledge about risk factors for CPSP, and in line with the CATS position, we propose the SURGEry outcome expectancy (SURGE) model of CPSP. According to SURGE, expectancies impact stress physiology, inflammation, and fear-based learning influencing the development and persistence of CPSP. SURGE further proposes that generalized response outcome expectancies drive adaptive or maladaptive stress responses in the time around surgery, where coping dampens the stress response, while helplessness and hopelessness sustains it. A sustained stress response may contribute to central sensitization, alterations in functional brain networks and excessive fear-based learning. This sets the stage for a prolonged state of inflammatory-induced sickness behavior – potentially driving and maintaining CPSP. Finally, as psychological factors are modifiable, robust and potent predictors of CPSP, we suggest hypnosis as an effective intervention strategy targeting response outcome expectancies. We here argue that presurgical clinical hypnosis has the potential of preventing CPSP in women with breast cancer. INTRODUCTION Chronic post-surgical pain (CPSP) affects a substantial amount of patients undergoing either major or minor surgeries (Shug and Pogatzki-Zahn, 2011). CPSP can be defined as pain that develops after surgical intervention and persists minimum 3-6 months after healed tissue damage (Cohen and Raja, 2020). An example of debilitating CPSP is documented in women undergoing breast cancer surgery. More than 1 million women are diagnosed with breast cancer every year, and approximately 25-60% of them will struggle with CPSP, regardless of surgical procedure (Andersen and Kehlet, 2011;Wang et al., 2018b). The prevalence of severe CPSP following breast cancer surgery is estimated to be 5-10%, where CPSP causes patients to experience a significant reduction in daily functioning, work capability, and quality of life (Andersen and Kehlet, 2011). As with any chronic pain condition, the pathophysiology of CPSP in breast cancer is multifactorial, and knowledge of the underlying mechanisms is still unclear. 
As an example of this complexity, only some of the women with CPSP following breast cancer surgery have peripheral pain drivers as a result of intra-surgical nerve damage (Gärtner et al., 2009;Schou Bredal et al., 2014). It is therefore acknowledged that CPSP is best understood through a bio-psycho-social model, with multivariate factors contributing to its development (Weinrib et al., 2017). When evaluating modifiable and well-documented risk factors for CPSP following breast cancer surgery, we argue for the potential impact of expectancies on psychoneuroimmunological responses to a stressful situation. Conceptualized by an expectancy model (SURGE), we propose that CPSP can be understood, delineated, and possibly prevented. Our suggested model incorporates the cognitive activation theory of stress (CATS), predictive coding principles, cortisol function, and inflammatoryinduced sickness behavior. SETTING THE SCENE FOR CPSP: LIFE LEADING UP TO A SURGERY Throughout our lives, our learning history shapes expectancies, higher order beliefs about how we will respond to stressful challenges such as an impending surgery. Surgery in the context of cancer represents a highly stressful experience for most, if not all. It gives rise to multitude of expectancies of how the surgery and disease will unfold and how one is going to deal with the consequences. Dealing with such a challenge evokes past learning in the form of acquired expectancies and prior conditioning, here seen as complementary and overlapping constructs (Stewart-Williams and Podd, 2004). Expectancies are commonly defined as "beliefs that something will happen or is likely to happen" (Schwarz et al., 2016) and can be acquired by direct experience, verbal instruction, or observation of others Laferton et al., 2017;Rief and Joormann, 2019). In other words, any direct or indirect experience with surgery will contribute to the formation of expectancies. The subsequent expectancies can be colored by hope, trust, and optimism, but also by fear, worry, and catastrophic thoughts. As an example, if a loved one previously has undergone surgery and experienced CPSP, we might fear an approaching surgery. This fear quickly becomes important as expectancies can be the powerful modulators of health outcomes (Benedetti, 2008;Kirsch, 2018;Lasselin et al., 2018). Some of the strongest effects from expectancies are seen in the placebo/nocebo literature. Positive expectancies about a given treatment can lead to increased pain relief, even if the given treatment is perceived as inactive, e.g., a calcium tablet or sham acupuncture (Benedetti, 2008;Atlas and Wager, 2012;Forsberg et al., 2017). Also, it is wellestablished that positive expectancies about the response of a given treatment may enhance the analgesic effects of active surgical (Gandhi et al., 2009), pharmacological (Bingel et al., 2011), and non-pharmacological treatments (Peerdeman et al., 2016). These processes are coined placebo analgesia. A related phenomenon is nocebo hyperalgesia. Here, negative response outcome expectancies are found to increase the intensity of pain in experimental and clinical studies (Colloca and Miller, 2011;Petersen et al., 2014). Negative expectancies about a treatment can block the analgesic effects of active treatments or exaggerate negative side effects (Petersen et al., 2014;Smith et al., 2020). 
While most of this research primarily focuses on experimental and acute pain, other lines of research have shown how negative expectancies can have debilitating effects on the development and maintenance of chronic pain (Atlas and Wager, 2012). COGNITIVE ACTIVATION THEORY OF STRESS From the moment an individual receives word about an upcoming surgery, particularly, a potential life-threatening cancer requiring surgery, a stress response usually follows. This response can be understood using the cognitive activation theory of stress (CATS), a psychobiological theoretical framework offering clear and formal definitions of the stress response and how this affects health (Ursin and Eriksen, 2004). In CATS, "stress" is defined and operationalized as a psychobiological concept with four stages (Figure 1). The first stage is the orientation. Here, we orient toward what could be a stress stimulus, representing objective internal or external stimuli automatically processed by the brain, ultimately leading to appraisal. The second stage is the appraisal or subjective anticipation of stress, where the stimuli have been filtered by the brain in terms of individual learning history. In CATS, learning history includes stimuli expectancies driven by classical conditioning, and response outcome expectancies driven by operant conditioning. Frontiers in Psychology | www.frontiersin.org These expectancies determine to a large degree intensity and duration of the third stage, the physiological stress response. The physiological stress response is an alarm system representing a general, non-specific arousal response in the somatic and autonomic nervous system as well as in several endocrine axes (Ursin and Eriksen, 2010). The alarm goes off when an imbalance is expected in the homeostatic system, e.g., when experiencing novel or threatening stimuli or a discrepancy between what is expected and what actually is (Subjective set Value − Actual Value ≠ 0; Ursin and Eriksen, 2010). The fourth and final stage of the definition represents the individual experience of the stress response, consisting of information from the arousal response being fed back to the brain, ultimately maintaining, adding to, or resolving the unpleasant feeling of stress. According to CATS, stress is a beneficial reaction, meaning that an activation of a stress response in challenging situations is healthy and adaptive. The goal of a short activation of the physiological stress response is to restore homeostasis (Ursin and Eriksen, 2004), and the arousal response is gradually turned off when the individual expects to handle the challenge successfully. If not, the arousal may be sustained, leading to illness and disease. Whether the stress response is eliminated, dampened, or sustained relies on expectancy filters (Ursin and Eriksen, 2004). These filters are described as stimulus expectancies and response outcome expectancies. Stimulus Expectancies Our brain is designed to store information about the relationships between sets of stimuli and our available responses (Ursin and Eriksen, 2004). This information is stored as expectancies and is how we come to expect that one specific stimulus typically precedes another specific stimulus. In CATS, this is called as stimulus expectancies, and it represents classical conditioning within traditional learning theory (Ursin and Eriksen, 2004). A classic example of associative learning and stimulus expectancies is the work of Ivan Pavlov and his dogs. 
His now famous experiment showed how dogs that were continuously presented with food paired with a sound of the bell later would salivate when they heard the bell ring, regardless of food were offered or not (Pavlov and Thompson, 1902). A particular feature of Pavlovian conditioning is that stimuli sharing characteristics with the original conditioning stimuli may become capable of eliciting conditioned responses, depending on the perceptual or functional proximity between the two. This was exemplified in his studies showing that the dogs eventually started to salivate just as they heard the footsteps of the experimenters. Thus, during stimulus generalization, individuals extrapolate knowledge from one aspect of the situation to other aspects and situations -making more and more stimuli capable to elicit the conditioned response. Response Outcome Expectancies Response outcome expectancies are within CATS regarded as acquired information about available responses to a stimulus and how these responses affect subsequent outcomes. This type of learning follows principles of operant or instrumental conditioning, where the individual learns from positive and negative reinforcements of behavior (Ursin andEriksen, 2004, 2010). Ursin and Eriksen, 2010). The stress stimulus (load) is registered. Stimulus-and response-outcome expectancies influence whether the load is appraised as stressful. If so, a general physiological stress response is activated. Feedback from the physiological stress response is being fed back to the brain. A short activation of the stress response is healthy and adaptive, while a sustained stress response may lead to illness or disease. Reprinted from Ursin and Eriksen (2010), Copyright (2021) with permission from Elsevier. Frontiers in Psychology | www.frontiersin.org Through response outcome expectancies, you anticipate successful or unsuccessful handling of future threats without yet having experienced them, an essential prerequisite for avoiding or anticipating harm. A physiological stress response experienced by a woman who is about to undergo surgery could thus be interpreted in different ways according to her expectancies; it could either be interpreted as a sign of anxiety implicating uncontrollable danger and harm, or as a normal response to a challenging situation. While the first interpretation has the potential to increase and sustain the stress activation, the second interpretation has the potential to dampen the stress response. The power of beliefs and expectancy in regulating physiology is a hallmark of another important learning theory, the predictive coding framework of information processing. This theory suggests that the brain uses Bayesian prediction principles to constantly match bottom-up sensory information with top-down predictions created by prior experiences (Gilbert and Sigman, 2007;Petrovic et al., 2010;Büchel et al., 2014). These predictions are organized hierarchically in the brain, from lower-level momentary hypotheses about the causes of current sensory inputs (e.g., feeling pain from a gentle touch) to increasingly more overarching beliefs the nature of the world and yourself (e.g., "I cannot cope with this pain anymore"). These higher order beliefs are in many ways analog to the concepts of stimulus expectancies and response outcome expectancy in CATS. One can envision generalized response outcome expectancies forming enduring overarching hypotheses (e.g., "I am not a person that handles pain"). 
When a person experiences a discrepancy between experience and expectancy, these higher order beliefs can overturn lower-level sensory input, motivating behavior and cognition in order to uphold an expectancy, regardless of lower level input and prediction errors. This has been described as cognitive immunization and is particularly evident in patients with depression (Kube et al., 2020). Numerous studies show how patients suffering from depression are prone to maintain their negative expectancies despite of positive, contradictory evidence (Korn et al., 2014;Liknaitzky et al., 2017;Everaert et al., 2018). This immunization contrasts that of a healthy population who show an overall optimism bias, i.e., a tendency mainly to update expectancies if new information are positive, while maintaining one's prior belief if the presented evidence is negative (Sharot, 2011). The notion of a hierarchical organization of processing is also described in CATS through the feedback loop in the model, where lower level peripheral changes -i.e., the stress response -is being fed back to the brain, but can be prolonged or dampened according to higher order expectancies or predictions (Ursin and Eriksen, 2004). Principles from the predictive coding framework thus align with the expectancy principles outlined in CATS. In effect, generalized expectancies based on prior experiences can then override lower-level changes and new learning, potentially maintaining a stress response in the weeks leading up to breast cancer surgery. The CATS model has further specified three forms of generalized response outcome expectancies, namely coping, helplessness and hopelessness. Coping A significant contribution from CATS is its clarification of the coping term and its assumed correlates. Coping in CATS terminology is the acquired expectancy that most or all responses to a situation will lead to a positive outcome. Thus, it represents an anticipatory cognitive construct rather than objective abilities or strategies that could be applied in challenging situations. Coping in form of generalized response outcome expectancy may be associated with a proactive appraisal of the stressful situation, reflecting improved anticipatory stress regulation, ultimately resulting in a shortened physiological stress response (Ursin and Eriksen, 2004). In the case of a woman undergoing breast cancer surgery, coping may refer to the expectancy of being able to handle the stressful aspects of the surgery, i.e., the post-surgical pain and potential side effects in a successful way. This taps into the established CPSP resilience factors of dispositional optimism and self-efficacy (Weinrib et al., 2017). According to CATS, it is when coping is defined as a generalized response outcome expectancy it may hold the strongest predictive power for health outcomes, mediated by its presumed reducing effects on the strength and the duration of the physiological stress response (Ursin andEriksen, 2004, 2010). The authors of CATS argue that since coping defined as coping strategies can be carried out under various levels and lengths of arousal, it is not a robust predictor of stressrelated illness or disease (Ursin and Eriksen, 2004). Both human and animal studies suggest that positive expectancy attenuates the cortisol response to stress. Rats exposed to shocks will initially show high behavioral and endocrine arousal. 
However, in late stages of avoidance learning tasks when they have established that they will be able to escape the shocks, the arousal diminishes to a minimum (Coover et al., 1973). Ursin and Eriksen (2004) suggest that this happens so rapidly and efficient that it is not just a result of the avoidance behavior, but due to an expectancy that the behavior will lead to a successful outcome. Ursin et al. (1978) also tested this position in humans. A group of novel parachutist trainees showed the high levels of endocrine and subjective reported arousal before their first jump. Already after their first training session, before there had been any real improvement of their performance, the arousal reduced significantly. This could indicate that it was not the actual performance, but the acquired expectancy of being able to handle the situation with a positive result, that explained the diminished stress response. Recent studies of how we react to psychosocial stress confirm and expand upon these early reports of positive physiological effects from cognitive re-framing and coping. Jamieson et al. (2012) showed that during a psychosocial stress test, participants instructed to reappraise their arousal in a positive way had increased cardiac efficiency, lower vascular resistance, and decreased attentional bias. Similarly, Nasso et al. (2019) showed that when anticipating a stressful task (i.e., giving a speech), individuals using an adaptive cognitive emotion regulation strategy showed better anticipatory stress regulation than individuals prone to worry or catastrophizing. Overall, these results suggest that positive response outcome expectancies can affect the long-term consequences of our physiological stress responses in a beneficial fashion. Helplessness and Hopelessness Helplessness refers to the acquired expectancy of one's actions having no impact on the outcome of an aversive event. This can be exemplified by a woman going into breast cancer surgery with the expectancy that there is nothing she can do to control the outcome of the surgery or potential negative side effects. A qualitative study by Lie et al. (2018) highlighted how young adult cancer patients, aged 18-35 years at time of diagnosis, describe that not being able to predict or control their situation was the most stressful aspect of all stages of their disease and treatment. This study focuses on patients in a particular vulnerable transitional life period. However, other studies find similar results on helplessness, i.e., the factors of uncertainty and lack of perceived control are common characteristics of stress and chronic disease, with negative effects on pain outcomes and quality of life (Johnson et al., 2006;Müller, 2011;Caruso et al., 2014;Engevold and Heggdal, 2016). Hopelessness, on the other hand, is an expectancy of most or all responses leading to negative outcomes. In women with breast cancer going into surgery, this could be the expectancy that all attempts to handle or change the stressful situation evolving around the surgery, will only make it worse. Hopelessness implies that there is control, responses have effects, but they are all negative. These failed attempts combined with the assumed control could evoke feelings of guilt and self-blame in those who acquire expectancies of hopelessness. Thus, these expectancies are proposed by the authors of CATS as a cognitive model for depression (Ursin and Eriksen, 2004), a condition that increases the risk of developing CPSP (Weinrib et al., 2017). 
The expectancies of helplessness and hopelessness are also conceptually close to another established risk factor of CPSP namely pain catastrophizing. When measured with the Pain Catastrophizing Scale (PCS; Sullivan et al., 1996), this is a strong and consistent predictor of CPSP (Hannibal and Bishop, 2014;Johannsen et al., 2018Johannsen et al., , 2020. In PCS, patients report about helplessness and hopelessness in response to pain (e.g., "It's terrible and I think it's never going to get any better" and "there's nothing I can do to reduce the intensity of the pain"; Sullivan et al., 1996). Moreover, the elements of hopelessness are captured within measures of injustice experiences (The Injustice Experience Questionnaire; Sullivan et al., 2008), which also is a significant psychological risk factor for developing CPSP (Yakobov et al., 2014). In summary, CATS states that coping may reduce or eliminate the physiological stress response, and helplessness and hopelessness may sustain it. If sustained, the stress response affects specific psychological and neurobiological mechanisms that can reinforce and perpetuate pain relating to the surgery, increasing the risk for developing CPSP. Stress and Sensitization A line of experimental studies have demonstrated the link between a sustained stress response and the process of sensitization, which is suggested as a psychobiological mechanism in the transition from acute to chronic pain (Ursin, 2014). On the cellular level, sensitization is defined as an increased efficiency in a neural circuit, due to a change in synapses from repeated use (Collingridge et al., 2004). Sensitization of pain pathways in the central nervous system is widely accepted as a theory of neural mechanisms enhancing pain transmission (Ikeda et al., 2009). This central sensitization progressively amplifies the responses to pain stimuli. It manifests as pain hypersensitivity both as a reduction in pain threshold and an increase in pain responsiveness as well prolonged after sensations and an expansion of the receptive field (Woolf, 2011). A large body of evidence showing central sensitization in chronic pain syndromes originates from research on patients with fibromyalgia, a condition with widespread pain in the body. Research has demonstrated widespread reductions in pain thresholds as well as an increased temporal summation and a spatial area of pain in this patient group (Gibson et al., 1994;Lorenz et al., 1996;Graven-Nielsen et al., 2000). Patients with CPSP also show the signs of central sensitization (Woolf, 2011;Johannsen et al., 2020). The role of central sensitization in CPSP is further supported by an indication of pain reducing effects due to centrally acting agents such as ketamine (Remérand et al., 2009), pregabalin (Mathiesen et al., 2009;Burke and Shorten, 2010), gabapentin (Sen et al., 2009;Verret et al., 2020), and duloxetine (Ho et al., 2010). However, more studies are needed to establish the effectiveness of pharmacological treatments. Pre-surgical pain in the surgical area as well as other sites of the body is the strong predictors of CPSP (Poleshuck et al., 2006;Kudel et al., 2007;Gärtner et al., 2009;Nikolajsen and Aasvang, 2019). Patients who experience pre-surgical pain conditions, such as fibromyalgia, migraine, or chronic low back pain, have a significant increased risk of CPSP following breast cancer surgery (Bruce et al., 2012;Schou Bredal et al., 2014). 
The association between pre-and post-surgical pain could be due to an unknown common underlying factor (e.g., genetic and/or psychological), making a group of patients more vulnerable to persistent pain. Still, it could suggest that a central sensitization plays a role in CPSP through repeated pain stimuli increasing the efficiency and excitability of central pain pathways or stated another way; pain produces pain. Different lines of research thus present the hypothesis that sensitized stress responses could interact with sensitized pain responses, and ultimately increase the risk of CPSP. However, it has proven difficult to establish direct causal delineation of sustained stress in chronic pain, but psychological and physiological stress is associated frequently with the development and persistence of chronic disease such as chronic pain conditions (McEwen and Kalia, 2010;Timmers et al., 2019). In a CATS perspective, sustained activation is the motor that accelerates sensitization and prevents its reversibility, thus sustained stress activation will affect almost all bodily systems through the actions of cortisol. As principles of central sensitization likely contribute to the chronification of pain (Woolf, 2011), the potential maladaptive effects of stress hormones on pain transmission could mediate the relationship between chronic stress and chronic pain. Cortisol Function and Chronic Pain Cortisol is a catabolic hormone produced in the adrenal cortex, which plays a crucial part in the physiological stress response (Hannibal and Bishop, 2014). In stressful situations, cortisol levels rise to provide energy to deal with the situation or escape danger (fight or flight; Blackburn-Munro and Blackburn-Munro, 2003). Prolonged cortisol secretion, on the other hand, could have damaging effects and increase the risk of chronic pain. During the stress response, unbound cortisol binds on glucocorticoid receptors (GRs) resulting in anti-inflammatory and pain inhibiting mechanisms (Fries et al., 2005;Sorrells et al., 2009). However, an exaggerated or sustained cortisol secretion may cause GR to downregulate, or block cortisol binding, ultimately creating cortisol dysfunction (Norman and Hearing, 2002). Further, an impaired binding to GR might disrupt the negative feedback loop, which under normal circumstances enables cortisol to regulate the release of corticotrophin-releasing hormone (CRH) (Fries et al., 2005). CRH upregulates glutamate and N-methyl-d-aspartate (NMDA) in the amygdala, which might set prime for a conditioned fear-based stress response (Tsigos and Chrousos, 2002;Shekhar et al., 2005;McEwen and Kalia, 2010). Additionally, it is indicated that the activation of CRH receptors in the amygdala may trigger pain in the absence of tissue damage and that hyperpolarized postsynaptic potentials might be able to make amygdala resistant to inhibitory signals from prefrontal cortex (Shekhar et al., 2005;Ji et al., 2013). Such reduced prefrontal modulation is associated with pain catastrophizing in chronic pain patients experiencing intense pain (Seminowicz and Davis, 2006). Several studies have associated the actions of cortisol with increased activation in the amygdala during anxiety and fear (Shekhar et al., 2005;Ji et al., 2013;Vachon-Presseau et al., 2013). Using an animal model of neuropathic pain in rats, Li et al. (2013) found that lesions of the basolateral amygdala inhibit the transition from acute to chronic pain in the early stages of nerve damage. 
Due to the well-established role of the amygdala in the fear learning system, the authors suggest that a possible explanation of this involves interruptions of negative emotions and consolidation of fear-based pain memories. Such learning processes may possibly relate to the acquisition of negative response outcome expectancies, potentially leading to sustained activation, sensitization and chronic pain. Pain catastrophizing, i.e., a sense of helplessness and hopelessness, elevates the cortisol secretion and sustains the activation of the stress response (Johansson et al., 2008;Quartana et al., 2010;Müller, 2011). Sustained activation of a sensitized stress response exhausts the HPA-axis, and chronic stressinduced hypocortisolism has been linked to chronic pain conditions (Tsigos and Chrousos, 2002;Tak and Rosmalen, 2010;Hannibal and Bishop, 2014). Paradoxically, hypercortisolism is also reported as a contributor to chronic pain (Blackburn-Munro and Blackburn-Munro, 2003;Dedovic et al., 2009), i.e., potentially mediated by the blunted feedback mechanisms discussed earlier in this section. The relationship between stress, chronic pain, and hypo-and hyper-cortisolism thus depend on temporal aspects of measurement, the individualized stress response, the different mechanisms of cortisol dysfunction described earlier and numerous situation-specific factors (Hannibal and Bishop, 2014). These inconsistencies call for more research on the relationship between cortisol and chronic pain, but available data suggest that stress-induced cortisol dysfunction could contribute to the development and persistence of chronic pain. Cortisol dysfunction through the mechanisms discussed above represents potential harmful effects of sustained activation on a neurochemical level. In addition, prolonged secretion of stress hormones may alter both the functional and physical properties of the corticolimbic system with considerable consequences for the development and perpetuation of chronic pain following breast cancer surgery. Corticolimbic Plasticity The corticolimbic circuit of the brain consists of neural loops between structures such as the prefrontal cortex (PFC), the amygdala, the hippocampus, and hypothalamus in strong connections to the HPA-axis (Vachon-Presseau, 2018). The corticolimbic circuit is involved in a variation of cognitiveemotional processes and plays a crucial role in motivation and learning, i.e., in relation to pain and the anticipation of pain (Ploghaus et al., 2001;Apkarian et al., 2009). It has been suggested that the corticolimbic circuit may represent the primary system through which nociception accesses consciousness and is experienced as pain (Baliki and Apkarian, 2015). The corticolimbic structures show high affinity to stress hormones, which enable them to regulate the stress response through feedback loops to the HPA axis, and at the same time making them sensitive to the effects of long-term exposure to cortisol (Radley and Sawchenko, 2011;Vachon-Presseau, 2018). The PFC is particular sensitive to the effects of stress hormones. Sustained exposure to cortisol has shown to generate extensive dendritic spine loss (Arnsten, 2009;McEwen and Morrison, 2013) similar to that observed in medial prefrontal cortex (mPFC) in animal models of neuropathic pain (Metz et al., 2009). Moreover, the mPFC has been associated with individual differences in subjective pain intensity in chronic pain patients. For example, an fMRI study by Baliki et al. 
(2012) indicated that the strength of the functional connectivity between mPFC and nucleus accumbens (NAc) is a dominating predictor of pain chronification in humans with subacute back pain (stronger mPFC-NAc connectivity was associated with pain persistence). The activity of the PFC regulates, and is regulated by, the amygdala. In animal models of chronic pain, the excitability of neurons in the amygdala rapidly increases in response to repeated pain stimuli (Ursin, 2014). This increased excitability compliments animal models showing hypertrophy Frontiers in Psychology | www.frontiersin.org and increased spinogenesis in basolateral regions of the amygdala when animals are exposed to sustained stress (Roozendaal et al., 2009). Studies of post-traumatic stress disorder in humans expand upon this indicating that both pain and fear-based learning can drive hypertrophy in these regions of the amygdala (Morey et al., 2020). The increased activity and hypertrophy of the amygdala divergently affects plasticity in other brain regions such as the PFC and hippocampus (Patel et al., 2018). The amygdala then influences the corticolimbic circuit by modulating excitability of the inhibitory neurons in the mPFC, as well as neurons in the spinal cord (Neugebauer et al., 2004;Neugebauer, 2015), which may result in pain hypersensitivity. Thus, the connectivity between the amygdala and the PFC may be distorted by long-term exposure to cortisol, mediated by CRH as well as GR signaling (Galatzer-Levy et al., 2017), which have implications for the regulation of anxiety and pain (Shekhar et al., 2005;Ji et al., 2013). Finally, several studies have implicated that alterations in the physical and functional features of the hippocampus are associated with chronic pain conditions. Using an animal model of neuropathic pain, Mutso et al. (2012) found decreased hippocampal neurogenesis and altered hippocampal shortterm synaptic plasticity in mice with spared nerve-injury neuropathic pain compared with sham animals. In addition, this study found lower hippocampal volume in patients suffering from low back pain and complex regional pain syndrome. The authors propose that the functional hippocampal abnormalities found in their animal model of neuropathic pain potentially relate to the decreased hippocampal volume observed in chronic pain conditions, and that this ultimately contributes to emotional and learning deficits associated with chronic pain. The deteriorating effects of stress hormones on hippocampal volume and neurogenesis are indicated in both aging (Lupien et al., 1998) and psychiatric populations (Sapolsky, 2000;Videbech and Ravnkilde, 2004). In summary, the corticolimbic system may be sensitive to maladaptive effects of long-term exposure to stress hormones, both in terms of its physical and functional properties. These stress-induced changes in the corticolimbic circuit may negatively affect the regulation of the stress response by impairing the inhibitory feedback loops from the HPA axis (Vachon-Presseau, 2018). This could contribute to a vicious cycle sustaining the activation of the stress response and presents direct and indirect implications for the chronification and experience of pain in a woman entering surgery for breast cancer. PERI-AND POSTOPERATIVE STRESS -THE CRUCIAL TIME JUST BEFORE, DURING, AND AFTER SURGERY In the perioperative phase, breast cancer patients often experience high levels of distress and expect a variety of post-surgery symptoms (Deane and Degner, 1998;Spencer et al., 1999). 
Such distress may include everything from concerns about diagnosis and prognosis (Schnur et al., 2008) to concerns about anesthesia (Shevde and Panagopoulos, 1991) and surgical procedures (e.g., pain during the procedure and postoperative side effects; Klafta and Roizen, 1996). Pre-surgery distress and patient expectancies about the severity of postoperative side effects have both been found to predict pain severity, nausea, and fatigue 1 week after surgery in breast cancer patients (Montgomery et al., 2010). In addition, patients' pre-surgical expectancies of pain, fatigue, and nausea have been shown to partially mediate the effects of distress on pain severity 1 week after surgery, where expectancies and psychological distress together explained 28% of the variance in 1-week post-surgery pain (Montgomery et al., 2010). In CATS terminology, this would entail background arousal (high distress), stimulus expectancies (severe pain from surgery), and response outcome expectancies ("I have no power over what's to come"), resulting in a tonic (sustained) arousal with increased risk of negative health consequences (e.g., pain and other side effects 1 week after surgery). Breast cancer surgery usually involves either total removal of the breast (mastectomy) or breast-conserving surgery (lumpectomy), with or without sentinel node biopsy. Breast-conserving surgery is a less invasive yet safe and effective option (Fisher et al., 1989) and is the most commonly performed surgery (Lazovich et al., 1999). While breast-conserving surgery has fewer early post-operative complications (Chatterjee et al., 2015) and has been associated with better quality of life (Sun et al., 2014), incidence rates of CPSP appear to be less influenced by type of surgery. Instead, CPSP is heavily influenced by emotional distress, which has led to a general call for ways to target emotional distress, since this is a modifiable risk factor that can be targeted by intervention (Jackson et al., 2016). In a recent study, the women with the highest level of distress after surgery were those who benefited the most from a psychological treatment (Wang et al., 2018a,b). We therefore argue that, from a prevention perspective, timing of the intervention is crucial. The time window immediately before surgery, on the day of surgery, is critical. If distress is reduced and coping increased already prior to surgery, an important risk factor for CPSP and other negative health outcomes could be eliminated, ultimately affecting the prognosis and the risk for CPSP.

Inflammation, Sickness Behavior, and Post-surgical Pain

Stress, inflammation, and pain are inherently interlinked systems, whether viewed from an acute or a chronic perspective. Both pro- and anti-inflammatory processes are set in motion in response to stressors such as pain, perceived or anticipated danger, injury, and infection (Slavich and Cole, 2013). A short-term pro-inflammatory response increases the chance of survival by accelerating wound healing and limiting the potential spread of an infection. In addition, pro-inflammatory cytokine activity, involving tumor necrosis factor-α (TNF-α) and interleukins 1β and 6 (IL-1β and IL-6), promotes a distinct motivational state known as sickness behavior, observed in both humans and animals (Hart, 1988; Dantzer, 2001; Shattuck and Muehlenbein, 2016).
The cluster of behavioral symptoms that constitutes sickness behavior includes fatigue, pain hypersensitivity, psychomotor retardation, social withdrawal, and decreased interest in hedonic behaviors (Dantzer, 2001; Lasselin et al., 2020). Sickness behavior also involves an emotional component, i.e., heightened emotional distress, which is evident in humans exposed to experimentally induced inflammation (Lasselin et al., 2018). This motivational state lowers (social) activities in order to facilitate recovery and decrease the risk of spreading an infection to conspecifics. In addition, hypervigilance involving pain hypersensitivity and emotional distress would motivate the vulnerable organism to tend to its wounds and stay away from potential danger while recovering. As with the stress response, a short-lived increase in inflammation and the subsequent sickness behavior are adaptive and desirable. However, the inflammation and sickness behavior need to subside for health and healing to take place. Unfortunately, fear-based learning, threat monitoring (i.e., searching for pain in the area of surgery), and sustained stress can maintain inflammatory processes through stress-driven alterations in the nuclei of the amygdala. Recent imaging studies in humans have shown how a hyperactive amygdala activates leukopoietic tissue in the bone marrow, increasing arterial inflammation (Tawakol et al., 2019) and C-reactive protein (CRP) (Osborne et al., 2020). Elevated levels of CRP are strongly associated with reduced pain tolerance and increased pain sensitivity (Schistad et al., 2017), and increased pain sensitivity would increase acute post-operative pain, furthering the risk of developing CPSP (Wilder-Smith, 2011). Increased inflammation in persistent pain also has a behavioral analog. Using a cross-sectional design, Jonsjö et al. (2020) concluded that chronic pain patients report high levels of sickness behavior (assessed with a validated questionnaire for subjective sickness behavior, the SicknessQ; Andreasson et al., 2018). The level of sickness behavior in chronic pain patients was similar to the levels reported by healthy volunteers following injection with lipopolysaccharide (LPS), a method used to induce a strong inflammatory response in humans or animals (Jonsjö et al., 2020; Lasselin et al., 2020). LPS-injected individuals report higher pain sensitivity compared to controls, and the increase in pain sensitivity correlates with lower activation in the ventrolateral prefrontal cortex and the rostral anterior cingulate cortex, areas associated with top-down pain modulation (Karshikoff et al., 2016). Moreover, the levels of self-reported sickness behavior in chronic pain patients and LPS-injected individuals are significantly higher than those of general care patients and healthy subjects (Jonsjö et al., 2020). In sum, breast cancer surgery naturally and adaptively elicits stress, immune, and pain responses. Inflammatory-induced sickness behavior serves adaptive and protective functions in the acute post-surgical phase. However, if women undergoing surgery enter and exit the operation with brain alterations and increased inflammation driven by a sustained stress response, this could result in pain hypersensitivity and hypervigilance toward pain in the weeks following surgery, with persistent sickness behavior mirroring these alterations.
While neural and humoral pathways that restore homeostasis may terminate sickness behavior, the same sickness behavior processes can be maintained without an ongoing infection (Jonsjö et al., 2020), possibly through inflammation driven by a sustained stress response. Significantly elevated levels of the pro-inflammatory cytokine interleukin 6 (IL-6) are found in chronic pain patients compared to healthy controls (Koch et al., 2007). In addition, increased plasma concentrations of TNF-α and IL-1β, other common markers of low-grade systemic inflammation, were detected in chronic pain patients with severe pain, though not in patients with light or moderate pain, suggesting a potential role of low-grade inflammation in chronic pain, at least when pain intensity exceeds a certain threshold (Koch et al., 2007). Overall, higher plasma concentrations of inflammatory markers correlate with higher self-reported pain intensity (Koch et al., 2007). As cytokines are thought to be the main mediators of this stress-induced pro-inflammatory effect, low-grade pro-inflammatory processes have been proposed as a biological mechanism directly contributing to the pathophysiology of stress-related diseases (Rohleder, 2014). The previous sections have discussed various mechanisms through which sustained stress activation may contribute to CPSP following breast cancer surgery. Sickness behavior, cortisol dysfunction, and alterations in the corticolimbic circuit due to prolonged secretion of cortisol are essential. They combine to drive the physical and functional irregularities characteristic of chronic pain states, as evident in human and animal studies. Moreover, disrupted corticolimbic connectivity has negative consequences for the regulation of the HPA axis through its inhibitory feedback loops. The potential maladaptive effects of long-term exposure to stress hormones are important aspects of the vicious cycle of chronic stress and chronic pain, preventing the "alarm" from being turned off and enabling the stress and pain to persist many years after surgery. Common to the proposed pathophysiological mechanisms of a stress-induced transition from acute to chronic pain is the involvement of various forms of learning. The corticolimbic circuit, in particular the amygdala and the hippocampus, is essential in the learning and consolidation of fear-based memories, e.g., in response to pain (Hannibal and Bishop, 2014; Vachon-Presseau, 2018). This contributes to the conditioning of a sensitized stress response, readily activated in response to pain (Hannibal and Bishop, 2014). It seems reasonable to hypothesize that the corticolimbic pathways play a role in the conditioning of response outcome expectancies. Helplessness and hopelessness in response to pain relate to outcomes with strong affective value during high arousal, which makes it likely that limbic pathways are activated. Furthermore, the relationship is likely bidirectional, in that hopelessness and helplessness sustain the stress response and contribute to the long-term exposure to, and maladaptive effects of, stress hormones on the corticolimbic circuit.

THE SURGE MODEL

According to SURGE, generalized response outcome expectancies in the form of helplessness and hopelessness sustain a physiological stress response before and after surgery. This sets the stage for fear-based learning, pain sensitization, and maladaptive effects from stress hormones.
Moreover, the sustained stress response may contribute to increased pro-inflammatory activity in the peri- and post-operative phase. An increased and prolonged inflammatory state may lead to chronic sickness behavior with its characteristic cluster of hyperalgesia, emotional distress, and other debilitating behaviors. We here propose that the SURGE model of CPSP (Figure 2) offers a possible explanation of how acute pain following breast cancer surgery may develop into CPSP, depending on generalized response outcome expectancies. The model further proposes which psychobiological mechanisms drive this transition, in the form of sustained activation of the stress response and inflammatory processes. Moreover, the model suggests targets for interventions that could prevent the development of CPSP in women with breast cancer.

FIGURE 2 | The SURGE model of chronic post-surgical pain in women with breast cancer: surgery activates the central nervous system and creates acute pain. In line with CATS and predictive-coding framework principles, the pain is appraised based on previous experiences in the form of response outcome expectancies. An expectancy of being able to handle the pain with a positive outcome (coping) dampens or eliminates the physiological stress response. An expectancy of not being able to control or influence the pain (helplessness), or of only making the pain worse (hopelessness), sustains the activation of the stress response. The sustained activation creates a vicious cycle of chronic stress, chronic inflammation, and chronic pain mediated by pathophysiological mechanisms such as central sensitization, cortisol dysfunction, impairment of corticolimbic connectivity, and inflammatory-induced sickness behavior.

HOW TO ADDRESS THE PROBLEM: MANAGING EXPECTANCIES

If response outcome expectancies are an important driver in the development of CPSP, mediated by sustained stress activation, a change in these expectancies should be followed by reduced stress activation and a correspondingly reduced risk of acute pain as well as CPSP. Challenging and changing negative expectations is fundamental to several psychological interventions. In cognitive behavioral therapy (CBT), unhelpful cognitions are targeted and challenged with the goal of reversing thoughts of helplessness and hopelessness (Beck and Dozois, 2011). The efficacy of CBT has been demonstrated in several populations and settings, including women with breast cancer (Antoni, 2013), with evidence from self-reported outcomes as well as cancer-relevant biological outcomes (McGregor and Antoni, 2009). A more recent approach from the third generation of CBT is Acceptance and Commitment Therapy, which also holds promise as a valuable adjunct to surgical interventions (Weinrib et al., 2017). Nevertheless, in the myriad of psychological interventions and techniques, one particular intervention stands out as notably potent in the context of surgery, namely clinical hypnosis. Verbal suggestions appear to be a particularly powerful way of changing expectancies, and this very element is refined and perfected in hypnosis. The seminal study by Montgomery et al. (2007) demonstrates the effects of hypnosis in women undergoing breast cancer surgery, where a brief session of hypnosis focusing on increasing coping expectancies right before surgery produced large reductions in pain, distress, and discomfort immediately after surgery. Hypnosis has been defined in various ways, but is most often described as a state of highly focused attention and increased suggestibility (Lynn et al., 2010). It is often compared to the everyday state of becoming so immersed in a good book or a movie that you enter the imagined world and lose contact with the real world (Lang et al., 2000; Montgomery et al., 2002). The evidence base for clinical hypnosis as an effective adjunctive non-pharmacological analgesia is strong, as demonstrated in several articles and meta-analyses in top-tier journals (Lang et al., 2000; Montgomery et al., 2002; Tefikow et al., 2013; Kekecs et al., 2014). Of particular relevance here are effects that involve pain reduction, reduced need for medication, and shorter duration of surgery, with effect sizes indicating better clinical outcomes in patients receiving hypnosis than in 89% of patients in control groups (Montgomery et al., 2002). Hypnosis has further been shown to be superior to other psychological techniques (e.g., therapeutic suggestions; Kekecs et al., 2014) and might also provide benefits when delivered during general anesthesia (Berlière et al., 2018; Lacroix et al., 2019; Nowak et al., 2020). While studies of long-term effects of hypnosis are scarce, one recent study indicates the potential for preventing CPSP with peri-operative hypnosis (Berlière et al., 2018), in line with the SURGE model of CPSP.

Mechanisms of Hypnotic Analgesia

Exactly how hypnotic analgesia works is heavily debated and not agreed upon. While some insist that hypnosis involves an altered state of consciousness (Lynn et al., 2010), others refer to hypnosis as a cognitive behavioral technique (Montgomery et al., 2007), implying that it works through the same system as placebo analgesia. Our approach is mostly in line with the latter position. Consistent with the SURGE model, we propose that hypnotic analgesia might work through hypnotic suggestions inducing positive coping expectancies in response to surgery and pain, leading to a dampening of the physiological stress response and ultimately a decrease in pain intensity and a lower risk of developing CPSP. Nevertheless, earlier studies have demonstrated that hypnotic analgesia could occur through systems other than the endogenous pain-inhibitory mechanisms within the central nervous system through which placebo works (Barber and Mayer, 1977). Injections of naloxone, an opioid antagonist, have for instance not been able to reverse the elevated pain threshold induced by hypnosis in acute (Barber and Mayer, 1977) or chronic pain (Spiegel and Albert, 1983). Rather than a placebo effect "in disguise", or an altered state of consciousness, we argue that hypnotic analgesia instead involves an altered perception. This has been suggested by leading experts in the field (Spiegel, 2007) and aligns well with the SURGE model. Through a mobilization of attention pathways in the brain brought about by hypnosis, specific instructions are given that alter the experience of pain and associated anxiety. Recent predictive coding approaches have also shown relevance to hypnosis. By suggesting that hypnosis causes a shift in the default mode network (DMN; Carhart-Harris and Friston, 2019), an opportunity is created for the psychotherapeutic context surrounding the administration to establish longer-term changes in predictive coding activity.
As sensitivity toward prediction errors increases, otherwise stable beliefs become more easily updated (Carhart-Harris and Friston, 2019). Furthermore, bottom-up information that is normally inhibited by compressive beliefs becomes liberated and is allowed to "travel up the (brain-body) hierarchy with greater latitude and compass" (Carhart-Harris and Friston, 2019). A central characteristic of this state is increased context sensitivity, i.e., a heightened susceptibility toward ongoing processes in the internal and external context. The hypnosis session then becomes a catalyst, creating a unique opportunity to modulate behavioral activation in order to promote a functional homeostasis (Greenway et al., 2020). We propose that the findings on the mechanisms involved in hypnotic analgesia mentioned above are in fact not contradictory, but instead point toward a common ground: the role of stress and expectancies.

CONCLUSION

Acute pain after breast cancer surgery is expected and adaptive, while the transition from acute pain to CPSP represents a highly prevalent and significant clinical problem. Overall, CPSP is a multifaceted syndrome involving physiological, cognitive, and emotional factors (in addition to important socioeconomic aspects, which have not been discussed here). Expectancy effects are well established in pain research, showing how expectancies strongly modulate acute and experimental pain. By applying CATS and principles from the predictive coding framework, this review has argued how expectancies might contribute to chronic pain in the specific case of CPSP following breast cancer surgery, mediated by sustained activation, inflammatory-induced sickness behavior, sensitization, and the neurotoxic effects of stress hormones. Clinical hypnosis is suggested as an effective intervention strategy targeting response outcome expectancies, with the potential of preventing CPSP in women with breast cancer.

AUTHOR CONTRIBUTIONS

HJ conceived the idea for the manuscript. AM and SR provided critical intellectual input to the disposition and conceptual framework. AM performed the literature review and wrote the first draft of the manuscript. All authors contributed to the conceptualization, writing, and approval of the final manuscript.
The value of a post-polio syndrome self-management programme

Background: Post-polio syndrome is characterised by symptoms of fatigue, pain and new-onset neuromuscular weakness, and emerges decades after the initial poliovirus infection. We sought to evaluate the only post-polio syndrome specific self-management programme in the United Kingdom.

Methods: This was a retrospective study of patients who had completed a residential self-management programme led by a multi-disciplinary clinical team. Following a confirmed diagnosis of post-polio syndrome by rehabilitation and neurology specialists, patients were offered the opportunity to participate in the programme. Although group-based, patients also received individually tailored support on physical exercise and fatigue management. Physical effects, physical function and psychosocial well-being measures were assessed at baseline and at 6 months follow-up. Knowledge was tested at baseline and immediately following the programme. Statistical comparisons were made using the paired t-test and the Wilcoxon signed rank test according to the data distribution.

Results: Over a period of 17 years, 214 participants (median age 61.3 years, 63% female) attended 31 programmes. At 6 months the following post-polio syndrome specific symptoms improved significantly: fatigue, as measured by the Multidimensional Assessment of Fatigue scale [37.6 (7.1) vs. 34.2 (9.3), P=0.005]; and pain [15.0 (6.1) vs. 13.1 (6.7), P=0.001], atrophy [10.0 (8.0–12.0) vs. 9.0 (7.0–11.0), P=0.002] and bulbar symptoms [3.0 (1.0–5.0) vs. 2.0 (0–4.0), P=0.003], as measured by the Index of Post-polio Sequelae scale. Knowledge related to post-polio syndrome also significantly increased [14.0 (11.0–16.0) vs. 17.0 (16.0–19.0), P=0.001]. Participants were able to walk at a faster speed over 10 meters [0.77 (0.59–1.00) vs. 0.83 (0.67–1.10) m/s, P=0.003] and walked longer distances during the 2-minute walk test [76.9 (31.7) vs. 82.0 (38.4) m, P=0.029]. Depression and anxiety scores did not change over time [PHQ-9, 2.0 (0.3–10.8) vs. 2.0 (0.3–6.8), P=0.450; GAD-7, 2.0 (0–7.0) vs. 1.0 (0–3.0), P=0.460], nor was there a change in self-reported quality of life [60 (50–70) vs. 60 (55–70), P=0.200].

Conclusions: This study suggests that a post-polio syndrome self-management programme led to improvement in symptoms, knowledge and walking speed, but not quality of life. Anxiety and depression scores remained low.

Introduction

Comparisons have been drawn between the polio epidemics that spiked seasonally in Europe and the USA in the first half of the 20th century, with the accompanying need for a vaccine, and the social responses during the COVID-19 pandemic (1). Post-polio syndrome (PPS) is a progressive neurological condition characterised by mild to extreme symptoms of fatigue, new muscular weakness and pain that emerges decades after the acute poliomyelitis viral infection (2). Symptoms of PPS can also include respiratory muscle weakness and swallowing insufficiencies. Combined with pre-existing neurological and orthopaedic impairments as a consequence of the original polio, the symptom burden can be high (3). The pathogenesis of PPS is still not fully understood, and it has taken many years of campaigning for it to be widely recognised as a distinct condition within the medical community (4). It is estimated that more than 80% of the 120,000 polio survivors in the UK are living with PPS (5). Although there are limited data examining the long-term outcomes of the COVID-19 outbreak, there are some similarities between PPS symptoms and the sequelae of other viral pandemics.
Symptoms such as fatigue, pain and neuromuscular weakness suggest we should be generally cognizant of the potential chronic effects of viral infections (6-8). Self-management groups for chronic conditions have been established in many settings to guide patients in managing chronic pain, arthritis, chronic obstructive pulmonary disease and other diseases. Benefits have been shown to include positive effects on quality of life (QoL) and function (9,10). Pulmonary rehabilitation includes many educational aspects of self-management and has some of the strongest evidence of efficacy among interventions for long-term conditions (11). In order to manage PPS better, a multi-disciplinary self-management course was established 17 years ago at a South London hospital. A pilot study of 27 participants completing the self-management course was published in 2008 (12), which suggested positive effects on depression levels, chronic fatigue, shuttle walk distance and perceived exertion. Larsson Lund et al. (13) suggested a multidisciplinary programme could bring positive change by reducing the burden of illness for those attending a PPS programme in Sweden. However, globally there remains little evidence to demonstrate what specific benefits a self-management programme delivers for PPS sufferers. It was hypothesised that a structured self-management programme for PPS could improve outcomes for long-term survivors of the condition. An evaluation of this dataset, with 6-month follow-up data, was undertaken to inform the benefit of such a programme for PPS patients in the aftermath of severe viral illness due to a pandemic. We present the following article in accordance with the STROBE checklist (available at http://dx.doi.org/10.21037/jtd-cus-2020-009).

Methods

This study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). This study was registered and approved as a service evaluation with the institutional clinical governance review board (reference number: GSTT-9729) and did not require ethical approval. Patients who were referred to the PPS service at the Lane Fox Unit, Guy's & St Thomas' NHS Foundation Trust, London, UK were approached and offered participation in the self-management programme. Patients had to have a confirmed diagnosis of PPS following specialist review by rehabilitation and neurology specialists. Exclusion criteria for the course included being unable to attend a residential programme, being unable to actively participate in any of the exercise sessions, or lacking understanding of spoken English. Each participant provided verbal consent for programme participation and completion of the outcome measures.

Self-management programme

This residential course initially comprised nine, and more recently six, days of practical and classroom-based content. It was facilitated by an experienced occupational therapist and physiotherapist, and individual sessions were delivered by other clinicians such as respiratory and rehabilitation specialists, neurologists, dieticians, psychologists and psychiatrists with connections to the polio population via the same London hospital. The course has evolved and content has been amended over time.
However, a core curriculum has addressed the same key themes throughout the assessment period:
- Fatigue management;
- Individually tailored exercise programmes;
- Falls management;
- Mood and anxiety in the context of a long-term condition;
- Cognitive behaviour therapy to facilitate change;
- Sleep and respiratory considerations in PPS;
- Diet recommendations;
- Individual goal setting.

Study design

This retrospective cohort study examined clinical outcomes measured on the first day of the structured PPS self-management programme, with repeat measurements taken 6 months later. Incomplete datasets, due to not attending the 6-month review or loss to follow-up, were excluded from the analysis.

Outcome parameters

A total of ten different outcome measures, assessing the impact of the known symptoms of PPS, were analysed. The primary outcomes measured self-reported physical effects related to PPS [the Index of Post-polio Sequelae (IPPS) (14) and the Multidimensional Assessment of Fatigue (MAF) (15)]. The outcome measures are summarised in Table 1.

Statistical analysis

Statistical analyses were performed using IBM SPSS version 25 (released 2017, IBM Corp., Armonk, NY, USA). The Shapiro-Wilk test of normality and the normal Q-Q plot of differences were used to check the data distribution. The demographic characteristics are reported as percentages for categorical data and mean (standard deviation, SD) following testing for normality. The total range was reported for continuous variables. Parametric data were analysed using a paired-samples t-test, while the Wilcoxon signed rank test was used for non-parametric data. A P value of 0.05 or less was considered significant.

Results

Between 13th January 2003 and 1st March 2019, 214 individual patients attended the PPS self-management programme over a total of 31 courses. The group size ranged between 3-11 individuals with a mean of 7.0 (1.9) participants per group. The majority attending the programme contracted polio in the 1950s, but a smaller number had acute polio later than 1969. When they attended the programme, 134 (62.6%) participants were younger than 65 years old. The UK was the most frequent geographical area where the polio infection occurred, but other areas were represented throughout the programmes. The most common residual effects following polio were in the lower limbs. Participant demographics are presented in Table 2.

Understanding of PPS

The participants' recall of knowledge of PPS showed a significant improvement (Table 3).

Discussion

A tailored self-management programme for PPS can significantly improve chronic symptoms, wellbeing, physical function and the understanding of the condition. These results are sustained over a period of 6 months. Fatigue, pain, atrophy and bulbar function frequently cause problems in PPS, and all responded well to the intervention. Although the satisfaction with performance score improved as well, this was less markedly beneficial. Most relevantly, however, physical function improved, an effect typically observed following evidence-based rehabilitation classes in chronic conditions.

Clinical significance

Both peripheral fatigue, based on a deterioration of performance such as during a sustained muscle contraction, and more central fatigue, such as during sustained attention or whole-body activity, seem to be features of PPS, and fatigue is reported as one of the most debilitating symptoms (23).
At follow-up, fatigue was notably reduced, and it has to be noted that, generally, PPS-related fatigue does not improve over time without intervention (24); equipping individuals to deal with fatigue is one of the primary aims of the course. Without a proven pharmacological treatment (25,26), the course addresses PPS fatigue through pacing techniques, knowledge of the early signs of fatigue and exercise, as well as by addressing acceptance of individual limitations. The sample size was relatively small for the use of the MAF, but the change of 3.4 points was clinically relevant, as a range of 1.4-5.4 points has been suggested as the clinically relevant difference for fatigue reduction (27). However, it remains difficult to describe and quantify fatigue despite it being a common feature of many neurological conditions (28). It should be mentioned that there is a high prevalence of fatigue following acute viral infections, as was also reported in acute COVID-19 infection (29) and SARS (7), including sleep disturbances similar to those reported within chronic fatigue syndrome.

Symptoms

This study also showed that there were changes in the IPPS sub-categories of atrophy, pain and bulbar function. It would seem unlikely that atrophy could be changed, but this domain also includes questions related to weakness and fatigue. Therefore, an improvement here reflects the positive reduction in the MAF results. It is understood that successful pain management can be partly achieved by changing pain perception (30), and the course is directed towards explanation of the experience of pain as a multidimensional perception, not in terms of damage but in terms of guiding mastery of the symptom. Changes in pain perception could also be attributed to changes in exercise habits, encouraging regular sub-maximal exercise being another aim of the course. Furthermore, the bulbar dysfunction domain includes questions regarding both bulbar issues and breathing. General information on both topics is provided, although individual issues may not be addressed through the course. We interpret this improvement in the IPPS bulbar score as the participants having a better understanding of symptoms and seeking specific help, or being less concerned having received general information. Finally, unlike the three other domains, the IPPS showed no change in reported temperature control, and this remains a symptom which is very difficult to address; 74% of responders in a large Norwegian study reported issues with the cold (31).

Well-being

While the symptoms of PPS improved, this did not lead to an improvement in the reported performance of activities participants had self-identified as meaningful (COPM-p), but they were significantly more satisfied with the performance of those same activities (COPM-s). Over 6 months, neither the COPM-p nor the COPM-s met the minimal clinically important difference (MCID) of a change of 2 points (17). However, it is worth noting that while there was no clinical improvement there was also no further decline from baseline, which in itself could be viewed as positive in a progressive condition, even if this has only been shown in the short term. The QoL scale showed no change over the 6-month period, despite changes in other domains contributing to quality of life, such as fatigue. Although there are various possible explanations, the Cochrane review of lay-led self-management programmes for long-term conditions similarly found that QoL generally did not improve (32).
Mood

The original pilot study (12) reported a high incidence of anxiety (33%) and a moderate incidence of clinical depression (11%) amongst the PPS cohort. However, our data showed a very low incidence of either anxiety or depression. The median GAD-7 score of 2.0 (0-7.0) at baseline dropped to 1.0 (0-3.0) points at follow-up. Using the PHQ-9, a score of 5-9 is suggestive of only mild depression (19).

Walk speed is important not only because of its clear links to function but also due to its predictive association with falls and health status (34,35). Changes to physical function, such as walking, are frequently what patients first report at the point of seeking medical assistance for PPS (31,36). Additionally, we also used the recommended measurement of distance over time with a 2-minute walk test. This gives an additional indication of functional capacity because of the impact of fatigue. However, while a small statistically significant increase was seen at follow-up (P=0.029), the 5.1-meter difference falls well short of the MCID for a PPS population of 22.9 meters (36). The 30-second Chair Stand test (30sCST) assesses the complex interaction of momentum and stabilisation required to change from the seated to a standing vertical position (37). A repeated sit-to-stand also requires an element of endurance. No difference was found in the mean number of repetitions and, again unsurprisingly given the physical effects of polio, the number of repetitions was lower than the reported age-related norm of 11-16 over 30 seconds (22). It could be hypothesised that the combination of damaged motor units, reduced endurance and altered movement patterns seen in PPS presents too high a load for individuals to improve their 30sCST outcomes.

Knowledge

To the best of our knowledge, this is the only UK programme that focuses solely on the long-term effects of the polio virus. However, the structure of the course is adopted from other self-management programmes for long-term conditions, where there is emphasis on promoting knowledge of the condition (38). We found a significant improvement in post-polio knowledge, measured through the locally devised knowledge test. This is not a validated tool and retention of that knowledge was not retested at the 6-month follow-up.

Study limitations

There are a number of limitations impacting on the interpretation of our data. Firstly, the data were collected routinely over 17 years for clinical purposes rather than for the purposes of research. Therefore, data collection was not as robust, complete or systematic as in clinical trials. A small proportion of historical demographic data (8%) regarding the specific year or geographic location of contracting acute polio, or the residual effects of polio, were no longer available or were unknown. We purposefully only included data that were complete at baseline and follow-up, which limited the available data sets but might also have skewed some of the overall results. Data were incomplete either because patients did not complete the follow-up or because data from either the baseline or the follow-up were not recorded. In order to explore the patterns of missingness, comparisons were made in terms of baseline demographics, age and gender between those with complete paired data, and therefore included in the analysis, and those with missing data who were excluded. This can be seen in the Supplementary information.
The results showed that the participants whose COPM data were included in the study were younger than those whose COPM data were missing, but there were no other significant differences in terms of gender or age between the two groups. Therefore, it could be suggested that, while there were missing data, this was unlikely to have invalidated the overall results. Another limitation was the lack of a control group, and we cannot account for other confounders or claim that the changes seen are attributable to attending the course. The course participants were selected based on the course criteria, and self-selection by being able to attend is a potential confounder, although one that similarly applies to other standardised rehabilitation programmes. The QoL scale without anchors used in this programme might not have been sensitive enough for the PPS population; although QoL is multi-dimensional, some polio-relevant themes, namely issues with environmental accessibility and health professionals' attitudes (39), are beyond the scope of a discrete healthcare intervention, therefore potentially limiting the impact of the course on this domain. Despite these limitations, the authors believe that the dataset suggests positive value for those who attended the PPS self-management programme. Qualitative feedback on participants' experiences has not been examined here but may add further value and is being analysed separately.

Conclusions

A self-management programme for PPS can improve fatigue, the severity with which pain, atrophy and bulbar function issues are experienced, and the overall knowledge of PPS, as well as physical function, over a 6-month period. Despite some limitations, the study results are important to a wider audience given the difficulties in collecting systematic data in this cohort and the relatively sparse evidence for the current approach to PPS management. Experiences from PPS management might prove valuable at a time when a new viral pandemic requires us to design future rehabilitation and self-management programmes for many more survivors of a devastating viral condition.

Acknowledgments

The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.

Funding: None.

Statistical analysis

Descriptive statistics were completed to examine whether there were age and gender differences between the patients whose outcome measures were analysed in the study and the patients with missing data. A Chi-square test was carried out to review the proportional gender difference between the patients included in the analysis of each outcome measure and those who were not included due to missing data. For all outcome measures it was shown that there was no significant difference in gender proportion between the two groups (Table S1). Independent t-tests were completed to analyse the age difference between the two groups. The percentage of missing data was also reported for outcome measures where statistical analysis was not appropriate due to the small number of missing data points (less than 30). The results showed that there was no age difference between the two groups in terms of quality of life (QoL), the 10-m walk test, the 2-minute walk test and the knowledge test. The analysis showed that patients with missing Canadian Occupational Performance Measure (COPM) data were younger, and the difference between the two groups was significant (P value ≤0.05). The authors reviewed the causes of missing data for this outcome measure.
It was noted that a significant number of raw scores were not recorded, rather than the participants not attending the 6-month follow-up. An attempt was made to retrieve the original documentation, but the paper notes were no longer available. The percentage of missing data was used for the Index of Post-polio Sequelae (IPPS), the Multidimensional Assessment of Fatigue (MAF), the Patient Health Questionnaire (PHQ-9), the General Anxiety Disorder scale (GAD-7) and the 30sCST, as the amount of missing data was too low for statistical analysis. The 30-second Chair Stand test (30sCST) has a high percentage of missing data. However, this outcome measure has only been used with a small number of programme participants and was considered less relevant to the overall conclusions of the study (Table S2). In conclusion, the descriptive statistics demonstrated that the participants with missing data and those whose data were included in the outcome measures analysis did not differ in terms of gender proportion. While age differed between those analysed and those who had missing COPM data, for all other outcomes this difference between groups was not seen. This suggests that the missing data are unlikely to invalidate the overall results.

Table S1 Comparison of gender proportion between the participants whose outcome measures were included in the study and the participants whose outcomes were missing
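To make the comparisons described above concrete, the following is a minimal, hypothetical sketch in R of the kind of checks reported in this study: the Chi-square and independent t-test used to compare included versus missing-data participants, and the normality-gated paired comparison used for the main baseline versus 6-month outcomes. All data, variable names and group sizes below are invented for illustration and are not the study data.

```r
# Illustrative sketch only (not study code): data, variable names and group
# sizes are hypothetical and mirror the analyses described in the paper.
set.seed(1)
n <- 150
dat <- data.frame(
  age          = rnorm(n, mean = 61, sd = 9),
  sex          = sample(c("female", "male"), n, replace = TRUE, prob = c(0.63, 0.37)),
  included     = sample(c(TRUE, FALSE), n, replace = TRUE, prob = c(0.8, 0.2)),
  maf_baseline = rnorm(n, mean = 37.6, sd = 7.1),
  maf_followup = rnorm(n, mean = 34.2, sd = 9.3)
)

## Supplementary check: do included and missing-data participants differ?
chisq.test(table(dat$sex, dat$included))   # gender proportion (Chi-square test)
t.test(age ~ included, data = dat)         # age (independent t-test)

## Main analysis pattern: paired baseline vs. 6-month follow-up comparison
diffs <- dat$maf_followup - dat$maf_baseline
if (shapiro.test(diffs)$p.value > 0.05) {
  t.test(dat$maf_followup, dat$maf_baseline, paired = TRUE)       # parametric
} else {
  wilcox.test(dat$maf_followup, dat$maf_baseline, paired = TRUE)  # non-parametric
}
```

In this sketch the choice between the paired t-test and the Wilcoxon signed rank test is driven by a Shapiro-Wilk test on the paired differences, mirroring the approach described in the Methods.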
The effect of occupational exposure to ergonomic risk factors on osteoarthritis of hip or knee and selected other musculoskeletal diseases: A systematic review and meta-analysis from the WHO/ILO Joint Estimates of the Work-related Burden of Disease and Injury

Background: The World Health Organization (WHO) and the International Labour Organization (ILO) are developing joint estimates of the work-related burden of disease and injury (WHO/ILO Joint Estimates), with contributions from a large network of experts. Evidence from mechanistic data suggests that occupational exposure to ergonomic risk factors may cause selected other musculoskeletal diseases, other than back or neck pain (MSD), or osteoarthritis of hip or knee (OA). In this paper, we present a systematic review and meta-analysis of parameters for estimating the number of disability-adjusted life years from MSD or OA that are attributable to occupational exposure to ergonomic risk factors, for the development of the WHO/ILO Joint Estimates.

Objectives: We aimed to systematically review and meta-analyse estimates of the effect of occupational exposure to ergonomic risk factors (force exertion, demanding posture, repetitiveness, hand-arm vibration, lifting, kneeling and/or squatting, and climbing) on MSD and OA (two outcomes: prevalence and incidence).

Data sources: We developed and published a protocol, applying the Navigation Guide as an organizing systematic review framework where feasible. We searched electronic academic databases for potentially relevant records from published and unpublished studies, including the International Trials Register, Ovid Medline, EMBASE, and CISDOC. We also searched electronic grey literature databases, Internet search engines and organizational websites; hand-searched reference lists of previous systematic reviews and included study records; and consulted additional experts.

Study eligibility and criteria: We included working-age (≥15 years) workers in the formal and informal economy in any WHO and/or ILO Member State but excluded children (<15 years) and unpaid domestic workers. We included randomized controlled trials, cohort studies, case-control studies and other non-randomized intervention studies with an estimate of the effect of occupational exposure to ergonomic risk factors (any exposure to one or more of the ergonomic risk factors listed above).

Background

The World Health Organization (WHO) and the International Labour Organization (ILO) are finalizing joint estimates of the work-related burden of disease and injury (WHO/ILO Joint Estimates) (Ryder, 2017). The organizations are estimating the numbers of deaths and disability-adjusted life years (DALYs) that are attributable to selected occupational risk factors.
The WHO/ILO Joint Estimates is based on already existing WHO and ILO methodologies for estimating the burden of disease for selected occupational risk factors (International Labour Organization, 2014; Pruss-Ustun et al., 2017). It expands these existing methodologies with estimation of the burden of several prioritized additional pairs of occupational risk factors and health outcomes. For this purpose, population attributable fractions (Murray et al., 2004), i.e. the proportional reduction in burden from the health outcome achieved by a reduction of exposure to the risk factor to zero, are being calculated for each additional risk factor-outcome pair, and these fractions are being applied to the total disease burden envelopes for the health outcome from the WHO Global Health Estimates for the years 2000-2016 (World Health Organization, 2019). The WHO/ILO Joint Estimates may include estimates of the burden of selected musculoskeletal diseases other than back or neck pain (MSD) or osteoarthritis of hip or knee (OA) attributable to occupational exposure to ergonomic risk factors, if feasible, as one additional prioritized risk factor-outcome pair. To optimize parameters used in estimation models, a systematic review and meta-analysis is required of studies with estimates of the effect of occupational exposure to ergonomic risk factors on MSD or OA (Hulshof et al., 2019). In the current paper, we present this systematic review and meta-analysis. WHO and ILO, supported by a large network of experts, have in parallel also produced a systematic review of studies estimating the prevalence of occupational exposure to ergonomic risk factors (Hulshof et al., 2021) and several other systematic reviews and meta-analyses on other additional risk factor-outcome pairs (Descatha et al., 2018; Godderis et al., 2018; Hulshof et al., 2019; Li et al., 2018, 2020; Mandrioli et al., 2018; Paulo et al., 2019; Rugulies et al., 2019; Teixeira et al., 2019; Tenkate et al., 2019; Pachito et al., 2021; Pega et al., 2020). To our knowledge, these are the first systematic reviews and meta-analyses conducted specifically for an occupational burden of disease study, including having a pre-published protocol that ensures full transparency (Mandrioli et al., 2018). The WHO/ILO joint estimation methodology and the burden of disease estimates are separate from these systematic reviews, and they will be described and reported elsewhere.

Rationale

To consider the feasibility of estimating the burden of MSD or OA from exposure to occupational ergonomic risk factors, and to ensure that potential estimates of burden of disease are reported in adherence with the guidelines for accurate and transparent health estimates reporting (GATHER) (Stevens et al., 2016), WHO and ILO require a systematic review and a meta-analysis of studies with estimates of the relative effect of exposure to occupational ergonomic risk factors on the prevalence or incidence of MSD or OA respectively, compared with the theoretical minimum risk exposure level (presented in this article). The theoretical minimum risk exposure level is the level that would result in the lowest possible population risk, even if it is not feasible to attain this exposure level in practice (Murray et al., 2004). These data and effect estimates should be tailored to serve as parameters for estimating the burden of MSD and OA respectively, from exposure to occupational ergonomic risk factors in the WHO/ILO Joint Estimates.
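As a rough illustration of the attributable-fraction logic described above, the sketch below computes a simple two-level population attributable fraction (Levin's formula for a single dichotomous exposure) in R and applies it to a burden envelope. This is not necessarily the exact formula used in the WHO/ILO Joint Estimates, and all numbers are hypothetical.

```r
# Illustrative sketch only: a simple two-level population attributable fraction
# (Levin's formula) for a single dichotomous exposure. Not necessarily the exact
# formula used in the WHO/ILO Joint Estimates; all values are hypothetical.
paf <- function(p_exposed, rr) {
  p_exposed * (rr - 1) / (p_exposed * (rr - 1) + 1)
}

paf(p_exposed = 0.40, rr = 1.5)   # ~0.167: about 17% of the burden attributable

# The attributable burden is the fraction applied to the total burden envelope
total_dalys  <- 1e6                           # hypothetical DALY envelope for the outcome
attributable <- paf(0.40, 1.5) * total_dalys  # hypothetical attributable DALYs
attributable
```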
Seven previous systematic reviews have, however, focused on the evidence on the effect of exposure to one or more of these occupational ergonomic risk factors on one or more selected musculoskeletal diseases of the shoulder (Lievense et al., 2001; van Rijn et al., 2010; van der Molen et al., 2017); elbow (Descatha et al., 2016); hip (Lievense et al., 2001; Jensen, 2008); and knee (Verbeek et al., 2017). These systematic reviews identified the following occupational ergonomic risk factors as relevant. A recent meta-analysis, based on seven studies, revealed moderate quality evidence for associations between shoulder disorders (M75.1-M75.5) and arm elevation (odds ratio (OR) 1.9, 95% CI 1.47 to 2.47, I² 31%) and shoulder load, a combined biomechanical exposure measure (OR 2.0, 95% CI 1.90 to 2.10, I² 0%), and low to very low evidence for hand force exertion (OR 1.5, 95% CI 1.25 to 1.87, I² 66%) and hand-arm vibration (OR 1.3, 95% CI 1.01 to 1.77, I² 99%) (van der Molen et al., 2017). Van Rijn et al. (2010) performed a systematic review on the relationship between work-related factors and specific disorders of the shoulder and found, in the 17 included studies, that repetitive movements of the shoulder, repetitive motion of the hand/wrist for > 2 h/day, hand-arm vibration, and arm elevation showed an association with subacromial impingement syndrome (ORs between 1.04, 95% CI 1.00-1.07, and 4.7, 95% CI 2.07-10.68), as did upper-arm flexion of ≥ 45° for ≥ 15% of the time (OR 2.43, 95% CI 1.04-5.68) and a duty cycle of forceful exertions of ≥ 9% of the time or any duty cycle of forceful pinch (OR 2.66, 95% CI 1.26-5.59) (van Rijn et al., 2010). Descatha et al. (2016) included in a meta-analysis five prospective studies published between 2001 and 2014 and found a positive association between combined biomechanical exposure involving the wrist and/or elbow and the incidence of epicondylitis lateralis (OR 2.6, 95% CI 1.9-3.5) (Descatha et al., 2016). In a systematic review by van Rijn et al. (2009), the associations between force, posture, repetitiveness, hand-arm vibration and a mixture of these exposures and elbow disorders were studied (van Rijn et al., 2009). Handling tools of > 1 kg (ORs of 2.1-3.0), handling loads of > 20 kg at least 10 times/day (OR 2.6) and repetitive movements for > 2 h/day (ORs of 2.8-4.7) were associated with lateral epicondylitis, while handling loads of > 5 kg (2 times/min for a minimum of 2 h/day), handling loads of > 20 kg at least 10 times/day, high hand grip forces for > 1 h/day, repetitive movements for > 2 h/day (ORs of 2.2-3.6) and working with vibrating tools for > 2 h/day (OR 2.2) were all associated with medial epicondylitis. Jensen (2008) evaluated the association between physical work demands and hip osteoarthritis in 22 included studies and concluded that moderate to strong evidence exists for a relation with heavy lifting (ORs ranging between 1.97, 95% CI 1.14-3.4, and 8.5, 95% CI 1.6-45.3) (Jensen, 2008). Furthermore, 13 studies showed a significantly increased risk of hip osteoarthritis with farming, with ORs ranging from 1.9 (95% CI 1.01-3.87) to 12.0 (95% CI 6.7-21.4). Lievense et al. (2001) used a best-evidence synthesis to summarize the results of two retrospective and 14 case-control studies and found moderate evidence for a positive association between previous physical workload and hip osteoarthritis, with ORs ranging from 1.5 (95% CI 0.9-2.5) to 9.3 (95% CI 1.9-44.5) (Lievense et al., 2001).
In a subgroup analysis, farming for ≥ 10 years was also positively related to hip osteoarthritis. Work in the informal economy may lead to different exposures and exposure effects than work in the formal economy does. The informal economy is defined as "all economic activities by workers and economic units that are - in law or in practice - not covered or insufficiently covered by formal arrangements", but excluding "illicit activities, in particular the provision of services or the production, sale, possession or use of goods forbidden by law, including the illicit production and trafficking of drugs, the illicit manufacturing of and trafficking in firearms, trafficking in persons, and money laundering, as defined in the relevant international treaties" (p. 4) (104th International Labour Conference, 2015). Therefore, we consider the formality of the economy studied in the studies included in both systematic reviews.

Description of the risk factor

The aforementioned seven systematic reviews on the effect of occupational ergonomic risk factors on musculoskeletal diseases of the shoulder (van Rijn et al., 2010; van der Molen et al., 2017), elbow (Descatha et al., 2016), hip osteoarthritis (Lievense et al., 2001; Jensen, 2008) and knee osteoarthritis (Verbeek et al., 2017), plus additional documents (Harris and Coggon, 2015; EWCS, 2017), have identified the following seven types of occupational ergonomic risk factors as being of interest: (i) force exertion (e.g., carrying or moving heavy loads, turning and screwing); (ii) demanding posture (e.g., arm elevation, bending and/or twisting); (iii) repetitiveness (e.g., physically repetitive work); (iv) hand-arm vibration; (v) kneeling and/or squatting; (vi) lifting (e.g., lifting heavy loads); and/or (vii) climbing. Therefore, we have reviewed studies on occupational exposure to any (i.e., one or more) of these seven different ergonomic risk factors. The definition of the risk factor, the risk factor levels and the theoretical minimum risk exposure level are presented in Table 1. The WHO Burden of Disease study has previously classified occupational ergonomic risk factors into four categories by occupation, these being background exposure (defined by managers and professionals as occupations); low exposure (clerical and sales workers); moderate exposure (operators and service workers); and high exposure (farmers) (Murray et al., 2004). The Institute for Health Metrics and Evaluation's burden of disease study has defined occupational ergonomic factors for low back and neck pain specifically as "All individuals have the ergonomic factors of clerical and related workers" (p. 1362) (GBD Risk Factors Collaborators, 2017).

Definition of the outcome

In this systematic review, we will review two outcomes:
1. Any selected other musculoskeletal diseases (MSD), defined as one or more of: shoulder disorders: rotator cuff syndrome, bicipital tendinitis, calcific tendinitis, shoulder impingement, bursitis of the shoulder; elbow disorders: epicondylitis medialis, epicondylitis lateralis, bursitis of the elbow; hip disorders: trochanteric and other hip bursitis; and knee disorders: chondromalacia patellae, meniscus disorders and bursitis of the knee.
2. Osteoarthritis of the hip or knee (OA).

For the outcomes MSD and OA, only diseases have been included for which exposure to one or more of the included occupational ergonomic risk factors (Table 1) is considered a necessary factor for disease development.
This selection was mainly based on the information about a possible occupational origin of the selected health outcomes in the seven systematic reviews described above (van der Molen et al., 2017; van Rijn et al., 2010, 2009; Descatha et al., 2016; Jensen, 2008; Lievense et al., 2001; Verbeek et al., 2017), plus additional evidence (Harris and Coggon, 2015). The WHO Global Health Estimates group outcomes into standard burden of disease categories (World Health Organization, 2017), based on standard codes from the International Statistical Classification of Diseases and Related Health Problems 10th Revision (ICD-10) (World Health Organization, 2015). The relevant WHO Global Health Estimates categories for this systematic review are "II.M.2. Osteoarthritis" and "II.M.5. Other musculoskeletal diseases" (World Health Organization, 2017).

Table 1 Definitions of the risk factor, risk factor levels and the minimum risk exposure level.
Risk factor: Occupational exposure to ergonomic risk factors (defined as occupational exposure to one or more of: force exertion, demanding posture, repetitive movement, hand-arm vibration, kneeling or squatting, lifting, climbing).
Risk factor level: Two levels: 1. No or low occupational exposure to ergonomic risk factors. 2. Any occupational exposure to ergonomic risk factors. If possible, "any" exposure may be further classified into "moderate" and "high" exposure, preferably based on exposure in terms of level, frequency and/or duration of the exposure.
Theoretical minimum risk exposure level: No occupational exposure to ergonomic risk factors.

Figure 1 presents the logic model for our systematic reviews of the causal relationship between exposure to occupational ergonomic risk factors and MSD and OA, respectively. This logic model is an a priori, process model (Rehfuess et al., 2018) that seeks to capture the complexity of the risk factor-outcome causal relationship (Anderson et al., 2011). Musculoskeletal diseases are multifactorial in origin, which means that there may be several etiological risk factors for their onset. Specific potentially relevant pathomechanisms include: posturally induced muscular imbalance, neural pathomechanisms, the 'Cinderella hypothesis' of motor unit recruitment, reperfusion, impaired heat-shock response and stress-induced mitochondrial damage (Forde et al., 2002). Nevertheless, there is currently no clear and circumscriptive understanding of the pathogenesis of work-related musculoskeletal diseases. One postulation is that musculoskeletal diseases result from cumulative micro damage induced by risk factors at the cellular and/or tissue level over time.

Objectives

To systematically review and meta-analyze randomized controlled trials, cohort studies, case-control studies and other non-randomized intervention studies with estimates of the relative effect of any occupational exposure to ergonomic risk factors on MSD and OA respectively among workers of working age, compared with the minimum risk exposure level of no exposure.

Developed protocol

The study protocol was registered in PROSPERO (CRD42018102631). This protocol is in accordance with the preferred reporting items for systematic review and meta-analysis protocols statement (PRISMA-P) (Shamseer et al., 2015). The abstract is in line with the reporting items for systematic reviews in journal and conference abstracts (PRISMA-A) (Beller et al., 2013).
Any modification of the methods stated in the present protocol will be registered in PROSPERO and is reported in the systematic review itself under the section 'Differences between protocol and review'. This systematic review of the effect of exposure to occupational ergonomic risk factors on MSD and OA is reported according to the preferred reporting items for systematic review and meta-analysis statement (PRISMA) (Liberati et al., 2009). Reporting of all parameters for estimating the burden of osteoarthritis, and of other musculoskeletal diseases respectively, from occupational exposure to ergonomic risk factors adheres to the requirements of the GATHER guidelines (Stevens et al., 2016), as the WHO/ILO burden of disease estimates produced from the systematic review follow these reporting guidelines.

Electronic academic databases
We searched five electronic academic databases, as specified in the published protocol (Hulshof et al., 2019). The full search strategies for all databases were revised by a clinical librarian/information scientist, and the strategies used in Ovid MEDLINE and in EMBASE are presented in Appendix 1 in the Supplementary data. We performed searches in electronic databases operated in the English language using a search strategy also in the English language. Consequently, study records that did not report essential information (i.e., title and abstract) in English were not captured. We adapted the search syntax to suit the other electronic academic and grey literature databases. Just prior to completion of the review, an additional search of the MEDLINE database was undertaken on 3 March 2020 to capture the most recent publications (e.g., publications ahead of print). No additional studies were identified. Differences between the proposed search strategy and the actual search strategy are documented in Section 7.

Internet search engines
In addition, we also searched the Google (www.google.com/) and Google Scholar (www.google.com/scholar/) Internet search engines and screened the first 100 hits for potentially relevant records, a strategy used previously in Cochrane Reviews (Pega et al., 2015, 2017).

Organizational websites
The websites of nine international organizations and national government departments were searched, starting in December 2018.

Hand-searching and expert consultation
Hand-searching for potentially eligible studies was undertaken in:
• Reference lists of previous systematic reviews.
• Reference lists of all study records of all included studies.
• Study records published over the past 24 months in the three peer-reviewed academic journals from which we obtained the largest number of included studies (Occup Environ Med; Scand J Work Environ Health; Int Arch Occup Environ Health).
• Study records that have cited an included study record (identified in the Web of Science citation database).
• Collections of the review authors.
Additional experts were contacted with a request to identify potentially eligible studies. The Scientific Committee on Musculoskeletal Disorders of the International Commission on Occupational Health and the International Ergonomics Association were contacted with a request to suggest eligible studies.

Selected studies
Study selection was carried out with the Covidence software. All records identified in the search were downloaded and duplicates were identified and deleted.
Afterwards, pairs of review authors independently screened titles and abstracts (step 1) and then full texts (step 2) of potentially relevant records. A third review author resolved any disagreements between the two review authors. If a study record identified in the literature search was authored by a review author assigned to study selection, or if an assigned review author was involved in the study, the record was re-assigned to another review author for study selection. We present the study selection for both health outcomes in a flow chart, as per the PRISMA guidelines (Liberati et al., 2009).

Additional study selection by natural language processing
In order to efficiently identify all instances of the ergonomic risk factors of interest to our research project in the information found in more than 2 × 18,000 titles and abstracts retrieved by our search strategies, a natural language processing (NLP) method was used. Natural language processing is a subset of artificial intelligence techniques which deals with processing natural (human) language and extracting the required information. Since our study had precise inclusion criteria, namely the presence of a number of specified ergonomic risk factors, we used a regular expression (RegEx or RegExp) technique. Regular expressions are sequences of characters which represent a search pattern and have been successfully used for data mining in various fields of medicine (Chen et al., 2019; Sohn et al., 2014). In the case of this systematic review, to search for published papers dealing with vibration we employed the regular expression 'vibrat', which covers all variations of this word, such as vibration, vibrations, vibratory, vibrated, etc. For each of the seven risk factors, regular expressions were developed as presented in Table 3. References which were originally in EndNote were exported as BibTeX and saved as a .txt file. Then, all references were imported into the R programming language using the RefManageR package (R Core Team, 2019; McLean, 2017). The regular expression search strategy was applied to all titles and abstracts, and each occurrence of any of the ergonomic risk factors was flagged. Finally, the presence and number of flagged risk factors in the title and abstract were exported to Microsoft Excel together with the original data for further filtering. The regular expression strategy was intentionally developed to result in a high sensitivity, to reduce the risk of false negatives.

Eligibility criteria
The PECO (Morgan et al., 2018) criteria are described below.

Types of populations
We included studies of the working-age population (≥ 15 years) in the formal and informal economy. Studies of children (aged < 15 years) and unpaid domestic workers were excluded. Participants residing in any WHO and/or ILO Member State and any industrial setting or occupational group were included. Appendix F of our protocol paper provides a brief overview of the PECO criteria.

Types of exposures
We included studies that define exposure to occupational ergonomic risk factors in accordance with our standard definition (Table 1). We included studies where exposure to occupational ergonomic risk factors was measured, whether objectively (e.g., by means of technology) or subjectively, including studies that used measurements by experts (e.g., scientists with subject matter expertise) and self-reports by the worker or a workplace administrator or manager.
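To illustrate the regular-expression flagging step described under 'Additional study selection by natural language processing' above, a minimal sketch in R follows; only the 'vibrat' pattern is taken from the text, while the remaining patterns and the example records are invented stand-ins rather than the actual expressions of Table 3.

# Illustrative sketch (R): flag titles/abstracts mentioning any ergonomic risk factor.
# Only 'vibrat' is quoted from the text; the other patterns are assumptions for the example.
patterns <- c(vibration  = "vibrat",
              lifting    = "lift",
              kneeling   = "kneel|squat",
              posture    = "postur",
              repetition = "repetit")

records <- data.frame(
  id   = 1:2,
  text = c("Repetitive lifting and hand-arm vibration in assembly workers",
           "Dietary intake and cardiovascular outcomes"),
  stringsAsFactors = FALSE
)

# One logical column per risk factor: TRUE if the pattern occurs in the title/abstract
flags <- sapply(patterns, function(p) grepl(p, records$text, ignore.case = TRUE))
records$n_factors_flagged <- rowSums(flags)
records[records$n_factors_flagged > 0, ]  # candidate records carried forward to screening

As in the review, such patterns are deliberately broad (high sensitivity), so false positives are tolerated and removed later during title/abstract and full-text screening.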
If a study presented both objective and subjective measurements, then we prioritized the objective measurements. We included studies with measures from any data source, including registry data. Studies from any year were included. For studies that reported exposure levels differing from our standard levels (Table 1), we converted the reported levels to the standard levels if possible, and reported analyses on these alternate exposure levels if possible.

Types of comparators
The included comparator was participants exposed to the theoretical minimum risk exposure level (Table 1). We excluded all other comparators.

Types of outcomes
This systematic review included two outcomes:
1. Has selected other musculoskeletal diseases (MSD).
2. Has osteoarthritis of the hip or knee (OA).
We included studies that defined MSD or OA in accordance with our standard definition of these outcomes (Table 2). We included only binary measures (present versus not present) of clinically assessed MSD or OA, respectively. Prevalence and incidence of eligible diseases were included, but mortality was excluded. The following measurements of MSD or OA were regarded as eligible: i) diagnosis by a physician; ii) hospital admission or discharge records; iii) other relevant administrative data (e.g., records of sickness absence or disability); iv) registry data of treatment for MSD or OA, respectively. All other measures were excluded from this systematic review. We included objective measures of these eligible musculoskeletal diseases (e.g., measured by an occupational health and safety practitioner, such as an occupational physician or nurse, using a validated tool), as well as subjective measures (e.g., measured by a worker). If subjective and objective measures were presented, then we prioritized the objective measures.

Types of studies
We included studies that investigate the effect of exposure to any occupational ergonomic risk factor on MSD or OA. Eligible study designs were randomized controlled trials (including parallel-group, cluster, cross-over and factorial trials), cohort studies (both prospective and retrospective), case-control studies, and other non-randomized intervention studies (including quasi-randomized controlled trials, controlled before-after studies and interrupted time series studies). We included a broader set of observational study designs than is commonly included, because a recent augmented Cochrane Review of complex interventions identified valuable additional studies using such a broader set of study designs (Arditi et al., 2016). As we have an interest in quantifying risk and not in qualitative assessment of hazard (Barroga and Kojima, 2013), we excluded all other study designs (e.g., uncontrolled before-and-after, cross-sectional, qualitative, modelling, case and non-original studies). Records published in any year and in any language were included. Again, the search was conducted using English language terms, so that records published in any language that presented essential information (i.e., title and abstract) in English were included. If a record was written in a language other than those spoken by the authors of this review or those of other reviews in the series, then the record was translated into English. Published and unpublished studies were included. Studies conducted using unethical practices were excluded from the review (e.g., studies that deliberately exposed humans to a known risk factor to human health).
Types of effect measures We included measures of the relative effect of any exposure to occupational ergonomic risk factors on the prevalence or incidence of MSD or OA respectively, compared with the theoretical minimum risk exposure level of no exposure. Effect estimates of mortality measures were excluded. We include relative effect measures such as risk ratios and odds ratios for prevalence measures and hazard ratios for incidence measures (e.g., developed MSD or OA, respectively). Measures of absolute effects (e.g. mean differences in risks or odds) were converted into relative effect measures, but if conversion was impossible, they were excluded. As shown in our logic framework ( Fig. 1), we a priori considered the following variables to be potential effect modifiers of the effect of occupational exposure to ergonomic factors on MSD or OA: country, age, sex, industrial sector, occupational group and formality of employment. We considered age, sex, socioeconomic position, body mass index, smoking status, comorbidity and sporting and/or leisure activities to be potential confounders. Potential mediators are tasks performed, load on the musculoskeletal system, psychosocial demands, social support, decision latitude, job control, job security, long working hours and workrelated stress. If a study presented estimates for the effect from two or more alternative models that have been adjusted for different variables, then we systematically prioritized the estimate from the model that we consider best adjusted, applying the lists of confounders and mediators identified in our logic model (Fig. 1). We prioritized estimates from models adjusted for more potential confounders over those from models adjusted for fewer. For example, if a study presented estimates from a crude, unadjusted model (Model A), a model adjusted for one potential confounder (Model B) and a model adjusted for two potential confounders (Model C), then we prioritized the estimate from Model C. We prioritized estimates from models unadjusted for mediators over those from models that adjusted for mediators, because adjustment for mediators can introduce bias. For example, if Model A has been adjusted for two confounders, and Model B has been adjusted for the same two confounders and a potential mediator, then we have chosen the estimate from Model A over that from Model B. We prioritized estimates from models that can adjust for time-varying confounders that are at the same time also mediators, such as marginal structural models (Pega et al., 2016), over estimates from models that can only adjust for time-varying confounders, such as fixed-effects models (Gunasekara et al., 2014), over estimates from models that cannot adjust for time-varying confounding. If a study presented effect estimates from two or more potentially eligible models, then we documented specifically why we prioritized the selected model. Extracted data A data extraction form was developed and trialed until data extractors reached convergence and agreement. 
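The model-prioritization rules above (prefer models adjusted for more potential confounders; avoid models adjusted for mediators) can be expressed as a small decision sketch in R; the field names, scoring logic and example models are assumptions made for illustration only, not an implementation used in the review.

# Illustrative sketch (R): select among alternative reported models following the
# stated rules. Mediator-adjusted models are set aside if an alternative exists;
# among the remainder, the model adjusted for the most confounders is preferred.
models <- list(
  A = list(n_confounders = 0, adjusts_for_mediator = FALSE),  # crude model
  B = list(n_confounders = 1, adjusts_for_mediator = FALSE),
  C = list(n_confounders = 2, adjusts_for_mediator = FALSE),
  D = list(n_confounders = 2, adjusts_for_mediator = TRUE)    # mediator adjustment can introduce bias
)

prioritize_model <- function(models) {
  no_mediator <- Filter(function(m) !m$adjusts_for_mediator, models)
  candidates  <- if (length(no_mediator) > 0) no_mediator else models
  names(candidates)[which.max(sapply(candidates, `[[`, "n_confounders"))]
}

prioritize_model(models)  # -> "C": most confounders, no mediator adjustment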
Pairs of review authors extracted data on study characteristics (including study authors, study year, study country, participants, exposure and outcome), study design (including a summary of the study design, comparator, epidemiological models used and effect estimate measure), risk of bias (including selection bias, reporting bias, confounding and reverse causation) and study context (e.g., data on contemporaneous exposure to other occupational risk factors potentially relevant for health loss from MSD or OA, respectively). A third review author resolved conflicts in data extraction, if any. Data were entered into and managed with Excel. We also extracted data on potential conflicts of interest in the included studies. For each author and affiliated organization of each included study record, we extracted their financial disclosures and funding sources. We used a modification of a previous method to identify and assess undisclosed financial interests of authors (Forsyth et al., 2014). Where no financial disclosure or conflict of interest statements were available, we searched the names of all authors in other study records gathered for this study and published in the prior 36 months, and in other publicly available declarations of interests (Drazen et al., 2010a, 2010b).

Requested missing data
If relevant data were missing, we contacted the study authors by email or by phone, using the contact details provided in the principal study record, and requested the missing data. Mostly, the missing data concerned analyses of exposure to any of the selected risk factors or of any of the selected health outcomes. If we did not receive a positive response from the study author, a follow-up email was sent at two weeks. On our request, some of the authors performed additional analyses and provided us with the requested data. We present a description of the additional data, the study author from whom the data were requested, the date of the requests sent, the date on which data were received (if any), and a summary of the responses provided by the study authors (Appendix 2 in the Supplementary data).

Assessed risk of bias
Standard risk of bias tools do not exist for systematic reviews for hazard identification in occupational and environmental health, nor for risk assessment. The five methods specifically developed for occupational and environmental health are for either or both hazard identification and risk assessment, and they differ substantially in the types of studies (randomized, observational and/or simulation studies) and data (e.g., human, animal and/or in vitro) they seek to assess. However, all five methods, including the Navigation Guide (Lam et al., 2016a, 2016b, 2016c), assess risk of bias in human studies similarly. The Navigation Guide was specifically developed to translate the rigor and transparency of systematic review methods applied in the clinical sciences to the evidence stream and decision context of environmental health, which includes workplace environment exposures and associated health outcomes. The guide is our overall organizing framework, and we also apply its risk of bias assessment method in this systematic review. The Navigation Guide risk of bias assessment method builds on the standard risk of bias assessment methods of the Cochrane Collaboration and the US Agency for Healthcare Research and Quality (Viswanathan et al., 2008).
Some further refinements of the Navigation Guide method may be warranted (Goodman et al., 2017), but it has been successfully applied in several completed and ongoing systematic reviews Koustas et al., 2014;Lam et al., 2014;Vesterinen et al., 2014;Johnson et al., 2016;Lam et al., 2016aLam et al., , 2016bLam et al., , 2016cLam et al., 2017). In our application of the Navigation Guide method, we have drawn heavily on one of its latest versions, as presented in the protocol for a systematic review (Lam et al., 2016a(Lam et al., , 2016b(Lam et al., , 2016c. We have assessed risk of bias on the individual study level and on the body of evidence overall. The nine risk of bias domains included in the Navigation Guide method for human studies are: (i) source population representation; (ii) blinding; (iii) exposure assessment; (iv) outcome assessment; (v) confounding; (vi) incomplete outcome data; (vii) selective outcome reporting; (viii) conflict of interest; and (ix) other sources of bias. While two of the earlier case studies of the Navigation Guide did not utilize outcome assessment as a risk of bias domain for studies of human data Koustas et al., 2014;Lam et al., 2014;Vesterinen et al., 2014), all of the subsequent reviews have included this domain (Johnson et al., 2016;Lam et al., 2016aLam et al., , 2016bLam et al., , 2016cLam et al., 2017). Risk of bias or confounding ratings were: "low"; "probably low"; "probably high"; "high" or "not applicable" (Lam et al., 2016a(Lam et al., , 2016b(Lam et al., , 2016c. To judge the risk of bias in each domain, we have applied a priori instructions (Appendix H), which we have adopted or adapted from an ongoing Navigation Guide systematic review (Lam et al., 2016a(Lam et al., , 2016b(Lam et al., , 2016c. For example, a study was be assessed as carrying "low" risk of bias from source population representation, if we judged the source population to be described in sufficient detail (including eligibility criteria, recruitment, enrollment, participation and loss to follow up) and the distribution and characteristics of the study sample to indicate minimal or no risk of selection effects. The risk of bias at study level was determined by the worst rating in any bias domain for any outcome. For example, if a study was rated as "probably high" risk of bias in one domain for one outcome and "low" risk of bias in all other domains for the outcome and in all domains for all other outcomes, the study will be rated as having a "probably high" risk of bias overall. All risk of bias assessors have jointly trialed the application of the risk of bias criteria until they have synchronized their understanding and application of the criteria. Pairs of study authors have independently judged the risk of bias for each study by outcome. Where individual assessments differ, a third author has resolved the conflict. For each included study, we have reported our study-level risk of bias assessment by domain in a standard 'Risk of bias' table . For the entire body of evidence, we present the study-level risk of bias assessments in a 'Risk of bias summary' figure . Conducted evidence synthesis (including meta-analysis) If we found two or more studies with an eligible effect estimate (Table 2), two review authors independently investigated the clinical heterogeneity of the studies in terms of participants (including country, sex, age and industrial sector or occupation), level of risk factor exposure, comparator and outcomes. 
If we found that effect estimates differed considerably by country, sex and/or age, or a combination of these, then we have synthesised evidence for the relevant populations defined by country, sex and/or age, or combination thereof. Differences by country could include or be expanded to include differences by country group (e.g. WHO region or World Bank income group). If we found that effect estimates were clinically sufficiently homogenous across countries, sexes and age groups, we have combined studies from all of these populations into one pooled effect estimate that could be applied across all combinations of countries, sexes and age groups in the WHO/ILO Joint Estimates. If we judged two or more studies for the relevant combination of country, sex and age group, or combination thereof, to be sufficiently clinically homogenous to potentially be combined quantitatively using quantitative meta-analysis, we have tested the statistical heterogeneity of the studies using the I 2 statistic . If two or more clinically homogenous studies were found to be sufficiently homogenous statistically to be combined in a meta-analysis, we have pooled the risk ratios of the studies in a quantitative meta-analysis, using the inverse variance method with a random effects model to account for cross-study heterogeneity . The metaanalysis was conducted in RevMan 5.3, but the data for entry into these programmes may be prepared using another recognized statistical analysis programme, such as Stata. We have neither quantitatively combined data from studies with different designs (e.g. cohort studies with case-controls studies), nor unadjusted and adjusted models. We have only combined studies that we judged to have a minimum acceptable level of adjustment for confounders. If quantitative synthesis was not feasible, we have synthesised the study findings narratively and identified the estimates that we judged to be the highest quality evidence available. Additional analyses If there was evidence for differences in effect estimates by country, sex, age, industrial sector and/or occupation, or by a combination of these variables, we have conducted subgroup analyses by the relevant variable or combination of variables, as feasible. Findings of these subgroup analyses, if any, will be used as parameters for estimating burden of disease specifically for relevant populations defined by these variables. We have also conducted subgroup analyses by study design (cohort studies versus case-control studies). Assessed quality of evidence We assessed quality of evidence using a modified version of the Navigation Guide quality of evidence assessment tool (Lam et al., 2016a(Lam et al., , 2016b(Lam et al., , 2016c. The tool is based on the GRADE approach (Schünemann et al., 2011) adapted specifically to systematic reviews in occupational and environmental health . We assessed quality of evidence for the entire body of evidence by outcome. We have adopted or adapted the latest Navigation Guide instructions for grading the quality of evidence (Lam et al., 2016a(Lam et al., , 2016b(Lam et al., , 2016c. We downgraded the quality of evidence for the following five GRADE reasons: (i) risk of bias; (ii) indirectness; (iii) inconsistency; (iv) imprecision; and (v) publication bias. We have judged the risk of publication bias qualitatively. To assess possible risk of bias from selective reporting, protocols of included studies have been screened to identify instances of selective reporting. 
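As a minimal sketch of the inverse-variance random-effects pooling and the I² heterogeneity statistic referred to above, the base-R code below implements the DerSimonian-Laird estimator; the odds ratios and confidence intervals are invented placeholders, not the estimates of the included studies, and the review itself used RevMan 5.3 for the actual meta-analyses.

# Illustrative sketch (R): inverse-variance random-effects pooling of odds ratios
# (DerSimonian-Laird) and the I^2 statistic. The input ORs and CIs are invented.
or  <- c(1.5, 2.1, 1.2, 1.8)
lcl <- c(1.1, 1.3, 0.8, 1.2)
ucl <- c(2.0, 3.4, 1.8, 2.7)

yi <- log(or)
se <- (log(ucl) - log(lcl)) / (2 * qnorm(0.975))   # SE recovered from the 95% CI
wi <- 1 / se^2                                     # fixed-effect (inverse-variance) weights

q  <- sum(wi * (yi - sum(wi * yi) / sum(wi))^2)    # Cochran's Q
df <- length(yi) - 1
i2 <- max(0, (q - df) / q) * 100                   # I^2 (%): share of variability due to heterogeneity

tau2 <- max(0, (q - df) / (sum(wi) - sum(wi^2) / sum(wi)))  # between-study variance (DL)
wr   <- 1 / (se^2 + tau2)                          # random-effects weights
pooled    <- sum(wr * yi) / sum(wr)
pooled_se <- sqrt(1 / sum(wr))

exp(c(OR    = pooled,
      lower = pooled - qnorm(0.975) * pooled_se,
      upper = pooled + qnorm(0.975) * pooled_se))
round(i2, 1)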
We graded the evidence using the three Navigation Guide standard quality of evidence ratings: "high", "moderate" and "low" (Lam et al., 2016a, 2016b, 2016c). Within each of the relevant domains, we rated the concern for the quality of evidence using the ratings "none", "serious" and "very serious". As per the Navigation Guide, we start at "high" for randomized studies and "moderate" for observational studies. Quality was downgraded for no concern by nil grades (0), for a serious concern by one grade (−1) and for a very serious concern by two grades (−2). We upgraded the quality of evidence for the following other reasons: large effect, dose-response, and plausible residual confounding and bias. For example, if we had a serious concern for risk of bias in a body of evidence consisting of observational studies (−1), but no other concerns and no reasons for upgrading, we downgraded its quality of evidence by one grade, from "moderate" to "low".

Assessed strength of evidence
We applied the standard Navigation Guide methodology (Lam et al., 2016a, 2016b, 2016c) to rate the strength of the evidence. The rating was based on a combination of the following four criteria: (i) quality of the body of evidence; (ii) direction of the effect; (iii) confidence in the effect; and (iv) other compelling attributes of the data that may influence our certainty. The ratings for strength of evidence for the effect of exposure to occupational ergonomic risk factors on MSD and OA, respectively, were "sufficient evidence of harmfulness", "limited evidence of harmfulness", "inadequate evidence of harmfulness" and "evidence of lack of harmfulness".

Study selection
Figs. 2a and 2b present the flow diagrams of the study selection for the outcomes MSD and OA, respectively. Of the total of 36,120 individual study records identified in our searches, 18 records from 17 studies fulfilled the eligibility criteria and were included in the systematic review. For the 30 excluded studies that most closely resembled the inclusion criteria, the reasons for exclusion are listed in Appendix 1. Of the 18 included studies, eight were included in one or more quantitative meta-analyses.

Characteristics of included studies
The characteristics of the included studies are summarized in Tables 4a and 4b.

Study type
Half of the included studies were cohort studies (four studies) and the other half were case control studies (four studies). The type of effect estimate most commonly reported was the odds ratio (eight studies). Most studies adjusted for the most important of our pre-specified confounders, and every study adjusted for at least some of these confounders. The confounders most commonly adjusted for were age and sex. Several studies additionally adjusted for further potential confounders (Tables 4a and 4b).

Population studied
The included studies captured 2,378,729 workers (1,157,943 females and 1,220,786 males) in total. Six studies examined both female and male workers, while two studies examined only male workers. The most commonly studied age groups were those between 20 and 65 years, while in the studies on knee or hip osteoarthritis the age groups between 45 and 65 years were overrepresented. By WHO region, most studies examined populations in the European region (six studies from four countries), followed by populations in the Eastern Mediterranean region (one study) and populations in the Western Pacific region (one study).
The most commonly studied countries were Germany (two studies) and France (two studies). Most studies did not provide detailed quantitative breakdowns of participants by industrial sector and occupation, but most studies covered several industrial sectors and occupations.
Fig. 2a. Flow diagram of study selection for outcome: selected other musculoskeletal diseases.

Exposure studied
Out of a total of eight studies, seven studies measured exposure to the ergonomic risk factors using self-reports by questionnaires or interviews, while one study used a job-exposure matrix to measure exposure only indirectly. All studies measured any exposure to at least three out of the seven selected ergonomic risk factors (force exertion, demanding posture, repetitiveness, hand-arm vibration, lifting, kneeling and/or squatting, and climbing). (Exposure definition excerpt from the study characteristics tables: at the individual level, workers were defined as exposed if they were exposed to any of the ergonomic risk factors for ≥ 2 h/day.)

Comparator studied
The comparator in all studies was no or low exposure to the selected ergonomic risk factors.

Outcomes studied
No studies reported evidence on the outcome of prevalence of MSD or OA. Five studies reported evidence on the outcome of acquired MSD. Of these, four studies reported evidence on the incidence of several shoulder diseases (supraspinatus tendon lesions, rotator cuff syndrome, subacromial impingement syndrome or chronic shoulder pain), while one study reported evidence on epicondylitis lateralis. Most studies used physician diagnostic records. Three studies reported evidence on the outcome of acquired OA: two on knee OA and one on hip OA. The outcome was most commonly assessed through physician diagnostic records.

Acquired other MSD
Tables A4.1-A4.5 in Appendix 4 present the risk of bias in the included studies at the individual study level for the outcome 'other MSD'. We judged the risk of bias to be low to probably low across studies (Fig. 3).
Selection bias. For the cohort studies included in this review we assessed the risk of selection bias to be probably low. Only the cohort study by Herquelot et al. (2013) showed a substantial number of missing cases from the original population. For our purpose, the results from the cohort studies by Bodin et al. (2012) and Herquelot et al. (2013) were combined because they originated from the same cohort population. For the only case control study we rated the risk of selection bias as probably low. Although in case control studies the risk of selection bias is often higher compared to cohort studies, this case control study showed an appropriate selection strategy.
Performance bias. For the included cohort studies and the case control study, the risk of performance bias was assessed as probably low. Information on blinding of study participants and study personnel was not always provided, but because primarily questionnaire or administrative data were used, knowledge of exposure or outcome status could hardly have affected the results in most cases.
Detection bias (exposure assessment). For possible detection bias regarding the exposure assessment, the rating is probably high to high. In none of the studies was exposure measured directly; it was always self-reported or based on a job-exposure matrix. Therefore, detection bias due to exposure misclassification was mostly rated probably high.
In one study, the case control study by Seidler et al. (2011), it was rated as high, mainly because the additional analyses of the data provided by the authors were partly based on a recalculation of cumulative exposure data, averaged over a time period, which may not always reflect adequate exposure assessment.
Detection bias (outcome assessment). Detection bias regarding outcome measurement was not seen as a major problem and was therefore rated as low to probably low. Most studies used physician diagnostic records, detailed administrative health records or radiological findings related to a specific diagnosis or diagnosis group. For some of the studies, specific ICD codes were reported.
Confounding. Possible confounding across the studies was rated as low to probably low. In all studies, the results were presented based on a (multivariable) model to adjust for the most important possible confounders as indicated in our logic model. Adjustment was mostly done for age, sex and socio-economic position, and sometimes for other factors like BMI or sporting activities. Appropriate statistical techniques were used for adjustment of confounders.
Selection bias (incomplete outcome data). Selection bias due to incomplete outcome data was judged to be low to probably low. In the cohort studies, almost all subjects diagnosed with the outcome were analysed at follow-up. In one study (Herquelot et al., 2013), multiple imputation was performed for missing data at follow-up. There is no evidence that this has led to selective reporting of the outcome data.
Reporting bias. Selective reporting was not judged to be a major issue in the included studies and was therefore rated as low to probably low. Although the study protocols of the included studies were not available, it is unlikely that there was selective reporting of outcomes. The outcomes were reported in the results sections of the study records as they had been reported in the abstracts and methods sections of the study records.
Conflict of interest. None of the included studies on MSD received support from a company or other entity with a financial interest in the study findings; they were funded by public research agencies or related organizations that were free from commercial interests in the study findings, were authored only by persons who were not affiliated with companies or other entities with vested interests, and/or had no conflict of interest declared by the study authors.
Other risk of bias. There were no indications of other risks of bias in the included studies and therefore this domain was rated as probably low.

Acquired knee or hip OA
Tables A4.6-A4.8 in Appendix 4 present the risk of bias in the included studies at the individual study level for the outcome 'knee or hip OA'. All included studies are case control studies. Although, in general, case control studies are regarded as sensitive to a higher risk of bias in comparison to cohort studies, we judged the risk of bias of the included studies for this outcome to be low to probably low across the studies (Fig. 3).
(Excerpt from the study characteristics table for Seidler 2008, knee OA: to qualify as cases, patients had to have at least grade 2 osteoarthritis according to the reference radiologist's assessment; the outcome was assessed from physician diagnostic records; the comparator was low or no exposure to lifting, kneeling, squatting, or climbing.)
Selection bias. In two of the three studies, selection bias was rated as probably low. In the study by Gholami et al. (2016) it was judged as high, mainly because the study population of cases and controls differed from the eligible population on demographic variables.
Performance bias. The risk of performance bias was assessed as probably low. In all included studies, knee or hip OA was radiographically confirmed according to clear diagnostic criteria and/or by a single trained observer.
Detection bias (exposure assessment). In the three studies included for this outcome, too, exposure assessment was based on self-report, and therefore detection bias regarding exposure assessment was rated as probably high. In the study by Yoshimura et al. (2000), participants were asked about their lifetime history of exposure to occupational ergonomic risk factors after leaving school, which may have led to recall bias.
Detection bias (outcome assessment). Detection bias regarding outcome measurement, leading to possible outcome misclassification, was seen as low to probably low. Mostly, radiographic confirmation of findings was used, based on clear diagnostic criteria (e.g., the American College of Rheumatology (ACR) criteria).
Confounding. Possible confounding across the three studies was rated as low to probably low. Results were presented based on a (multivariable) model to adjust for the most important possible confounders as indicated in our logic model. Matching or adjustment was performed for age, sex and socio-economic position, and also for other factors like BMI or sporting activities. Appropriate statistical techniques were used for adjustment of confounders.
Selection bias (incomplete outcome data). Selection bias due to incomplete outcome data was not seen as a problem across the studies. Outcome data were complete for cases and controls.
Reporting bias. Selective reporting was rated as low to probably low across the studies. The outcomes were reported in the results sections of the study records as they had been reported in the abstracts and methods sections of the studies.
Conflict of interest. None of the included studies on OA of the hip or knee received support from a company or other entity with a financial interest in the study findings; they were funded by public research agencies or related organizations that were free from commercial interests in the study findings, were authored only by persons who were not affiliated with companies or other entities with vested interests, and/or had no conflict of interest declared by the study authors.
Other risk of bias. Other possible risks of bias were not identified in the included studies, and therefore this domain was rated as probably low.

Outcome: Acquired other MSD (MSD incidence)
A total of five studies (four cohort studies and one case control study) comprising 2,377,375 participants from one WHO region (Europe) reported estimates of the effect of occupational exposure to ergonomic risk factors on other MSD, compared with no or low exposure to ergonomic risk factors. The four cohort studies could be included in a quantitative meta-analysis on prioritized evidence. The results from two studies (Bodin et al., 2012; Herquelot et al., 2013) were based on the same cohort, which reported on the relationship between exposure to occupational ergonomic risk factors and other MSD (rotator cuff syndrome and epicondylitis lateralis, respectively), and therefore their results have been combined.
The studies pooled in our meta-analysis were somewhat heterogeneous in the measurement of exposure, but we considered the definitions of exposure to be similar enough to warrant inclusion in the meta-analysis. Compared with no or low exposure, any occupational exposure to ergonomic risk factors increased the risk of acquiring other MSD (OR 1.76, 95% CI 1.14 to 2.72, 4 studies, 2,376,592 participants, I² 70%; Fig. 4). A supporting case control study (Seidler et al., 2011) of 783 male workers (483 cases and 300 controls) on occupational exposure to ergonomic risk factors and MSD, in this case shoulder tendon (supraspinatus) lesions, showed an elevated risk for this more specific disorder (OR 4.69, 95% CI 2.10 to 10.47; Fig. 7).

Outcome: Acquired knee or hip OA
Three studies (all case control studies) with a total of 1,354 participants from three different WHO regions (Europe, Eastern Mediterranean, Western Pacific) reported estimates of the effect of occupational exposure to ergonomic risk factors on knee or hip OA, compared with no or low exposure to ergonomic risk factors. All three studies could be included in a quantitative meta-analysis. The studies pooled in this meta-analysis were somewhat heterogeneous in the measurement of exposure, but we considered the definitions of exposure to be homogeneous enough to warrant inclusion in the meta-analysis. Compared with no or low exposure, any occupational exposure to ergonomic risk factors increased the risk of acquiring knee or hip OA (OR 2.20, 95% CI 1.42 to 3.40, 3 studies, 1,354 participants, I² 13%; Fig. 5).

Quality of evidence regarding the outcome MSD
We started at "moderate" for observational studies. The general picture of the cohort studies was that we did have a serious concern regarding risk of bias in the prioritized body of evidence, in particular regarding detection bias due to exposure misclassification for this outcome. We judged the overall risk of bias to be moderate, and therefore the quality of evidence was downgraded for this consideration (−1 level). Exposure data were all based on self-reports or on job-exposure matrix data; although exposures were in general clearly defined, in the judgement of the risk of bias this was considered a serious concern. All included studies were carried out in only one WHO region (Europe), which could be considered a risk of indirectness in relation to other WHO regions. However, in contrast to the systematic review on prevalence estimates, where we downgraded one level for indirectness, for this review on the relationship between exposure and health outcome the impact of WHO region was considered to be less significant. Moreover, we did not have any serious concern regarding the quality of evidence on outcome definitions and descriptions. Therefore, following discussion, we decided not to downgrade the quality of evidence for indirectness (± 0 levels). We also had no serious concerns regarding inconsistency: the cohort studies were sufficiently clinically homogeneous to be combined in a quantitative meta-analysis. Although there is some statistical heterogeneity (p = 0.03; I² = 70%), the point estimates do not demonstrate wide variability and the confidence intervals show considerable overlap, in particular at the lower boundaries. Therefore, no downgrading of the quality of evidence for inconsistency was applied (± 0 levels). We had no serious concerns for imprecision, given the relatively narrow CI for the pooled OR, so did not downgrade the quality of evidence for imprecision (± 0 levels).
We did not have any serious concerns for publication bias (± 0 levels). We did not upgrade the quality of evidence for a large effect estimate, nor for evidence of consistent dose-response gradients across the studies, nor for residual confounding that could increase confidence. In conclusion, we started the assessment of the quality of evidence regarding the outcome MSD at the level of "moderate" for observational studies and decided to downgrade by one level. Therefore, we arrived at a final overall rating of "low".

Quality of evidence regarding the outcome OA of knee or hip
As with the previous MSD outcome, we started at "moderate" for observational studies. For the OA outcome we only have three case control studies available, which, in general, are thought to possibly carry a higher risk of bias. From the study limitations for the individual studies, and across the studies as summarized in the heat map, we did have a serious concern regarding the risk of bias in the body of evidence for this outcome, in particular due to a probably high risk of bias for exposure assessment, because here too all exposure assessment was based on self-report. Therefore, we regarded the quality of study limitations (risk of bias) as moderate and downgraded the quality of evidence for risk of bias (−1 level). We did not have serious concerns for indirectness. In contrast with the studies on MSD, the study data came from three different WHO regions (Europe, Eastern Mediterranean and Western Pacific). Study populations did not differ from the populations of interest, and the exposure definitions and health outcomes did not differ substantially from those of primary interest. The quality of evidence was not downgraded for indirectness (± 0 levels). We did not have any serious concerns regarding inconsistency. The included studies were thought to have sufficient clinical homogeneity and the statistical heterogeneity was limited (p = 0.32; I² = 13%). For this outcome, too, the point estimates do not vary very widely and the confidence intervals show considerable overlap. Therefore, no downgrading of the quality of evidence for inconsistency was considered (± 0 levels). We also had no serious concerns for imprecision, given the sufficiently narrow CI of the pooled OR, and therefore did not downgrade (± 0 levels). We did not have any serious concerns for publication bias (± 0 levels). As with the MSD outcome studies, we did not upgrade the quality of evidence for a large effect estimate, nor for evidence of a consistent dose-response gradient across the studies, nor for residual confounding that could increase confidence. In conclusion, we started the assessment of the quality of evidence regarding the outcome OA of knee or hip at the starting level of "moderate" for observational studies and decided to downgrade one level for risk of bias. Therefore, we also arrived at a final rating of "low" for this outcome.

Strength of evidence
According to the protocol, the strength of the evidence was rated based on a combination of four criteria outlined in the Navigation Guide: (1) quality of the entire body of evidence; (2) direction of the effect estimate; (3) confidence in the effect estimate; (4) other compelling attributes.
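The grading arithmetic applied in the two quality-of-evidence assessments above (start at "moderate" for observational studies, downgrade one grade per serious concern and two per very serious concern, with optional upgrades) can be summarized in a small sketch; the function below is an illustration only, and the worked call simply reproduces the MSD and OA assessments (one serious concern for risk of bias, no upgrades).

# Illustrative sketch (R): Navigation Guide / GRADE-style quality-of-evidence arithmetic.
grades <- c("low", "moderate", "high")

grade_quality <- function(start = "moderate", concerns = character(), upgrades = 0) {
  # concerns: vector of "none", "serious" or "very serious" (one entry per downgrading domain)
  downgrade <- sum(c(none = 0, serious = 1, `very serious` = 2)[concerns])
  idx <- match(start, grades) - downgrade + upgrades
  grades[min(max(idx, 1), length(grades))]   # keep the rating within the defined scale
}

grade_quality(start = "moderate",
              concerns = c("serious"),  # risk of bias: exposure misclassification
              upgrades = 0)             # -> "low", matching the assessments above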
Strength of evidence regarding the outcome MSD
Concerning the size and quality of the individual cohort studies, it was discussed and agreed to judge the main body of evidence on the relationship between occupational exposure to physical ergonomic factors and selected other MSD as "limited evidence of harmfulness". The meta-analysis, based on four cohort studies including a large number of participants and taking into account relevant confounders, documents a moderately increased risk of incident MSD (OR 1.76), with a lower confidence limit above 1.0 and a rather narrow CI (1.14-2.72). Overall, across the studies, the risk of bias of the cohort studies was regarded as moderate, supporting reasonable quality, and the direction of the estimate was similar in all included studies. No study documented a negative effect estimate. There is reasonable confidence in the effect estimate, as a supporting case control study and earlier systematic reviews on the same topic also found comparable results.

Strength of evidence regarding the outcome OA of knee or hip
In comparison with the outcome MSD, the number of included studies on OA of knee or hip was smaller, the study populations were smaller, and only case control studies were available, giving lower confidence in the overall quality. Nevertheless, here too a moderately increased odds ratio for OA in exposed populations was found (OR 2.20), with a reasonably narrow CI (1.42-3.40) and low statistical heterogeneity. The direction of the effect was similar across the studies. Given the moderate risk of bias and the aforementioned lower confidence in the quality of the body of evidence, this outcome was also judged as "limited evidence of harmfulness".

Summary of evidence
As shown in the summary of findings table (Table 5), this systematic review found low quality of evidence for an association between occupational exposure to ergonomic risk factors (force exertion, demanding posture, repetitiveness, hand-arm vibration, lifting, kneeling and/or squatting, and climbing) and the incidence of other MSD, mostly located in the shoulder or elbow. Low quality of evidence was also found for an association between exposure to the aforementioned ergonomic risk factors and OA of knee or hip. Based on the considerations for evaluating the strength of evidence, we concluded that, based on human evidence, there is limited evidence of harmfulness of exposure to occupational ergonomic risk factors for other MSD, and likewise limited evidence of harmfulness for OA of knee or hip. Although the reported effects may be modest, given the widespread and highly prevalent occupational exposure to ergonomic risk factors, this possible harmfulness warrants attention for preventive occupational health and safety measures.

Comparison to previous systematic review evidence
Previous systematic reviews on the relationship between ergonomic risk factors and musculoskeletal diseases or osteoarthritis have mostly concentrated on one or only a few ergonomic risk factors and a more specific health outcome, e.g., epicondylitis lateralis, subacromial impingement syndrome, or osteoarthritis of only the knee. Regarding other MSD, a systematic review by van Rijn et al. (2010) on the relationship of repetitive movements of the shoulder, repetitive motion of the hand/wrist for > 2 h/day, hand-arm vibration, and arm elevation with subacromial impingement syndrome revealed elevated risks (ORs between 1.04 and 4.7).
Van der Molen et al. (2017) found moderate quality evidence for associations between shoulder disorders (M75.1-M75.5) and several of 'our' individual ergonomic risk factors, with odds ratios all ranging between 1.5 and 2.0. Descatha et al. (2016) showed a positive association between combined biomechanical exposure involving the wrist and/or elbow and the incidence of epicondylitis lateralis (OR 2.6, 95% CI 1.9-3.5), and an earlier review by van Rijn et al. (2009) on the same health outcome came to comparable results. We think that our review and meta-analysis on other MSD corroborate this evidence. With respect to OA, Verbeek et al. (2017) performed a meta-analysis of case control studies on knee osteoarthritis and found elevated risks for exposure to kneeling or squatting, lifting and climbing, with odds ratios varying between 1.4 and 1.7. Moderate to strong evidence for a relationship between heavy lifting and more general physical workload and hip osteoarthritis was reported in previous systematic reviews by Lievense et al. (2001) and Jensen (2008). Also for this outcome (OA), we think that our systematic review and meta-analysis are well in line with this previous evidence.

Strengths
Our systematic review is part of a larger project with the aim of developing Joint Estimates of the national and global work-related burden of disease and injury (WHO/ILO Joint Estimates), with contributions from a large network of experts. The methodology of the review process was discussed, adapted, accepted and performed according to an intensive and rigorous process that was also presented in a transparent way in a published protocol (Hulshof et al., 2019). To our knowledge, this is the first systematic review and meta-analysis conducted specifically for a global occupational burden of disease due to occupational exposure to ergonomic risk factors and, as such, it provides a model for future systematic reviews that will help ensure that these global health estimates adhere fully to the GATHER Guidelines for Accurate and Transparent Health Estimates Reporting (Stevens et al., 2016).

Limitations
This review has several limitations. First, our searches may have missed studies published in languages other than English. However, we searched many electronic bibliometric and grey literature databases using a comprehensive search strategy and consulted additional experts, who also did not identify any additional eligible studies. We have some confidence that we identified most, if not all, studies eligible for inclusion in our systematic review. Second, our review is based on a limited number of studies. While previous systematic reviews on the relationship between occupational exposure to ergonomic risk factors and musculoskeletal disorders or osteoarthritis of knee or hip were in general based on a larger number of studies, our rather strict inclusion criteria on exposure (data should be available on exposure to at least five of the seven selected risk factors) and outcome (data on the selected ICD codes should be available, see Table 2) led to the exclusion of all studies in the first round of study selection. Therefore, we adapted our inclusion criteria regarding exposure and performed a second round of study selection in which we included studies with data on at least three of the selected occupational ergonomic risk factors.
For this second round of study selection, we used an additional strategy based on natural language processing with regular expressions, as described above. Third, we did not receive the requested missing or additional data from some of the principal study authors whom we contacted. As most of the potentially eligible studies did not present data in their published papers in a way that met our inclusion criterion for the exposure definition (i.e., occupational exposure to any of the included ergonomic risk factors), we needed additional data from a substantial proportion of the potentially eligible studies. For most of these papers, this required additional analysis of the original data. Fortunately, some of the authors responded positively to this request. However, some of the contacted authors indicated that this was not possible or feasible. From some other authors we did not receive any reply. This further limited the number of eligible studies. Fourth, the relationship between work-related physical activities and harm to the human body is a rather complex one. The medical outcomes of this review (selected other MSD and OA) are multifactorial in origin, and several risk factors, both work-related and non-work-related, may play a role. We have chosen to include only clinically assessed MSD or OA and to exclude (lighter) signs or symptoms of physical load or physical stress. In the included studies, adjustment was made as much as possible for non-work-related risk factors for the outcomes. Nevertheless, it is not possible to fully disentangle the influence of occupational physical activities and leisure-time physical activity. However, recent research suggests that occupational physical activities are of a different nature (e.g., often more repetitive or static) and are related to different health effects than leisure-time physical activity, so it is not possible or sensible to simply add or multiply the duration, frequency or intensity of occupational and leisure-time physical activities (Holtermann et al., 2019; Coenen et al., 2020). Although this is a very interesting field of discussion and research, this was not the primary purpose of this review.

Conclusions
Overall, for both outcomes, the main body of evidence was assessed as being of low quality. Occupational exposure to ergonomic risk factors increased the risk of acquiring MSD and of acquiring OA of knee or hip. We judged the body of evidence on the relationship between exposure to occupational ergonomic factors and MSD as "limited evidence of harmfulness", and the relationship between exposure to occupational ergonomic factors and OA also as "limited evidence of harmfulness".

Navigation Guide quality of evidence ratings
High quality: Further research is very unlikely to change our confidence in the estimate of effect.
Moderate quality: Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
Low quality: Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.

Navigation Guide strength of evidence ratings
Sufficient evidence of toxicity/harmfulness: The available evidence usually includes consistent results from well-designed, well-conducted studies, and the conclusion is unlikely to be strongly affected by the results of future studies. For human evidence, a positive relationship is observed between exposure and outcome where chance, bias and confounding can be ruled out with reasonable confidence.
Limited evidence of toxicity/harmfulness: The available evidence is sufficient to determine the effects of the exposure, but confidence in the estimate is constrained by factors such as the number, size or quality of individual studies, the confidence in the effect, or inconsistency of findings across individual studies. As more information becomes available, the observed effect could change, and this change may be large enough to alter the conclusion. For human evidence, a positive relationship is observed between exposure and outcome where chance, bias and confounding cannot be ruled out with reasonable confidence.
Inadequate evidence of toxicity/harmfulness: Studies permit no conclusion about a toxic effect. The available evidence is insufficient to assess effects of the exposure. Evidence is insufficient because of the limited number or size of studies, low quality of individual studies, or inconsistency of findings across individual studies. More information may allow an estimation of effects.
Evidence of lack of toxicity/harmfulness: The available evidence includes consistent results from well-designed, well-conducted studies, and the conclusion is unlikely to be strongly affected by the results of future studies. For human evidence, more than one study showed no effect on the outcome of interest at the full range of exposure levels that humans are known to encounter, where bias and confounding can be ruled out with reasonable confidence. The conclusion is limited to the age at exposure and/or other conditions and levels of exposure studied.

These relative risks may be suitable as input data for WHO/ILO modelling of the work-related burden of disease and injury.

Differences between protocol and systematic review
• In addition to the study selection process described in the protocol, we used a second study selection process based on natural language processing.
• We planned to follow up requests for missing data with principal study authors twice, at two and four weeks after the initial request; in the systematic review we only followed up once, at two weeks after our initial request.
• In the protocol, we planned to convert ORs into RRs, if possible. To conduct this conversion, information on the prevalence of the outcome in the reference group (baseline risk) is required; however, such information was not available from all included studies. For case-control studies, ORs were reported and were synthesized directly. For cohort studies, ORs were also reported and were used for the meta-analyses without any conversion (a sketch of such a conversion is given below).
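As noted in the last point above, converting an OR to an RR requires the baseline risk in the unexposed group, which was not available from the included studies; the sketch below shows the commonly used Zhang and Yu (1998) approximation with invented baseline risks, for illustration only.

# Illustrative sketch (R): Zhang & Yu (1998) approximation for converting an odds
# ratio to a risk ratio, given the baseline risk p0 in the unexposed group.
# The baseline risks below are invented; the OR is the pooled MSD estimate from the review.
or_to_rr <- function(or, p0) {
  or / ((1 - p0) + p0 * or)
}

or_to_rr(or = 1.76, p0 = 0.05)  # rare outcome: RR of about 1.70, close to the OR
or_to_rr(or = 1.76, p0 = 0.30)  # common outcome: RR of about 1.43, noticeably smaller than the OR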
2021-03-09T06:22:40.643Z
2021-05-01T00:00:00.000
{ "year": 2021, "sha1": "71c41d3a9186fa1ccaba6e5ed161757d6801f1aa", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.envint.2020.106349", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "9761055ee278dcc321c041e7b9e7a58c04e128a1", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
210040212
pes2o/s2orc
v3-fos-license
The role of color Doppler in assisted reproduction: A narrative review Abstract Color Doppler of perifollicular vascularity is a useful assessment tool to predict the growth potential and maturity of Graafian follicles. Power Angio is independent of the angle of insonation and morphometry and provides reliable clues to predict the implantation window of the endometrium. Color Doppler can be used for the prediction of ovarian hyperstimulation syndrome. It can also be used to identify the hyper responder and gonadotropin-resistant type of polycystic ovaries. The secretory scan of corpus luteum can accurately predict its vascularity and functional status. A corpus luteum with decreased blood flow is a very sensitive and specific indicator of threatened and missed abortions. Color Doppler and Power Angio need to be standardized and identical settings should be maintained if different patients, or if changes over time within the same patient are to be compared. Introduction Angiogenesis occurs as a routine process in the female pelvic viscera resulting in a systematic dynamic vascular proliferation followed by regression in each menstrual cycle. Blood flow of a maturing follicle, vascular supply of endometrium, and corpus luteum vascularization can be quantified. Ovarian stimulation by gonadotropins induces a rise in stromal blood flow velocity as evidenced by two-dimensional color Doppler studies. The rise in stromal blood flow velocity is associated with a concurrent increase in serum vascular endothelial growth factor (VEGF) concentration (1,2). The VEGF or HIF (Hypoxia Induced Factor) is an endothelial cell mitogen with potent angiogenic properties leading to splitting, budding and branching and expansion of vessel walls, in regions of endothelial tips that show maximum sensitivity. Serum VEGF levels have a positive correlation with perifollicular blood flow. This can be easily measured by two-dimensional Color and Pulsed Doppler ultrasonography (3). The cumulus oophorus responds to indigenous and exogenous gonadotropin stimulation by increasing the VEGF production. This also explains the fact that increase in VEGF is not detected in the ovaries with low Antral Follicle Count (AFC) after exogenous gonadotropin stimulation (4). Vascular perfusion of the maturing follicle has been graded based on the percentage of follicular circumference seen to be vascularized. A mature follicle shows a vascularity of > 3/4 of its circumference. At this time, the peak systolic velocity (PSV) in ovarian stromal arteries is 10 cm/s. At this time, the LH (Luteinizing Hormone) surge starts in the normal cycle, and under the effect of LH, the PSV increases; this is the time to give trigger in ART (Assisted reproduction Technology) cycle. Another study states that a PSV of 42 cm/sec is achieved 1 hr. before ovulation in spontaneous cycles. Some studies give Human Chorionic Gonadotropin (HCG) trigger in intrauterine insemination cycles when PSV > 15cm/sec (5). The primordial and preantral follicles have no independent vascularity and are supplied by stromal blood vessels. As primary follicle grows the theca cells develop a vascular network. For ovulation to occur, a 24 hrs prior increase in vascularity is required. A hypoechogenic line surrounding the preovulatory follicle is seen on grey scale. This occurs due the separation of theca cells from internal granulosa cells. The theca cells just before ovulation are hyper vascularized and edematous, as imaged by Color Doppler. 
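To make the perifollicular criteria quoted above easier to scan, the toy function below collects them in one place: vascularization of more than three quarters of the follicular circumference, a stromal PSV of roughly 10 cm/s around maturity, and the PSV above 15 cm/s that some intrauterine insemination studies use as an HCG trigger threshold. The thresholds are taken from the prose above, but the function itself is purely illustrative and is not a clinical decision rule.

def follicle_doppler_assessment(vascularized_fraction, psv_cm_s):
    """Illustrative summary of the perifollicular Doppler cues quoted in the text."""
    return {
        "mature_vascularity": vascularized_fraction > 0.75,   # > 3/4 of circumference vascularized
        "psv_near_maturity": psv_cm_s >= 10.0,                # stromal PSV around follicular maturity
        "meets_iui_trigger_psv": psv_cm_s > 15.0,             # trigger threshold used in some IUI studies
    }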
Spiral artery proliferates and grows into the endometrium in the late proliferative phase, this growth and periovulatory phase can be quantified. The number of spiral arteries is fixed and is about 120 in a woman. The growth of spiral arteries is accompanied by increased flow in the main uterine arteries. Addition of color doppler to gray scale The volume of the ovary in a 2D (twodimensional) ultrasound is calculated as PI/6 x length x breadth x width. When the Color Doppler was used to evaluate the ovarian stromal blood flow, it was found that the normal responders have higher peak systolic values of ovarian stromal blood flow than the hypo-responders (6). Women with Resistivity Index (RI) > 0.56 in ovarian stromal vessels were also found to have longer stimulation periods and lower oocyte yield. Table I summarizes how the gonadotropin dose is adjusted in relation to vascular flow (Table I). Ovarian vascularity Prior to menarche and following menopause, the ovarian stromal blood flow is hardly visualized. The ovaries are very poorly vascularized at that time. The resistance in vessel walls is high as depicted by the high flow indices (RI and PI). In the functional reproductive age, the ovarian stromal vascularity varies cyclically. This also depends on which ovary the dominant follicle is growing. In the ovary with the dominant follicle the resistance to blood flows is lower in as compared to the ovary without a dominant follicle. A growing follicle or the corpus luteum needs more vascularization, and so PSV is high and RI and PI are low. Ovarian arteries are difficult to find, to perform objective measurements; therefore intraovarian stromal blood flow is estimated. This changes with age and the phase of the menstrual cycle. Prior to puberty and following menopause, there should be no blood flow in the ovaries on Color Doppler. Any positive vascularization before menarche or after menopause in the ovaries raises doubts about possible pathology of the vascularized ovary (infections, benign and malignant tumors, endocrine dysfunction). Dose of gonadotropins Color Doppler can be used for deciding the dose of the follicle-stimulating hormone (FSH) If the ovarian volume is small (< 3 cc), there are lesser than three antral follicles, ovarian stromal RI is high (> 0.56) on 2D Doppler and stromal flow index (FI) is less (< 11) in 3D (three-dimensional) Doppler, higher doses of gonadotropins are required. Lower doses of gonadotropins are sufficient if the ovarian volume is more than 6 cc, there are more than eight antral follicles, ovarian RI < 0.50 in 2D Doppler, and ovarian stromal blood flow is > 15 in 3D Doppler (6-9). Time of trigger In gray scale, oocytes aspirated from follicle > 18 mm in size are usually MII (Metaphase II) oocytes and have much better fertilization and pregnancy rates. Follicles < 16 mm have 50% chances of yielding an M2 oocyte. Color Doppler can be added to assess the maturity of follicles by measuring the Perifollicular velocity. Table II Endometrial vascularity Assessing the endometrial volume and quantitative global vascularity can be an indirect marker of endometrial receptivity (15). 2D Doppler gives only endometrial thickness and only a few vessels in a single plane can be visualized (Figure 2). Endometrial volume > 3 ccs, FI > 20, and VFI > 5. The endometrial volume most favorable for implantation is estimated at 3-7 ml (mean 4.28 ± 1.9) (16). No pregnancy is achieved with an endometrial volume < 3 cc and sub-endometrial VI < 10 (8). 
Good International Journal of Reproductive BioMedicine Color doppler in assisted reproduction pregnancy rates are achieved with endometrial volume of > 7 ccs and sub-endometrial VI between 10 and 35 (17). It is believed that VFI is preferred over volume, FI, and VI taken separately (18,19). Corpus luteum vascularity Following ovulation the vascular network of the theca layer enter the cavity of the ruptured follicle, the amount of blood flow increases manifold, this is depicted in Doppler waveforms with increased velocity and low impedance to blood flow. The RI decreases (0.43 ± 0.04), and remains stationary for 3-4 days, and then slowly starts rising to a level of 0.49 ± 0.04. This still remains lower than in the follicular phase. If HCG is available from the developing syncytiotrophoblast the corpus luteum attains blood flow with low Doppler indices (RI= 0.45 ± 0.04), and continues to have this vascularity till first trimester after which the placenta takes over the progesterone synthesis (20,21). Intrascrotal abnormalities detected by Color Doppler imaging aid in the management of male infertility. Vascular channels and reflux flow in varicocele can be graded. Furthermore, testicular microlithiasis and testicular tumors can be identified. Color Doppler in male infertility is also indicated to detect the vascularity of epididymal and testicular cysts. Intrascrotal hemangioma is accurately diagnosed using the modern, high-throughput Doppler imaging technique. A recent use is an ultrasound-guided testicular sperm aspiration that improves the yield and makes the procedure guided and minimally invasive (26). 3D color doppler and in-vitro fertilization In 2D Color Doppler studies, the information concerning the vascularization and blood flow in the organ is being obtained from a vascular network lying in a 3D plane. The measurement is essentially flawed because we are visualizing a 3 D object with a 2D imaging modality. Further more to estimate blood flow velocity; the angle of insonation to the blood vessels should be less than 30 degrees. The ovarian vascularity is spherical network of thin vessels to be imaged by 3D Doppler (27). An additional problem with a stimulated ovary is to differentiate between stromal and Perifollicular follicular vascularization which often overlap in a 2 D image. The 3 D Power Doppler is angle-independent and can decipher the total spherical vascularization of the ovary and endometrium. The indices measured include, the mean gray scale, the VI, the FI, and the VFI. The mean gray value in the gray voxels is a measure of the mean echogenicity (range 0-100). The VI is the ratio of number of color voxels to total number voxels in the specified volume. The VI represents the ratio of volume of vessels in the tissue to the total volume of tissue, and is expressed as a percentage. The FI, the mean value of the color voxels getting filled with time, tells the mean intensity of blood flow (range 0-100). The VFI is a composite; mean color voxels getting filled with time are expressed as a ratio to all the voxels in the specified volume (range 0-100). Doppler information and the difference in scan images may be quantifiable through the "histogram" facility, thereby demonstrating how vascularity is independent of morphometry and varies throughout the different phases of menstrual cycle, for example, luteal/follicular phases (28). 
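The histogram indices described in the preceding paragraph can be sketched numerically as follows. The implementation assumes two co-registered voxel arrays scaled 0-100 (a grey-scale array and a power Doppler "color" array, with zero meaning no flow signal); the array layout, the zero-threshold for flow voxels, and the function name are our assumptions, so treat this as a sketch of the definitions given in the text rather than the vendor's actual histogram facility.

import numpy as np

def doppler_histogram_indices(gray, color):
    """Mean grey value, VI (%), FI and VFI from co-registered 3D voxel arrays (values 0-100)."""
    total_voxels = color.size
    flow_mask = color > 0                      # voxels carrying a power Doppler signal
    n_flow = int(flow_mask.sum())
    mean_grey = float(gray.mean())                          # mean echogenicity, 0-100
    vi = 100.0 * n_flow / total_voxels                      # vascularisation index, %
    fi = float(color[flow_mask].mean()) if n_flow else 0.0  # flow index, 0-100
    vfi = float(color.sum()) / total_voxels                 # vascularisation-flow index, 0-100
    return mean_grey, vi, fi, vfi

# Illustrative call on a random 64**3 volume (not real ultrasound data).
rng = np.random.default_rng(0)
gray = rng.uniform(0, 100, (64, 64, 64))
color = np.where(rng.random((64, 64, 64)) < 0.1, rng.uniform(1, 100, (64, 64, 64)), 0)
print(doppler_histogram_indices(gray, color))

Note that with these definitions VFI equals VI/100 multiplied by FI, which is consistent with its description as a composite of the two.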
With the Gonadotropin stimulation and growth of several follicles the total ovarian volume enlarges mean gray value decreases because the grayscale voxel value for the follicular antral fluid is less. After gonadotropin induction ovarian VI, FI, and VFI increase in patients with good AFC. The intensity of ovarian stromal blood flow is essential for gonadotropins to act on target receptors. This is because perfusion is independent of the total volume of ovary but it depends on the phase of menstrual cycle and hormone levels). Gonadotropin stimulation is not able to improve vascularity indices in ovaries where there are no antral follicles. VI, FI, and VFI correlate with the number of antral follicles after gonadotropin stimulation. This suggests that primarily the follicles containing oocytes control the increase in vascularization and blood flow. Color doppler in polycystic ovaries In the polycystic ovaries the stroma is hyper echoic and hyper vascular and this hyper vascularity is noncyclical. In patients with polycystic ovaries there are also no cyclical changes in uterine arteries (29,30). Two patterns of polycystic ovaries are identified, the hypervascular stroma that is the hyper responder type of polycystic ovaries and the hypovascular stroma that is the gonadotropinresistant polycystic ovary (31-33). Ovarian hyper stimulation prediction The pathogenesis of OHSS is increased VEGF induced "neoangiogenesis" (34). Color Doppler imaged ovarian vascular mapping can help to identify a new functional approach to ovarian hyperstimulation syndrome (OHSS). Local vascular factors for ovarian angiogenesis and increased capillary permeability triggered by HCG injection play an important role in this syndrome. Two patterns of 3D vascular images were detected on days 8-10 of stimulation; each pattern corresponding to a certain level of serum E2. "Reassuring pattern" is characterized by regular marginally dilated, regularly branching vessels that have few coils, and RI (0.55-0.61). Serum E2 associated is 3,000-5,000 pg/ml. OHSS is not observed with this pattern. "Aggressive pattern" has features of markedly dilated, irregularly branching vessels with several angles and coils and RI (0.48-0.52). Levels of Serum E2 are more than 5,000-7,000 pg/ml. OHSS is more likely in aggressive pattern (35,36). Standardization of color doppler The capturing of indices in 3 D power Doppler is dependent on speed of acquisition, color gain, pulse repetition frequency (PRF), line density, wall motion filter, signal rise and persistence (slow and steady signals). Maintaining identical settings is helpful to compare cyclical changes in the same patient or different patients (37). Conclusion Stimulation cycles differ from natural cycles and the precise values of vascular indices vary depending on angiogenic, hormonal and growth factors. Independent perifollicular vascular network is achieved when follicle reaches a diameter of >10 mm. The role of oocyte release factors in growth and differentiation of granulosa cells needs to be studied. In the follicular phase, the RI is around 0.54 ± 0.04. 48 hr. Before ovulation, the RI begins to fall and at ovulation the RI is 0.44 ± 0.04. Sometimes only PSV rises on the onset of ovulation without a concurrent fall in RI. The increase in vascularity of theca and granulosa layer and separation of theca cells and granulosa cells results in ovulation. In the luteinized unruptured follicle (LUF), it was seen that there is no rise in perivulatory flow velocities. 
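The two day 8-10 vascular patterns described above lend themselves to a simple triage sketch. The thresholds below are lifted directly from the prose (RI ranges and serum E2 bands); everything else, including the function name and the wording of the returned labels, is illustrative and should not be read as a validated clinical rule.

def ohss_vascular_pattern(ri, estradiol_pg_ml):
    """Rough triage of the day 8-10 3D vascular picture into the two patterns described above."""
    if 0.48 <= ri <= 0.52 and estradiol_pg_ml > 5000:
        return "aggressive pattern - OHSS more likely"
    if 0.55 <= ri <= 0.61 and 3000 <= estradiol_pg_ml <= 5000:
        return "reassuring pattern - OHSS not observed with this pattern"
    return "indeterminate - does not match either described pattern"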
The perifollicular vascularity of the Graafian follicle is helpful for clinicians to crosscheck biochemical hormone estimations. Perifollicular vascularity correlates well with the quality of cumulus oophoricus complex. Organization of the chromosomes is abnormal in oocytes harvested from inadequately vascularized follicles. Whether this is the cause or effect, needs to be investigated. In the natural physiological nonstimulated cycles, the perifollicular blood flow is always higher as compared to stimulated cycles where multiple follicles are growing simultaneously and competing for vascularity. Color Doppler is a useful noninvasive, reproducible, and reliable biophysical marker for monitoring of IVF cycles. Since vascularity correlates well with biochemical hormone estimations and is independent of morphometry, there is a potential role to reduce the costly biochemical hormone tests. Conflicts of Interest We do not have any commercial association that might pose a conflict of interest in connection with the manuscript. We certify that neither this
2019-12-12T10:24:09.855Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "9c0d7a507c2a9faf6292f8db39eaed9573ab3ba4", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.18502/ijrm.v17i10.5484", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "38b553166e7ad39d838e6b06b299386bf0119c1c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119165856
pes2o/s2orc
v3-fos-license
The Spatial Product of Arveson Systems is Intrinsic We prove that the spatial product of two spatial Arveson systems is independent of the choice of the reference units. This also answers the same question for the minimal dilation the Powers sum of two spatial CP-semigroups: It is independent up to cocycle conjugacy. Introduction Arveson [Arv89] associated with every E 0 -semigroup (a semigroup of unital endomorphisms) on B(H) its Arveson system (a family of Hilbert spaces E = (E t ) t≥0 with an associative identification E s ⊗ E t = E s+t ). He showed that E 0semigroups are classified by their Arveson system up to cocycle conjugacy. By a spatial Arveson system we understand a pair (E , u) of an Arveson system E and a unital unit u (that is a section u = (u t ) t≥0 of unit vectors u t ∈ E t that factor as u s ⊗ u t = u s+t ). Spatial Arveson systems have an index, and this index is additive under the tensor product of Arveson systems. Much of this can be carried through also for product systems of Hilbert modules and E 0 -semigroups on B a (E), the algebra of all adjointable operators on a Hilbert module; see the conclusive paper Skeide [Ske09] and its list of references. However, there is no such thing as the tensor product of product systems of Hilbert modules. To overcome this, Skeide [Ske06] (preprint 2001) introduced the product of spatial product systems (henceforth, the spatial product), under which the index of spatial product systems of Hilbert modules is additive. It is known that the spatial structure of a spatial Arveson system (E t ) t≥0 depends on the choice of the reference unit (u t ) t≥0 . In fact, Tsirelson [Tsi08] showed that if (v t ) t≥0 is another unital unit, then there need not exist an automorphism of (E t ) t≥0 that sends (u t ) t≥0 to (v t ) t≥0 . Also the spatial product depends a priori on the choice of the reference units of its factors. This immediately raises the question if different choices of references units give isomorphic products or not. In these notes we answer this question in the affirmative sense for the spatial product of Arveson systems. For two Arveson systems (E t ) t≥0 and (F t ) t≥0 with reference units ((u t ) t≥0 and (v t ) t≥0 , respectively, their spatial product can be identified with the subsystem of the tensor product generated by the subsets u t ⊗ F t and E t ⊗ v t . This raises another question, namely, if that subsystem is all of the tensor product or not. This has been answered in the negative sense by Powers [Pow04], resolving the same question for a related problem. Let us describe this problem very briefly. Intertwining semigroups correspond one-to-one with unital units of the associated Arveson systems E i t t≥0 , so that these are spatial. Then by T a 11 a 12 a 21 a 22 : we define a Markov semigroup on B(H 1 ⊕ H 2 ). Its unique minimal dilation (see Bhat [Bha96]) is an E 0 -semigroup (fulfilling some properties). At the 2002 Workshop Advances in Quantum Dynamics in Mount Holyoke, Powers asked for the cocycle conjugacy class (that is, for the Arveson system) of that E 0 -semigroup. More precisely, he asked if it is the cocycle conjugacy class of the tensor product of ϑ 1 and ϑ 2 , or not. Still during the workshop Skeide (see the proceedings [Ske03]) identified the Arveson system of that Powers sum as the spatial product system of the Arveson systems of ϑ 1 and ϑ 2 . So, Powers' question is equivalent to the question if the spatial product is the tensor product, or not. 
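Since several displayed formulas in this section were lost in extraction, the following is a compact restatement, in display form, of the three notions the argument keeps returning to; it is only a sketch of what is said in prose here and in [Ske06], not a substitute for the precise statements there.

\begin{align*}
&\text{Arveson system: } E=(E_t)_{t\ge 0},\qquad
  V_{s,t}\colon E_s\otimes E_t \xrightarrow{\ \cong\ } E_{s+t}\quad\text{(associative identification)},\\
&\text{unital unit: } u=(u_t)_{t\ge 0},\qquad
  u_s\otimes u_t = u_{s+t},\qquad \lVert u_t\rVert = 1,\\
&\text{spatial product: } \bigl(E\,{}_{u}\!\otimes_{v} F\bigr)_t
  \;=\; \text{smallest product subsystem of } (E_t\otimes F_t)_{t\ge 0}
  \text{ containing } u_t\otimes F_t \text{ and } E_t\otimes v_t.
\end{align*}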
In [Pow04] Powers answered the former question in the negative sense and, henceforth, also the latter. He left open the question if the cocycle conjugacy class of the minimal dilation of the Powers sum depends on the choice of the intertwining isometries. Our result of the present notes tells, no, it doesn't depend. We should say that Powers in [Pow04] to some extent considered the Powers sum not only for E 0 -semigroups but also for those CP-semigroups he called as spatial. We think that his definition of spatial CP-semigroup is too restrictive, and prefer to use Arveson's definition [Arv97], which is much wider; see Bhat, Liebscher, and Skeide [BLS10a]. The definition of Powers sum easily extends to those CP-semigroups and the relation of the associated Arveson system of the minimal dilations is stills the same: The Arveson system of the sum is the spatial product of the Arveson systems of the addends; see Skeide [Ske10]. Therefore, our result here also applies to the more general situation. Remark 1.1 It should be noted that the result is visible almost at a glance when the intuition of random sets to describe spatial Arveson systems is available; see [Lie09,Tsi00]. However, in order to make this clear a lot of random set techniques had to be explained, so we opted to give a plain Hilbert space proof. Although this is, maybe, not too visible, the proof here is nevertheless very much inspired by the intuition coming from random sets. We will explain that intuition elsewhere ( [BLS10b]). Arveson systems Remark 2.2 In the sequel, we shall omit the V s,t and simply identify E s ⊗ E t with E s+t . This lightens the formulae, but requires a certain flexibility (see Proposition 2.7 or the proof of Lemma 3.2) when interpreting correctly operators on tensor products of Arveson systems. [Arv89]; see [Lie09,Lemma 7.39]. The only difference is that Definition 2.1 allows for onedimensional and zero-dimensional Arveson systems. The latter is necessary in view of the following property. Remark 2.3 Note that Definition 2.1 is equivalent to Arveson's in By [Lie09, Theorem 5.7], for every Arveson system E the set and E ′ ∨ F ′ defined as the smallest Arveson subsystem containing both E ′ and F ′ . [Lie09,Theorem 7.7], the algebraic structure of an Arveson system determines the measurable structure completely. Definition 2.5 A unit u of an Arveson system is a measurable non-zero section If u is unital ( u t = 1∀t ≥ 0), the pair (E , u) is also called a spatial Arveson system. For Hilbert spaces, the spatial product from Skeide [Ske06] can be defined as a subsystem of the tensor product in the following way. Definition 2.6 Let (E , u) and (F , v) be two spatial Arveson systems. We define their spatial product as That this coincides with the product in [Ske06] follows either from the universal property [Ske06, Theorem 5.1] that characterizes it, or after Proposition 2.7 below, that identifies directly the pieces from the inductive limit by which the product is constructed in [Ske06]. Let Proposition 2.7 Let (E , u) and (F , v) be two spatial Arveson systems, and define Then for all t > 0 t , the limit exists due to monotonicity and (E u ⊗ v F ) t ⊂ E t ⊗ F t ∀t ≥ 0. From the properties of the interval partitions it is easy to see that in fact the RHS of ( * ) is a product system in its own right. Clearly, G u,v t ⊃ E t ⊗ v t and G u,v t ⊃ u t ⊗ F t . Therefore, the RHS of ( * ) contains both E ⊗ v and u ⊗ F . On the other side, let H ⊂ E ⊗ F contain both E ⊗ v and u ⊗ F . Then, obviously, G u,v t ⊂ H t . 
Consequently, E u ⊗ v F contains the RHS of ( * ) and the assertion is proved. Remark 2.8 The structure G s ⊗ G t ⊃ G s+t is a recurrent theme in the analysis of quantum dynamics, in particular, of CP-semigroup; see [Sch93,BS00,BBLS04,Ske06,MS02,Mar03,Ske03,BM10] [BS00]. While the spatial product may be viewed as amalgamation of two spatial product systems over their reference units, [BM10] generalize this to an amalgamation over a contraction morphism between two (not necessarily spatial) Arveson systems. This applies, in particular, to the amalgamation of two spatial Arveson systems of not necessarily unital units, and answers Powers' question for the Markov semigroup obtained from non necessarily isometric intertwining semigroups. Universality of the spatial product Our aim is to prove the following theorem. Actually, we will prove even more, namely, The key of the proof is the following lemma (whose proof we postpone to the very end, after having illustrated the immediate consequences). This proves E u ⊗ v F = E u ′ ⊗ v ′ F and, therefore, Theorem 3.1. Corollary 3.4 Denote by E 0 , F 0 the product subsystems of E and F generated by all units of E and F respectively. Then for the product with amalgamation over all units Proof. For every pair of unital units u and v we have Proof of Lemma 3.2. By Proposition 2.7, it is enough to show that for ψ ∈ E 1 we have increases strongly to a projection (the projection onto For 0 ≤ s < t ≤ 1, we define the projections Then (See Remark 2.2 about notation!) This gives Note that w S 1 ⊗ . . . ⊗ w S 2 n are unit vectors. Note, too, that in the last line of ( * * ) the projections ∏ i∈S (1 − Pi−1 2 n , i 2 n in the first factor are orthogonal for different choices of S. We conclude that . From [Lie09, Proposition 3.18] (see also [Arv03, Proposition 8.9.9]), we know that (s,t) → P s,t is strongly continuous. The simplex {(s,t) : 0 ≤ s ≤ t ≤ 1} is compact, so the function is even uniformly strongly continuous. This implies that 1 − P s,t −→ 0 strongly uniformly as (t − s) → 0. Thus we obtain that 2 −n
2010-06-14T15:55:26.000Z
2010-06-14T00:00:00.000
{ "year": 2010, "sha1": "ee5f1c68228dde1146ca9d1806d9322d5b69fa8d", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.jfa.2010.09.001", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "ee5f1c68228dde1146ca9d1806d9322d5b69fa8d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
18596935
pes2o/s2orc
v3-fos-license
Survival of the Stillest: Predator Avoidance in Shark Embryos Sharks use highly sensitive electroreceptors to detect the electric fields emitted by potential prey. However, it is not known whether prey animals are able to modulate their own bioelectrical signals to reduce predation risk. Here, we show that some shark (Chiloscyllium punctatum) embryos can detect predator-mimicking electric fields and respond by ceasing their respiratory gill movements. Despite being confined to the small space within the egg case, where they are vulnerable to predators, embryonic sharks are able to recognise dangerous stimuli and react with an innate avoidance response. Knowledge of such behaviours, may inform the development of effective shark repellents. Introduction Electroreception is found throughout the animal kingdom from invertebrates to mammals and has been shown to play an important role in detecting and locating prey [1,2], mates [3], potential predators [4,5] and is thought to be important in orienting to the earth's magnetic field for navigation [6][7][8]. Electroreceptors of sharks, the ampullae of Lorenzini, detect minute electric field gradients via an array of openings or 'pores' at the skin's surface [2]. Spatial information on the location of a field source is assessed by the differential stimulation of ampullae as the position of the source changes relative to the animal [1,2,6,9]. The spatial separation and arrangement of each pore in the array directly influences the detection of electric stimuli and the resultant changes in the shark's behaviour [2,10]. The electrosensory system of adult sharks is known to primarily mediate the passive detection of bioelectric stimuli produced by potential prey [1,2]. However, it has been postulated that the electroreceptive system can be used to detect, and thus avoid, potential predators [4]. Shark embryos that develop inside their mother may have little or no use for electroreception until birth, given that they are protected within the uterus and are nourished either directly by their mother (viviparity) or via an external yolk sac (ovoviviparity). However, oviposited embryos like those of the bamboo shark (Chiloscyllium punctatum) develop completely independently of their mother inside an egg case (oviparity) (Fig. 1A) [11]. These egg cases are typically deposited on or near the substrate, where they are vulnerable to predators including other sharks, teleost fishes, marine mammals and even large molluscan gastropods [12,13]. Chiloscyllium punctatum embryos will spend up to five months encapsulated inside a leathery egg case without the opportunity to escape or visually detect the approach of predators (Fig. 1A) [11]. After hatching, at just 10-12 cm in length [11], bamboo shark juveniles are extremely vulnerable to predation. However, at this stage, their distinctive pattern of high contrast banding (Fig. 1B) may assist in avoiding predators since these conspicuous bands mimic the colouration of unpalatable or poisonous prey, i.e. sea snakes, thereby avoiding predation (known as Batesian mimicry). This potentially aposematic colouration is lost as the bamboo shark reaches maturity and the banded pattern fades. As it matures, this species adopts the more familiar counter-shading pigmentation exhibited by many other species of sharks, thereby enabling it to camouflage itself against a dark substrate (Fig. 1C). 
During early embryonic development (stages 3-25) [11,14], bamboo sharks are sealed within a pigmented egg case, where their presence would be masked to any visually-driven predators and there would be no exchange of fluids [11] with the surrounding seawater, negating their detection via either mechanoreceptive (lateral line) or olfactory signals. However, as the embryo approaches the pre-hatching stage of development (stages 26-32), the bottom edge of the egg case weakens and the marginal seals open, thereby allowing the entry of seawater [11] and the release of sensory cues that may be detectable by predators. As the embryo increases in size, it begins to undulate the tail to facilitate circulation of fresh seawater through the open seals of the egg case to assist in respiration. However, this is thought to increase the risk of predation [13] owing to the greater likelihood that a passing predator could detect the presence of the embryo due to the release of olfactory cues and/or intermittent hydrodynamic disturbances. Following an increase in the frequency of tail undulations and respiratory gill movements, between stages 26 and 32, the electrosensory system differentiates and may become functional by stage 32 [15], presumably to assist in predator detection prior to hatching [4]. Results and Discussion When exposed to predator-simulating sinusoidal electric fields, late stage bamboo shark embryos (stage 34) respond by the cessation of all respiratory gill movements, thereby minimising their own electrosensory and mechanosensory output in order to avoid detection (Fig. 2). The cessation of gill movements is immediately followed by a rapid coiling of the tail around the body, with little or no discernible body movement during exposure ('freeze' response). Vertebrates that exhibit a 'freeze' response to predators have also been shown to induce cardioventilatory responses, where they decrease their heart rate (bradycardia) to reduce predation risk [16][17][18][19][20][21]. As a result, the length of time that an animal is able to respond is finite, as the need to breathe and pump oxygen around the body will eventually overcome the urge to remain still and undetected. Thus, the bamboo shark embryos tested eventually resume, albeit much reduced, gill movements whilst still being exposed to the predator-simulating stimuli. Bamboo shark embryos (stage 32-34) show the greatest avoidance response to sinusoidal electric field frequencies between 0.25 and 1.00 Hz (peaking at 0.5 Hz; Fig. 2), with response duration (measured from initial time of exposure) increasing as the electric field strength increases (increasing electric field strength may simulate closer and/or larger predators) (Fig. 3). Less developed embryos (stages 32-33) exhibit a reduced response duration to predator-simulating stimuli ( Fig. 3A-F). Embryos as young as stage 32 would only respond if the electric field was of sufficient strength, approximately $0.9 mV/cm (Fig. 3B). In contrast, stage 34 embryos would respond to electric field strengths as low as 0.4 mV/cm (Fig. 3I). Embryos prior to stage 32 failed to show any response to electric field strengths between 0.4 mV/cm and 2.1 mV/cm. These results agree with the differentiation and development of the electrosensory system, as has been previously shown for the lesser spotted catshark (Scyliorhinus canicula) when the ampullary organs become innervated [5,15]. 
Repeated exposure to the same stimulus also resulted in a reduced response duration as embryos (stages 34) became desensitised; embryos appeared to recognise previously presented stimuli when repeatedly exposed within a 30-40 minute period (Fig. 4). In contrast, the lesser spotted catshark and the clearnose skate (Raja eglanteria) habituate to stimuli within only 5 to 10 minutes of the initial exposure, respectively [4,5], highlighting significant species-specific differences in the level of temporal sensitivity of the electrosensory system in elasmobranchs. The greatest avoidance response to sinusoidal electric fields (0.25-1.00 Hz with a peak at 0.5 Hz; Fig. 2) exhibited by bamboo shark embryos in this study corresponds to the natural respiratory signals produced by their potential predators, i.e. teleosts and other elasmobranchs [3,4,13,22], and the low frequency modulations of D.C. fields produced by approaching predators [23]; thus indicating the important function of electroreception in the detection and avoidance of predators. This study advances our understanding of how embryonic sharks respond to electric fields of specific frequency and intensity and how their survival instincts to feed and defend themselves may take precedence over an electrical deterrent under some conditions [24]. The conditions under which this species habituates to electrical stimulation may also be useful in the development of electrical shark repellent devices. Ethics Statement This study was carried out in strict accordance with the guidelines of the Australian Code of Practice for the Care and Use of Animals for Scientific Purposes (7 th Edition 2004) 'The Code'. The protocol was approved by the University of Western Australia Animal Ethics Committee (Permit No. RA/3/100/917). Embryos were monitored daily to assess activity levels before, during and post stimulation to allow adequate rest time between experimental trials, and all efforts were made to minimise suffering. Collection and staging of embryos Bamboo shark embryos were collected as freshly oviposited egg cases from captive bred adults from Underwater World and Daydream Island Resort aquaria in Queensland, Australia. To enable video recording of embryo activity within the egg case, the opaque external fibrous layer of each egg case was scraped off upon collection. Developing embryos could then be seen clearly through the transparent inner layer when held in front of a fibre optic light source. Eggs remained submerged in a shallow petri dish filled with seawater throughout this procedure. The embryos and stimulus intensities (0.4-2.1 mV/cm). Embryos are categorised into nine groups according to their relative stage in development and intensity of the electric field strength exposure. A: Stage 32 embryos exposed to 1.9-2.1 mV/cm (peak response frequency: 0.5 Hz; duration: mean 16.7 secs). B: Stage 32 embryos exposed to 0.9-1.1 mV/cm (peak response frequency: 0.5 Hz; duration: mean 14.9 secs). C: Stage 32 embryos exposed to 0.4-0.6 mV/cm (peak response frequency: 0.5 Hz; duration: mean 0.3 secs). D: Stage 33 embryos exposed to 1.9-2.1 mV/cm (peak response frequency: 0.75 Hz; duration: mean 27.7 secs). E: Stage 33 embryos exposed to 0.9-1.1 mV/cm (peak response frequency: 1.0 Hz; duration: mean 13.8 secs). F: Stage 33 embryos exposed to 0.4-0.6 mV/cm (peak response frequency: 0.5 Hz; duration: mean 3.7 secs). G: Stage 34 embryos exposed to 1.9-2.1 mV/cm (peak response frequency: 0.5 Hz; duration: mean 59.4 secs). 
H: Stage 34 embryos exposed to 0.9-1.1 mV/cm (peak response frequency: 0.5 Hz; duration: mean 38.4 secs). I: Stage 34 embryos exposed to 0.4-0.6 mV/cm (peak response frequency: 0.5 Hz; duration: mean 15.8 secs). doi:10.1371/journal.pone.0052551.g003 were monitored for developmental changes and compared to the stages described for Chiloscyllium punctatum [11] and the staging criteria outlined for Scyliorhinus canicula [14]. All stages, ranging from when the embryo could first be observed with the unaided eye (stage 14), through to pre-hatching, fully developed embryos (stage 34), were tested. Developmental changes were only recorded in the most advanced embryos (stages 31-34). Experimental design Embryos encapsulated within the egg case were suspended in a 90 cm long, 45 cm wide, 50 cm deep glass aquarium and transferred individually to an identical tank for testing. A total of 11 embryos were stimulated with sinusoidal electric fields (0-20 Hz) at various stages in their development (stages #31-34). Electric stimuli were applied at three major intensities (0.4-0.6 mV/cm, 0.9-1.1 mV/cm and 1.9-2.1 mV/cm) via a function generator and a custom built stimulus generator [25] with an applied current between 100 mA and 500 mA (Fig. 5). To ensure minimal variation in the electric field produced, water temperature of 24-25uC and water resistivity of 18-19 V cm were maintained. To account for non-responses, embryos were each stimulated the same number of times (3 replicates of each stimulus strength and frequency combination: 27 tests in total) and all test results (including zeros) were used to determine the average response duration. Therefore, each embryo was stimulated a total of 27 times (per developmental stage) to obtain a full data set covering all frequency variations (0-20 Hz) and all stimulus strength variations (0.4-2.1 mV/cm). Electric stimuli intensity (i.e. voltage gradient, V/cm) at the position where the embryo responds to the stimulus was calculated using the equation, V/cm = (r.I.d. cosa)/(2pr 3 ) [26], based on the 'ideal dipole field' equation [27] and the 'charge distribution of an electric field' equation [28]. The variables are as follows: r is the resistivity of the seawater (V cm), I is the applied electric current (A), d is the distance between the two electrodes of a dipole (cm), r is the radius (the distance from the centre of the dipole to the position in space where the potential is being calculated) and a is the angle from the position in space to the centre of the dipole with respect to the axis. In pre-experimental trials, shark embryos appeared to show an increased response duration when the electrode separation distance was increased, indicating that the embryos may interpret this as an increase in the size of the simulated predator [28,29]. To reduce these experimental variables, the electrode separation distance was set at 5 cm with the embryo held at a uniform radius of 12 cm from the dipole source. These measurements were based on tank size restrictions to minimise any backscatter effects. Further investigation is encouraged to better understand the effect of increasing electrode separation distance on predator avoidance response (repellent effect). The stimulus generator enabled the strength of the applied current to be varied and the function generator enabled a specific wave form to be selected and the output frequency controlled. 
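The dipole-field equation quoted just above can be written out numerically as follows; the parameter values in the comment are those reported in the Methods (seawater resistivity of roughly 18-19 Ohm cm, 5 cm electrode separation, 12 cm embryo-to-dipole distance, applied currents of roughly 100-500 mA), while the function name and the on-axis default angle are ours.

import math

def dipole_field_mV_per_cm(resistivity_ohm_cm, current_A, separation_cm, radius_cm, angle_deg=0.0):
    """Voltage gradient at distance r from a dipole, per the equation quoted in the Methods:
    V/cm = (rho * I * d * cos(alpha)) / (2 * pi * r**3). Returned in mV/cm."""
    v_per_cm = (resistivity_ohm_cm * current_A * separation_cm
                * math.cos(math.radians(angle_deg))) / (2.0 * math.pi * radius_cm ** 3)
    return 1000.0 * v_per_cm

# With the reported geometry, a 100 mA current gives roughly 0.85 mV/cm at the embryo,
# which sits within the 0.9-1.1 mV/cm stimulus band described above.
print(round(dipole_field_mV_per_cm(18.5, 0.10, 5.0, 12.0), 2))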
An ammeter in series allowed the amount of current being applied through the circuit to be monitored in order to establish that the circuit was complete, thereby confirming that an electric field was being generated between the electrodes in the tank. Current from the stimulus generator was delivered to the electrodes via submerged cables and seawater-filled polyethylene tube salt bridges. A pair of shielded 18AWG coaxial underwater cables was plugged into the stimulus generator. Current was passed to seawater-filled polyethylene tubes via the exposed stainless steel pins of the cables [1]. The seawater-filled polyethylene tubing formed a salt bridge between the electrode arrays in order to eliminate eddy currents due to inhomogeneities on the electrode surface [4]. The electrodes were positioned adjacent to the egg case along the longitudinal axis (Fig. 5). During the behavioural observations, stimulus frequencies were presented as continuous sinusoidal stimuli. Response to the stimulus was determined by cessation of all gill movements or a 'freeze' response [Movie S1]. Embryos were stimulated for a minimum of 10 seconds after they resumed initial gill movements, to ensure that the resumption of breathing had been accurately identified. In order to avoid habituation to the electrical stimulus, an inter-trial interval of . Relative freeze response duration when embryos (stage 34) are repeatedly exposed to the same stimulus at set time intervals after the first (initial) response. Embryos were individually exposed to the same stimulus to get an average initial response time. Embryos were then exposed to the same stimulus 60 minutes after initial response, and re-exposed at decreasing time intervals. Response duration is expressed as a percentage of the initial response. doi:10.1371/journal.pone.0052551.g004 40 minutes was used after each freeze response was observed. The strength and frequency of the stimuli was also varied pseudorandomly. Video analysis All behavioural trials were recorded in high definition using a Canon S95 digital video camera. The camera was positioned to view the embryo and at least one of the electrodes (Fig. 5). As the electrodes were positioned along the same longitudinal axis as the embryo they could later be used as a calibration for measurements taken directly from the video clips (the electrodes were a known diameter) including embryo and yolk size and also to confirm embryo distance from the electrode source. Audio from the video was used to determine the point at which the stimulus source was switched on and off. The video analysis software Kinovea TM was used to assess behavioural clips and record response time. Supporting Information Movie S1 Video clip of a bamboo shark embryo (stage 33) responding to an electrical stimulus (stimulus strength: 0.25 Hz; 0.9-1.1 mV/cm) by ceasing gill movements. (MPG) Figure 5. Experimental apparatus used to study the response of bamboo shark embryos to predator simulating dipole electric fields. A function generator and stimulus controller were used to deliver a dipole electric field of specific intensity and frequency to electrodes positioned along the same longitudinal axis as the embryo. Embryo responses were recorded with a video camera positioned directly in front of the experimental tank. doi:10.1371/journal.pone.0052551.g005
2018-02-12T01:04:16.496Z
2013-01-09T00:00:00.000
{ "year": 2013, "sha1": "f01d29bf7b88037d0a9c5121cf3191ba15be9548", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0052551&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5f0eef9ec41fa9d7d34847cf17b0a4d2e2b119b9", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
1787778
pes2o/s2orc
v3-fos-license
Expression of normal and mutant avian integrin subunits in rodent cells. We describe the expression of the beta 1 subunit of avian integrin in rodent cells with the purpose of examining the structure-function relationships of various domains within this subunit. The exogenous subunit is efficiently and stably expressed in 3T3 cells, and it forms hybrid heterodimers with endogenous murine alpha subunits, including alpha 3 and alpha 5. These heterodimers are exported to the cell surface and localize in focal contacts where both extracellular matrix and cytoskeleton associate with the plasma membrane. Hybrid heterodimers consisting of exogenous beta 1 and endogenous alpha subunits bind effectively and specifically to columns of cell-binding fragments of fibronectin. The exogenous avian beta 1 subunit appears to function as well as its endogenous murine equivalent, consistent with the high degree of conservation noted previously for integrins. In contrast, expression of a mutant form of avian integrin beta 1 subunit lacking the cytoplasmic domain produces hybrid heterodimers which, while efficiently exported to the cell surface and still capable of binding fibronectin, do not localize efficiently in focal contacts. This further implicates the cytoplasmic domain of the beta 1 subunit in interactions required for cytoskeletal organization. C ELLS from a wide variety of both vertebrate and invertebrate species share the ability to adhere to extracellular matrices. Cell adhesion is a property required for cell migration and tissue stability and is central to embryonic development, wound healing, metastasis and other biological processes requiring tethering of a cell to its substratum. Cell adhesion also affects cell shape, cell division, and cell differentiation. For these reasons, the molecules to which cells adhere as well as the constituents of the cell surface involved in the adhesion process have been subjected to intensive investigations (Buck and Horwitz, 1987;Martin and Timpl, 1987; for review, see Ruoslahti, 1988). Among the receptors playing a major role in cell-substratum adhesion are the members of a family of surface glycoproteins designated integrins (Hynes, 1987;Ruoslahti and Pierschbacher, 1987). Integrins are heterodimers consisting of noncovalently associated a and fl subunits. Those integrins involved in cell-substratum adhesion are found concentrated in or around focal contacts on the ventral cell surface, colocalizing with extracellular matrix (ECM)' molecules and cytoskeleton-associated (CSK) molecules (Chen et al., 1985;Damsky et al., 1985;Singer et al., 1988;Dejanna et al., 1988). Integrins are capable of binding directly to ECM molecules, including fibronectin, vitronectin, or laminin (Pytela et al., 1985a,b;Horwitz et al., 1985;Akiyama et al., 1985;Gardner and Hynes, 1985;Johansson et al., 1987a,b;Wayner and Carter, 1987;Wayner et al., 1988;Gehlsen et al., 1988;Ignatius and Reichardt, 1988;Sonnenberg et al., 1988), and to CSK molecules such as talin . The integrity of the aft complex is required for binding to both ECM and CSK molecules . Recent structural and serological data have led to the division of the integrin family into three subfamilies (Hynes, 1987;Anderson and Springer, 1987). Each subfamily is distinguished by a common fl subunit that can associate with a limited number of different a subunits. All fl subunits share certain structural features (Hynes, 1987;Buck and Horwitz, 1987;Ruoslahti and Pierschbacher, 1987). 
For example, the major portion of the/~ subunit is the extracellular domain which contains 56 conserved cysteine residues including four particularly cysteine-rich repeating structures. This is followed by a membrane-spanning domain and a relatively short intracellular domain (Marcantonio and Hynes, 1988;Mueller et al., 1988). Comparisons of the amino acid sequences of/~ subunits from the three integrin subfamilies reveal a 40-48 % identity while/~ subunits within a single subfamily display over 80% identity among diverse vertebrates (DeSimone and suggesting a molecule whose structure and function are highly conserved. Further evidence for the structural and functional conservation of portions of the/~, subunit comes from the observation that antibodies against the cytoplasmic domain of the avian /~, subunit react with fl subunits from many phylogenetically diverse sources (Marcantonio and Hynes, 1988). The/3, subfamily of integrins includes receptors for such ECM molecules as fibronectin, certain collagens, and laminin. This subfamily contains at least six serologically distinct a subunits each capable of binding to a common/3m subunit Hynes, 1987). The substrate specificity of each receptor is determined by the particular combination of c~ and/~ subunits. Thus, t~5/3~ is a fibronectin receptor (Pytela et al., 1985a;Argraves et al., 1987;Wayner et al., 1988), c~2/3~ is a collagen receptor Takada et al., 1988) and a6/3t is a laminin receptor , while ol3/3~ is a promiscuous receptor thought to bind to several different ECM molecules (Wayner and Carter, 1987;Wayner et al., 1988). It is clear from these results that integrins are involved in a variety of interactions and functions, including subunit dimerization, binding of extracellular matrix and cytoskeletal proteins, cell adhesion, and cytoskeletal organization. To begin to dissect the various structure-function relationships of integrin subunits, we have expressed the avian integrin/3m subunit in rodent cells and assayed its ability to perform various functions in this heterologous context. Plasmid Construction The restriction enzymes, T4DNA ligase, polynucleotide kinase, Escherichia coil DNA polymerase I large fragment, and Xbal linker, were from New England Biolabs (Beverly, MA). Standard recombinant DNA methods (Maniatis et al., 1982) were used. A 3.15 kb Eco RI fragment containing the entire coding sequence for chicken integrin/31 was isolated from the eDNA clone 1D described previously (Tamkun et al., 1986). This fragment was then inserted into the Hind III cloning site of the SV40 expression vector pESP-SVTEXP (Reddy and Rao, 1986) by blunt-end ligation. The resulting plasmid is designated pCINT/31. A Xbal linker (CTCTAGAG) including an in-frame stop codon was then used to generate the plasmid pCINT/31A761-803, which codes for the mutated chicken integrin/31 lacking its cytoplasmic domain (Fig. 6). Briefly, the chicken integrin/31 cDNA subcloned in pGEMI (Promega Biotec, Madison, WI) was propagated in an adenine methylase-deflcient E. coli strain GM2163 supplied by New England Biolabs. The purified plasmid DNA was then partially digested with restriction enzyme Bcl I and the fulllength linear DNA was isolated by agarose gel electrophoresis. After filling in the ends with E. coil polymerase I large fragment, the linear DNA was religated in the presence of kinased Xbal linker (molar excess) and transformed into E. coil strain HB101. 
The plasmid having Xbal linker incorporated into the second Bcl I site of integrin /31 eDNA was identified by restriction analysis and the expected sequence around the junction was confirmed by dideoxy sequencing (sequenase; United States Biochemical Corp., Cleveland, OH). The altered eDNA fragment was then excised from pGEM1 and inserted into the Eco RI cloning site of the SV40 expression vector pECE (Ellis et al., 1986) generating the plasmid pCINT/3~A761-803. Transfection of 3T3 Cells NIH 3T3 cells were maintained in DME supplemented with 10% FCS (Gibco Laboratories, Grand Island, NY). 5 × 105 cells plated the previous day in 100 mm dishes were co-transfected with 20 #g pCINT~I (or pCINT/31-A761-803) and 2 #g pSV2neo (Southern and Berg, 1982) as a calcium phosphate precipitate (WigJer et al., 1979). Cells were incubated for 20-22 h, washed with PBS, and fresh medium was replaced. Two days later, the transfected cells were split 1:10 and incubated in DME supplemented with 10% FCS and 0.5 mg/ml G418 (Geneticin, Gibco Laboratories). After "~2 wk, G418-resistant clones were isolated and expanded. The 3T3 cell clones expressing chicken integrin /31 were identified by indirect immunofluorescence staining using a chicken-specific polyclonal antiserum Chickie II (see below). These positive clones were then subcloned by plating ,~500 cells onto 100-ram dishes coated with 10 mg/ml gelatin. Individual subclones were then isolated and analyzed by immunofluorescence labeling. Subclones 1E encoding wild type chicken integrin/~1 and A7E expressing mutant ~l were used for further characterization. Antibodies and Peptides A polyclonal avian-specific antiintegrin antibody designated Chickie II was prepared by injecting CSAT-immunoaffinity-purified avian integrin into rabbits and has been used previously (Damsky et al., 1985). A second chicken-specific rabbit anti-/~l (366) serum was prepared by injection of SDS-gel purified chicken integrin complex and was kindly provided by L. Urry (Massachusetts Institute of Technology, Cambridge, MA). CSAT monoclonal antibody was prepared from CSAT hybridomas (Neff et al., 1982) and for immunoprecipitation was covalently coupled to protein A Sepharose (Sigma Chemical Co., St. Louis, MO) by binding in PBS, washing with 100 vol, and coupling with 0.04% glutaraidehyde for 1 h at 37°C, followed by blocking with 0.5 M ethanolamine pH 8.0 (Gyka et al., 1983). Rabbit anti-/31 cytoplasmic domain antibodies were prepared as described (Marcantonio and Hynes, 1988). Rabbit anti-or3 and anti-or5 COOH terminal peptide antibodies were prepared as described (Hynes et al., 1989). Monoclonal antivinculin antibody was a gift of B. Geiger (Weizmann Institute). Rhodaminelabeled phalloidin was purchased from Molecular Probes Inc. (Junction City, Oregon). GRGESP and GRGDSP were synthesized using a peptide synthesizer (Applied Biosystems Inc., Foster City, CA) using solid phase t-boc chemistry. Peptides were cleaved and deprotected using tfifluoromethane sulfonic acid and were desalted on Sephadex G-10. Before use, peptides were purified by reverse phase HPLC chromatography on a v)~lac C18 semipreparative column (Rainin Instrument Co. Inc., Woburn, MA), eluted with a 0-60% acetonitrile gradient in 0.1% TFA. Radiolabeling and lmmunoprecipitation For metabolic labeling, cells were incubated for l h in DME minus methionine plus 10% FCS, followed by incubation in methionine-free medium plus 10% FCS containing 20 #Ci/mi of [35S]methionine (Amersham Corp., Arlington Heights, IL) for 6 h. 
Cells were labeled with Na[125I] (New England Nuclear, Boston, MA) and lactoperoxidase (Sigma Chemical Co., St. Louis, MO) as a monolayer as described (Hynes, 1973). 107 cells and 1-2 mCi/mi were used per experiment. Cells were extracted with 0.5% NP-40 and immunoprecipitation was performed as described (Marcantonio and Hynes, 1988). In some experiments, extracts were immunoprecipitated using CSAT-Sepharose, followed by recovery of the integrin complexes by heating at 100°C for 2 min in 1% SDS. After cooling, a fivefold excess of Triton X-100 was added, and the extracts were reprccipitated using polyclonal antibodies and protein A-sepharose as described above. Ajffinity Chromatography Purified human plasma fibronectin was purchased from the New York Blood Center (New York, NY). The 120-kD cell-binding fragment of fibronectin was purified from a chymotryptic digest of fibronectin as described by Pierschbacher et al. (1981). Columns were prepared by coupling 1 mg/mi of purified 120-kD fragment to CNBr-activated Scpharose (Pharmacia Fine Chemicals, Piscata~way, NJ) in 0.2 M NaHCO3 pH 8.5. Affinity chromatography of 3T3 cell extracts on l-nil columns was performed using a modification (Gailit and Ruoslahti, 1988) of the procedures of Pytela et al. (1985a). Briefly, ,x,107 cells were labeled with [12~I] as described above and extracted using 200 mM octyl-/~-o-glucopyranoside in 50 mM Tris, pH 7.5, 150 mM NaCI, 1 mM MnCI2 (TBM). These extracts were loaded onto the 120-kD fragment columns over 1 h at 4°C, and then washed with 10 vol of TBM. Columns were eluted with 1 vol of TBM containing 1 mg/ml of control pcptide (GRGESP), followed by 2 vol of "IBM, and then 1 vol of TBM containing 1 mg/ml of GRGDSP. Column fractions were analyzed by immunoprecipitation or directly by SDS-PAGE. lmmunofluorescence Cells were plated in DME with 0.5% FCS overnight on coverslips previously coated with human plasma fibronectin (0.02 mg/ml). Cells were rinsed twice in PBS and fixed for 15 min in a freshly prepared 4% solution of paraformaldehyde (Fluka Chemical Co., Buchs, Switzerland) in PBS, rinsed and permcabilized with 0.5% NP-40 in PBS for 15 rain. Cells were stained with primary antiserum in 10% normal goat serum in PBS for 30 min at 37°C. After three washes with PBS, the second antibody mixture (rhodamine-conjngated goat anti-rabbit IgG and fluor~cein-conjngated goat anti-mouse lgG in 10% normal goat serum in PBS; Organon Teknika-Cappel, Malveru, PA) was added and incubated for 30 rain at 37°C. After three washes, coverslips were mounted in gelvatol and examined using an axiophot microscope (Carl Zeiss. Inc,. Thornwood, NY) and photographed (Tri-X film; Eastman Kodak Co., Rochester, NY). Quantitative lmmunoprecipitation Analysis of lntegrin Expression Quantitative immunoprecipitation of clone IE t25I-labeled extracts was performed. 106 TCA-precipitable cpm of extract were incubated with increasing amounts of 363 or 366 antisera followed by immunoprecipitation and SDS-PAGE as described above to determine the maximum recovery of integrins. Bands corresponding with the Bm subunit were excised from the gel and counted using a gamma counter. Quantitation of the ratio of ot/fl subunit and the relative amounts of the chicken and mouse integrin subunits was performed by integration of peaks obtained from scans of the autoradiographs using an LKB ultrascan XL laser densitometer (LKB Instruments, Gaithersburg, MD). 
Expression of Avian Integrin fll Subunit The eDNA sequence of avian integrin fl~ subunit has been described (Tamkun et al., 1986). A full length cDNA clone, 1D, was used for the analysis reported here. A 3.15-kb Eco RI fragment containing the entire coding region was isolated and subcloned into an SV40-based expression vector (Reddy and Rao, 1986) to generate pCINT~ (see Materials and Methods for details). This plasmid was cotransfected with pSV2neo (Southern and Berg, 1982) into murine 3T3 cells and clones resistant to (3418 were selected and expanded as described in Materials and Methods. To analyze the expression of avian fl~ integrin, we used the CSAT monoclonal antibody specific for this subunit ). Fig. 1 shows CSAT immunoprecipitates from [35S]methionineqabeled transfected 3T3 cells. SDS-PAGE analysis of immunoprecipitates from four independent clones of cells transfected with pCINTfl~ are shown in lanes B-E. The immunoprecipitatcs contain heterodimers typical of members of the integrin family. The lower molecular mass ll0-kD band migrates in about the same position on nonreduced SDS-PAGE as the fl~ subunit found in a control immunoprecipitate from avian cells (Fig. 1, lane A). No material was immunoprecipitated from 3T3 cells transfected with vector containing insert in the reverse orientation (Fig. 1, lane G) or from control 3T3 cells transfected only with PSV2neo (Fig. 1, lane F). That the ll0-kD band contained the avian fl~ subunit was confirmed by reaction in immunoblots with a second monoclonal antibody, G , which is also specific for the avian fl~ subunit (data not shown). These results clearly demonstrate the expression of the avian ~, subunit in the cloned transfected 3T3 cells. Subsequent experiments concentrated on one of these clones, 1E (Fig. 1, lane E). The Exogenous ~1 Subunit Forms Heterodimers with Endogenous ~ Subunits The presence of additional polypeptides in the CSAT immunoprecipitates from transfected cells suggested that the avian fit subunit could combine with endogenous murine c~ subunits. To document this, and to examine whether or not such complexes could be transported to the cell surface, clone 1E and control cells were surface labeled with ~25I. Detergent extracts were then immunoprecipitated with sev- eral different antibodies (Fig. 2). Antiserum 363 was raised against a/3, cytoplasmic domain peptide (Marcantonio and Hynes, 1988). This antiserum reacts exclusively with the cytoplasmic domain of/3t subunits regardless of species. Fig. 2, A and B, shows that this antibody precipitates/3, subunits together with at least two u subunits from both control and transfected 3T3 cells. A second antiserum, 366, was raised against SDS gel-purified avian /3~ subunit and reacts only with the avian subunit (Fig. 2, A and B; Urry, L., and R. O. Hynes, unpublished observations). It immunoprecipitates the avian Bj subunit and associated ct subunits from cells transfected with pCINTfl~ ( Fig. 2 B), but not from control 3T3 cells (Fig. 2 A) or from cells transfected with pSV2neo alone (data not shown). Immunoprecipitates using the monoclonal antibody CSAT were included in these experiments as a control for specificity as well as for comparison of integrin subunit behavior on SDS-PAGE. Comparison of the relative intensities of the ot and fl bands in the different immunoprecipitates suggests that the avian/3~ subunit associates with the murine a subunits as efficiently as does the endogenous murine /3~ subunit. 
Quantitative comparisons of the total surface level of all fl~ integrins (363 immunoprecipitates) with that of avian/~ integrin (366 or CSAT immunoprecipitates) shows that in stably transfected clone 1E cells, integrins containing the avian subunit make up 40-60% of the total/3~ integrins expressed by these cells. In several experiments, no consistent differences were detected in the ratios of a to/3 labeling in total integrins and in integrins containing the chicken/~ subunit. The identity of the accompanying a subunits was determined by sequential immunoprecipitations. Clone 1E cells were iodinated, extracted and immunoprecipitated with the Figure 2. Immunoprecipitation of mouse and chicken integrins. A and B, extracts of Iz~I-surfacelabeled control 3"1"3 cells (A) and clone IE cells expressing chicken integrin #~ subunit. (B) were incubated with broad spectrum anti-~ peptide serum (363), anti-chicken ~ serum (366) or monoclonal anti-chicken #~ (CSAT) Sepharose. Immunoprecipitates were recovered directly (CSAT) or indirectly using protein A-Sepharose (363, 366) and analyzed by SDS-PAGE. C, A nondenatured extract of ~2SI-surface-labeled clone 1E cells was immunoprecipitated using CSAT monoclonal anti-chicken ~/~-Sepharose. The recovered complexes were denatured in SDS, after which a fivefold excess of Triton X-100 was added. The denatured integrins were then incubated with anti-c~5 peptide serum, anti-a3 peptide serum, or anti-~t peptide serum. The samples were immunoprecipiated using protein A-Sepharose and analyzed by SDS-PAGE. Thus, in the transfected cells, chicken ~ is present at the cell surface, and associates with the endogenous mouse o~ subunits, predominantly c~3 and Ors. avian-specific monoclonal antibody CSAT. The resulting precipitates were dissolved in SDS and then reprecipitated with rabbit antisera raised against peptides from specific integrin c~ subunits (Hynes et al., 1989; see Materials and Methods). Results from such an experiment are shown in Fig. 2 C. The three bands immunoprecipitated by CSAT can be reprecipitated by antisera to c~5, ~3, and {/~ after SDS denaturation (Fig. 2 C). The or3 and ~5 subunits are the major ~/~ subfamily c~ subunits expressed in 3T3 cells (Marcantonio, E., unpublished observations). These data prove that the avian ~ subunit expressed in transfected 3T3 cells is transported to the cell surface and associates with the appropriate murine c~3 and c~5 subunits. Hybrid lntegrin Heterodimers Bind Fibronectin To assay the function of hybrid receptors, we analyzed the ability of the complexes to bind to columns containing the 120-kD cell-binding fragment of fibronectin. The c~5~t complexes from a variety of species bind specifically to such columns and can be eluted with peptides containing the RGD sequence (Pytela et al., 1985a(Pytela et al., , 1986Wayner and Carter, 1987;Wayner et al., 1988;Gailit and Ruoslahti, 1988;Hynes et al., 1989). Clone 1E cells were iodinated and extracted with ~/-octylglucoside in buffer containing MnCI2 (see Materials and Methods). The extracts were passed over columns of fibronectin cell-binding fragment and eluted sequentially with GRGESP and GRGDSP peptides. Total integrin content of the eluate was demonstrated by immunoprecipitation with antiserum 363 (Fig. 3 A). The fraction of the eluted integrins consisting of hybrid receptors was identified by immunoprecipitation with antiserum 366 specific for the avian B~ subunit (Fig. 3 B). As can readily be seen, integrins were eluted specifically with GRGDSP. 
The eluate included integrins containing the avian ~ subunit (Fig. 3 B). The doublet form of the a bands in the eluted fractions is frequently observed (Hynes et al., 1989). Both portions of this doublet react with antisera raised against a5 peptides (data not shown). The hybrid heterodimers consisting of avian ~ and murine c~ subunits are clearly able to bind to columns containing fibronectin cell-binding fragment. Quantitation shows that the hybrid heterodimers bind to the columns as efficiently as do the endogenous murine integrins. That is, the ratio of avian/~ to total fit is the same in the bound material as in the total extract, and the ratios of a and fl are the same in the total integrins and the integrins containing avian Bt. The Avian [31 lntegrin Subunit Becomes Localized in Focal Contacts We next examined whether the exogenous avian/3~ subunit could be correctly localized in focal contacts in the same Figure 3. Binding of hybrid integrins to fibronectin. Clone 1E cells were labeled with 1251 and extracts were prepared as described in Materials and Methods. One milliliter of extract was incubated with one milliliter of 120-kD fibronectin cellbinding fragment-Sepharose for 1 h at 4°C. After washing, the column was sequentially eluted using GRGE_SP and GRGD_SP as indicated at the top of A and B. 0.5-ml fractions were collected, and 100-#1 aliquots were immunoprecipitated with 363 antiserum (A) or 366 (B) antiserum as described in Materials and Methods. Both the endogenous mouse and the chicken-mouse hybrid integrin complexes bind to fragments of fibronectin and are specifically eluted using the ceil-binding site peptide GRGDSP. fashion as endogenous integrins (Chen et al., 1985;Damsky et al., 1985;Singer et al,, 1988;Dejanna et al., 1988;Marcantonio and Hynes, 1988). To examine the distribution of the receptor, cells were grown on coverslips and stained for immunofluorescence using a polyclonal antibody that will react with avian integrin, but not with integrins normally found in 3T3 cells. The results are shown in Fig. 4. Control 3T3 cells, as well as pSV2neo-transfected 3T3 cells not expressing the avian/3 subunit, show only background fluorescence; no typical focal contactlike structures are evident (Fig. 4 B). In contrast, two independent clones, 1D and 1E, expressing the avian subunit exhibit strong immunofluorescence in brushstrokelike patterns on the ventral surface of cells (Fig. 4, A and C). This staining pattern is characteristic of focal contacts and closely resembles that seen in chicken cells stained with this same antibody or with monoclonal antibodies specific for the avian /3 subunit (Damsky et al., 1985;Chen et al., 1985). Similar results are obtained if the cells are stained with the CSAT monoclonal antibody (Solowska, J., unpublished observations). Double immunofluorescence experiments, in which clone 1E cells are exposed to rhodamine-labeled phalloidin (to mark microfilament bundles) and the avian integrin-specific antibody, show that the actin-containing microfilaments terminate in the structures stained by the antiintegrin antibody (Fig. 5), confirming the identity of these structures as focal contacts. Deletion of the Cytoplasmic Domain Produces Partially Functional Hybrid Integrins To begin the analysis of the function of specific domains of the/3 subunit, we deleted a major portion of the cytoplasmic domain by in vitro mutagenesis. Fig. 6 shows a comparison between the avian integrin/31 subunit and the mutated form. 
The altered sequence contains a termination codon close to the beginning of the COOH terminal cytoplasmic domain. Plasmid pCINT/3~A761-803, which encodes mutagenized avian /3, was transfected into 3T3 cells together with pSV2neo. G418-resistant clones were isolated as described above and a stably expressing subclone A7E was further analyzed. Fig. 7 shows immunoprecipitation analysis of surfacelabeled/X7E cells. As before, antiserum 363 precipitates a set of integrins comprising at least two a subunits and a/3t subunit (Fig. 7 A). In this case, since the mutated/31 cDNA encodes a truncated form of the chicken/31 subunit lacking the cytoplasmic domain recognized by this antiserum, 363 precipitates only the endogenous murine integrins (see also below). Immunoprecipitation with antiserum 366 or monoclonal antibody CSAT, both of which are avian-specific, selects only those integrins containing the avian/3t subunit. The fact that these integrins also contain a subunits is confirmed by reprecipitation of CSAT-selected integrins with antisera specific for different subunits (Fig. 7 B). As expected, antiserum 363 fails to precipitate the avian 131 subunit confirming the fact that the cytoplasmic domain is missing. The truncated/3~ subunit is, however, precipitated by the avian-specific antibodies (Fig. 7, A and B). c~ subunits from these hybrid integrins are immunoprecipitated by antibodies specific for or5 (Fig. 7 B) and o~3 (data not shown). Thus, the truncated/3~ integrin can form heterodimers with endogenous ot subunits and be exported to the cell surface. Quantitation shows that the ratio of c~ subunits to mutant/3t subunits is lower than that for the wild type avian /3t subunit (Fig. 2). It is unclear whether this lower ratio of a to/3 subunits reflects a defect in assembly or in stability of the complex or the high level of expression of avian/3t integrin subunit in these cells. Nonetheless, it is clear that the Figure 5. Double label immunofluorescence analysis of transfected 31"3 cells. 3"I"3 cells transfected with pCINTfl~ were stained with a polyclonal antibody against avian integrin (Chickie II) to localize the avian fl~ subunit, and subsequently exposed to rhodamine-labeled phalloidin to mark actin-containing microfilaments. A, fluorescein-marked avian fl~ subunit distribution; B, same field showing rhodamine-marked actin-containing microfilaments. Note the termination of microfilament bundles in the focal contactlike structures stained with the antiavian flj (arrowheads). Magnification of 1,600. presence of the fl~ cytoplasmic domain is not essential either for dimerization or for processing and export to the cell surface (see Discussion). Analysis of the binding to fibronectin affinity columns of hybrid receptors is shown in Fig. 8. Extracts of surfacelabeled A7E ceils were analyzed as described previously. Heterodimers containing the truncated avian fl~ subunit were identified using antiserum 366 (Fig. 8 B) and elute in the same fractions as the endogenous murine heterodimers detected by antiserum 363 (Fig. 8 A). It is clear that the deleted form of the avian fl, subunit can form functional heterodimers with endogenous murine c~ subunits that bind to fibronectin in an RGD-sensitive manner. However, immunofluorescence analysis of the distribution of the mutant avian fl, subunit shows that the truncation does produce defects in localization of the integrin. Fig. 
9 shows double label immunofluorescence analysis using antivinculin to mark focal contacts and antibodies specific for the avian/3 subunit to mark the location of hybrid receptors. In clone 1E cells expressing the unaltered avian fl subunit, hybrid receptor and vinculin colocalize on the ventral cell surface (Fig. 9, A and B). In contrast, A7E cells expressing the truncated form of the avian fl subunit display little, if any, hybrid receptor in the focal contacts marked by the antivinculin antibody (Fig. 9 C and D). Endogenous murine integrin in these same cells is localized in focal contacts (data not shown). Therefore, deletion of the fl~ cytoplasmic domain interferes with localization of the hybrid heterodimers into focal contacts. Because the truncated form of the avian fit subunit appeared to be somewhat deficient in its ability to form stable hybrid heterodimers, we quantitated the level of functional heterodimers in A7E cells. The integrins eluted from Figure 7. Immunoprecipitation of integrins from clone A7E cells. A, Extracts of t25I-snrface-labeled clone A7E cells were incubated with anti-#, cytoplasmic domain serum (363), monoclonal anti-chicken BrSepharose (CSAT) or polyclonal anti-chicken #, serum (366). Immunoprecipitates were recovered either directly (CSAT) or indirectly using protein A-Sepharose (363, 366) and analyzed by SDS-PAGE under nonreducing conditions. B, a nondenatured extract of '25I-surface-labeled clone A7E cells was immunoprecipitated using monoclonal anti-chicken BrSepharose (CSAT). The recovered complexes were either analyzed directly (lane 1) or were denatured in SDS. After addition of Triton X-100, the extracts were then incubated with anti-#t cytoplasmic domain serum (363), anti-chicken/3, serum (366) or anti-c~5 peptide serum (c~5), followed by immunoprecipitation using protein A-Sepharose and analysis by SDS-PAGE under nonreduced conditions. The truncated chicken #~ subunit is expressed on the surface and forms heterodimers with the endogenous mouse c~ subunits. fibronectin affinity columns (Fig. 8) are, by definition, both surface located and functional in ligand binding. Quantitation showed that 25.4% of the/3, integrin subunits eluted from the FN columns are avian and 74.6 % are murine. This should be compared with a proportion of 40-60% avian/3,containing integrins in 1E cells expressing the intact avian ~ subunit. Comparison of the pattern of avian integrins in 1E cells (Fig. 9 A) with that in A7E cells (Fig. 9 C) shows clearly that the hybrid heterodimers in A7E cells do not localize in focal contacts in anywhere near the proportions detected by binding to FN columns. We cannot rule out a small fraction localizing in focal contacts, but it is clear that the truncated avian/3~ subunit, while largely competent to form hybrid heterodimers that are exported to the surface and will bind ligand, is severely compromised in its ability to assemble into focal contacts. Discussion The experiments reported here show that an exogenous avian integrin/3, subunit can be functionally expressed in mouse 3T3 cells. We have also observed successful expression in rat, hamster, and monkey cells (Guan and Marcantonio, unpublished data). When expressed in heterologous cells, the exogenous/~, subunit forms heterodimers with endogenous oL subunits. These hybrid integrins can bind directly to fibronectin, are exported efficiently to the cell surface and are correctly localized in focal contacts. 
These data suggest that the heterologous subunit participates in the formation of fully functional integrins capable of interaction with molecules of both the extracellular matrix and the cytoskeleton. Thus, the high degree of sequence conservation between avian and mammalian ~/ subunits (DeSimone and is reflected in conservation of function. This conclusion focuses attention on segments of the/~, sequence that are most highly conserved. One of these is the cytoplasmic domain that is virtually identical in avian, human, frog (DeSimone and , and also murine integrins (DeSimone, D., V. Patel, H. E Lodish, and R. O. Hynes, unpublished data). The cytoplasmic domain is thought to interact with elements of the cytoskeleton. Avian integrin has been shown to bind to the cytoskeletal protein, talin, in equilibrium gel filtration experiments . This binding is competed by synthetic peptides containing the consensus tyrosine kinase phosphorylation site of the avian /3j cytoplasmic domain (Tapley et al., 1989). These results implicate the/~, cytoplasmic domain in interactions with the cytoskeleton. The behavior of the mutant form of avian/3, subunit that we have expressed is consistent with this supposition. The truncated form lacking a/3, cytoplasmic domain is efficiently expressed and exported to the cell surface. It is found in heterodimers with endogenous c~ subunits that arc still competent to bind fibronectin. These results suggest that the/3, cytoplasmic domain plays only a minor role, if any, in dimerization, processing or binding to the extracellular matrix, although we do not rule out subtle effects on affinity. In contrast, the mutant heterodimer fails to localize normally in focal contacts where the cytoskeleton is associated with the ventral membrane of cells (Fig. 9). Therefore, it appears that the/3, cytoplasmic domain is indeed involved in interaction with the cytoskeleton. Furthermore, it appears that interaction with the extracellular matrix may not be sufficient to maintain integrins in focal contacts. Several recent papers have shown that the nature of the external ligand plays a key role in or- Figure 8. Binding of mutant integrin to fibronectin. Clone A7E cells were labeled with t25I and extracts were prepared as described in Materials and Methods. 1 ml of extract was incubated with 1 ml of 120-kD fibronectin cell-binding fragment Sepharose for 1 h at 4°C. After washing, the column was sequentially eluted using GRG_ESP and GRG_DSP as indicated at the top of A and B. 0.5-ml fractions were collected and 100-/tl aliquots were immunoprecipitated with 363 antiserum (A) or 366 antiserum (B) as described in Materials and Methods. Both the endogenous mouse (A) and truncated chicken /~t-mouse hybrid (B) integrin complexes bind to fragments of fibronectin and are specifically eluted with GRGDSP. Figure 9. Double label immunofluorescence of clone IE expressing full length avian integrin flj subunit (A and B) and clone A7E cells expressing truncated avian fit subunit (B and C). Cells were stained with mixtures of mouse antivinculin antibody (B and D) and rabbit anti-chicken fla serum (A and C) followed by visualization using rhodamine-conjugated goat anti-rabbit IgG and fluorescein-conjugated goat anti-mouse IgG. The vinculin stain marks the focal contacts some of which are indicated by arrowheads. The intact avian integrin colocalizes with vinculin (A and B) while the truncated avian integrin does not (Cand D). 
In both cell types, murine integrins do colocalize with the vinculin (data not shown). ganizing specific integrins into these structures (Singer et al., 1988;Dejanna et al., 1988;Albelda et al., 1989). Our data suggest that, in addition, interaction of the cytoplasmic domain with the cytoskeleton or some other cytoplasmic component is necessary for correct localization and/or maintenance of the integrins in focal contacts. Photobleaching and recovery experiments (Duband et al., 1988) have demonstrated that integrins within focal contacts are extremely stable and replaced slowly. In contrast, those found outside the focal contact appear more mobile within the cell membrane. The data suggest a simple model in which the integrins exist within the plasma membrane as free heterodimers that undergo a conformational change upon occupancy by an extracellular ligand. This change favors the interaction of the receptor with the cytoskeleton. Once adhesion has been initiated, this interaction can lead to the stabilization of integrins within the focal contact or to the recruitment of more receptors into the region of the extracellular matrix. Presumably the deletion we have studied interferes with some step in this pathway, be it propagation of a signal, a conformationai change or interaction with the cytoskeletal complex. More subtle alterations in the cytoplasmic domain (and elsewhere) of both a and fl subunits will be necessary to elucidate these details. Such experiments are now in progress.
2014-10-01T00:00:00.000Z
1989-08-01T00:00:00.000
{ "year": 1989, "sha1": "1b59830df36a1de50d9e60929caefff810a88f84", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/109/2/853/1058309/853.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "1b59830df36a1de50d9e60929caefff810a88f84", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
46734411
pes2o/s2orc
v3-fos-license
CERT Mediates Intermembrane Transfer of Various Molecular Species of Ceramides* Ceramide produced at the endoplasmic reticulum is transported to the Golgi apparatus for conversion to sphingomyelin. The main pathway of endoplasmic reticulum-to-Golgi transport of ceramide is mediated by CERT, a cytosolic 68-kDa protein, in a nonvesicular manner. CERT contains a domain that catalyzes the intermembrane transfer of natural C16-ceramide. In this study, we examined the ligand specificity of CERT in detail by using a cell-free assay system for intermembrane transfer of lipids. CERT did not mediate the transfer of sphingosine or sphingomyelin at all. The activity of CERT to transfer saturated and unsaturated diacylglycerols, which structurally resemble ceramide, was 5–10% of the activity toward C16-ceramide. Among four stereoisomers of C16-ceramide, CERT specifically recognized the natural d-erythro isomer. CERT efficiently transferred ceramides having C14, C16, C18, and C20 chains, but not longer acyl chains, and also mediated efficient transfer of C16-dihydroceramide and C16-phyto-ceramide. Binding assays showed that CERT also recognizes short chain fluorescent analogs of ceramide with a stoichiometry of 1:1. Moreover, (1R,3R)-N-(3-hydroxy-1-hydroxymethyl-3-phenylpropyl)dodecamide, which inhibited the CERT-dependent pathway of ceramide trafficking in intact cells, was found to be an antagonist of the CERT protein. These results indicate that CERT can mediate transfer of various types of ceramides that naturally exist and their close relatives. The intracellular transport of lipids from the sites of their synthesis to their appropriate destinations must occur, because various steps in lipid biosynthesis occur in different intracellular compartments. The trafficking of integral membrane proteins in eukaryotic cells is mediated by transport vesicles, which load the desired set of proteins and deliver them to the correct organelles. By contrast, many types of lipid synthesized in the endoplasmic reticulum (ER) 1 have been suggested to be sorted to other organelles by nonvesicular mechanisms, although some lipid flux routes such as the endocytosis of plasma membrane lipids occur by vesicle-mediated mechanisms (1)(2)(3). In mammalian cells, ceramide is synthesized at the ER and translocated to the Golgi compartment for conversion to sphingomyelin (4). There are at least two pathways by which ceramide is transported from the ER to the Golgi site for the synthesis of sphingomyelin: an ATP-and cytosol-dependent major pathway and an ATP-or cytosol-independent (or less dependent) minor pathway (5)(6)(7). The major pathway is impaired in a Chinese hamster ovary (CHO) mutant cell line, LY-A, without any deficiency in the ER-to-Golgi transport of proteins (5)(6)(7). We have identified CERT as a factor defective in LY-A cells by functional rescue experiments, and we have shown that CERT mediates the ATP-dependent pathway of ER-to-Golgi trafficking of ceramide in a nonvesicular manner (8). CERT is a tripartite cytosolic protein ϳ600 amino acids in length (8 -10). The amino-terminal region of ϳ120 amino acids is a phosphatidylinositol 4-phosphate (PtdIns4P)-binding pleckstrin homology domain, which can target the Golgi apparatus (11). The next region of ϳ250 amino acids (referred to as the middle region) contains coiled-coil motifs (9), which might play a role in homo-or hetero-oligomerization, and a motif that may participate in association with the ER (12). 
The carboxyl terminus of ϳ230 amino acids is a steroidogenic acute regulatory protein (StAR)-related lipid transfer (START) domain. START domains were initially recognized as putative lipidbinding domains of ϳ210 amino acid residues, which exist in various types of proteins implicated in intracellular lipid transport, lipid metabolism, and signal transduction (13,14). Although more than 200 proteins have been nominated so far as proteins having START domains in data bases (for example, see smart.embl-heidelberg.de/), only a few have been experimentally shown to bind or transfer specific lipids. For example, StAR and MLN64 proteins recognize cholesterol (14 -17), and phosphatidylcholine (PtdCho)-transfer protein is capable of intermembrane transfer of PtdCho in vitro (18,19). The silkworm Bombyx mori larvae produce a carotenoid-binding START domain (20). We demonstrated previously that the START domain of CERT can efficiently extract natural long chain C 16 ceramide but not other types of lipids, including sphingosine, sphingomyelin, PtdCho, and cholesterol, from phospholipid bilayers (8). We have also shown that the START domain of CERT greatly facilitated intermembrane transfer of C 16 -cer-amide in a cell-free system (8). However, many details as to the substrate specificity of the START domain of CERT remain undetermined, although various molecular species of ceramide exist in mammalian cells. In most types of mammalian cells, the hydrophobic moiety of complex sphingolipids is mainly composed of ceramide but also includes dihydroceramide and phytoceramide at low levels (21)(22)(23). Notably, dihydrosphingomyelin is abundant in human lens membranes (24,25). Moreover, the length of the amido acyl chain of the ceramide moiety is diverse; C 16 -C 26 acyl chains are observed in natural sphingomyelin. Notably, C 18 -and C 24:1ceramide is predominant for sphingomyelin in the brain (21,26,27), whereas C 16 -ceramide is predominant in many other tissues (22, 28 -30). Such structural diversity in the ceramide moiety may affect the nature of membranes where complex sphingolipids are abundant. The physiological importance of the diversity of the ceramide structure has also been recognized, based on differences in bio-modulation activity between ceramide and dihydroceramide (31) or between natural long chain ceramide and unnatural short chain ceramide (32). Hence, it should be of biological significance to determine whether CERT can catalyze the intermembrane transfer of various species of ceramide and its relatives in addition to C 16 -ceramide. In the present study, we show that CERT is capable of mediating the intermembrane transfer of various types of ceramides that naturally exist in its START domain-dependent manner. In addition, we show that an inhibitor of ER-to-Golgi transport of ceramide is an antagonist of CERT. Preparation of C 16 (34). In our standard method, 400 nmol of [ 14 C]palmitic acid (55 mCi/mmol) and 2 mol of appropriate sphingoid base (D-erythro-sphingosine, D-threo-sphingosine, L-erythrosphingosine, L-threo-sphingosine, D-erythro-dihydrosphingosine, or Dribo-phytosphingosine) dissolved in ethanol were mixed in a screwcapped Pyrex glass tube and dried under nitrogen gas stream. After adding 2 ml of buffer A (50 mM sodium phosphate buffer (pH 7.0) containing 0.2% Triton X-100) to the dried lipids, the tube was sonicated with a bath-type sonicator for 5 min. 2 ml of SCDase (1 milli-unit/ml in buffer A) was then added to the tube, and the mixture was incubated at 37°C for ϳ16 h. 
Lipids were extracted from the mixture by the method of Bligh and Dyer (35), dried, dissolved in 400 l of chloroform/methanol (19:1, by volume), and subjected to TLC with a solvent system of benzene, diethyl ether, ethyl acetate, methanol, 25% (w/v) ammonia (65:7.5:7.5:25:0.75, v/v). After detection of radioactive lipids separated on the TLC plate with a BAS1800 image analyzer (Fuji Film), gel containing a desired radioactive lipid was collected from the TLC plate by scraping. By this image analysis, the efficiency of conversion of the radiolabeled starting material to its ceramide product by the SCDase reaction was also estimated. For extraction of the radioactive lipid by the method of Bligh and Dyer (35), 1.9 ml of a solvent of 0.1 M KCl/chloroform/methanol (0.8:1:2, v/v) was added to the collected gel and mixed with a swirling mixer. After addition of 0.5 ml of chloroform and 0.5 ml of 0.1 M KCl to the mixture for phase separation, the mixture was centrifuged (1000 ϫ g, 3 min), and the lower organic phase was collected as a lipid extract. Then the extract was dried under a nitrogen gas stream, dissolved in 1 ml of chloroform/methanol (19:1,v/v), and stored at Ϫ20°C. The radioactivities of synthesized radioactive lipids were determined by liquid scintillation counting. We assumed that specific activities of the synthesized C 16 -[ 14 C]ceramide and its isomers were identical to the specific activity (55 mCi/mmol) of the precursor Purification of Recombinant CERT and CERT⌬ST-His 6 -tagged recombinant human CERT and its START domain-deleted mutant CERT⌬ST were purified as described previously (8,36). Intermembrane Ceramide Transfer Assay-Intermembrane transfer of ceramide and its relatives was assayed in a cell-free system that we briefly described previously (8). Here we describe the method in more detail. On the day of the lipid transfer assay, purified recombinant CERT and CERT⌬ST in 10 mM Tris-HCl buffer (pH 7.4) containing 250 mM sucrose buffer were diluted to 1 nmol/ml (71.8 and 45.7 g/ml for the His 6 -tagged CERT and CERT⌬ST, respectively) with buffer C (20 mM Hepes-Na buffer (pH 7.4) containing 50 mM NaCl, and 1 mM EDTA). Donor vesicles per assay consist of 32 nmol of egg PtdCho, 8 nmol of egg PtdEtn, 4 nmol of porcine lactosylceramide, and 0.5 nmol of radioactive ceramide (27.5 nCi for [ 14 C]ceramides or 10 nCi for [ 3 H]ceramides). Acceptor vesicles per assay consist of 320 nmol of egg PtdCho and 80 nmol of egg PtdEtn. Note that the excess amount of acceptor vesicles to donor vesicles is crucial to minimize donor-to-donor transfer of ceramide, which interferes with the donor-to-acceptor transfer reaction. When necessary, donor vesicles additionally contain [ 3 H]dipalmitoyl-PtdCho (125 nCi per assay) as a nonexchangeable lipid marker. According to the number of assays, appropriate amounts of lipids dissolved in organic solvents were mixed in a polypropylene tube (Eppendorf) and dried under a nitrogen gas stream. After addition of buffer C, phospholipid vesicles were prepared by sonication with a probe-type sonicator (model UP-50H, Dr. Hielscher GmbH, Teltow, Germany) at 80% output and 50% cycle for 10 min in a water bath at room temperature. The volume of buffer C that should be added at this step was 20 l per assay in donor vesicles and 60 l per assay in acceptor vesicles (note that at least 200 l of the buffer is required for the sonication step). 
To remove lipid aggregates, the sonicated samples were centrifuged at 20,000 ϫ g for 30 min at 4°C, and the supernatant fraction was collected as small vesicles. The radioactivity of the supernatant was determined by liquid scintillation counter for assessing the recovery yields after pre-centrifugation. In some cases, the recovery of lipids in the supernatant fraction was also assessed by the lipid phosphorous quantification method (37). Both assessments showed that over 90% of lipids were reproducibly recovered in the supernatant fraction. The prepared small vesicles were used for intermembrane ceramide transfer assay as follows. In typical experiments, 18 l of buffer C, 60 l of acceptor vesicles, and 2 l of recombinant CERT or CERT⌬ST (1 nmol/ml in buffer C) were mixed in a 1.5-ml polypropylene tube. Then 20 l of donor vesicles was added to the tube to start the ceramide transfer reaction. After tapping the tube quickly, the reaction mixture was incubated for 10 min at 37°C. For mock incubation, buffer C as the vehicle buffer was added in place of the recombinant protein. To stop the reaction, 30 l of R. communis agglutinin (2.5 mg/ml in phosphate-buffered saline) was added to the reaction mixture and mixed by pipetting. The agglutinin selectively aggregates donor vesicles by cross-linking of the terminal galactose residue of lactosylceramide embedded in donor vesicles. The mixture was chilled on ice for 10 min and centrifuged (20,000 ϫ g, 3 min, 4°C) to precipitate agglutinated donor vesicles. Then 115 l of the supernatant fluid was carefully retrieved, and the radioactivity of the supernatant was measured in 2 ml of ACS-II® (Amersham Biosciences) by liquid scintillation counting. To remove the radioactivity due to incomplete precipitation of donor vesicles, the radioactivity from the mock incubation without any CERT recombinants was subtracted from the radioactivity of each sample. When the effects of HPA-12 and its derivatives on ceramide transfer activity were examined, several modifications were made. Specifically, in preparation of donor vesicles, the amount of C 16 -[ 14 C]cer-amide added was reduced from 0.5 to 0.1 nmol. Then 913 l of buffer C, 20 l of donor vesicles, 1-5 l of 3 mM drugs and the vehicle dimethyl sulfoxide (the final concentration of dimethyl sulfoxide was adjusted to 0.5%), and 2 l of recombinant CERT or CERT⌬ST (1 nmol/ml) were mixed in a 1.5-ml tube. After preincubation of the mixture for 5 min at 37°C, 60 l of acceptor vesicles was added to the mixture and incubated for 30 min at 37°C. Then after addition of 30 l of R. communis agglutinin (2.5 mg/ml), the chilled mixture was centrifuged (20,000 ϫ g, 3 min, 4°C), and 960 l of the supernatant fluid was retrieved for liquid scintillation counting. Binding Assay of C 5 -DMB-Ceramide, C 6 -NBD-Ceramide, and LPS Alexa Fluor® 488 to CERT-On the day of the assay, frozen stocks of recombinant CERT and CERT⌬ST dissolved in 10 mM Tris-HCl (pH 7.4) containing 150 mM NaCl (Tris-buffered saline; TBS) were thawed and centrifuged (20,000 ϫ g, 4°C, 10 min) to remove aggregates. The supernatant fraction was retrieved, and its protein concentration was determined. For each binding assay, 400 pmol of His 6 -tagged CERT or CERT⌬ST (28.7 or 18.3 g, respectively) in 118 l of TBS was mixed in a polypropylene tube (Eppendorf). For a negative control, TBS was added in place of the recombinant proteins. 
After adding 2 l of 0.1 mM ethanolic stock solution of C 5 -DMB-ceramide or C 6 -NBD-ceramide and 60 l of TBS to the tube, the mixture was incubated at 37°C for 30 min for the binding reaction. Then 60 l of 50% (v/v) slurry of TALON metal affinity resin pre-equilibrated with buffer B was added to the binding reaction mixture and incubated for 10 min at room temperature with rotary shaking. After centrifugation (20,000 ϫ g, 10 s), the supernatant was retrieved as the "unbound fraction." For washing, the resin was suspended in 150 l of buffer B containing 10 mM imidazole and precipitated, and the supernatant was retrieved as the "wash fraction." This washing step was repeated. The TALON-bound protein was then eluted by incubation with 150 l of buffer B containing 250 mM imidazole for 5 min at room temperature with occasional tapping. After centrifugation (20,000 ϫ g, 10 s), the supernatant was retrieved as the "elute fraction." A 3.75-fold volume of chloroform/methanol (1:2, v/v) was then added to each retrieved fraction, mixed, and centrifuged (20,000 ϫ g, 10 s). In addition, to retrieve fluorophores that were nonspecifically bound to the resin and tube, 170 l of TBS and 750 l of chloroform/methanol (1:2, v/v) were added to the tube containing the resin used, mixed, and centrifuged (20,000 ϫ g, 10 s). The supernatant was retrieved as the "residual fraction." The DMB (excitation at 480 nm; emission at 515 nm) and NBD (excitation at 470 nm; emission at 530 nm) fluorophores in these fractions were quantified with a fluorescence spectrophotometer (model F-3000, Hitachi, Tokyo, Japan). When the binding stoichiometry of CERT and ceramide was analyzed, some modifications were made, because the amount of ceramide must be in excess to that of CERT for this analysis. Briefly, various concentrations of C 5 -DMB-ceramide were mixed with 40 pmol of recombinant CERT or CERT⌬ST and 15 l of 50% slurry of TALON metal affinity resin, and then the mixture (the volume of which was 135 l) was incubated at 37°C for 30 min. The amounts of the recombinant proteins distributed to the elute fraction were estimated by densitometric analysis after a portion of the fraction was subjected to SDS-PAGE and Coomassie Blue® staining, using calibration patterns made with known amounts of CERT and CERT⌬ST. The amount of C 5 -DMB-ceramide in the elute fraction was quantified as described above. Binding of LPS Alexa Fluor® 488 to recombinant proteins was assayed by essentially the same procedures except that the fluorescent intensity of LPS Alexa Fluor® 488 (excitation at 488 nm; emission at 538 nm) distributed to each fraction was measured without organic solvent extraction. Effect of (1R,3R)-HPA-12 on Metabolic Labeling of Lipids with [ 14 C]Serine in CHO Cells-LY-A, a CHO-K1-derived mutant cell line, is defective in the trafficking of ceramide from the endoplasmic reticulum to the Golgi apparatus because of a mutation in the CERT gene (6,8). The LY-A2 cell line is a stable transformant of LY-A expressing the ecotropic retrovirus receptor, and the LY-A2/hCERT cell line is a stable transformant of LY-A2 with a retroviral vector for expression of human CERT cDNA (8). For a concentrated stock, (1R,3R)-HPA-12 was dissolved in dimethyl sulfoxide at 10 mM. Cells were seeded at a density of 1.0 ϫ 10 6 per 6-cm dish in 5 ml of Ham's F-12 medium supplemented with 10% newborn bovine serum, penicillin G (100 units/ml), and streptomycin sulfate (100 g/ml) and cultured for ϳ16 h at 33°C in a 5% CO 2 atmosphere. 
After two washes with 2 ml of serum-free Ham's F-12 medium, the cells were incubated in 1.5 ml of Nutridoma medium (Ham's F-12 medium supplemented with 1% Nutridoma-SP (Roche Diagnostics) and 25 g/ml gentamicin) containing 1 M (1R,3R)-HPA-12 or the vehicle dimethyl sulfoxide for 15 min on ice and, after the addition of L-[U-14 C]serine (0.75 Ci) to the medium, were incubated for 2 h at 33°C. The metabolically labeled lipids were then analyzed as described previously (38). Determination of Protein Concentration-Protein concentrations were determined using the bicinchoninic acid protein assay kit (Pierce) with bovine serum albumin as the standard. Intermembrane Transfer of Lipids by CERT-We showed previously (8) that CERT efficiently extracts ceramide, but not nonceramide lipids, from phospholipid vesicles. In addition, we showed that CERT greatly facilitates intermembrane transfer of natural long chain C 16 -ceramide (8). However, the lipid substrate specificity of CERT-mediated intermembrane transfer remained unexplored. Thus, we tested the substrate specificity of CERT-mediated intermembrane transfer in a cell-free assay system. Because lipid transfer between artificial membranes might be nonspecifically enhanced by proteins, we used the START domain-deleted CERT⌬ST recombinant in control assays to assess the START domain-dependent transfer of lipids accurately. The lipid transfer assays showed that CERT catalyzes the efficient intermembrane transfer of ceramide, but not sphingosine, sphingomyelin, PtdCho, or cholesterol (Fig. 2), in a START domain-dependent manner. This pattern was consistent with the substrate specificity of the lipid extracting activity of CERT (8). The activity to transfer dioleoylglycerol and dipalmitoylglycerol was ϳ5 and ϳ10%, respectively, of the activity toward ceramide (Fig. 2), raising the possibility that CERT has the potential to transfer diacylglycerol in intact cells (see "Discussion"). Stereochemical Specificity of CERT-mediated Transfer of C 16 -Ceramide-Because ceramide has two chiral carbon atoms at positions C-2 and C-3 of the sphingosine backbone, there can be four stereochemical isomers of C 16 -ceramide, among which D-erythro is the natural configuration (Fig. 1). To examine the stereochemical selectivity of ceramide recognition by CERT, we synthesized the four isomers of C 16 -[ 14 C]ceramide by SCDasecatalyzed in vitro N-palmitoylation of sphingosines. The four isomers could be prepared in radioactively pure forms, although production yields of the unnatural isomers were much less than the yield of the natural isomer (Fig. 1). In lipid transfer assays using the synthesized stereochemical isomers of C 16 -[ 14 C]ceramide, CERT catalyzed the efficient transfer of D-erythro C 16 -ceramide, but not the unnatural Lerythro-, D-threo-, or L-threo-types (Fig. 3A). Thus, CERT recognizes only the natural isomer among the four stereochemical isomers of C 16 -ceramide. Recognition of Dihydroceramide and Phytoceramide by CERT-Although ceramide is the predominant hydrophobic backbone of complex sphingolipids in mammalian cells, some sphingolipids also contains dihydroceramide and phytoceramide (21)(22)(23)(24)(25). In vitro lipid transfer assays using synthesized radioactive substrates demonstrated that CERT is capable of catalyzing the intermembrane transfer of C 16-dihydroceramide and C 16 -phytoceramide with ϳ40% efficiency of the C 16 -ceramide transfer (Fig. 3B). 
CERT-mediated Transfer of Ceramides Having Fatty Acyl Chains of Various Lengths-To examine the effects of different acyl chain lengths of ceramide on CERT-mediated lipid transfer, we prepared D-erythro-[ 3 H]ceramides having C 14 -, C 16 -, C 18 -, C 20 -, C 22 -, and C 24 -saturated acyl chains and also a C 24:1monounsaturated acyl chain (Fig. 1). In vitro lipid transfer assays showed that the efficiency of the CERT-mediated transfer of ceramide is dependent on its acyl chain length (Fig. 4). Among the molecular species having different acyl chain lengths, C 14 -, C 16 -, C 18 -, C 20 -ceramides were similarly effective. When compared with the amount of C 16 -ceramide transferred, the transfer efficiency of C 22 -and C 24:1 -ceramide was ϳ40%. The transfer of C 24 -ceramide was less (Fig. 4). There might be the possibility that the same amount of different acyl chain ceramides was not incorporated into donor vesicles, thereby resulting in differences in the transfer effi- ciency. To rule out this possibility, we performed another control experiment. Donor vesicle preparations containing different acyl chain [ 3 H]ceramides were centrifuged at 20,000 ϫ g for 3 min in the presence or absence of Ricinus communis lectin. Regardless of the differences in the acyl chain lengths, most (Ͼ99%) of the radioactivity added to each preparation was precipitated in the presence of the lectin, whereas none (Ͻ1%) of the radioactivity was precipitated in the absence of the lectin. These results indicated that nearly 100% of these radioactive ceramides added to vesicle preparations were actually incorporated into donor vesicles. CERT Recognizes Fluorescent Short Chain Analogs of Ceramide- The fluorescent analogs of ceramide C 5 -DMB-ceramide and C 6 -NBD-ceramide have been widely used as probes mimicking natural ceramide in intact cells and in cell-free systems (6,39,40). Because these short chain fluorescent analogs of ceramide spontaneously transfer between membranes (6,40,41), it was difficult to determine accurately the CERT-dependent transfer of C 5 -DMB-ceramide and C 6 -NBD-ceramide in our cell-free transfer assay system. Therefore, to examine if CERT recognized these lipids, we performed a binding assay. CERT could clearly bind both C 5 -DMB-ceramide and C 6 -NBD-ceramide (Fig. 5, A and B). We were also interested in testing if CERT recognizes the endotoxin LPS, because LPS has a moiety that may be structurally similar to ceramide (42). However, we detected no binding of a fluorophore-conjugated LPS to CERT (Fig. 5C). Binding Stoichiometry of CERT and Ceramide-We next attempted to determine a binding stoichiometry of CERT and ceramide. For this, we used C 5 -DMB-ceramide as a ceramide ligand, because binding assays at various concentrations of C 5 -DMB-ceramide were feasible under liposome-free conditions. The molar ratio of C 5 -DMB-ceramide bound to CERT in the presence of large excess C 5 -DMB-ceramide was estimated to be about 0.8 (Fig. 5D). These results most likely indicated that the binding stoichiometry of CERT and ceramide is 1:1. The binding assays also suggested that the apparent dissociation constant between CERT and C 5 -DMB-ceramide was about 200 nM (Fig. 5D). Inhibition of CERT-mediated Transfer of Ceramide by (1R,3R)-HPA-12 in Vitro-(1R,3R)-HPA-12, a chemically synthesized artificial compound, acts as a selective inhibitor of the transport of ceramide from the ER to the site of sphingomyelin synthesis (38). 
Because (1R,3R)-HPA-12 has structural similarity to D-erythro-ceramide, we hypothesized that CERT might be a target of (1R,3R)-HPA-12. To test this hypothesis, we examined whether (1R,3R)-HPA-12 inhibited CERT-mediated transfer of C 16 -D-erythro-ceramide in the cell-free assay system. As shown in Fig. 6, (1R,3R)-HPA-12 inhibited CERTmediated transfer of ceramide with a 50% inhibitory concentration of ϳ0.5 M. In contrast, its stereochemical isomers and methoxy derivatives, which are inactive as inhibitors of in vivo ceramide trafficking (33), did not affect the in vitro ceramide transfer even at 4 M (Fig. 6). Thus, the inhibition of CERTmediated intermembrane transfer of ceramide by (1R,3R)-HPA-12 was not due to possible nonspecific events such as drug-induced denaturing of proteins or lipids. Collectively, these results indicated that (1R,3R)-HPA-12 is an antagonist of CERT. (1R,3R)-HPA-12 Inhibits CERT-mediated Trafficking of Ceramide in Intact CHO Cells-To see if (1R,3R)-HPA-12 really inhibits CERT-mediated trafficking of ceramide from the ER to the Golgi site for sphingomyelin synthesis, we examined the effect of the drug on de novo synthesis of sphingomyelin in various CHO cell lines. In wild-type CHO-K1 cells, (1R,3R)-HPA-12 inhibited de novo synthesis of sphingomyelin to the level seen in the drug-free LY-A2 cells, which have a mutation in the endogenous CERT gene (8) (Fig. 7). (1R,3R)-HPA-12 had no effects on sphingomyelin synthesis in mutant LY-A2 cells, consistent with our previous study (38). When LY-A2 cells were transfected with the human CERT cDNA, de novo synthesis of sphingomyelin was restored to the wild-type level (8), and the restored activity of sphingomyelin synthesis in CERT/LY-A2 cells was found to be again sensitive to (1R,3R)-HPA-12 (Fig. 7). These results confirmed that (1R,3R)-HPA-12 inhibits CERT-mediated trafficking of ceramide in intact CHO cells. DISCUSSION In the present study, we explored the substrate selectivity of CERT-mediated lipid transfer reactions in a cell-free assay system, and we showed that CERT is capable of mediating the efficient intermembrane transfer of various ceramide molecular species, including ceramide having C 14 -C 20 saturated acyl chains, C 16 -dihydrocermide, and C 16 -phytoceramide, that naturally exist in mammalian cells (Figs. 3 and 4). In mammalian tissues, "dihydrosphingomyelin" (phosphocholine dihydroceramide) widely exists in smaller amounts than sphingomyelin (21,25). Mammalian tissues might also have "phytosphingomyelin" (phosphocholine phytoceramide) in very small amounts (23,43). A homology search with publicly available tools and data bases predicts that mammals have no additional isoforms of CERT, except for a large splicing variant of CERT. 2 These results suggest that CERT and its splicing variant CERT L mediate the transport of various ceramide molecular species from the ER to the Golgi site, where sphingomyelin and its isoforms are synthesized. Different members of the Lag1-related family have been suggested to regulate de novo synthesis of different molecular species of ceramide (44 -46). Notably, C 18 -ceramide synthesized in human embryonic kidney 293 cells overproducing UOG1, a Lag1-related family member, is selectively used for the synthesis of glucosylceramide, but not of sphingomyelin (44). 
The UOG1-dependent channeling of C 18 -ceramide to glucosylceramide synthesis is unlikely due to a possible selectivity of ceramide species by CERT, because CERT catalyzes the efficient transfer of C 18 -ceramide as well as C 16 -ceramide (Fig. 4). However, it remains unclear whether different interactions of different Lag1-related family members with CERT might affect destinations of ceramide species synthesized de novo. In contrast to the broad specificity of CERT for the ceramide substrate, CERT mediates no transfer of sphingosine, sphingo-myelin, cholesterol, and PdtCho (Fig. 2). Nevertheless, small but significant levels of CERT-mediated transfer of diacylglycerols were reproducibly observed (Fig. 2), consistent with our previous result showing that CERT could extract dioleoylglycerol from artificial membranes even at a much lower efficiency than C 16 -ceramide (8). When one molecule of sphingomyelin is newly synthesized by the PtdCho:ceramide phosphocholine transfer reaction catalyzed by sphingomyelin synthase, one molecule of PtdCho-derived diacylglycerol should be generated. The generated diacylglycerol might cause a feedback inhibition of sphingomyelin synthesis, because diacylglycerol can inhibit the activity of this enzyme in vitro (47). Although the metabolic fate of diacylglycerol generated during the de novo synthesis of sphingomyelin is unknown, a previous study (48) suggested that diacylglycerol generated during the resynthesis of sphingomyelin was rapidly metabolized to triacylglycerol in baby hamster kidney cells treated with extracellular sphingomyelinase. Triacylglycerol synthesis is likely to occur predominantly at the ER (49). Thus, it would be interesting to hypothesize that CERT transports ceramide from the ER to the Golgi and, in turn, transports diacylglycerol from the Golgi to the ER. The intracellular redistribution of C 5 -DMB-ceramide from the ER to the Golgi region is impaired in LY-A cells having a mutation in the CERT gene and also in energy-poisoned wildtype CHO cells (6,8). Therefore, we have proposed that C 5 -DMB-ceramide may be a good probe for the CERT-mediated pathway of ceramide in cells (8). This proposal was further supported by the present study showing that CERT actually binds C 5 -DMB-ceramide (Fig. 5A). Binding assays with C 5 -DMB-ceramide allowed us to estimate the binding stoichiometry of CERT and ceramide to be 1:1 (Fig. 5D). This is consistent with the START domain of the PtdCho-transfer protein that harbors a single PtdCho molecule in the crystal structure of their complex (50). Notably, although no clear impairments of the ER-to-Golgi redistribution of C 6 -NBD-ceramide were observed in LY-A cells nor in energy-poisoned cells (6), the cell-free binding assay showed that CERT is also capable of binding C 6 -NBD-ceramide (Fig. 5B). Previous studies with model membranes have shown that C 6 -NBD-ceramide undergoes spontaneous intermembrane transfer at a much faster rate than C 5 -DMB-ceramide (halftimes of transfer equilibration (t1 ⁄2 ) for C 6 -NBD-ceramide and C 5 -DMB-ceramide are ϳ0.4 and ϳ7 min, respectively) (40,41). The t1 ⁄2 of natural C 16 -ceramide has been estimated to be in the order of days (51), and CERT enhances the t1 ⁄2 of C 16 -ceramide to the order of minutes (7). Thus, the rapid spontaneous transfer of C 6 -NBD-ceramide likely masks the CERT-mediated transfer of C 6 -NBD-ceramide in intact cells, even when the latter process occurs. 
(1R,3R)-HPA-12 is an inhibitor of an ATP-and cytosol-dependent transport of ceramide from the ER to the site of sphingomyelin synthesis (38). We also demonstrated here that (1R,3R)-HPA-12 inhibited CERT-mediated transfer of ceramide in a cell-free assay system (Fig. 6). HPA-12 derivatives incapable of inhibiting the CERT-dependent transport pathway in cells also did not affect the CERT-mediated intermembrane transfer of ceramide in the cell-free system (Fig. 6). Collectively, we conclude that (1R,3R)-HPA-12 is an antagonist of CERT. Bioinformatic studies have shown that the human genome encodes numerous proteins with putative lipid transfer domains such as START domains (13,14), Sec14-related domains (52), and oxysterol-binding protein-related domains (53) also candidates for new types of medicines. Our studies showing (1R,3R)-HPA-12 as an antagonist of a ceramide transfer protein will hopefully open the way to the development of antagonists of specific lipid transfer proteins.
2018-04-03T06:25:50.247Z
2005-02-25T00:00:00.000
{ "year": 2005, "sha1": "ccc671475b90d2658c2cbe9761b44dec8142e2ab", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/280/8/6488.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "d64d280bf1998317323cc82e5c0221f2f520869e", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
252663084
pes2o/s2orc
v3-fos-license
Business sustainability for competitive advantage: identifying the role of green intellectual capital, environmental management accounting and energy efficiency Abstract The manufacturing organizations are threatening the earth and its wildlife because of their growing concern about environmental pollution and industrial waste. Hence, in the present study, the three potential solutions, Green Intellectual Capital, Environmental Management Accounting and Energy Efficiency, are evaluated for excelling the organizational operations towards business sustainability and attaining the Competitive Advantage. With the assistance of ‘Partial Least Square-Structural Equation Modelling’ on the dataset of 364 respondents from the manufacturing organizations in China, the outcome reported the positive and significant impact of all of the studied potential solutions in excelling and enhancing business sustainability and competitive advantage. Based on the findings, it is proposed that manufacturing organizations need to apportion due attention to developing the green intellectual capital, improve the level of consumption of energy and need to disclose their environmental management through proper Environmental Management Accounting. Introduction Historically, the organizations considered the world and its respective natural resources a freely available commodity in unlimited quantities.This aptitude of the organizations headed them toward the 'tragedy of the commons' (Yusliza et al., 2020).This is a kind of tragedy where people assume that the consumption of shared resources will have minimal adverse consequences to the ecology and lead to greater depletion of resources and a larger level of pollution (Shaw et al., 2016).However, with the passage of time, there is a development in the understanding of the people that they have to preserve the resources, ecology and environment for the future generations, and they need to affirm the responsibilities pertaining to the destruction and preservation of the ecology, flora and fauna (Yusliza et al., 2020;Tiwari et al., 2021).Hence there is an introduction to the phenomena of sustainability in which there is an integration of the diversified aspects covering the ecology, economy and society (Bombiak & Marciniuk-Kluska, 2018). While the researchers are exploring different potential cleaner solutions for enhancing the business sustainability in order to sustain the Competitive Advantage, the majority of the researchers have explored it within the dynamics of manufacturing by smoothing the supply and value chain (Yildiz C ¸ankaya & Sezen, 2019), human resources having integration of sustainability (Zaid et al., 2018), and sustainable practices (Abdul-Rashid et al., 2017).Thus the major focus remains on the resources that are tangible in nature.However, the significance of intangible resources in nature and equally important has not gained the due attention (Yusliza et al., 2020).Despite that, a group of researchers ascertained the importance of intangible capital, referred to as intellectual capital (Yusoff et al., 2019).Moreover, intellectual capital has also emerged as one of the potential solutions for promoting sustainability within the operations of the organizations (Cavicchi & Vagnoni, 2017;Yusoff et al., 2019). 
In addition, the transition of organizations towards sustainability is driven mainly by customer demand for environmentally and socially friendly goods and services (Rehman et al., 2021a; Tiwari et al., 2022). Organizational inefficiencies have also been reported to cause financial and environmental losses during manufacturing because resources, energy and capital are wasted (Sari et al., 2021). Such waste can be eliminated with an efficient system in which environmental management is regularly accounted for and updated (Tashakor et al., 2019). Environmental Management Accounting has therefore emerged as a tool through which waste is identified, monitored and eventually removed from the value stream of products, thereby improving the business sustainability of organizations (Bresciani et al., 2022; Sari et al., 2021).
It is undeniable that manufacturing organizations are the largest contributors to carbon emissions and environmental pollution (Yusliza et al., 2020). In fact, according to Zailani et al. (2012), the earth and its wildlife are threatened by manufacturing organizations because of the growing environmental pollution and industrial waste they generate. This creates the need to increase the efficiency and productivity of resource consumption, especially of energy, as improving energy efficiency brings various benefits to organizations (Sidik et al., 2019). When energy efficiency improves, fewer non-value-added activities are generated, which saves energy resources and reduces the costs associated with unwanted and excessive energy consumption (Ahmed et al., 2020). Furthermore, this mitigation of pollution also improves the ecology, which in turn supports societal well-being (Sidik et al., 2019).
In line with the problem highlighted above, it is important to search for solutions that can remove the imbalance among the three dimensions of sustainability. Each of them is equally important for attaining a competitive advantage, and achieving and then sustaining that competitiveness is crucial. Hence, in the present study, the role of three potential solutions, Green Intellectual Capital, Environmental Management Accounting and Energy Efficiency, is evaluated for steering organizational operations towards business sustainability and attaining a competitive advantage. Specifically, the current study seeks to answer the following research questions:
RQ1: To what extent does Green Intellectual Capital assist the organization in achieving business sustainability and Competitive Advantage?
RQ2: To what extent does Environmental Management Accounting assist the organization in achieving business sustainability and Competitive Advantage?
RQ3: To what extent does Energy Efficiency assist the organization in achieving business sustainability and Competitive Advantage?
RQ4: How does business sustainability assist the organization in achieving Competitive Advantage?
The remainder of the study is arranged as follows: the next section discusses the hypothesized relationships among the studied phenomena, followed by the methodology, statistical analysis and results; the study then closes with conclusions and recommendations.
Green intellectual capital, sustainability and competitive advantage In the current hyper-competitive business environment, the major focus and emphasis of the organizations for excellence in performance in these days, is to have assets that are intangible in nature rather keeping tangle assets that keep on depreciating over the period of time (Eisenhardt & Schoonhoven, 1996;Agostini et al., 2017).Researchers have agreed that the organizations' endurance has a strong reliance over the intangible assets (Obeng et al., 2014), which will also ensure their competitive advantage (Roos, 2017).These intangible assets are referred to as Intellectual Capital, whereby organizations possessing larger reserves of intellectual capital have more operational excellence compared with organizations with lower reserves of Intellectual Capital (Alcaniz et al., 2011;Ahmad & Ahmed, 2016).Similarly, when the intellectual capital incorporates awareness and concerns related to environmental pollution and ecological well-being, it is referred to as Green Intellectual Capital (Chen, 2007).However, despite its significance and essentiality, the phenomenon of Green Intellectual Capital has not been given its due attention and consideration by researchers and academicians (Yusoff et al., 2019). Despite of the ignorance by the researchers and academicians, Green Intellectual Capital has the tendency and capability to enhance the operational excellence of the organizations for achieving and meeting the sustainable development objectives led by the international bodies, transforming their products towards more environmental friendlier as per the requirements of the customers, and attaining the competitive advantage (Yusoff et al., 2019;Huang & Kung, 2011).In addition to this, organizations are revisiting their objectives and channelizing more investments to attain the three dimensions of sustainability: ecology, social, and finance (Cavicchi & Vagnoni, 2017).Moreover the role of green intellectual capital has been reported to be more supportive for achieving business sustainability (Tonial et al., 2019), which eventually assists the organizations in attaining and maintaining the competitive advantage (Yusliza et al., 2020;Roos, 2017).Hence it is assumed that: H1: Green Intellectual Capital of the organization will enhance the environmental performance. H2: Green Intellectual Capital of the organization will enhance the economic performance. H3: Green Intellectual Capital of the organization will enhance the social performance. H4: Green Intellectual Capital of the organization will enhance the Competitive Advantage. 
Energy efficiency, sustainability and competitive advantage The consumption of energy is pivotal for the organization in transforming the raw material into finished goods leading to revenue and profit generation.Hence, it becomes inevitable for the organization to clearly wipe out the usage of energy from their operations as a preventive measure to control the generation of pollution and greenhouse gases.This is because there is an existence of a direct relationship between the consumption of energy and organizational performance (Ahmed et al., 2020).However, the consumption of energy can be improved by improving its productivity and efficiency, whereas transformation towards environmentally friendly, green and renewable energy sources can also be a potential solution to attain a balance between development and pollution (Miku cionien_ e et al., 2014;Banerjee & Solomon, 2003).Moreover, it should be noted that the energy efficiency is not merely dependent on the increasing the level of productivity and consumption efficiency rather, it requires a serious attention towards transformation towards green sources, government support, prices, timely availability, taxation etc. (Hanley et al., 2009). Furthermore, the contribution of energy efficiency in achieving the business sustainability is reported by different researchers in different geographical settings.Precisely, through energy efficiency, a firm can decrease excessive energy consumption, which leads to the lesser generation of pollution and improves environmental performance (Hanley et al., 2009;Ahmed et al., 2020).Moreover, when there is lesser energy consumption, the excessive financial resources can be utilized in any other profitable alternatives, thus improving financial performance (Shin et al., 2018;Dangelico & Pontrandolfo, 2015).Furthermore, through this, the residual of the energy will be available for society to be consumed, thus improving social performance.On the other hand, when an organization saves less energy and progresses far better than the competitors, it actually contributes to the competitive advantage of the firm (Sidik et al., 2019).Hence it is assumed that: H5: Energy Efficiency of the organization will enhance the environmental performance. H6: Energy Efficiency of the organization will enhance the economic performance. H7: Energy Efficiency of the organization will enhance the social performance. H8: Energy Efficiency of the organization will enhance the Competitive Advantage. 
Environmental management accounting, sustainability and competitive advantage Environmental Management Accounting (ACC) has been explained as integration of financial and cost accounting with the objective of decreasing the level of environmental costs, effects and risks, which is an important element in any of the decision making made by the top management (Bresciani et al., 2022).It is considered as one of the viable solutions for attaining sustainability (Zhou et al., 2017;Burritt & Saka, 2006).Moreover, through this kind of accounting, the firms will be in a better position to fulfil their environmental protection responsibility at the minimum or no compromise on the finance and economics of the organizations (Ferreira et al., 2010).Moreover, ACC is becoming the criteria for assessing the level of sustainability that the organization possesses while comparing with the related competitors (Christ & Burritt, 2015).This attribute of ACC also makes it to be a differentiating factor for achieving a competitive advantage (Sidik et al., 2019). Despite its importance and significance, organizations have the implementation of ACC at a minimal rate (Doorasamy, 2015).This eventually decreases ACC's effectiveness by reducing waste, energy, and costs (Schaltegger, 2018).One of the recent studies conducted by Qian et al. (2018) confirms the assistance of ACC in eradicating carbon emissions through effective management and disclosure.The researchers reached this conclusion after conducting the analysis on the data collected from 114 organizations belonging to developed countries like Germany, Japan, Australia, and United States.Similar findings were validated by Phan et al. (2018), who reached this conclusion after analyzing the firm-level data of 208 organizations from Australia.Hence it is assumed that: H9: Environmental Management Accounting of the organization will enhance the environmental performance.H10: Environmental Management Accounting of the organization will enhance the economic performance. H11: Environmental Management Accounting of the organization will enhance the social performance. H12: Environmental Management Accounting of the organization will enhance the Competitive Advantage. Sustainability and competitive advantage The concept of sustainability entails three dimensions that need to be considered, which covers the aspects of society, ecology and finance or economics (Eklington, 1998;Asadi et al., 2017).Normally, organizations are only focused and concerned on the financial aspects for which they take decisions ignoring the societal and ecological concerns (Van der Byl & Slawinski, 2015;Neri et al., 2018).However, it should be noted that focusing merely on financial aspects will only benefit the organization in a shorter period of time.Hence, for long term survival and competitiveness, they have to pay equal importance to all of the three dimensions of sustainability (Neri et al., 2018).Moreover, organizations also prioritize these dimensions based on their comfort, resources and objectives (Fernando et al., 2019).However, all of these dimensions are interrelated.For instance, when organizations strive to eliminate waste, improve the efficiency level of energy consumption to improve their financial excellence, they are actually also taking sufficient steps to improve environmental performance (Khurshid & Darzi, 2016). 
Moreover, these initiatives will not be possible without the support of the human resources who actually drives the strategies and policies.Hence, they are also getting awareness regarding the preservation of the environment, which is also a small step toward improving social performance (Mehta & Chugan, 2015).Hence, through these initiatives, organizations are moving towards attaining a competitive advantage (Asadi et al., 2020).Thus it is assumed that: H13: Environmental Performance of the organization will enhance the Competitive Advantage. H14: Economic Performance of the organization will enhance the Competitive Advantage. H15: Social Performance of the organization will enhance the Competitive Advantage. Methodology Prior to the execution of any research study, the researchers have multiple options in terms of research approach and design.In terms of research approach, a study can be quantitative, qualitative or both, which is referred to as a mixed research approach.With every research approach, there are different pre-requisites, benefits, and challenges that a researcher needs to ascertain before the commencement of any study.In the present study, the researcher opts for the survey research design within the quantitative research approach.In this kind of research design, there is a collection of quantitative data from the potential respondents through the employment of the questionnaire, often referred to as a survey form.In terms of benefits associated with the survey research within the quantitative research approach, it enables to researchers with the collection of quantitative data, in which the outcome is estimated through the application of statistical analysis, which further assists the researcher in drawing the logical interpretations and conclusions (Hulland et al., 2018).Moreover, in this research design, the data is collected from the sample, which is relatively smaller in terms of size compared to the population; however, the outcome generated can generalize the findings over the maximum proportion of the population.In addition to this, the survey methodology is relatively cost-effective, whereas it also fulfils the requirements of randomness and reliability (Cooper et al., 2006). Common method variance Among the pre-requisites and challenges associated with the survey research within the quantitative research approach is the mitigation of the capturing of unwanted operational variance, which is often referred to as 'Common Method Variance' (CMV) (Podsakoff et al., 2003).This type of variance are un-willingly integrated during the execution of the research, hence requiring careful attention and consideration by the researcher executing the research.Several operational methods suggested by Podsakoff et al. 
(2012) enable the researcher to mitigate the CMV.Among these, the most crucial is developing and designing the survey questionnaire.It is said that if the survey form is designed in a way that it provides a hassle-free experience to the respondents while they are responding to that survey form, then this easiness will be reflected in the collected outcome, whereas it significantly mitigates the CMV at the operational stage.The level of easiness can be enhanced through different steps.This includes making the statements of questions easy so that it is easily be answered by the respondents.The other way is to improve the navigation of the survey form.When the questionnaire is designed to provide ease to the respondents while they are moving from one question to another, it will eventually improve the respondents' engagement with the research.Hence, these guidelines will be followed, which will assist in eliminating the CMV at the operational stage. Development of questionnaire As mentioned earlier, the questionnaire is the crucial element in any survey-based quantitative research; therefore, the questions that the survey form comprises must be reliable, robust, and valid.Therefore, in the current study, the researchers rely on the previously established scales that have reported a higher level of internal consistency in other related and similar research.Moreover, these adapted scales were measured on the Likert Scale, having 5 points where '1 represents Strongly Disagree,' '2 represents Disagree,' '3 represents neither Disagree nor Agree,' '4 represents Agree,' and '5 represents Strongly Agree.'The details of the adapted scales are listed in Table 1. Data collection and data screening In addition to this, as the current research is intended to assess the Business Sustainability for attaining the Competitive Advantage through understanding the role of Green Intellectual Capital, Environmental Management Accounting, and Energy Efficiency, the required sample for the research is the firms or organizations.Hence for the current study, the data is collected from the organizations that are operating in China.Moreover, the inclusion criteria comprised of green certifications attained by the firms like ISO 14001 etc.The reason for making this criterion as the requirement is that when the firms are certified with any of the green and sustainability, their operations are more inclined towards sustainability.In contrast, the collected data from these firm and the outcome drawn will assist the other firms in pursuing the green and sustainable initiatives. Initially, there was a distribution of 1000 questionnaires, of which 437 were responded back, leading to the response rate of 43.7%, which is unexpectedly great as similar studies have reported far less response rates.Nevertheless, among the collected 437 responses during the stage of data screening, there was further elimination of 54 survey forms because of containing missing values, leading to 383 responses.Among these 383 responses, 19 were further removed because of their capability to distort the data distribution.They were identified as uni-variate and multi-variate outliers as per the discussion by Hair et al. (2010).Hence the final data used for the current study comprised 364 respondents. 
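The univariate and multivariate outlier screening described above (in the sense of Hair et al., 2010) can be illustrated with a short sketch. The function below is only a hedged illustration with assumed thresholds (|z| > 3 and a chi-square cut at p < .001) and a hypothetical input file; it is not the study's actual procedure or software.

```python
# A minimal sketch of univariate (z-score) and multivariate (Mahalanobis distance)
# outlier screening; thresholds and file name are illustrative assumptions.
import numpy as np
import pandas as pd
from scipy import stats

def screen_outliers(df: pd.DataFrame, z_cut: float = 3.0, p_cut: float = 0.001) -> pd.DataFrame:
    X = df.to_numpy(dtype=float)

    # Univariate screening: flag any case with a standardized score beyond z_cut.
    z = np.abs(stats.zscore(X, axis=0, ddof=1))
    uni_flag = (z > z_cut).any(axis=1)

    # Multivariate screening: Mahalanobis distance of each case from the centroid,
    # compared with a chi-square critical value (df = number of indicators).
    mean_vec = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - mean_vec
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    multi_flag = d2 > stats.chi2.ppf(1.0 - p_cut, df=X.shape[1])

    # Keep only cases flagged by neither criterion.
    return df.loc[~(uni_flag | multi_flag)]

# Hypothetical usage: responses = pd.read_csv("survey_items.csv")
# clean = screen_outliers(responses)   # e.g., 383 rows in, 364 rows retained
```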
Demographic profiles From the collected 364 data, the demographic profiles of the respondents were gauged through different criteria.Regarding gender, 40% of the data means 147 respondents identified themselves as females, whereas 60% of the data means 217 respondents identified themselves as males.In terms of age, 27% of the data, which means 97 respondents were reported to have age less than 30 years, 38% of the data, which means 138 respondents were reported to have an age between 31-40 years, 21% of the data which means 76 respondents were reported to have age between 41-50 years whereas 15% of the data which means 53 respondents were reported to have age greater than 51 years.In terms of the size of the organizations from which these respondents belong and the number of employees being hired by these firms, 20% of the data, which means 74 respondents were, belong to the organizations having less than 100 employees, 37% of the data which means 134 respondents belonged to the organizations having between 101 to 250 employees, 20% of the data which means 74 respondents belonged to the organizations having between 251 to 450 employees, 23% of the data which means 82 respondents belonged to the organizations having greater than 450 employees.In terms of nature of the industry from where the firms belong, 31% of the data, which means 112 respondents belonged to the automobile industry, 37% of the data, which means 135 respondents belonged to the electronics industry, 13% of the data which means 49 respondents belonged to the chemical industry, 16% of the data which means 57 respondents belonged to the Pharmaceutical industry, whereas rest of them which are 3% of the data that means 11 respondents belonged to the industry other than the industries mentioned above.The decomposition of the demographic profiles of the respondents is listed in Table 2. Estimations and results Based on the objectives, proposed hypotheses and direction of the relationships among the focused variables, the current study has applied the statistical technique which belongs to the second generation.The difference between first-generation and second-generation statistical techniques is their capability to handle multiple criterion variables even at different ends.For instance, referring to Figure 1, if a first-generation technique is applied on the same framework, linear regression will be applied four times as there are four criterion variables.On the other hand, the whole analysis can be performed in a one-go in any second-generation technique.Hence, to minimize the complexities, the current study has utilized the second generation technique. Moreover, within the second generation technique, the current study has applied 'Partial Least Square-Structural Equation Modelling' (PLS-SEM), which is superior to other co-variance based conventional SEM techniques in terms of their predictability, variation explanation, robustness and handling complex research models (Hair et al., 2019) as are the requirements of the present study.Moreover, for the application of PLS-SEM, Hair et al. (2016) have provided guidelines according to which the application should be considered legitimate if it comprised the assessments of the two level of the model.These are the inner model and the outer model.There are further evaluation criteria for assessing the model which is discussed in the subsequent sections.Apart from that, the application of PLS-SEM is made through the help of SmartPLS developed by Ringle et al. 
(2015).This software is considered as the software with the most user-friendly interface and robust generation of the outcome. Assessment of outer model At this level of the model, the assessment of observed variables, with the focused variables, which includes predictors and criterion as shown in Figure 1 is assessed.The observed variables are actually the survey questions upon which the data is collected from the respondents.Their themes and operationalization must be highly related to their respective latent variables.At the outer model, two types of validation need to be ensured, which are discussed in the following sub-sections. Convergent validity In this type of validity, the level of convergence that the observed variables reflect with the respective latent variables is assessed (Mehmood & Najmi, 2017).This degree of convergence eventually forces the formation of a latent variable and hence needs to be assessed as only those observed variables are converged that are operationally reflecting the same phenomena.In this validity, there are three evaluation criteria were assessed.The factor loadings, internal consistency, and 'Average Variance Extracted' (AVE).For factor loadings, the stated threshold by Hair et al. (2016) is the value greater than 0.7, which is found in the present study as shown in Table 3.For internal consistency, which is further assessed by Cronbach's Alpha and Composite Reliability, the stated threshold by Hair et al. (2016) is the value greater than 0.7, which is also found in the present study, as shown in Table 3.For AVE, the stated threshold by Hair et al. (2016) is the value greater than 0.5, which is found in the present study as shown in Table 3.Hence, through the output summarized in Table 3, the legitimacy of convergent validity is assessed and assured. Discriminant validity In this type of validity, the level of divergence that the observed variables reflect with the observed variables of other latent variables is assessed (Mehmood & Najmi, 2017).This degree of divergence eventually forces the formation of the latent variable and hence needs to be assessed as those observed variables that are operationally different must form different variables that are operationally reflecting the different phenomena.There are three different criteria utilized in the current study for evaluating this validity.This includes cross loadings, Fornell-Larcker criterion and 'Heterotrait-Monotrait ratio of correlations' (HTMT).The outcome of assessments of Discriminant Validity as per the above-mentioned criteria are discussed below. In the cross-loadings, the difference of the factor loading loaded on the latent variables and the loading within their respective latent variables is assessed.The difference as per Gefen and Straub (2005), must exceed 0.1.The assessment of Discriminant Validity through cross-loadings is shown in Table 4. In the Fornell-Larcker criterion (1981), the comparison is drawn of the square root of AVE of a construct with the value of the correlations of that particular construct with the other latent variables, whereas the square root of AVE of a construct must be higher.Referring to Table 5, the values positioned at the diagonal and are highlighted show the square root of AVE of a construct, whereas the other remaining variables reflect the value of the correlations.The successful assessment of Discriminant Validity through Fornell-Larcker criterion are shown in Table 5. 
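For readers who want to verify these measurement-model checks outside SmartPLS, the following minimal sketch shows how AVE, composite reliability and the Fornell-Larcker comparison can be computed from standardized loadings and construct correlations. The loading values shown are hypothetical; the thresholds (0.5 and 0.7) are those cited from Hair et al. (2016).

```python
# A small sketch, assuming standardized loadings, of the measurement-model checks
# listed above: AVE > 0.5, composite reliability > 0.7, and the Fornell-Larcker
# comparison of sqrt(AVE) with inter-construct correlations. Values are illustrative.
import numpy as np

def ave(loadings):
    """Average Variance Extracted for one construct (standardized loadings)."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """Composite reliability (rho_c) for one construct."""
    lam = np.asarray(loadings, dtype=float)
    num = np.sum(lam) ** 2
    return float(num / (num + np.sum(1.0 - lam ** 2)))

def fornell_larcker_ok(ave_values, construct_corr):
    """True if sqrt(AVE) of every construct exceeds its correlations with all others."""
    root_ave = np.sqrt(np.asarray(ave_values, dtype=float))
    corr = np.abs(np.asarray(construct_corr, dtype=float)).copy()
    np.fill_diagonal(corr, 0.0)
    return bool(np.all(root_ave > corr.max(axis=1)))

# Hypothetical check for a single construct measured by four items:
lam_gic = [0.78, 0.82, 0.75, 0.80]
print(ave(lam_gic), composite_reliability(lam_gic))   # both should clear 0.5 / 0.7
```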
The third and most recent of the three criteria for assessing discriminant validity is the HTMT. This criterion was proposed by Henseler et al. (2015) and has established its robustness relative to the available alternatives. Henseler et al. (2015) proposed a cut-off threshold for the HTMT of 0.85, which is satisfied in the present study. The successful assessment of discriminant validity through the HTMT criterion is shown in Table 6.
Assessment of the inner model
At this level of the model, the predictive and explanatory capability of the model is assessed, reflecting how well the predictors explain the criterion variables. Two criteria are assessed in the present study: the coefficient of determination and cross-validated redundancy. For the coefficient of determination, denoted R-square, the threshold stated by Cohen (1988) is that a value above 0.26 is substantial and a value below 0.02 is low, while any value between the two is considered medium. The other criterion is cross-validated redundancy, denoted Q-square, for which Hair et al. (2016) suggest that any value greater than 0 is acceptable. The assessment of the inner model is shown in Table 7.
Hypotheses testing
Another advantage of PLS-SEM over conventional SEM is that the significance of the relationships is computed through bootstrapping. This procedure computes significance after drawing multiple sub-samples from the data set; Hair et al. (2016) suggest 5,000 sub-samples, which is the number used in the current study.
Firstly, the association of Green Intellectual Capital with the criterion variables is assessed. Green Intellectual Capital has a positive and significant impact, at the 5% level of significance, on environmental performance (b = 0.327, p < 0.05), economic performance (b = 0.126, p < 0.05), social performance (b = 0.334, p < 0.05) and competitive advantage (b = 0.141, p < 0.05). In other words, when an organization invests in developing its intellectual capital with up-to-date knowledge of greenness and awareness of ecological issues, its confidence in green and societal initiatives grows, which in turn helps it improve its environmental, economic and social performance and attain a competitive advantage over the competitors in the market. These findings are in accordance with the conclusions drawn by Sidik et al. (2019), Yusliza et al. (2020) and Yusoff et al. (2019).
Secondly, the association of Energy Efficiency with the criterion variables is assessed. Energy Efficiency has a positive and significant impact, at the 5% level of significance, on environmental performance (b = 0.271, p < 0.05), economic performance (b = 0.309, p < 0.05), social performance (b = 0.129, p < 0.05) and competitive advantage (b = 0.233, p < 0.05). In other words, when an organization invests in improving the productivity and efficiency of its energy consumption, and in environmentally friendly energy, its confidence in green and societal initiatives grows, which supports better environmental, economic and social performance and a competitive advantage over the competitors in the market. These findings are in accordance with the conclusions drawn by Sidik et al. (2019).
Thirdly, the association of Environmental Management Accounting with the criterion variables is assessed. Environmental Management Accounting has a positive and significant impact, at the 5% level of significance, on environmental performance (b = 0.224, p < 0.05), economic performance (b = 0.214, p < 0.05), social performance (b = 0.294, p < 0.05) and competitive advantage (b = 0.194, p < 0.05). In other words, when an organization invests in keeping, assessing and maintaining records of its environmental initiatives and of the pollution it generates, its confidence in green and societal initiatives grows, which supports better environmental, economic and social performance and a competitive advantage over the competitors in the market. These findings are in accordance with the conclusions drawn by Sidik et al. (2019).
Lastly, the association of the dimensions of business sustainability with the competitive advantage is assessed. Environmental performance (b = 0.293, p < 0.05), economic performance (b = 0.291, p < 0.05) and social performance (b = 0.275, p < 0.05) each have a positive and significant impact on competitive advantage at the 5% level of significance. In other words, when an organization invests in green initiatives that improve its environmental, economic and social performance by eliminating activities that damage the environment at the cost of the economy or of society, it strengthens its position relative to the competitors in the market. These findings are in accordance with the conclusions drawn by Sidik et al. (2019). The outcome generated from the PLS-SEM is summarized in Table 8 and shown in Figure 2.
Conclusion and recommendations
Researchers are exploring different potential cleaner solutions for enhancing business sustainability in order to sustain a competitive advantage, and most have explored it within the dynamics of manufacturing operations. Beyond this, a group of researchers ascertained the importance of intangible capital, referred to as intellectual capital, which has emerged as one of the potential solutions for promoting sustainability within the operations of organizations.
The present study contributes to the literature on emissions by manufacturing companies in several respects. The majority of the literature from environmental economics explores the relationships at the country, panel and/or global level, whereas the current study explores them at the level of the manufacturing industry. In addition, the current study examines three potential solutions identified through the literature on the basis of their theoretical and practical significance and relevance. Furthermore, the current study explores the context of China, which is gradually expanding its manufacturing operations. Statistically, the application of variance-based SEM in explaining the relationships among the studied variables can also be considered a contribution. Lastly, the current study also contributes in terms of attaining business sustainability and sustaining competitive advantage.
Limitations and directions for future research
Similar to other research, the present study has certain limitations. Firstly, only three potential solutions for steering organizational operations towards business sustainability and attaining a competitive advantage are considered: Green Intellectual Capital, Environmental Management Accounting and Energy Efficiency. The literature contains other solutions that also require in-depth exploration. Secondly, a group of researchers has argued that Green Intellectual Capital has three dimensions, namely human, structural and relational capital; exploring each of them separately could give a better understanding of the nature of their relationships with business sustainability and competitive advantage. Thirdly, in terms of statistics, the current study explores only linear relationships among the variables; this limitation could be addressed by exploring the phenomena through artificial-intelligence-based estimation and prediction techniques. Lastly, the current study is based on the geographical context of China, which is developing very rapidly; there is therefore a need to explore the contexts of other developing countries, which could be an essential contribution to the literature.
Table captions (tables not reproduced here): Table 1. Source of measures. Table 3. Measurement model results. Table 4. Results of loadings and cross loadings. Table 6. Results of HTMT ratio of correlations. Table 7. Predictive power of construct.
Supernova explosions of massive stars and cosmic rays Most cosmic ray particles observed derive from the explosions of massive stars, which commonly produce stellar black holes in their supernova explosions. When two such black holes find themselves in a tight binary system they finally merge in a gigantic emission of gravitational waves, events that have now been detected. After an introduction (section 1) we introduce the basic concept (section 2): Cosmic rays from exploding massive stars with winds always show two cosmic ray components at the same time: (i) the weaker polar cap component only produced by Diffusive Shock Acceleration with a cut-off at the knee, and (ii) the stronger $4 \pi$ component with a down-turn to a steeper power-law spectrum at the knee, and a final cutoff at the ankle. In section 3 we use the Alpha Magnetic Spectrometer (AMS) data to differentiate these two cosmic ray spectral components. The ensuing secondary spectra can explain anti-protons, lower energy positrons, and other secondary particles. Triplet pair production may explain the higher energy positron AMS data. In section 4 we test this paradigm with a theory of injection based on a combined effect of first and second ionization potential; this reproduces the ratio of Cosmic Ray source abundances to source material abundances. In section 5 we interpret the compact radio source 41.9+58 in the starburst galaxy M82 as a recent binary black hole merger, with an accompanying gamma ray burst. This can also explain the Ultra High Energy Cosmic Ray (UHECR) data in the Northern sky. Thus, by studying the cosmic ray particles, their abundances at knee energies, and their spectra, we can learn about what drives these stars to produce the observed cosmic rays. their supernova explosions. When two such black holes find themselves in a tight binary system they finally merge in a gigantic emission of gravitational waves, events that have now been detected. The radio interferometric data demonstrate that all of these stars have powerful magnetic winds. After an introduction (section 1) we introduce the basic concept (section 2): Cosmic rays from exploding massive stars with winds always show two cosmic ray components at the same time: (i) the weaker polar cap component only produced by Diffusive Shock Acceleration, showing a relatively flat spectrum, and cut-off at the knee, and (ii) the stronger 4π component, which is produced by a combination of Stochastic Shock Drift Acceleration and Diffusive Shock Acceleration, with a down-turn to a steeper power-law spectrum at the knee, and a final cutoff at the ankle. In section 3 we use the Alpha Magnetic Spectrometer (AMS) data to differentiate these two cosmic ray spectral components; these two cosmic ray components excite magnetic irregularity spectra in the plasma, and the ensuing secondary spectra can explain anti-protons, lower energy positrons, and other secondary particles. Cosmic ray electrons of the polar cap component interact with the surrounding photon field to produce positrons by triplet pair production, and in this manner may explain the higher energy positron AMS data. In section 4 we test this paradigm with a theory of injection based on a combined effect of first and second ionization potential; this reproduces the ratio of Cosmic Ray source abundances to source material abundances. 
We can interpret the abundance data using the relation of the total number of ions enhanced by Q 2 0 A +2/3 , where Q 0 is the initial degree of ionization, and A is the mass number. This interpretation implies the high temperature as observed in the winds of blue super-giant stars; it also requires that cosmic ray injection happens in the shock travelling through such a wind. Most injection happens at the largest radii before slowing down due to interaction with the environment. In section 5 we interpret the compact radio source 41.9+58 in the starburst galaxy M82 as Introduction The origin of Cosmic Rays (CRs), directly observed energetic particles, is still not fully understood. But with a large number of experiments we now have a basis to ask better questions. These particles, discovered in 1912 by Hess, extend in energy up to 10 21 eV. They have characteristic spectral features, referred to as the knee, where the overall spectrum turns down around a few 10 15 eV, as well as another transition near 3 · 10 18 eV, referred to as the ankle, where the overall spectrum turns up. This turn-up is often believed to be the likely transition between a cosmic ray origin in our Galaxy and an origin in extragalactic sources. The spectrum shows a final turn-down around 10 20 eV. Today we also have a variety of observational support for energetic particles elsewhere in the universe. In our Galaxy as well as in other galaxies and active galactic nuclei (2016), Amato & Blasi (2017). Earlier work is Berezinsky et al. (1990), with an extensive review by Ginzburg & Ptuskin (1976). Some older fundamental books are Heisenberg (1953), Ginzburg & Syrovatskij (1963), Hayakawa (1969), with a review by L. Biermann (1953) and the original article collection edited by Rosen (1969). A more advanced theory, called Diffusive Shock Acceleration (DSA), was then derived in a series of papers by Krymsky (1977), Axford et al. (1977), Blandford & Ostriker (1978), and Bell (1978a, b). A thorough review was given by Drury (1983). All these early studies are based on a supernova shock advancing through an ionized and magnetic medium. Jokipii (1982Jokipii ( , 1987 then added drift acceleration (here used as Stochastic Shock Drift Acceleration, StSDA); today we believe that both, DSA and StSDA, are important (e.g., Lee et al. 1996 CRs and magnetic fields mixing with normal plasma are discussed in Biermann (1994b). The unstable shocks were again emphasized in Bell & Lucek (2000, 2001 and Amato & Blasi (2006), who show how magnetic fields can be strongly enhanced, increasing violent motions. We also note that this turbulence leads to energy gains from higher curvature qualitatively by analogy with jets from AGN (Biermann 1994a, b), then quantitatively (Milgrom & Usov 1995Vietri 1995Vietri , 1996Vietri , 1998 Thoudam et al. (2016) show that to the understand the CR spectrum and chemical composition a contribution from very massive stars such as Wolf-Rayet (WR) stars is required, stars that explode into their winds. This implies that we also include RSG stars (e.g., Woosley et al. 2002). The Fermi and H.E.S.S. observations of cosmic ray interaction with ambient gas producing γ-rays at GeV to TeV gamma energies (Abramowski et al. 2014) are consistent with a production by nuclei at relatively high energy such as given by RSG and BSG star explosions. The SN rate in our Galaxy (Diehl et al. 2006(Diehl et al. 
, 2010, is approximately one per 75 Kronberg 1998) at a distance of only about 3 Mpc; this can be attributed to a higher SN rate, as well as to the higher pressure in M82's ISM. In a transition from a wind-SN to a Sedov-SN (i.e. energy conserving SN shocks) as the shock hits the surrounding medium, the luminosity scales with the magnetic field B as B 1.7 (Biermann & Strom 1993): Hence the lower ISM pressure and magnetic field in our galaxy imply in a steady state, supernova evolution in M82 occurs at a luminosity many powers of ten above that in our Galaxy. At the rate of one such explosion about every 600 years and a very short duration of high radio emission, the fact that we have never identified such an explosion in about 70 years of radio observations is understandable. Including stars down to about 25 M , so RSG star explosions as well, shortens this time scale and changes the rate to one every 400 years. It has been recognized early that propagating CR particles interact in the interstellar medium (ISM). Already in the 1940s it was clear that their propagation is chaotic, con-fined by magnetic fields, and adequately described by some sort of diffusion process (Chandrasekhar 1943, L. Biermann 1950, L. Biermann & Schlüter 1950, L. Biermann & Schlüter 1951). Predictions were made in the 1980s for anti-protons and positrons, and other secondary particles, by, e.g., Protheroe et al. (1981), Protheroe (1981Protheroe ( , 1982. One well known process is the formation of neutral or charged pions, which decay into electrons, positrons, neutrinos, and/or photons (e.g. Stecker 1970Stecker , 1971). Another process is, e.g., the formation of unstable nuclei that emit either a positron, an electron or a γ-photon Biermann et al. 2015). However, as the SN-shock hits the wind-shell around a star, we use a locally excited spectrum of magnetic irregularities (e.g. Biermann 1998, 2009). This constitutes a local "nested leaky box". The locally excited spectrum leads to an energy dependence of E −5/9 of the secondary/primary ratio in the low-energy regime. This is very close to the often used E −0.6 . At higher energy this locally excited spectrum yields the energy dependence In this article we will address how the recent observational data can be included in a theoretical description of cosmic ray production: 1a) We first describe the radio data of SNe of very massive stars, and the young radiosupernovae (RSNe) in the starburst galaxy M82. Using the approaches of Parker (1958) and Cox (1972) we then derive the radial dependence of the magnetic field and the shock speed for explosions into a stellar wind (for an early discussion see Völk & Biermann 1988), differentiating between RSG stars with a slow, dense wind and BSG stars with a fast, tenuous wind. 2) The AMS data on positrons, anti-protons, Lithium, Beryllium, Boron, and more common elements such as Hydrogen, Helium, Carbon, Oxygen, etc. can be explained with propagation and interaction of freshly accelerated cosmic rays in a turbulent wavefield excited by the two components of the cosmic rays themselves, in the immediate neighborhood of the exploded star and its wind. 3 With our treatment we aim to establish relevant scales for the violent motions in the cosmic ray/magnetic fields/ionized plasma shock system. These scales are important for deducing the proportions of gradient and curvature drift that contribute to the stochastic shock drift acceleration (StSDA), in addition to the diffusive shock acceleration (DSA), e.g. 
Jokipii (1982), Drury (1983). We recognize the spectrum of magnetic irregularities is excited by the CRs themselves. Important physical ingredients to consider are (i) pitch angle scattering, (ii) feeding into shock acceleration, and (iii) the dependence of these two processes on charge Q 0 and mass A of an ion. We propose a cosmic ray description that can be tested with further observations. 2. Massive star explosions observed at radio frequencies: M82 candidates and young radio supernovae For many years, data giving us insight on the magnetic field in exploding massive stars had been rare, but lately the situation has been greatly improved, and the limited data suggest, surprisingly, rather common properties. Fortunately there are now interferometric radio data on massive stars, stars that explode into their predecessor winds (see Völk & Biermann 1988, for an early discussion of the consequences for cosmic ray (CR) acceleration, and earlier data). These observations yield magnetic field, shock speed, and energetic electron spectrum as a function of radius and time; modeling these data also yields information on the prior wind mass loss. The following tables give information on these observations: Here the references are, in sequence, r1: Fransson & Björnsson (1998) . These papers also use data from other wave-lengths such as X-rays or optical, which never have the spatial resolution that the interferometric radio data can easily supply. If the inferred wind velocity of the In table 1 we show from left to right: in the first column the name of the supernova (SN) explosion; then type of star which exploded: a RSG or a BSG star; next the radius R sh at which the determinations were made, usually close to 10 16 cm, in the form log R sh cm , which is the typical radius for radio data due to optical depth effects; next the magnetic field determined in the shocked region B sh in the form log B sh Gauss ; then the velocity of the shock including a possible Lorentz factor of the shock Γ sh U sh /c in the form log(Γ sh U sh /c); and finally the references used. In table 2 we first show again the name as an identifier; then again the type of star exploded; then the stellar mass lossṀ determined from a The average and mean error for {log(R sh B sh {U sh /c} 2 )} is 14.3 ± 0.2, while the average and mean error for {log(R sh B sh )} is 15.9 ± 0.2. These latter two quantities correspond to characteristic energies, (1/8) e Z R sh B (U sh /c) 2 and (1/8) e Z R sh B, where e is the elementary charge, and Z is the numerical charge of a CR nucleus considered. We interpret these two quantities below as knee and ankle. The radio data, including those from the compact radio sources in M82 (almost all of which are interpreted as exploded BSG stars; see below), are consistent with an interpretation that the shocks running through winds have a radial run of the magnetic field using r −1 , corresponding to a density dependence of r −2 as long as the shock velocity is constant; the late-time dependence of these data also shows that, prior to the explosion, the wind sometimes changed. These arguments here emphasize the SN-shock racing through a stellar wind with such properties; below we will discuss the ensuing complications when the piston driving the shock runs out of steam, and also how the compact radio sources in M82 help us understand the long term evolution. Soderberg et al. 
(2010b) focussed on the early phase, when the non-thermal radio emission becomes optically thin at the peak luminosity $L_{\nu,p}$, and showed both the distribution of the shock speed and the associated peak radio luminosity for a sample of radio SNe. Their distributions confirm that $U_{sh}/c$ is typically about 0.1 and that the magnetic field is typically 0.4 Gauss at the associated radius $r$ of $3 \cdot 10^{16}$ cm. This is consistent with 1.2 Gauss at $10^{16}$ cm for $B \sim r^{-1}$, at their choice of nominal parameters. These parameters are (i) the fraction of post-shock energy density in electrons relative to that in magnetic fields, $\epsilon_e/\epsilon_B$; (ii) the filling fraction $f/0.5$; and (iii) the observed peak radio luminosity, independent of the associated radio frequency. Their model is rather general, based on earlier work by Chevalier (1982, 1996, 1998); here $k_{CR}$ is the ratio of the energy density in energetic nucleons relative to electrons, for which we use the observed ratio of 100. We assume equipartition between energetic nucleons and magnetic fields. We adopt the optically thin case, and yet at the nominal parameter values this yields a very similar result as in Soderberg et al. (2010a), as cited above. Applying this expression to the compact sources in M82 yields about the same value for the product $r B$, supporting the view that the expansion from a radial scale of $10^{16}$ cm to pc scale is unimpeded, with indeed $B \sim r^{-1}$, suggesting these were all BSG stars with tenuous fast winds. We discuss the necessary piston masses further below. This finding will be important further below, when we discuss CR injection.
We translate these observed numbers from the table into accelerated particle energies (see Biermann 1993, and below). This gives (i) at nominal parameters an energy of $10^{17.5 \pm 0.2}\, Z$ eV, given by $(1/8)\, e\, Z\, r\, B$, which we interpret as the ankle of CRs; here $Z$ is again the charge of the CR nucleus; (ii) the second energy is $10^{15.9 \pm 0.2}\, Z$ eV, given by $(1/8)\, e\, Z\, r\, B\, (U_{sh}/c)^2$, which we interpret as the knee. This relationship in acceleration is due to the dependence of the Jokipii (1987) perpendicular scattering coefficient, $\kappa \sim E\, r\, U_{sh}/Z$ in a magnetic wind; balancing the acceleration time $\kappa/U_{sh}^2$ with the flow time $r/U_{sh}$ then allows the spatial limit (Hillas 1984) to be reached (see also below, and Biermann 1993). We note here that we do not assume that the magnetic field is parallel to the shock normal (see, e.g., Jokipii 1982, 1987, Biermann 1993, Meli & Biermann 2006); this assumption is implicitly made in Chakraborti et al. (2011), a follow-up paper to Soderberg et al. (2010b). In fact, the Parker (1958) limit solution ($B_\phi \sim r^{-1}$), taken together with the radio observations shown above (Table 1 and Table 2) and the M82 compact source observations (Kronberg et al. 1985; explained below), demonstrates that the magnetic field runs as $r^{-1}$ all the way out for BSG star winds, the most favorable case. This implies that a SN-shock is perpendicular, so fulfilling the assumption in Jokipii's (1987) argument.
We can summarize these radio data of observed massive stars exploding into their wind:
(i) Reference radial distance $r_0$ for such radio observations (due to optical depth effects); typical is $r_0 = 10^{16}$ cm.
(ii) Upstream shock velocity $U_{sh,1}$; typical is $U_{sh,1} \simeq 0.1\, c$.
(iii) Magnetic field $B(r_0)$; typical is $B(r_0) \simeq 1$ Gauss.
(iv) Radial dependence; typical is $B(r) \sim r^{-1}$.
All these numbers and relationships are derived from radio observations.
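As a quick plausibility check (our own back-of-the-envelope computation, not code from the paper), the two characteristic energies follow directly from the tabulated averages $\log(R_{sh} B_{sh}) = 15.9$ and $\log(R_{sh} B_{sh} (U_{sh}/c)^2) = 14.3$ quoted earlier, in Gaussian cgs units:

```python
# Check of the ankle and knee energies E = (1/8) e Z (R B) and E = (1/8) e Z R B (U/c)^2,
# using the tabulated averages in cgs; this is an independent arithmetic check only.
import numpy as np

E_CHARGE_ESU = 4.803e-10      # elementary charge [esu]
ERG_TO_EV = 6.242e11          # 1 erg expressed in eV

def characteristic_energy_eV(log10_rB, Z=1):
    """E = (1/8) e Z (R B), with (R B) given as log10 in cgs, converted to eV."""
    return 0.125 * E_CHARGE_ESU * Z * 10.0 ** log10_rB * ERG_TO_EV

ankle = characteristic_energy_eV(15.9)   # ~10^17.5 eV for Z = 1
knee = characteristic_energy_eV(14.3)    # ~10^15.9 eV for Z = 1
print(f"ankle ~ 10^{np.log10(ankle):.1f} Z eV, knee ~ 10^{np.log10(knee):.1f} Z eV")
```

Both values reproduce the quoted $10^{17.5 \pm 0.2}\, Z$ eV and $10^{15.9 \pm 0.2}\, Z$ eV within the rounding of the tabulated averages.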
Another option is indeed some enhancement of the magnetic field during the supernova shock advance through wind, possibly due to the Bell-Lucek (2000, 2001 mechanism (but see also, e.g., Amato & Blasi 2006, Fraschetti 2013, Mizuno et al. 2014. An alternative could be that the magnetic fields are pulled along by the piston from the highly magnetized layers deep inside pre-SN star, now exposed and visible in star's wind abundances, and mixed into the post-shock region. Isotope abundances in cosmic rays observed might shed light on this speculation. This latter mechanism might allow us to understand, why the magnetic field is always the same order of magnitude. Supernova shocks in stellar winds Consider a shock driven by a supernova explosion running through the wind of the predecessor star (Parker 1958 Biermann 1997, Seemann & Biermann 1997. We assume that this wind has a density structure of r −2 (a steady wind), a magnetic field of B ∼ r −1 (Parker 1958: lines of force of the magnetic field coincide with the stream lines), a constant wind velocity of V W (again a steady wind), an associated Alfvén velocity V A < V W (Weber & Davis 1967, otherwise there would be excessive angular momentum transport), as well as a piston driving it of mass M piston . The shocked region in the wind comprises the radial fraction of 1/4 from the Rankine-Hugoniot shock conditions. We examine these assumptions below. The accumulated mass from sweeping up wind material can be written as where r is given byṙ = U sh , now allowed as a variable. The factor of 1/4 derives from the thickness of the shell of r/4, and the factor of 4 derives from the density jump in the shock (strong shock with adiabatic gas constant 5/3). As reference we use again r 0 = 10 16 cm. The accumulated mass slowly rises linearly with radius r. The energy equation can be written more generally as whereṙ = U sh . The energy can be written as an initial condition, since at first the accumulated wind mass is negligible, so that where U sh,init is the initial shock velocity; however, we will assume that this velocity is constant, until the accumulated mass exceeds the piston mass. This equation can be integrated to give It can be immediately seen that this expression has the correct limits for M piston −> 0. We obtain and for M piston large we obtain assuming here, that the shock velocity U sh is constant, with the switch-over at 4 π r r 0 2 ρ W,0 M piston . Now to put numbers into this, let us take the example of a BSG star, with wind velocity 3 km/s, and a stellar mass loss of 10 −5Ṁ −5 M yr −1 . The reference density ρ W,0 is given by giving with our example and an accumulated mass of ∆M W (r) = 4 π r r 0 2 ρ W,0 = 10 −4.8 M r r 0 . These numbers can be checked using observations of binary stars, in which one partner is a super-giant star, and the other partner is a compact star: Such objects have been seen in gamma rays and X-rays, allowing their column density to be determined (e.g. The large numbers for the column density deduced are consistent with the values implied here (10 21 to 10 24 cm −2 ), depending on whether we observe a compact object circling a BSG or an RSG star. Furthermore, the observations demonstrate that these winds are clumpy, consistent with expectations (e.g. Owocki et al. 1988). 
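To make the numbers of this example explicit, here is a small sketch. The wind velocity is an assumption on our part (a fast BSG wind of a few 10^3 km/s; 2000 km/s reproduces the 10^-4.8 M_sun normalization quoted above, so the "3 km/s" printed earlier appears to have lost a factor of 10^3 in extraction), and the binary separation used for the column-density line is purely illustrative.

```python
import math

M_SUN, YR, M_P, PC = 1.989e33, 3.156e7, 1.67e-24, 3.086e18
r0 = 1e16                          # cm, reference radius

Mdot = 1e-5 * M_SUN / YR           # stellar mass loss, g/s
V_W = 2.0e8                        # wind velocity, cm/s (assumed BSG value)

rho_W0 = Mdot / (4 * math.pi * r0**2 * V_W)        # steady wind: rho ~ r^-2

def swept_mass(r):
    """Accumulated wind mass Delta M_W(r) = 4 pi r r0^2 rho_W0 (rises ~ r)."""
    return 4 * math.pi * r * r0**2 * rho_W0

print(f"rho_W(r0) = {rho_W0:.1e} g/cm^3  (n ~ {rho_W0 / M_P:.0f} cm^-3)")
print(f"Delta M_W(r0)   = 10^{math.log10(swept_mass(r0) / M_SUN):.1f} M_sun")
print(f"Delta M_W(3 pc) = 10^{math.log10(swept_mass(3 * PC) / M_SUN):.1f} M_sun")

# Wind column density seen by a compact companion at an assumed separation a:
a = 1e13                           # cm, illustrative binary separation
N_H = (rho_W0 / M_P) * r0**2 / a
print(f"column density at a = 1e13 cm: ~{N_H:.0e} cm^-2")
```

For an RSG wind, with a density roughly 100 times higher at the same mass-loss rate, the same lines give column densities toward the upper end of the 10^21 to 10^24 cm^-2 range quoted above.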
Assuming the wind itself to be super-Alfvénic requires the magnetic field in the unperturbed wind to be (written as a constraint on the magnetic field at radial distance r 0 , and assuming that the magnetic field runs as r −1 ): We note again that a sub-Alfvénic wind would transport angular momentum excessively (Parker 1958 This means for a possible final wind radius of 3 pc, that in the case of BSG stars the piston needs to exceed which is well within the uncertainties. The expression "final wind radius" is that radius when the shock stalls due to encountering the wind-shell, built up during the lifetime of the stellar wind; occasionally we just use "final" when the application is clear from the context. Below we show from the gamma ray line that the piston mass is in fact about 0.1 M , and so leads to a high energy. This translates into a kinetic energy (using free expansion all the way out, as implied by the M82 sources) of implying that the SN energy itself needs to be larger, consistent with many other arguments, as noted below. Using the 26 Al line at 1809 keV we can estimate the piston mass: The observed lines have a half width of about 300 km/s (Diehl 2017; full width 593 km/s). In our picture we interpret this number as momentum conservation of the ejecta, so that Assuming then the same piston mass for the conditions for a RSG star wind, with a density about 100 times higher, implies that equality is reached far below 3 pc: implies a radius of 10 1.7 r 0 = 10 17.7 cm, and beyond, the velocity goes down with r −1/2 , so up to 1 pc, for instance, would go down by 10 −0.4 ; to 3 pc it would go down about 10 −0.7 . These estimates are fairly uncertain, but the key consequence is that for RSG stars the final shock velocity hitting the wind shell is expected to be far below the initial velocity of 0.1 c. We will need this much lower shock velocity in RSG star winds later in our explanation of the anti-protons observed by AMS. At this point it helps to note that in explosions into the interstellar medium (ISM) we have first a free-expansion phase, when the piston mass dominates over the accumulated mass from the environment, then a Sedov-phase, i.e. the stage when the accumulated mass dominates over the piston mass, but the energy is still constant (e.g. Cox 1972), before cooling sets in. Analogously we can distinguish a free expansion phase, a wind-Sedov phase, and a final phase for explosions into winds. Since these explosions occur into a wind of density run ρ ∼ r −2 , all the dependencies on radius r for the wind-Sedov differ from the normal ISM-Sedov case. There might be a useful analogy between the transition from a piston-dominated stage to a wind-Sedov shock phase argued here, to the transition in Solar ejections driven by powerful Solar flares (Pinter & Dryer 1990) from piston-driven to energy-conserving shocks. We do identify here the compact sources in M82 with the slightly later stages of the If the magnetic field strength were due to the shock itself, then In the free expansion case, the radial run of the magnetic field B is the same as in the Parker limit (Parker 1958 well-observed cases is that the magnetic field is about the same. In the wind-Sedov limit (i.e. low piston mass M piston ) this is readily rewritten as so quite a bit steeper than in the free expansion case, when it runs as r −1 . 
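The transition from piston-dominated (free) expansion to the wind-Sedov stage can be written down in a few lines. The sketch below assumes a constant initial shock speed of 0.1 c, a piston of 0.1 M_sun as estimated from the 26Al line, and the swept-up-mass normalizations used above (10^-4.8 M_sun per r_0 for a BSG wind, about 100 times more for an RSG wind); the resulting RSG slow-down factors agree with the 10^-0.4 (1 pc) and 10^-0.7 (3 pc) estimates above to within the rounding of those numbers.

```python
import math

M_SUN, PC, C = 1.989e33, 3.086e18, 3e10
r0 = 1e16

def shock_speed(r, U_init=0.1 * C, M_piston=0.1 * M_SUN, dM_per_r0=10**-4.8 * M_SUN):
    """Free expansion (U = const) while the swept-up wind mass is below the
    piston mass; afterwards the energy-conserving wind-Sedov stage gives
    U ~ r^-1/2 (swept mass grows ~ r at constant kinetic energy)."""
    if dM_per_r0 * (r / r0) <= M_piston:
        return U_init
    r_eq = M_piston / dM_per_r0 * r0
    return U_init * (r / r_eq) ** -0.5

# Kinetic energy carried by a 0.1 M_sun piston at 0.1 c:
E_kin = 0.5 * 0.1 * M_SUN * (0.1 * C) ** 2
print(f"E_kin ~ 10^{math.log10(E_kin):.1f} erg")     # of order 10^51 erg

# RSG star wind: density ~100x higher, so the swept mass per r0 is ~100x larger.
for r in (1 * PC, 3 * PC):
    U = shock_speed(r, dM_per_r0=10**-2.8 * M_SUN)
    print(f"RSG wind, r = {r/PC:.0f} pc: U_sh/U_init = 10^{math.log10(U / (0.1 * C)):.1f}")
```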
One other speculative possibility is that the magnetic field derives from the interior of the stars, since the wind-base does expose (for BSG stars, at least) already the deeper layers, as visible in the chemical abundances. One can also estimate how far the wind-Sedov case may reach in case of a Red Super Giant (RSG) star, when the wind velocity is about 100 times lower than in the Blue Super Giant (BSG) stars, and so at the same rate of stellar mass loss the density correspondingly 100 times higher: The associated kinetic energy at full, observed shock velocity for RSG stars is suggesting that a RSG supernova shock may remain rather strong and super-sonic to pc scale, if the wind went that far and the energy were available up to 10 52 erg; Soderberg Therefore the density in the wind shell can go quite high, allowing drastic cooling when the SN-shock hits, and so running the remnant quickly into the cooling limit. This then also limits the lifetime of the radio emission, since the shock rapidly slows down due to this extreme cooling and so fails to accelerate electrons. This probably limits the radio emitting lifetime to a few hundred years; this in turn lets us understand the scarcity of such sources in our galaxy, and the abundance in M82. This suggests that the Pevatron On the other hand, CRs accumulate their particles right up to the escape time from the galaxy, thus much longer even for very energetic particles than human observations exist. We propose that these most massive stars live such a short time, that the wind-shell is only disrupted by the explosion itself. After the explosion, given some more time, it merges into the environment of other earlier explosions, an OB-star-super-bubble. The constancy of the observed radio emission is expected as soon as the wind-shock reverts to a Sedov expansion in the local ISM shaped by the earlier wind stages of the star. This then gives (i) a near constant magnetic field, (ii) an energy density of the CR particles produced as r −3 , and (iii) a volume covered as r +3 (Biermann & Strom 1993). Thus the total synchrotron luminosity is constant with radial distance r or, equivalently, time. As soon as cooling becomes relevant, the radio emission ought to decrease rapidly (Kronberg et al. 2000). Observations support this conclusion. The SN-explosion produces shocks racing through the winds of the massive stars; from the radio observations we know the magnetic fields in the shock regions, and so we can now work out the characteristic particle energies corresponding to the magnetic fields. We note here that the Rankine-Hugoniot conditions at a strong shock give a density jump of a factor of 4 for strong shocks, a corresponding velocity jump of also a factor of 4 in the shock frame, and as a result a thickness scale of the post-shock region of r/4 (all for an adiabatic gas constant of 5/3). As we demonstrated above, in a perpendicular shock (i.e. magnetic field direction perpendicular the shock normal) using the Jokipii (1987) scattering coefficient gives a maximal energy just limited by the available spatial scale, Figure 1 Here we show schematically the various components of the CR model for primary cosmic rays, with the ISM-SN-CR component (a), the wind-SN-CR 4 π component below the knee (b1), and above the knee (b2), the wind-SN-CR polar cap component (c), and the extragalactic CR component (d), in a graph of E 2.1 × CR flux versus particle energy E using the source spectra proposed in a log-log plot. 
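A figure along the lines of Figure 1 can be re-created with a few lines of matplotlib. The normalizations below are arbitrary, the polar cap to 4π ratio near injection is simply set to the "order 3 to 10" quoted further below, and only the two wind components (b) and (c) are drawn; the exponential cutoffs stand in for the knee and ankle turnovers.

```python
import numpy as np
import matplotlib.pyplot as plt

E = np.logspace(9, 18.5, 400)             # eV
E_knee, E_ankle = 10**15.9, 10**17.5      # eV, for Z = 1 (protons)

# Polar cap component (c): E^-2 up to the knee.
polar_cap = 3.0 * (E / 1e9) ** -2.0 * np.exp(-E / E_knee)

# 4 pi component: E^-7/3 below the knee (b1), ~E^-2.85 above it (b2), ankle cutoff.
four_pi = np.where(E < E_knee,
                   10.0 * (E / 1e9) ** (-7.0 / 3.0),
                   10.0 * (E_knee / 1e9) ** (-7.0 / 3.0)
                        * (E / E_knee) ** -2.85) * np.exp(-E / E_ankle)

plt.loglog(E, E**2.1 * polar_cap, label="polar cap (c)")
plt.loglog(E, E**2.1 * four_pi, label="4$\\pi$ component (b1/b2)")
plt.loglog(E, E**2.1 * (polar_cap + four_pi), "k--", label="sum: note the upturn")
plt.xlabel("E [eV]"); plt.ylabel(r"$E^{2.1}\times$ flux (arb.)")
plt.legend(); plt.show()
```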
So for wind-SN-CRs there are two CR components, here labelled (b) and (c), and this paper focusses on those two. Note, that between component (b1) and (c) there is always an upturn in the total spectrum. We argue below that this upturn has been detected by both CREAM and AMS. The Hillas (1984) limit is just the spatial limit, valid in the case that the magnetic field is perpendicular to the shock normal, or parallel to the shock surface. This is the standard case in the asymptotic magnetic field configuration in a magnetic wind, as the higher multi-poles of any multi-pole structure of the magnetic field just add as a factor (see Parker 1958, after eq. 26, p. 673), and the magnetic field-lines coincide with the flow lines. But only this magnetic field component is enhanced in a shock, and so in the post-shock region, the transverse component dominates strongly. where Z e is the charge of particle, r radial distance, B(r) magnetic field as function of r, and we use B = B 0 (r 0 /r). Here we use the characteristic radial extent of the shocked shell in a wind of r/4 as a spatial limit; this uses a strong shock, for which U sh,1 /U sh,2 = 4 for an adiabatic gas constant 5/3: We define U sh,1 as the upstream velocity in frame of shock, and U sh,2 as the corresponding downstream velocity. We also require the Larmor diameter (twice radius) of a gyrating particle to fit into this space. For simplicity we often refer to U sh,1 as U sh . However, it needs to be shown that this energy can be reached at all against all the various loss processes. The scattering coefficient in a configuration most perpendicular to the shock normal (for random direction of the magnetic field prevalent) has a limit (Jokipii 1987, eq. 10) of and we adopt this limit here. A large part of this acceleration is due to Stochastic Shock Drift Acceleration (StSDA). The acceleration time is then limited by turbulence time across the region, so (r/4)/(U sh /4), using the post-shock scales for both distance and velocity. Setting the acceleration time scale using this scattering coefficient equal to this limiting time scale allows the limit to be written as reproducing the energy limit derived above. Using the observed numbers this gives which we identify with the ankle. In a small fraction of space and time, which might be called magnetic islands, the magnetic field is parallel to the shock normal, and the acceleration is temporarily purely diffusive shock acceleration (DSA). The scattering coefficient in this case is given by where I(k) is the energy density of resonant fluctuations in the magnetic field, so that In the Bohm case we take this factor (B 2 /{8 π}(/(I(k) k) to be a constant, requiring I(k) ∼ k −1 in what is called saturated turbulence, quite different from Kolmogorov turbulence; we define this factor to be b > 1. We adopt for the limit itself, b = 3, which renders the expression maximally simple; we note that the integral of the irregularity spectrum can be maximally equal to the overall energy density of magnetic fields. The acceleration time is then given in the case of a strong shock by (Drury 1983) This requires that the ratio of scattering coefficients κ ∼ 1/B scale as the velocities on the two sides of the shock -this can be justified by noting that in the perpendicular case the magnetic field B scales inversely with the velocity on the two sides of the shock. 
Here the limiting time is shorter than the turbulent time, since particles can just escape along the magnetic field lines in r/c (in the perpendicular case they cannot), and so here the limit is where r 0 is a reference radial scale, here using 10 16 cm, and B 0 the magnetic field strength at that radius, chosen because we have radio data giving these numbers (see above). This gives a maximal energy E knee in this case of Using the observations listed above we obtain which we identify with the knee energy. The two expressions for the energies E ankle and E knee differ in their formal expression by (U sh /c) 2 , but we do not use the average from the data separately for B 0 r 0 and (U sh /c) 2 , but use the average of the combined expressions sh c 2 , and B 0 r 0 . To emphasize the difference, the knee energy is thus interpreted as the limit for the parallel case (magnetic field parallel to shock-normal), and also as the turn-down energy for the perpendicular case (magnetic field parallel to shock surface, or perpendicular to shock normal), that energy where the spectrum turns from one power-law to a steeper power-law. The ankle energy is the maximum energy reached overall. This is consistent with other arguments: The most important aspect is that E knee is just the limit for DSA, as StSDA is faster (Jokipii 1982(Jokipii , 1987Meli & Biermann 2006;Matsumoto et al. 2017). This is the limit for the polar cap cosmic ray contribution, the small part of momentum space 4 π, where the magnetic field is locally and temporarily radial, using only DSA. The combination of StSDA and DSA has a reduction in efficiency of acceleration at that same energy, E knee , so its spectrum steepens to a steeper power-law, and cuts off only at E ankle , as is argued below. The kink in the spectrum at the knee At this stage we need to dig a bit deeper into what scales are relevant in such a turbulence in a shocked layer (Biermann 1993). The basic conjecture is that a scattering coefficient downstream can be constructed from the relevant length scale r U sh,2 /U sh and the velocity difference across the shock U sh − U sh,2 ; in a strong shock the ratio U sh /U sh,2 is 4 and so we obtain For upstream we conjecture that the scattering is U sh /U sh,2 times stronger, and so we calculate There is a maximal lateral diffusion coefficient (Biermann 1993) constructed from the velocity difference across the shock squared, times the residence time of for the normal energy-dependent lateral diffusion coefficient, given by referring to curvature drift, so with a factor of > 1 to enhance curvature from the inverse of the pure radial scale 1/r to the thickness of the shocked layer 4/r or possibly even more. So the corresponding partial energy gain is strongly reduced, giving the corresponding break energy as With = 9/2 this yields the same expression as above, so E break can be equal to E knee , and in fact has to be quite close to it. So in the concept introduced here, these two energies are very close, and differ by at most some factor of order unity. This implies that the 4 π component, driven by a combination of StSDA and DSA, has a break to a steeper power law at the same energy, as the polar cap component (only driven by DSA) cuts off. The data suggest that at the energies where the energy content of the spectrum maximizes, near about 2 A m p c 2 , a ratio between these two components is of order 3 -10 at injection. Note that this refers only to the CR-components produced by massive stars with winds, when they explode. 
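The knee limit quoted here can be reproduced symbolically. The sketch below uses the strong-shock DSA acceleration time of Drury (1983), the Bohm-limit scattering coefficient with b = 3 as adopted above, κ_2/κ_1 = U_sh,2/U_sh,1 = 1/4, and escape along field lines on a time r/c; factors of order unity depend on these conventions.

```python
import sympy as sp

# Symbols: gyro radius r_g, radius r, upstream speed U1, speed of light c,
# and the Bohm-limit factor b (b = 3 adopted in the text).
r_g, r, U1, c, b = sp.symbols("r_g r U1 c b", positive=True)

kappa1 = b * r_g * c / 3        # Bohm-limit scattering, relativistic particles
kappa2 = kappa1 / 4             # kappa ~ 1/B, and B jumps by 4 at a strong shock
U2 = U1 / 4                     # strong-shock Rankine-Hugoniot velocity jump

# DSA acceleration time (Drury 1983) equated to the escape time r/c:
tau_acc = 3 / (U1 - U2) * (kappa1 / U1 + kappa2 / U2)
r_g_max = sp.solve(sp.Eq(tau_acc, r / c), r_g)[0]

print(sp.simplify(r_g_max.subs(b, 3)))   # equivalent to r*U1**2/(8*c**2)
# With E = Z e B r_g this is E_knee = (1/8) Z e B(r) r (U_sh/c)^2, as quoted.
```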
As noted earlier the normal proton component of CRs is probably dominated by lower mass stars that explode into the interstellar medium (but see also below). Knee and ankle energies, implications However, as commented on already elsewhere (Biermann 1995), this leads directly to several questions; the first one is whether all such stars could be the same in terms of their magnetic field properties. It is possible, but hard to believe that a single star dominates all the cosmic ray particles contributing to the knee. Such a hypothesis would imply that the number of contributing supernovae slowly decreases going from GeV energies towards the knee, ending at one supernova. It is hard to argue that such a concept would not give a significant bump in the spectrum of cosmic rays, by going from a few contributors to just one -but it is clearly not impossible. If, as seems likely, many supernovae contribute to the knee observed in cosmic rays, then all these explosions must be relatively similar, It is perhaps of interest that this mass range is the same that possibly produces black holes (Heger et al. 2003). That suggests a common mechanism for the magnetic field and the explosion, connecting the explosion to the magnetic field; and the one mechanism that might connect the two is the magneto-rotational mechanism of Bisnovatyi-Kogan (1970) There is another hint about the mechanism of the explosion. We have seen in SN 87A that there is extreme mixing up from deep inside the star (many papers, also Biermann et al. 1990). Since we have argued that the piston mass is high enough to sustain the shock velocity at sustained speed and also provide sufficient material to account for the cosmic rays observed, this implies that the piston mixes in with the material to contribute to the Cosmic ray particle spectra below and above the knee Here we re-derive the extra energy gain from shock drifts (Biermann 1993): The drift velocity is given by Here we assume as in Biermann (1993) that this is a combination of curvature and gradient drift. We refer everything to the case f d = 1 for simplicity. Thus the energy gain due to drifts can be calculated by the drift velocity, using the electric field induced, and the residence time. Working this out upstream gives with the corresponding expression for downstream being and so for a strong shock, for which U sh,2 /U sh,1 = 1/4 we obtain that the total energy gain from drifts and adding in the energy gain from standard first order Fermi gives an extra term U sh,1 /c: Combining these two expressions gives a total numerical factor of x = 9/4. Since the shock injects particles from a density law of r −2 , we have a parameter for this power-law of b = −2 radial power index for injection in wind density. Similarly we have a parameter for dimensionality to adequately describe adiabatic losses, d = 3. Here we need to emphasize that the spectrum is determined by the maximal time scale of a particle going back to the shock, while the acceleration rate is given by the shortest time scale. The fastest scattering (Jokipii 1982(Jokipii , 1987) is given by κ = r g U sh , while the slowest is given by (1/4) r U sh (both upstream; Biermann 1993). These two rates differ in perpendicular shocks, and are the same for parallel shocks. 
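As a consistency check on these two rates, their ratio is a simple function of particle energy. The sketch below (nominal B_0 = 1 Gauss at r_0 = 10^16 cm, Z = 1, relativistic particles) shows that the fast (gyro-scale) and slow (shell-scale) scattering coefficients differ by a factor of order E/E_max, as used in the next paragraph.

```python
EV_PER_GAUSS_CM = 300.0
B0, r0, Z = 1.0, 1e16, 1          # Gauss, cm (nominal radio-SN values)

def larmor_radius(E_eV, r=r0):
    """Larmor radius r_g = E/(Z e B) in a wind field B(r) = B0*(r0/r), in cm."""
    return E_eV / (Z * EV_PER_GAUSS_CM * B0 * (r0 / r))

# kappa_fast / kappa_slow = r_g U_sh / ((1/4) r U_sh) = 4 r_g / r;
# with the spatial limit r_g(E_max) = r/8 this equals (1/2) E/E_max.
E_max = 0.125 * Z * EV_PER_GAUSS_CM * B0 * r0        # spatial (ankle) limit, eV
for E in (1e-3 * E_max, 1e-1 * E_max, E_max):
    ratio = 4 * larmor_radius(E) / r0
    print(f"E/E_max = {E / E_max:5.3f} -> kappa_fast/kappa_slow = {ratio:.4f}")
```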
Note that the acceleration time back and forth across a shock is proportional to the scattering coefficient κ (this is the time scale to establish a spectrum and maximal particle energy), while the diffusion time out of a region is given by the inverse of the scattering coefficient (this is the time scale relevant for producing anti-protons). These two scattering coefficients differ by a factor of order E/E max , where E max is the maximal particle energy that can be contained. The spectral index is then given by This is given here as the difference in spectral index to -2; a positive value implies a steeper spectrum (this is eq. (39) in paper CR-I 1993, based on work by Krymskii & Petukhov 1980, Prishchep & Ptuskin 1981, Drury 1983). The parameter values entering here are (i) x = 9/4, describing the addition of DSA and StDSA; (ii) the radial density power-law b = −2; (iii) the dimensionality d = 3; and (iv) the strong shock condition U sh,2 /U sh,1 = 1/4. In a wind this equation gives a number of 1/3, by which the spectrum is steeper than E −2 , so that the 4 π source spectrum is E −7/3 . The polar cap source spectrum is E −2 , since there x = 1 and κ rr,1 /r U sh,1 << 1. To recap and proceed further, below the knee we used maximum curvature 4/r and argued that half of this would be average, so that the total energy gain is characterized by Beyond the knee we use no turbulence-induced curvature and in fact allow that the natural curvature also occasionally goes to zero due to very large scale motion; and so we use half the normal curvature rather than twice the curvature. This implies that the term 2/3 goes to 1/6, and thus we take This gives x = 1.625, and for the spectral index which is (Kolmogorov added) 3.1795, resulting in E −3.1795 as the predicted observable spectrum beyond the knee. The acceleration of electrons by drifts alone The observations reveal that the spectrum of the electrons in the RSNe of very massive stars is about E −3 , perhaps slightly steeper even. The question then arises, why the spectrum is just steeper by about unity compared to the spectra of nuclei. At first sight this suggest the Kardashev (1962) loss limit, or even secondaries. Inserting numbers demonstrates that the Lorentz factor of the electrons associated with the radio emission is so low that the energies are below the rest mass of protons, even though these electrons are relativistic. For any CR spectrum of nuclei, with a spectrum steeper than E −2 , the energy density of the population maximizes around a small multiple of the rest mass. Therefore one might well expect that the associated Larmor radius provides the main scale for the thickness of the shock transition layer. It follows that the electrons do not even "see" the shock transition, and only experience the shock drifts as acceleration. There is some similarity to the "shock surfing mechanism" discussed by, e.g., Lee et al. In the language of paper CR-I (Biermann 1993), also used above, this implies that the parameter x, denoting the total energy gained per cycle relative to the energy gain from undergoing pure DSA (see, e.g., Drury 1983) is of order 5/4. However, the "missing" shock transition can be thought of as another additional drift energy gain, using only the gradient, which can be crudely estimated as follows: We average the drift energy gain both downstream and upstream, so take 1/2 of the sum (in paper CR-I this is eq.23). Only 1/3 of this was from gradient drifts, so obtain 1/6 of the combined energy gain. 
This implies The spectral index is given by the expression given above, here simplified A clear prediction is then obviously, that the radio spectrum should become flatter as soon as the energy of the relevant electrons becomes large enough so that the electrons "see" the shock. This is confirmed by the compact sources in M82. Summary of the wind-SN-component cosmic ray spectra In this section we have described how to explain knee and ankle energies, and the spectrum both below and above the knee energy, and below the ankle energy, focussing on both RSG and BSG star winds. In the following two sections we will ignore for didactic simplicity all the uncertainties of these spectral indices, and just use them directly, taking E −2 for the polar cap component, and E −7/3 for the 4 π component below the knee. However, at every step we ought to be mindful of the underlying uncertainties: There are uncertainties due to the fact that we use strong shocks, with additional terms proportional to the inverse of the shock's Mach number squared (e.g. eq. 2.46 in Drury 1983). Other uncertainties pertain to the ratio of the wind velocity relative to the shock velocity, of order 1/10. The main uncertainty, of course, relates to the underlying non-linear model. All these uncertainties can be tested with data. We will also repeat various conventions, so that each section can be read independently of the others. here. For RSG explosions, the transition from free expansion to wind-Sedov expansion ought to be detectable. We return to these ideas in the section on the compact radio source 41.9+58 in the starburst galaxy M82 further below. 3. AMS data: The spectrum of anti-protons, positrons, and other energetic particles Introduction In this section we focus on secondary particle N s production resulting from interaction of primary accelerated particles N p . The scenario we envisage is that a supernova shock racing through a wind is loaded with energetic particles when it hits the wind-shell. These particles interact the entire time while going through the stellar wind, but the shock stalls when hitting the wind shell; for this reason it will be in this phase when most interactions can be expected (see, above, for the discussion of the compact radio sources in the starburst galaxy M82, all thought to be in such a stage). Both the secondaries and primaries escape from this region and populate the cosmic ray disk of the host galaxy. As noted above this constitutes a "nested leaky box approach". A general energetic particle transport equation is given, for example, in Ginzburg & Ptuskin (1976), their eqs. 3.18 and 3.36: The balance equation describes the interaction of primary nuclei producing secondary nuclei: with the stationary solution expressed as where the two time-scales τ int and τ esc describe interaction and escape. We assume here for simplicity that the interaction and escape time scales are both significantly shorter than the lifetime of this phase. For simplicity and didactic reasons we approximate the interaction as energy-independent; we have explored this approximation in a large number of calculations in de Boer et al. (2017) fitting the γ-ray data of the Galaxy, using proper numerical interaction codes. The main energy dependence of the secondary/primary ratio depends on the escape time, which is given by where H is the radial scale of the interaction region. 
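For reference, the "nested leaky box" bookkeeping used here fits into a few lines. All numerical values below (H, the normalization of κ, τ_int) are placeholders, and τ_esc ≈ H²/κ is taken up to factors of order unity.

```python
# Steady state of the balance equation:
#   dN_s/dt = N_p/tau_int - N_s/tau_esc = 0   ->   N_s/N_p = tau_esc/tau_int,
# with diffusive escape tau_esc ~ H^2 / kappa(E).
def sec_to_prim(E_GeV, H_cm, kappa_at_GeV, delta, tau_int):
    """Secondary/primary ratio for kappa(E) = kappa_at_GeV * E^delta."""
    kappa = kappa_at_GeV * E_GeV**delta
    tau_esc = H_cm**2 / kappa
    return tau_esc / tau_int

# Illustrative placeholder numbers only:
for E in (1.0, 10.0, 100.0):
    ratio = sec_to_prim(E, H_cm=3e18, kappa_at_GeV=1e26, delta=1 / 3, tau_int=1e12)
    print(f"E = {E:6.1f} GeV  ->  N_s/N_p ~ {ratio:.3f}  (falls as E^-1/3 here)")
```

The energy dependence of N_s/N_p therefore tracks 1/κ(E) for whatever interaction region of radial scale H applies.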
This includes the wind-shell and its extension into the pre-supernova stellar environment in a molecular cloud; it may also involve an already evolved wind-bubble produced by the pre-supernova HII-regions, stellar winds, and earlier supernova explosions. The scattering coefficient κ in turn is given (see also above) by where I(k) is again the energy density of resonant fluctuations in the magnetic field, so that k −1 ∼ r g = (E)/(Z e B). In the following we focus on determining this wave-field as resulting from particle-wave interaction with the existing energetic particle population. A given energetic particle population with a certain spectrum excites a wave-field; this wave-field then gives a scattering coefficient, which in turn leads to the energy dependence of the escape time. In final consequence this allows us to determine the energy dependence of the secondary to primary ratio, the key answer we are seeking. One question which needs to be raised is whether the secondary particles in turn interact so much that they do not even escape. We can estimate this by checking the spectra of those nuclei that interact the most, Iron nuclei. As the time to interact is given by the escape time, lower energy nuclei will interact more, and so we expect that such an effect would render the primary particle spectrum slightly flatter (see, e.g., Wiebel-Sooth et al. 1998). Indeed the Iron spectra are slightly flatter, suggesting that a significant fraction interact, but still much less than one hundred percent. A fortiori this then must also be true for the secondary nuclei. Here we outline an attempt to understand these spectra. As above, the focus is on the properties of the stars that explode into their own winds, again emphasizing explosions of Red Super Giant (RSG) stars that have a slow, dense wind, and explosions of Blue Super Giant (BSG) or Wolf-Rayet (WR) stars that explode into a tenuous, fast wind. Anti-protons and other secondaries To understand the production of secondary particles by primary particle interaction, This shows that we have several sources of turbulence: First of all the spectrum of magnetic irregularities produced in the shock, as discussed above, first of all the Bohm case I(k) k ∼ const(k), and second the Jokipii case I(k) k ∼ k −1 . This turbulence is rapidly fading as the shock weakens running through all this material of the wind shell and the more distant environment. But we also have the newly created turbulence produced by the cosmic ray (CR) particles themselves. That allows it to dominate over any shock-related turbulence. We let observations guide us here. Kulsrud & Cesarsky (1971), Bell (1978a) and Drury (1983) show that excitation and damping of Alfvén waves can also be vastly different in dense media, as damping (at 10 4 K; for 10 2 K and 10 3 K the factor changes from 8 to 1.5 and 3, respectively) is given by where n H is the local neutral hydrogen density (n e is the corresponding electron density, both in cm −3 ); this is a damping of Alfvén waves by neutral-ion collisions (Kulsrud & Cesarsky 1971). A second damping mechanism is the cascade into sound waves, which happens whenever the speed of sound is less than the Alfvén velocity (Bell 1978a), which can easily be the case in a shock racing through a stellar wind of a BSG star (see above): is easily satisfied in the pre-shock region due to cooling, and possibly also satisfied in the post-shock region due to the strong enhancement of the magnetic field already observed, but barely due to equipartition. 
This leads to a damping only if the the sound speed is very much smaller than the Alfvén velocity, thus is not really applicable here. This is important since we need Alfvén waves to scatter CR particles resonantly as a key element of their diffusive transport (see, e.g., eq 3.18 in Ginzburg & Ptuskin 1976, used above). The density is (see above, and using the stellar mass loss and wind velocity adopted as an example there) about 10 −3 cm −3 at 3 pc for BSG star winds, and after going through a wind shock, 4 · 10 −3 cm −3 , without any enhancement, due to rapid cooling. The corresponding numbers for RSG star winds are about 10 2 higher. This enhances the excitation rate, and lowers the damping rate. The shock velocity is typically 0.1 c, and we have used that for scaling. If the energy density of the energetic particles is equal to the energy density of the magnetic field, at 3 pc, then we could have an enhancement over the ambient galactic cosmic rays of a factor of about 10 5 , and so the excitation could be that much faster, or, at the same rate, extend to energies 10 3 times larger than GeV, which is actually required. The greatest amount of cosmic ray particle injection (referring to the maximal shock speeds) happens in the last phase of the shock before it hits the wind-shell. We consider a shock region fully loaded with cosmic ray particles, of the two spectra E −2 (the polar cap component; less common) and E −7/3 (the 4 π component), and then calculate the excitation of a magnetic irregularity spectrum by these energetic particles. The spectrum of magnetic irregularities excited by a given cosmic ray particle spectrum has been treated by Bell (1978a, b), and reviewed by Drury (1983). The cosmic ray particle spectrum excites a magnetic irregularity spectrum I(k) in resonant wave-number k (Bell 1978a,b, Drury 1983, 2001, with k ∼ (Z e B)/(pc), with momentum p ∂ ∂t with the cascade time scale given by with an effective adiabatic index γ ef f , with ρ here the affected density, the ionized density. However, instead of using a single wave-length as a source of excitation, we can also consider excitation at all wave-lengths or wave-numbers together, say, by a spectrum of CR particles (Bell 1978a, b): with (Bell 1978a, b;Drury 1993) where f is the distribution function in momentum phase space p, with power index steeper by 2 as compared with E −7/3 . Here v is the particle velocity, V A the Alfvén velocity. The relativistic case corresponds to v = c. This formula essentially compares the energy density in resonant waves per log bin I(k) k with the energy density in the particles in resonance f (p) p c 4 π p 3 , in the relativistic limit and again per log bin. The factor is the Alfvén velocity, divided by the radial scale, and so depends on density. Using as the cosmic ray spectral index α, and the magnetic irregularity turbulence spectral index β we have then At very high energy the escape time is given by a convective velocity, resulting in energy independent interaction. Then the secondaries quasi automatically follow the primaries closely, and only the intrinsic variation of the production cross-section and multiplicity remain. This is equivalent to supersonic turbulence with I(k) ∼ k −2 , and the time scale of interaction independent of energy. Table 3 shows the systematics of an application of this expression. 
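The exponent bookkeeping from exciting CR spectrum to secondary/primary ratio can be automated. In the sketch below the mapping I(k)k ∝ E^{(2/3)(3-α)} is not derived; it is simply inferred from the two worked cases summarized with Table 3 below (E^{2/3} for the E^-2 polar cap component and E^{4/9} for the E^-7/3 4π component), and the secondary/primary slope then follows from τ_esc ∝ 1/κ ∝ I(k)k/E for relativistic particles.

```python
from fractions import Fraction as F

def wave_log_energy_slope(alpha):
    """Slope of I(k)k versus resonant particle energy E, inferred from the two
    cases quoted in the text: E^(2/3) for alpha = 2, E^(4/9) for alpha = 7/3."""
    return F(2, 3) * (3 - alpha)

def sec_over_prim_slope(alpha):
    """kappa ~ E / (I(k)k), tau_esc ~ 1/kappa, sec/prim ~ tau_esc."""
    return wave_log_energy_slope(alpha) - 1

for name, alpha in (("polar cap", F(2)), ("4 pi", F(7, 3))):
    print(f"{name:9s}: I(k)k ~ E^{wave_log_energy_slope(alpha)}, "
          f"sec/prim ~ E^{sec_over_prim_slope(alpha)}")
# -> polar cap: I(k)k ~ E^2/3, sec/prim ~ E^-1/3
# -> 4 pi     : I(k)k ~ E^4/9, sec/prim ~ E^-5/9
```

Table 3 organizes the same bookkeeping case by case.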
This table contains in the first line the two cases of the exciting CR spectrum; in the second line the corresponding spectrum of the irregularities as function of wave-number k; in the third the same, but using log bins of the wave-number (since in the scattering coefficient we have the term I(k)k corresponding to energy per log bin); in the fourth line we are writing this irregularity spectrum as a function of particle energy; in the fifth the secondary CR production time scale energy dependence; in the sixth the secondary spectrum; and in the seventh the secondary/primary energy dependence for action on a E −2 spectrum. We observe in CR particles the sum of different source classes and interactions, so we need to compare with perhaps other contributions. This is done in the next few lines for specific cases, and then the set of comparisons is done again for the case of acting on a E −7/3 spectrum. Applying these results to the secondary/primary ratio in cosmic ray particle interaction we obtain for the 4 π excitation E −5/9 , and for the polar cap excitation E −1/3 . The excitation rate depends on the spectrum of the exciting energetic particles, and the spectrum of the excited turbulence, written as a function of the resonant particle energy. To take the example of the k −5/3 wave spectrum excited by an E −2 particle spectrum: The energy density of the waves has then a corresponding energy dependence of E +2/3 , while in this example the energy density of the energetic particles is flat with energy. The ratio then runs as E −2/3 . Clearly they meet at a pivot energy, which is where weakest excitation takes place. This pivot energy depends on the density of the ionized matter through the Alfvén velocity dependence. For the 4 π component the energy density of the waves runs as E +4/9 with the energetic particle energy density running as E −1/3 , and the ratio running as E −7/9 . At the pivot energy, by definition, the ratio of the two energy densities is of order unity. Hence the number in front, basically the inverse of the Alfvén-time over the inverse of the flow time using it as the limit, determines where that pivot energy is. We are assuming here that overall instabilities of the flow determine that scale, so that the number in front is basically the inverse of the Alfvénic Mach-number times some numerical factor of order unity, which we have to determine empirically. Here we focus on is on the difference between the waves excited by the polar cap component, and the waves excited by the 4 π component. On the other hand we determine how the competition plays out between BSG star winds with their low density, and the RSG star winds with their high density. We summarize these connections between exciting cosmic ray spectrum all the way to the energy dependence of the secondaries and their comparison in Table 3: Next we calculate the normalization between polar cap component and 4 π component: For BSG star wind primaries such as Carbon and Oxygen nuclei in energetic particles we can discern in the data a transition between the 4π cosmic ray component with spectrum Table 3 CR spectra, excitation spectra and secondary spectra, all at source, for relativistic particles and in the diffusion limit. At very high energy the interaction is no longer diffusive, but convective, so that the secondary production time scale is essentially independent on energy. E −2 is the polar cap component, and E −7/3 is the 4 π component. Here B/C is the Boron/Carbon and Li/C is Lithium/Carbon ratio. 
relevant at higher energies. Considering then the other excited spectrum, we have it reach the pivot energy given by the condition A dom (E/GeV) −7/9 = 1, thus expressing its pivot energy by E pivot,4π = A 9/7 GeV. This then gives a spectrum of relevant at lower energies. These two irregularity spectra are equal at GeV. The is the secondary CR spectrum for the E 2/3 irregularity spectrum acting on the polar cap E −2 CR component; there is a related expression for element/isotope i for the example of a BSG star. Here X RSG,i is the relative abundance of element/isotope i in an RSG star, relative to the dominant ion. is the secondary CR spectrum for the E 2/3 irregularity spectrum acting on the E −7/3 CR 4 π component; again a similar expression holds for any element /isotope i from a BSG star. is the secondary CR spectrum for the E 4/9 irregularity spectrum acting on the E −2 CR polar cap component; again a similar expression holds for any element/isotope i from a BSG star. is the secondary CR spectrum for the E 4/9 irregularity spectrum acting on the E −7/3 CR 4 π component; again a similar expression holds for any element/isotope i from a BSG star. Due to this uncertainty, the predictions in Table 3 nent. This maximal energy is far below the knee energy, since by the same token, that the anti-proton production is strongly enhanced due to higher density and lower shock velocity, the final shock velocity is far below its initial value, and so the RSG knee energy is far below the BSG knee energy. The secondary cosmic ray particles Here we summarize the results for specific nuclei, and describe, what the model proposed predicts at higher energy. We propose to explain the anti-protons by using an enhanced production in the slowed down shock of a RSG star wind, so acting on the E −2 polar cap component to produce a At higher energy the primary polar cap component will take over and steepen the secondary/primary ratio to E −1/3 . This will change as soon as the diffusive approximation breaks down and we go over to a convective limit approach, when interaction becomes energy independent to a reasonably good approximation. Also, in the same vein, the dominant irregularity spectrum above a few tens of GeV is the E 2/3 spectrum (analogous to Kolmogorov) in BSG star winds, which gives a steepening by 1/3, so that the secondary/primary ratio here is E −1/3 , such as for the Lithium/Carbon or Boron/Carbon ratio. The newest AMS data confirm this dependence with a slope of can be expected to throw light on these questions. Triplet pair production and positrons Here we propose that the polar-cap component of energetic electrons with a spectrum ). This is a process known as "triplet pair production", in which an energetic electron encounters a photon, and produces an extra electron-positron pair (Haug 1975(Haug , 1981(Haug , 1985(Haug , 2004. this matches what is expected in the neighborhood of a massive star exploding. In these numerical programs, care has been taken to include all angles relevant in the collision; no short-cut has been used, as is usually done when treating this process. The spectral shape is a strong function of the energy above the threshold energy of this process. The key parameter is the density of these photons; the test is whether we have enough photons in the region. The data show directly a spectrum of E −3.0 from Fermi data (Abdo et al. 
2009), referring to relatively high energy electrons, which obviously have "seen" the shock, unlike the much lower energy electrons induced from the radio emission of a radio-supernova (RSN; see above); the spectrum seen in the Fermi data is clearly the reduction of the spectrum by losses (Kardashev 1962); at the source, the spectrum is deduced to be E −2 . Any steeper spectrum would drastically reduce the level of the resulting secondary leptons, since the resulting flux depends on the fraction of electrons above threshold to produce pairs; thus it is a power of the lever-arm of the energy ratio between the threshold energy and the energy where the total energy of the population sits; this lever arm corresponds to very many powers of ten. One important caveat here is that this observed spectrum refers We conclude that the observational evidence is consistent with a flat component of E −2 for electrons in the sources. The features of the shape of the spectrum, as deduced from many new calculations, Haug (1975Haug ( , 1981Haug ( , 1985Haug ( , 2004, are the following. The positron spectrum rises sharply at γ L,cut where h ν is the typical energy of the environmental photon field. At higher energies the positron spectrum drops sharply at γ H,cut 10 6.25 E e,edge TeV , where E e,edge is the maximal energy of the adopted cosmic ray electron spectrum of E −2 . The ratio of these energies given by The spectrum multiplied by γ 2 + , the dimensionless energy of the positrons, has a maximum given by The ratio of γ H,cut and γ 2,max is so it is also just a function of the range x th above threshold. For a reasonable fit, the data suggest hν of order 1 eV or a bit larger (broad bump), and E e,edge >> 1 TeV; we use here 3 eV and 30 TeV in our example. The AMS positron data suggest γ L,cut 10 4.8 and γ 2,max > ∼ 10 5.5 , so we used these approximate numbers to select the parameters for our plot. The condition of using an appreciable fraction of energetic electrons locally to allow a slight distortion of the spectrum gives the lower limit required for the product of the photon density n ph and time spent τ CR -in an OB-star-super-bubble environment. We note that the total cross-section for triplet pair production, Haug (1975Haug ( , 1981Haug ( , 1985Haug ( , 2004 is with αr 2 0 = 10 −27.3 cm 2 and x th defined above, with x th = 2 as threshold. Obviously, the factor of the 1 x th -term must be such as to render the total cross-section positive above threshold. Using h ν = 3.0 eV and E e,edge = 30 TeV gives as total cross section (using just the first term) σ 3,tot = 10 −26.0 cm 2 . So to distort the CR-e spectrum, the condition is This requires the photon density to obey the condition τ CR 10 6 yrs with a photon density just somewhat higher than in the Galactic Center (GC) region the triplet production. To be affirmative, this test may require even higher precision than currently achieved, but even at current precision such a test could be used to rule out our model. The main uncertainty in such a calculation is the radiation field in the source region; since there may be a number of contributing source regions, this may not be a unique test. A second uncertainty is, of course, whether the injected electron spectrum is really well described by a pure E −2 law. Scenario In the following we will outline an argument demonstrating that the pattern seen in a special plot, the Binns-diagram (Fig.4) Table 4. 
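One crude way to read the photon-density requirement is to ask that each energetic electron undergoes at least about one triplet-pair interaction while it remains in the region, n_ph σ_3 c τ_CR ≳ 1; this is our own simplification of the condition, using the σ_3,tot ≈ 10^-26 cm² and τ_CR ≈ 10^6 yr quoted above.

```python
C = 3e10                       # cm/s
sigma_3 = 10**-26.0            # cm^2, total triplet cross-section quoted above
tau_CR = 1e6 * 3.156e7         # s, residence time in the super-bubble

n_ph_min = 1.0 / (sigma_3 * C * tau_CR)
print(f"required photon density n_ph >~ {n_ph_min:.0f} cm^-3")   # ~10^2 cm^-3
```

The resulting requirement, of order 10^2 photons cm^-3, corresponds to a radiation field somewhat denser than typical Galactic Center values, consistent with the statement above.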
The significance of separating the unstable isotopes into those with various lifetimes derives from the fact, that with Lorentz boosting, some of these isotopes can survive as cosmic ray particles; a condition is that the time between creation and acceleration is sufficiently short. If, for instance, an isotope is made directly at high energy from high energy spallation collisions, then it may survive, given a sufficiently high energy. However, if the time between its creation, for instance by nuclear reactions deep in the star, and its mixing into the acceleration zone and acceleration itself is too long compared with the decay time, then it may not survive. The chemical composition of the piston, mixing in with the shocked material at a late point of the shock's evolution, may become detectable. Given an observed shock speed of 0.1 c, we assume that the speed survives to about 3 pc (based on the discussion above of the compact radio sources here traced to the late stages of the explosions of BSG stars in the starburst galaxy M82). In such a case isotopes with a lifetime larger than about 10 2 yrs may become detectable if they are produced in the piston material just before the explosion. The source for the nuclear data in Table 4 is the 8th edition of the Karlsruhe Nuclear Tables (2012). The source for the ionization potential numbers is the online version of the Table 4 First and Second Ionization Potentials (I.P.) in eV for isotopes. Isotopes most abundant in normal matter are marked with + . Under unstable isotopes those with a lifetime larger than 10 days are listed, with those with a lifetime larger than 1 year (but less than 10 2 yrs) marked with * , and those with a lifetime larger than 10 2 yrs marked with * * . Those that decay only by electron capture, and are so effectively stable above a few 100 MeV/n, are marked with e , with the other signs omitted then (NNDC 2017 The data in Table 4 demonstrate the differentiation of refractories and volatiles into low and high 2nd Ionization Potentials (I.P.): a clear separatrix can be drawn between them. We include here the elements higher than Nickel (28). The elements Copper (29) to Zircon (40) also show the same separation, with the line in terms of the second I.P. separating the two regimes tending lower. This may be due to a higher recombination rate of the second level of ionization for these massive atoms beyond Iron as compared to atoms below Iron, possibly due perhaps to the many more sub-levels available for the same recombination. First and second order Fermi acceleration The argument here rests on the injection of cosmic ray particles from the post-shock population, and pitch angle scattering to attain isotropy (first stage), moving up in momentum initially by first order Fermi scattering (second stage) in the near-shock region. Further on they increase their momentum slightly, possibly also by second order Fermi acceleration at the shock using the entire post-shock region, or/and then by first order Fermi acceleration (second stage), also using the entire post-shock region. This entails that the initial first order Fermi acceleration uses the Bohm limit of magnetic irregularities (I(k)k ∼ 1), while the third stage uses the Jokipii limit of magnetic irregularities The Jokipii limit case is equivalent to the scattering, independent of particle momentum (in the relativistic case), see Jokipii (1982Jokipii ( , 1987 and the arguments inspired by Jokipii in Biermann (1993). 
The corresponding magnetic irregularity spectrum of k −2 spectrum can be thought of as the Fourier transform of a saw-tooth pattern of repeated shocks running through the region of interest, as would happen in an unstable shock region (cf. also Federrath 2013). We will refer to these two cases again as Bohm limit and Jokipii limit. Toy model Here we describe a toy model to help us understand injection and acceleration, and how they depend on the nucleon number of the nucleus A, and its initial ionized charge number Q 0 . We will study the case of a strong shock racing through a stellar wind as our example, although the details do not critically depend on this; we will show below where a shock expansion into a homogeneous medium would modify the result. As shown by i.e. parallel to the shock surface. We will focus on this component. The first step in this model is the pitch angle scattering, starting from the torus-like configuration in momentum phase space just behind a shock (i.e. particles running along a Larmor circle, so also circling in momentum space). The shock is highly unstable, producing a population of cosmic ray particles that can be thought of as a light fluid, being pushed by a heavy fluid (i.e. the thermal gas undergoing the shock transition). Biermann (1994b) discussed the associated instabilities, using observations (e.g., Braun et al. 1987 with radio observations of the supernova remnant Cas A). This can be considered as similar to an irregularly moving wave, advancing then slowing down, and advancing again, but irregularly with a grand cycle time of r/{4U sh,2 } = r/U sh,1 ; the waves keep coming, and their average location is approximately constant. In radio/X-ray images of Supernova Remnants (SNRs) this picture implies sharp edges to the emission, as observed, but a variable edge location corresponding to the different phases of the shock reforming, and moving ahead, again as observed. The second step is momentum scattering in the shock transition, so "first order Fermi acceleration" in the immediate neighborhood of the shock. Here we use the assumption of maximal scattering, so a Bohm-like scattering regime. The third step is acceleration over the entire region (r/4 in a wind) by all the disturbances. At the lower energies this may still be in Bohm mode, and there we can allow for a contribution from second order Fermi acceleration, but at the higher energies this surely will be in Jokipii mode, which implies that the particles gain energy independent of charge Q and mass A. This then transitions into a first order Fermi acceleration process with Jokipii-like scattering (Jokipii 1982, 1987, Biermann 1993, the main process at this stage, involving also the entire region (r/4 in a wind). In such a picture the polar cap component corresponds to the first regime, first order Fermi in a highly non-steady mode. The line of reasoning we are taking is the standard approach of following the flow of particles in appropriate phase space. We use the following nomenclature: The population of particles continuously freshly established as a torus in momentum phase space and highly anisotropic in pitch angle is called N torus . These particles are being provided from a freshly shocked particle population N shock coming in from upstream. Pitch angle scattering with a rate of 1/τ µ µ feeds into the first order Fermi process. This particle population N 1F B then feeds into the first order Fermi population accelerated using an magnetic irregularity spectrum of Bohm spectrum. 
Its rate of momentum change is 1/τ 1F B . As the next step the particle populationN 1F J is accelerated by the first order Fermi using the Jokipii spectrum. It gets fed from the population N 1F B . We then obtain the two differential equations as well as The solution is straightforward: Writing we find with the limit t/τ f low and t/τ both large compared to unity and the condition, that so that the parameter dependence of τ 1F B is superseded; or in other words This entails then finally that The time scales τ conv and τ f low are clearly independent of parameters charge Q and mass A, since they refer to overall flow, like the irregularities excited by the dominant ions, or the scales determined by the Rankine-Hugoniot conditions of the shock. This means that we have to work out the dependencies of the two rates 1/τ µ µ and 1/τ 1F B on the key parameters Q and A. This then determines the scaling of N 1F J with mass A and the initial value of charge Q 0 . We will have to ascertain that ionization is slow enough to allow this. We note that these conditions are the same as always invoked in deriving non-thermal power-law spectra: In incomplete Comptonization the escape time has to be of the same order of magnitude as the scattering to produce power-law spectra (e.g. Step 1: Pitch angle scattering The distribution in pitch angle starts with a narrow distribution, a torus, that is slowly broadened. Considering the pitch angle scattering, the key dependence is on Q 0 and A, We use again the following convention. I(k) is the energy density per wave-number k in magnetic irregularities, or P (ω) per frequency, for ω the resonant gyro frequency, ω = (Q e B)/(A m p c), with Q the charge, and Q 0 the initial charge state, A the mass number, and m p the mass of the proton (obviously this is only correct to within the modifications due to nuclear binding energies). Here we have r g = (pc)/(ZeB) and k ∼ r −1 g . v is the velocity of the particle, with p = v A m p for sub-relativistic speeds, This is derived from scattering with κ ∼ r g v for the Bohm case, and κ ∼ const in the Jokipii case and v the velocity of the particle using κ = 1 3 Following Jokipii (1966) we use here the energy in magnetic fluctuations as a function of resonant gyro frequency ω f luct = (Q e δB)/(A m p c), and with δB the fluctuating part of the magnetic field in resonance. Writing the < δB > 2 = P (ω) and P the power in these fluctuations per frequency we have For the Bohm case (P ∼ ω −1 ) we obtain a constant by multiplying with ω/ω, and for the Jokipii case (P ∼ ω −2 ) by multiplying with (ω/ω) 2 , and therefore obtain for the sub-relativistic Bohm case, that D µµ ∼ Q A −1 ∼ r −1 , and for the sub-relativistic as well as relativistic Jokipii case, that D µµ ∼ independent of A, Q, r. Here we adopt the sub-relativistic Bohm case. We see that the number of particles ∼ Q A −1 . This start of a flow in pitch angle space determines the character of the quasi steady flow (quasi steady over time scales much smaller than the over all flow turn-over time scale r/{4U sh,2 } = r/U sh,1 ). The rate 1/τ µ µ scales with D µµ , or as U sh /r g ∼ Q A −1 , so: The particle momentum distribution begins at momentum p a = A m p U sh,1 ∼ A. We generally set U sh,1 = U sh for simplicity in cases where no confusion can arise. 4.4. 
Step 2: Injection to first order Fermi acceleration in the Bohm limit Next we consider in this model that at the shock particles scatter back and forth across the shock in a maximally turbulent regime; this means that the magnetic field irregularity spectrum can be described as I(k)k ∼ const, the Bohm limit. Then the acceleration time can be derived starting from the spatial scattering, required to scatter the particles back and forth across the shock: The spatial scattering coefficient can be written as We note here again, that using the fast Jokipii limit of acceleration (Jokipii 1982(Jokipii , 1987 implies D xx = r g U sh , and so does not modify the argument here. The large scale Jokipii limit (D xx = (1/4) r U sh upstream: Biermann 1993) determines the spectrum, while the fast Jokipii limit gives the acceleration time. We work in the sub-relativistic case: The first order Fermi {1 F }-time-scale (acceleration at the shock), and assuming that D xx,1 /U sh,1 = D xx,2 /U sh,2 is and we assume this to be the second stage, with a spectrum of particles in momentum of p −2 . We also assume that at this stage the particles still have initially their original stage of ionization, Q 0 . The relevant term in the cosmic ray transport equation is then − ∂ ∂p (ṗf ) and: This gives the second rate in the full equations earlier, for N 1F J /N torus . The flow in momentum space due to this process is proportional to Q 0 A, so together with the flow in pitch angle space, we have here an enhancement by a factor of Q 2 0 . Allowing Q 0 to be either 1 or 2 immediately gives a factor of 4 between these two initial stages of ionization, consistent with our interpretation of the result of Murphy et al. (2016) as differentiating these two stages of ionization. The spectrum is then, including this factor or, with the proper normalization to an initial momentum p 1F B , which might be quite close to the initial momentum p a , 4.5. Step 3: Fermi acceleration across the entire affected region The first order Fermi {1 F }-time-scale (acceleration at the shock) is as long as the particle is sub-relativistic, and as soon as the particle is relativistic. We assume this to be the final stage, with Jokipii limit irregularities, so I(k) k ∼ k −1 (see above, and Federrath 2013), and a spectrum of particles in momentum of p −7/3 (Biermann 1993). Both spectra, the polar cap component p −2 , and the 4 π component p −7/3 occur concurrently (Biermann 1993), either as in a normal Parker (1958) configuration, separated spatially; an alternative is being separated episodically in regions where the local magnetic field is either temporarily radial or perpendicular to the shock normal, thus separated in momentum space. However, the maximum energy in the p −2 component is the knee energy E knee discussed earlier. The knee energy is related to the maximum energy overall by E knee where p 1F B cancels out, and p 1F J is the transition momentum from a Bohm-like spectrum of magnetic irregularities to a Jokipii-like spectrum, and therefore independent of both A and Q, since the rate of acceleration 1/τ 1F J is independent of both A and Q. Also, going from Bohm to Jokipii implies a switch of the spectrum from -2 to -7/3 (Biermann 1993, and above). Here we scale the entire spectrum to energy (momentum) per nucleon, so use p A ∼ A, and also use a reference interval of energy per nucleon d p A ; this yields the factor A −4/3 . 
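The Q_0² enhancement can be made explicit with a toy bookkeeping of the two scalings just derived; the absolute normalization, and the A^{-4/3} factor from rescaling to energy per nucleon, are deliberately left out here.

```python
from fractions import Fraction as F

def injection_weight(Q0, A):
    """Toy-model scaling of the injected population: the pitch-angle feeding
    rate scales as Q0/A (sub-relativistic Bohm case) and the first-order
    Fermi flow in momentum space as Q0*A, so the product scales as Q0^2,
    independent of the mass number A."""
    rate_pitch = F(Q0, A)      # 1/tau_mu-mu  ~ Q0 / A
    rate_1FB = F(Q0 * A)       # momentum-space flow ~ Q0 * A
    return rate_pitch * rate_1FB

print(injection_weight(1, 16), injection_weight(2, 16))    # 1 vs 4
print(injection_weight(2, 16) / injection_weight(1, 16))   # factor Q0^2 = 4
```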
All the rest of the factors here either cancel or are independent of both Q and A; as the transition into the 4 π-component is independent of A and Q for both first and second oder Fermi acceleration (see, e.g., Seo & Ptuskin 1994), p 1F J and also p 2F J are independent, while the other terms just cancel out, p 1F B . Not allowing for any contribution from second order Fermi requires that there be a transition from Bohm to Jokipii mode, while the spectrum is still p −2 , here expressed in the condition that the spectrum from p 1F B c to p 1F J c is still -2. In this model an initial p −2 contribution is necessary to get the This then finally says for the ratio of CR source abundances over source material abundances: The case of Carbon and Oxygen The numbers for Carbon and Oxygen recalculated for Fig.4 shock is relatively small. Considering the case of M82 this cannot be decided so easily. We have argued above that all the compact radio sources in M82 correspond to the BSG star case: In contrast a shock will diminish in strength racing through RSG star winds: This slow-down was actually instrumental in explaining the AMS anti-proton abundance and its spectrum. Furthermore, the abundances in RSG star winds are similar to ISM abundances, but not exactly the same (see below). Above we have speculated that some spallation might affect Oxygen and Carbon nuclei in BSG stars, possibly explaining the low energy proton and Helium AMS spectral data; if that were really the case, Oxygen and Carbon would be even more problematic. The ratio of Solar System vs enriched injection Here Ionization rate Third, we also need to check that the ionization does not depend greatly on radius. The ionization time scale should be longer than the initial fast acceleration processes. After all, the arguments on the dependence of injection on the initial degree of ionization need to work out over the entire range of radii which the SN-shock traverses: The ionization rate ζ can be written as follows (see, e.g., isotope with a short lifetime could be a decisive test, but such an isotope may depend on spallation during early propagation. This suggests that these elements are first incorporated into the ISM and then accelerated along with everything else in BSG star explosions, as explained above. This can be tested with the Binns-diagram using the ultra heavy elements. One difficulty in the model proposed will be that most interaction happens in the wind-shells around the BSG star winds. So the spallation is dominated by interaction in the wind-shell after acceleration in the SN-shock of the "old" ultra-heavy element nuclei. Unstable isotopes may shine light on the various time scales involved for all the steps: These include production, nuclear reactions during the explosion, spallation during transport, immersion into a massive star, renewed explosion, nuclear reactions again, acceleration, and spallation again. Injection conclusion This could explain the TA-Collaboration data already allowing protons near 10 20 eV. The only source for which such an idea might work is the strongest compact radio source, 41.9+58 (Kronberg et al. 1985). This source corresponds to a break-out south in the radio contours (Kronberg et al. 
1985). This would support the now well-established idea that some UHECR particles derive from GRBs: the analogy between active galactic nuclei and GRBs in producing UHECRs was noted in Biermann (1994a, 1994b), and worked out quantitatively in Milgrom & Usov (1995), Vietri (1995, 1996), and Waxman (1995). This mechanism gives rise to jet formation. As noted in Biermann (1993) and above, this mechanism allows understanding of the commonality of the magnetic fields observed in wind-SNe that is demonstrated above using common radio data: these numbers are all rather similar to each other regardless even of whether the progenitor star was a BSG or RSG star. Völk (1988) they find that the spectra cannot be described by a single slope, but cover a fairly large range of slopes. Therefore, at this time the uncertainties are too large to make any firm conclusion here. Considering the systematics in the errors, all the data are consistent with the conclusion that the lower energy proton and Helium CR particles derived from this range of ZAMS masses of massive stars, between 10 M_⊙ and 25 M_⊙, and the predicted spectrum is consistent with the observed spectrum. Regarding explosions of very massive stars, with a ZAMS mass above 25 M_⊙, we consider the explosion (section 2), the ensuing shock racing through the stellar wind, and the CR acceleration in that shock. The supernova shock speeds through the wind; the magnetic fields in the wind have been determined through radio interferometric observations, both for blue super-giant star (BSG) explosions as well as for red super-giant (RSG) star explosions. Numbers for both kinds of stars are essentially the same within the limited statistics available, with upstream supernova shock velocity U_{sh,1} ≈ 0.1 c, at radius r ≈ 10^{16} cm a downstream magnetic field B ≈ 1 Gauss, and a general run of the magnetic field with radius r of about r^{-1}, supported by the data of the compact radio sources in the starburst galaxy M82 for BSG stars. The implied characteristic particle energies are E_{ankle} = (1/8) e B(r) r = 10^{17.5±0.2} Z eV and E_{knee} = E_{ankle} (U_{sh,1}/c)^2 = 10^{15.9±0.2} Z eV, consistent with the energies for ankle and knee determined from CR data; these energies run with the charge Z of a cosmic ray nucleus. Here we generalize our approach of 1993 (Biermann 1993) to an alternative of a fully chaotic description, where we also have magnetic fields nearly perpendicular to the shock normal over most of 4π, and magnetic fields parallel to the shock normal for a small fraction of the surface, like a small number of magnetic islands; this implies, for instance, that drift acceleration contributes significantly to the energy gain of particles; curvature drifts and gradient drifts are both important (Jokipii 1982, 1987; Biermann 1993). Drift energy gains are reduced beyond the knee energy E_{knee}, and so the power-law spectrum turns to a steeper power-law; it is just a fraction of the energy gain that is lost at higher energy at each shock crossing. So we propose that there are always two components of wind-SN cosmic rays: the polar cap component with an E^{-2} spectrum at source, and the 4π component, which has E^{-7/3} at source; the polar cap component cuts off at the knee energy, and the 4π component turns down to a steeper power-law with E^{-2.85} at the source; above we also discuss the errors inherent in such predictions.
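The two characteristic energies quoted above follow from simple order-of-magnitude algebra. A minimal numerical sketch (not from the paper itself) is given below; it assumes Gaussian-cgs units and the nominal values quoted above (B ≈ 1 G at r ≈ 10^16 cm, U_{sh,1} ≈ 0.1 c), so the printed powers of ten agree with the quoted ones only to within the rounding of those inputs.

```python
# Hypothetical back-of-envelope check of E_ankle = (1/8) e B(r) r and
# E_knee = E_ankle * (U_sh,1 / c)^2, using Gaussian-cgs units.
import math

ESU = 4.803e-10            # elementary charge [esu]
ERG_TO_EV = 1.0 / 1.602e-12

def ankle_energy_eV(B_gauss: float, r_cm: float, Z: int = 1) -> float:
    """E_ankle = (1/8) Z e B r, converted from erg to eV."""
    return 0.125 * Z * ESU * B_gauss * r_cm * ERG_TO_EV

def knee_energy_eV(E_ankle_eV: float, u_sh_over_c: float) -> float:
    """E_knee = E_ankle * (U_sh,1 / c)^2."""
    return E_ankle_eV * u_sh_over_c ** 2

if __name__ == "__main__":
    e_ankle = ankle_energy_eV(B_gauss=1.0, r_cm=1e16, Z=1)  # nominal wind-SN values
    e_knee = knee_energy_eV(e_ankle, u_sh_over_c=0.1)
    print(f"E_ankle ~ 10^{math.log10(e_ankle):.1f} Z eV")
    print(f"E_knee  ~ 10^{math.log10(e_knee):.1f} Z eV")
```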
The steep energetic electron spectrum observed for the early SN radio emission can be interpreted by noting that the electrons necessary to explain the observed radio emission are so low in energy, that they do not "see" the shock, so that their energy gain is only from drifts. In section 3 we use the AMS data to differentiate the two cosmic ray spectral components of explosions into stellar winds and their environment: Here we use these two CR-components to derive the wave-spectra they excite: These wave-spectra in turn determine different secondary spectra of CRs. Thus there are four possible combinations of excited wave-spectra and CR-spectra that they act upon. The anti-proton data can be explained by considering the slowing SN-shock going through the dense wind of a RSG star, and so interacting considerably more than in the tenuous wind of a BSG star wind. The positrons can be explained by noting that the cosmic ray electrons of the polar cap component interact with the surrounding photon field with triplet-pair production. In section 4 we concentrate on testing this paradigm by proposing a theory of injection based on the combined effect of the first and second ionization potential, to reproduce the plots of the ratio of CR source abundances to the abundances in the putative source material obtained by Binns et al. (Binns et al. 2001(Binns et al. , 2005(Binns et al. , 2006(Binns et al. , 2007(Binns et al. , 2008 We can interpret the abundance ratio data by requiring the total number of ions to be enhanced by the simple factor of (Q 0 A) 2 (where Q 0 is the initial degree of ionization, and A is the mass number). Normalizing to a spectrum in energy per nucleon we get an additional factor depending on A −4/3 , obtaining finally a factor between cosmic ray abundances and source abundances of Q 2 0 A +2/3 . This interpretation implies the high temperature in the winds of blue super-giant stars, and requires that cosmic ray injection happens in the shock travelling through such a wind. Using the radio observations of their relativistic jets carve out a cone, visible in the radio data (Kronberg et al. 1985) around the compact source 41.9+58 in the starburst galaxy M82. If accompanied by a GRB, following the merger a hyper-relativistic jet may have accelerated the UHECR protons detected by TA. There should be analogously high-energy neutrinos, possibly pointing elsewhere. If this reasoning correctly interprets what happened in M82, the event ought to be detectable with interferometric radio data: it also follows that this type of event must be quite common in starburst galaxies. This speculation leads us to expect that such an event will be detected via gravitational wave detectors, γ-ray detectors, in UHECR particles (Aab et al. 2018) and other telescopes within a few years.
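Returning to the injection scaling summarized above (an enhancement of CR source abundances over source-material abundances by a factor Q_0^2 A^{2/3}), a small numerical illustration is given below. This sketch is not from the paper; the element list and the initial charge states Q_0 assigned to each element are illustrative assumptions only (the text constrains Q_0 to be 1 or 2), chosen simply to show how the predicted enhancement grows with Q_0 and A.

```python
# Illustrative evaluation of the predicted enhancement factor Q_0^2 * A^(2/3)
# between CR source abundances and source-material abundances.
# The per-element Q_0 assignments below are placeholder assumptions.

def enhancement(q0: int, a: int) -> float:
    """Predicted abundance enhancement for initial charge q0 and mass number a."""
    return q0 ** 2 * a ** (2.0 / 3.0)

examples = {          # element: (assumed initial ionization Q_0, mass number A)
    "He": (1, 4),
    "C":  (1, 12),
    "O":  (2, 16),
    "Fe": (2, 56),
}

for element, (q0, a) in examples.items():
    print(f"{element:>2}: Q0={q0}, A={a:>2} -> enhancement ~ {enhancement(q0, a):6.1f}")
```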
2018-03-28T17:43:00.000Z
2018-03-28T00:00:00.000
{ "year": 2018, "sha1": "e9819205c8f05befa3917dd0a9b1685084ffef87", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1803.10752", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e9819205c8f05befa3917dd0a9b1685084ffef87", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
225479296
pes2o/s2orc
v3-fos-license
Physiochemical and Microbiological Analysis of Drinking Water in Chattogram City, Bangladesh Chattogram is the second most populated city in Bangladesh. This port city faces a serious threat mainly due to the lack of safe drinking water. This study was conducted to determine the drinking water quality of groundwater sources in Chattogram city. The study was performed in the BCSIR laboratory, Chattogram, and was carried out over a period of six months from 1st July 2018 to 31st December 2018. A total of six water samples were collected from three different locations (Baluchora, C&B colony and Khulshi area); each sampling location consisted of two separate sampling points. Physicochemical parameters of the collected samples, such as temperature, pH, electrical conductivity (EC), total dissolved solids (TDS), hardness, turbidity, and the concentrations of Cl, As, Mn, Fe, Pb, Cr and Cd, were examined. Microbial parameters such as total coliform (TC) were also measured. All analyzed parameters were compared with BSTI and WHO drinking water quality standards to assess the overall groundwater quality status of the study area. The results reveal that water samples at almost all locations were microbially contaminated and that the range of physico-chemical parameters was not adequate for consumption. Preliminary treatments such as boiling and filtering are required before using the groundwater for drinking, and the necessary measures must be taken to provide a safe alternative source of drinking water. INTRODUCTION Water is a vital element of every ecosystem and of human life. After air, water is the most important need for life. Water performs a number of functions for the body: it serves as the body's transport system, acts as a lubricant and regulates body temperature. In fact, more than two-thirds of the human body is made of water. Access to safe drinking water for urban and rural populations in developing countries remains a challenge for sustainable development [1]. Drinking water quality is a vital concern for humanity, as it is directly related to public health. Drinking water quality has always been a serious problem in many countries, particularly in developing countries such as Bangladesh [2]. Water quality can be defined by the chemical, physical and biological content of water. The water quality of the surface water bodies of our country is decreasing both for conventional pollutants (heavy metals, pesticides) and for various organic or inorganic compounds and contaminants [3][4]. The environment, economic growth and development in Bangladesh are strongly influenced by surface water owing to its regional and seasonal availability [5]. Although drinking water is a key demand for people around the world, a large percentage of the world's population, including in Bangladesh, is deprived of pure drinking water [6]. Groundwater is being depleted day after day in Asia, South America and North America, and ecosystems are threatened [7]. Water pollution is now a global concern. Pathogenic bacteria and the presence of antibiotic-resistant bacteria in drinking water have become an emerging problem worldwide [8]. Bangladesh is a low-lying, deltaic country of three great rivers: the Ganges, the Brahmaputra and the Meghna. In the humid and tropical region of Bangladesh, this precious resource is increasingly threatened by the growth of the human population.
Recent studies have shown that groundwater systems in Bangladesh are progressively vulnerable to microbiological and heavy metal contamination, particularly arsenic [9]. Chemical and physical contamination of water is no less serious, but potentially lethal contaminants in drinking water are of biological origin [10]. Contaminated drinking water is known to spread dangerous diseases such as hepatitis, cholera, dysentery, typhoid fever and diarrhea. Among these waterborne diseases, the most important is diarrhea. It is estimated that about 11% of all deaths in rural areas of Bangladesh are caused by diarrheal diseases [11]. It has been estimated that around 80% of all diseases and over a third of deaths in developing countries are caused by the consumption of contaminated water [12]. Among the diseases transmitted by water of bacterial origin, typhoid fever, bacillary dysentery and diarrhea are common in Bangladesh [13,14]. Despite the availability and promotion of the use of safe water sources, water-related diseases remain a major cause of mortality and morbidity in Bangladesh [15]. The availability of safe drinking water has increased in almost all parts of the world in recent decades, but around one billion people still do not have access to clean water [16]. Contaminated groundwater is used by the dwellers of Chattogram city for their drinking purposes. The situation is worse in low class residential areas i.e. slums [17]. For this reason, a detailed study of drinking water quality of Chattogram city is important. Although several reports on the assessment of drinking water quality based on physicochemical and microbiological parameters in different part of Bangladesh have been published by several researchers separately [6], [18][19], but limited research has done in Chattogram city. Considering the forgoing problem, the present section describes the overall status of drinking water quality of groundwater sources in Chattogram city. Study Area In an order to assess the ground water quality three sample areas were chosen namely Baluchora, C&B colony and Khulshi area (Fig.1). Six drinking water samples were collected from three different locations ( Table 1). The map of the study area is presented in Fig 1. Water Sample Collection Total six water samples were collected from six groundwater sources. Two water samples have been taken from each study area and got tested in BCSIR laboratory to find out the quality status of selected physicochemical parameters of the water. The samples were collected from electric pumps, the most common source of drinking water in Chattogram city. Before taking the water samples the sterile container were rinsed three times with sample water for finding the accurate result. The samples were collected in two sets of containers with a sterile lid; one of which is intended for physico-chemical tests and another for microbiological analysis. The samples for microbiological analysis were stored at 4ºC before the start of the analysis and the sample for physico-chemical tests was analyzed immediately. Hygiene and aseptic practices were performed during drinking water sampling. After it, the results of these parameters were compared and discussed with national and international standards [20][21]. Physical Analysis The quality of drinking water can be determined by its microbiological tests and by some important physico-chemical tests, such as pH, TDS and metal concentration. 
The detection of bacterial contamination and physico-chemical properties has attracted great attention worldwide due to the impacts on public health. So some physical characteristics of drinking water like Temperature; Turbidity; Total Dissolved Solids (TDS); Electrical Conductivity (EC) and pH were determined. Temperature measurement Temperature measurement was performed at the sample collection site using a mobile thermometer. This was done by immersing the thermometer in the sample and recording the stable reading [22]. Determination of pH The pH of the water samples were determined using the Hanna microprocessor pH meter (model no 6011). It was standardized with a buffer solution of pH range between 4 and 9 [22]. Determination of turbidity Turbidity was determined using the Lovibond Turbidirect turbidity meter. In this method; the water samples were taken in a vial and placed in the sample chamber after turning on the turbidity meter. Reading starts automatically after the countdown and has been recorded [23]. Determination of conductivity This was done using a conductivity meter (model no PCD-431). The probe was dipped into the container of the sample until a stable reading will be obtained and recorded [22]. Determination of hardness The total hardness of the water samples was measured by a hardness test kit. Chemical Analysis Some chemical parameters like concentration of chloride, Iron, arsenic, lead, chromium, cadmium and manganese (Cl, Fe, As, Pb, Cr, Cd & Mn) were analyzed. Determination of chloride (Cl) Chloride; in the form of Cl-ion is in water. In the Mohr method; the chloride in neutral or weakly alkaline solution containing chromate ion is titrated with silver nitrate. Silver chloride precipitates and the end point forms the silver chromate. The color of silver chrome is red [24]. Determination of arsenic (As) The arsenic test was performed using the Hach Arsenic Test kit. In this method; hydrogen sulfide is first oxidized to sulfate to avoid interference and the oxidizing environment is neutralized. Subsequently; sulfamic acid and zinc powder react to create strong reduction conditions in which inorganic arsenic is reduced to arsine gas. The arsine gas reacts with mercury bromide; impregnated in a test paper to form mixed arsenic / mercury halogenides (for example; AsH 2 HgBr). Mixed halogenides discolor the test strip in proportion to the concentration of arsenic in the sample. The color change goes from white to yellow to tan to brown [25]. Other heavy metal determination The heavy metal contents were determined by the AAS using a standard analytical procedure [26]. Sample collection is an important step in metal analysis. The samples were generally handled with care to avoid contamination. The glassware was cleaned properly and the reagents were of analytical quality. Distilled water was used during the study. Blank reagent determinations were used to correct instrument readings. Calibration curves were found for concentration vs. absorbance. The data were statistically analyzed using straight line rectification using the least squares method. For greater precision; a blank reading was also taken and the necessary corrections were made during the calculation of the concentration of different elements. Estimation of total coliform Most Probable Number (MPN) test was used to identify and estimate the total coliforms in drinking water samples. This method is performed sequentially in three stages as presumptive; confirmed & completed test which are described below [27]. 
Stage 1: Presumptive test for coliform group of bacteria or determination of Most Probable Number (MPN) This was done to determine the most probable number (MPN) of coliforms in a water sample in addition to its lactose fermentation and gas production properties. If gas was produced after inoculation and incubation of the lactose broth; it was assumed that coliforms were present in the sample. Stage 2: Confirmed test for coliform bacteria This test aims to differentiate coliforms from noncoliform bacteria; as well as gram-negative and gram-positive bacteria. In this test; the EMB agar inoculated from previous positive gas-producing tubes; showing small colonies with dark centers confirms the presence of Gram-negative; lactose-fermenting Coliform bacteria. Stage 3: Completed test for coliform bacteria This test is necessary for further confirmation.The final exam can help inoculate an inclination of nutritious agar and a Durham tube of lactose broth. This confirms the presence of coliforms. Statistical Analysis Data collected were presented as mean ± standard deviation using Microsoft Excel 2010. Physical Analysis In the present study, the measured values of physical parameters of the selected water samples were represented at Table 2. Temperature The chemical, physical and biological characteristics of water is influenced by temperature. Table 2 shows that, the value of temperature in Baluchora were ranges from 28.4-28.6ºC. In C&B colony temperature ranges were 28.5-28.6ºC and in Khulshi, this value were ranges from 28.6-29ºC. Hence, the temperature was in acceptable limit for drinking water which is recommended with BSTI standard [20] and WHO standard (1996) [28]. pH According to Sasikaran, et al. (2012) [29] pH is an important parameter in the evaluation of the acid-base balance of water. The pH of pure water refers to the measurement of hydrogen ion concentration in water. It is an important parameter that determines the suitability of water for various purposes. It is also the indicator of the acid or alkaline condition of the water state. In general, water with a pH of 7 is considered neutral, while a lower acid refers to an acid and a pH greater than 7 is known as a base. Normally, the pH of the water varies from 6 to 8.5. It is noted that low pH water tends to be toxic and with a high pH value it becomes a bitter taste. According to WHO standards pH of water should be 6.5 to 8.5. In Baluchora, it was ranges from 5.4-6.3; in C&B colony pH was 6.7-6.5 and in Khulshi pH values observed at 6.9. The pH value of C& B colony and in Khulshi were within the recommended limit. The identified pH value of Baluchora was below the pH range of the WHO standards for drinking water. The low pH indicates the acidity of the water and has a metallic or acid taste. Water with acidic pH levels can corrode the pipes and release the metal. Turbidity The Turbidity value was found 0.47-0.52 NTU in study area which is within the acceptable limit for drinking water. Total Dissolve Solid (TDS) A wide range of inorganic minerals and some organic ones, such as potassium, calcium, sodium, bicarbonates, chlorides, magnesium, sulfates, etc. are dissolved in water. These minerals are responsible for producing an unwanted taste and a color diluted in the appearance of the water. Total dissolved solids (TDS) in drinking water come in many ways from wastewater to urban industrial wastewater, etc. 
Therefore, the TDS test is considered an indicator of overall water quality and an important chemical parameter of water [29]. Table 2 shows that TDS values ranged from 82 to 87 mg/l in Baluchora, from 41 to 62 mg/l in C&B colony, and from 116 to 158 mg/l in Khulshi. Hence, the values were within the acceptable limit. A high TDS value indicates that the water is highly mineralized. High levels of TDS in groundwater are generally not harmful to humans, but high concentrations can affect people suffering from kidney and heart disease, and water with a high solids content can cause laxative effects or constipation. Electrical Conductivity (EC) Electrical conductivity (EC) is a parameter used to indicate the total concentration of charged ionic species in water. The standard limit of EC for drinking water is 1000 μS/cm [28]. The EC values in the study area were 140-281 μS/cm, lower than the value recommended by the WHO. The low EC values indicate that drinking water in the study area is not strongly ionized and has a low level of ion activity owing to the small amount of dissolved solids. Chemical Analysis The chemical constituents of the drinking water samples collected from the Baluchora, C&B colony and Khulshi areas are presented in Table 3. In these areas, chloride (Cl) ranged from 0 to 31 mg/l and manganese (Mn) from 0.01 to 0.45 mg/l, and the iron content was also within the acceptable limit. Arsenic (As), lead (Pb), chromium (Cr) and cadmium (Cd) were also measured; no arsenic was detected in the water samples. The concentration of chromium (Cr) was within the limits of the BSTI and WHO standards. Only sample DWS-1 from the Baluchora area showed a higher concentration of lead (0.03 mg/l). Higher cadmium (Cd) concentrations were found in the Khulshi and Baluchora areas. Drinking water containing high levels of metals such as cobalt, copper, iron, manganese, molybdenum, selenium and zinc, or toxic metals such as aluminum, arsenic, barium, cadmium, chromium, lead, mercury and silver, may be hazardous to health. Microbial Analysis The highest total coliform count was found in Baluchora (4×10² cfu/ml) and the lowest in Khulshi (0 cfu/ml) (Table 4). These counts exceed the WHO guideline; therefore, the total coliforms of the water samples exceed the permitted limit, and pretreatment such as boiling or filtering is needed before drinking this water. This contamination can occur because of poor sanitation and leakage around the pipe walls, where contaminants can enter through the leak and mix with the water along the lifting path [30]. Enteropathogenic E. coli causes diarrhea, foodborne disease and vomiting [31]; E. coli is also responsible for urinary tract infection. Other coliforms such as Enterobacter aerogenes cause food spoilage, and Klebsiella pneumoniae causes urinary tract infection and pneumonia. These organisms may be harmful to newborns and elderly patients because they can multiply and reach harmful numbers very quickly [32]. CONCLUSION The overall study indicates that almost all of the physicochemical parameters of the sampled water were not within the recommended levels, and the microbial parameters also did not meet the acceptable limits.
The results indicated that almost all samples from the different locations were unsuitable for drinking or consumption without primary treatment, so consumers of this water face a risk of water-related disease. These drinking water sources must therefore be treated before use, particularly with respect to hygiene, and monitored for strict compliance with the guidelines set by the BDS standards. Raising household-level awareness of the chemical content of drinking water in this area is also needed to improve public health. This study of drinking water quality is therefore of considerable significance for addressing public health concerns in developing countries, especially Bangladesh, and it highlights the steps needed to ensure a safe, high-quality drinking water supply in city areas.
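Since every parameter measured in this study is judged against BSTI and WHO limits, the screening step can be expressed as a short script. The sketch below is illustrative only; the limit values and the sample readings are placeholders chosen in the spirit of the ranges reported above (the actual guideline figures and measurements should be taken from the tables), and the helper names are hypothetical.

```python
# Hypothetical helper for flagging water-quality readings against guideline limits.
# Limit values and sample readings below are placeholders, not the study's data.

GUIDELINE_LIMITS = {      # parameter: (min_allowed, max_allowed) -- placeholder values
    "pH":        (6.5, 8.5),
    "TDS_mg_l":  (None, 1000),
    "EC_uS_cm":  (None, 1000),
    "TC_cfu_ml": (None, 0),     # total coliform: none permitted in drinking water
}

def check_sample(sample: dict) -> list:
    """Return (parameter, value, reason) tuples for out-of-range readings."""
    violations = []
    for param, value in sample.items():
        lo, hi = GUIDELINE_LIMITS.get(param, (None, None))
        if lo is not None and value < lo:
            violations.append((param, value, f"below {lo}"))
        if hi is not None and value > hi:
            violations.append((param, value, f"above {hi}"))
    return violations

# Example reading (placeholder numbers roughly matching the reported ranges):
sample_dws1 = {"pH": 5.4, "TDS_mg_l": 85, "EC_uS_cm": 160, "TC_cfu_ml": 400}
for param, value, reason in check_sample(sample_dws1):
    print(f"{param}: {value} ({reason})")
```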
2020-07-30T02:05:28.592Z
2020-07-25T00:00:00.000
{ "year": 2020, "sha1": "2cf3bace628da3259eeef6c0cc2b79542500b4c4", "oa_license": null, "oa_url": "https://www.journalajarr.com/index.php/AJARR/article/download/30286/56828", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6057ec519592eaac461eb9d424600071b134700c", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
14952177
pes2o/s2orc
v3-fos-license
Detection of Anthocyanins/Anthocyanidins in Animal Tissues Dietary polyphenols may contribute to the prevention of several degenerative diseases, including cancer. Anthocyanins have been shown to possess potential anticancer activity. The aim of this study was to determine anthocyanin bioavailability in lung tissue of mice fed a blueberry diet (5% w/w) for 10 days or a bolus dose (10 mg/mouse; po) of a native mixture of bilberry anthocyanidins. All five anthocyanidins present in the blueberry were detected in the lung tissue using improved methods. The effect of various solvents on the stability of anthocyanins and their recovery from the biomatrix was analyzed. Detection of anthocyanins and their metabolites was performed by UPLC and LC-MS. Although anthocyanins were not detected, cyanidin was detected by UPLC-PDA and other anthocyanidins were detected by LC-MS, following conversion to anthocyanidins and selective extraction in isoamyl alcohol. The results show that anthocyanins can be detected in lung tissue of blueberry-fed mice and thus are bioavailable beyond the gastrointestinal tract. ■ INTRODUCTION Berries are gaining increased attention lately for their chemopreventive and therapeutic potential against several cancers. 1,2 Blueberry contains an abundance and distinct spectrum of anthocyanins, namely, the glycosides of cyanidin (Cy), delphinidin (Dp), petunidin (Pt), peonidin (Pe), and malvidin (Mv). A compelling body of literature suggests berry phytochemicals, including anthocyanins, have multifunctional chemopreventive and therapeutic effects, including antiinflammatory, 3 radiation protection, 4 and antioxidant. 5 Anthocyanins, which comprise the largest group of water-soluble pigments, are widely distributed in dark-colored fruits, vegetables, grains, and flowers and are responsible for their red, purple, and blue hues. Anthocyanins from different plant sources, including blueberry, have been shown to possess potential anticancer activities. 6−10 Several studies have shown that anthocyanins can inhibit cellular growth, induce apoptosis, and kill cancer cells in vitro. 3 Animal studies, although limited, have also demonstrated the chemopreventive potential of berries and their bioactive constituents such as anthocyanins and ellagitannins. The protective effects of these bioactives could be related to their potent antioxidant activity, as demonstrated in various in vitro and in vivo studies, 5,11,12 among other effects. In our previous studies, we demonstrated the chemopreventive potential of blueberry against breast cancer using the ACI rat model 8,13 and the therapeutic potential of blueberry anthocyanidins against lung cancer using the nude mouse xenograft model. 3 In the latter study, we also demonstrated that a mixture of anthocyanidins exhibited synergistic therapeutic activity compared with individual entities, both in vitro and in vivo. 3 The synergistic effects in this study presumably resulted from the effect of the anthocyanidins on some distinct and overlapping protein targets associated with cell proliferation, apoptosis, inflammation, invasion, and metastasis. 3 Despite reports of berry anthocyanins' protection against cancers and other diseases, 6−10 a significant gap exists between what was shown in many in vitro studies and what can be achieved in vivo. Several studies were conducted to evaluate the bioavailability/pharmacokinetics in blood and tissue using high doses of individual anthocyanins. 
14 Extraction of anthocyanins or their metabolites from blood generally relies upon solid phases such as Water's Oasis HLB or C18 Sep-Pak cartridges. However, tissue bioavailability data are scarce because there are few standardized methods for extractions from tissue. Moreover, the stability of anthocyanins and anthocyanidins during the workup has always been a concern. Given these scenarios and multiple biological effects, anthocyanin/anthocyanidin bioavailability in non-gastrointestinal (GI) tissues is considered an important issue and needs to be demonstrated. Studies on the bioavailability of anthocyanins from a single berry can provide direct and valuable information about their absorption. Our unpublished data show 30−35% reduction of cigarette smoke-induced lung tumor in A/J mice by 2.5% w/w dietary berries. We have also demonstrated that anthocyanidins delivered intraperitoneally have antitumor activity against lung cancer xenograft. 3 Hence, this study was designed to evaluate and compare stability parameters of anthocyanidins and develop a sensitive method to assess the bioavailability of anthocyanins/anthocyanidins and their metabolites in lung tissues. ■ MATERIALS AND METHODS Chemicals. HPLC grade water, acetonitrile, methanol, and other HPLC solvents, hydrochloric acid (HCl), formic acid, acetic acid, and trifluoroacetic acid were obtained from Sigma Chemical Co. (St. Louis, MO, USA). Authentic anthocyanin standards were obtained from Chromadex (Irvine, CA, USA). All other chemicals used in the study were of analytical grade. Freeze-dried highbush blueberry powder (50:50 blend of Tifblue and Rubel) was received from the U.S. Highbush Blueberry Council (Folsom, CA, USA). The native mixture of anthocyanidins (∼95% pure) was isolated in our laboratory from 36% anthocyanin-enriched bilberry extract (Indena, Seattle, WA, USA), which contains five major anthocyanidins, delphinidin, cyanidin, malvidin, peonidin, and petunidin in the ratio of 33:28:16:16:7, as described in our previous study. 3 Extraction and Isolation of Purified Anthocyanins/Anthocyanidins from Bilberry. Extraction, enrichment, and hydrolysis of the bilberry were carried out using essentially the same method as described previously. 15 Briefly, the enriched bilberry extract powder was extracted with 75% aqueous ethanol containing 0.1% HCl and enriched by loading the concentrated extracts on an XAD-761/Diaion HP-20 (1:1) column. The polyphenols, including anthocyanins, were eluted with methanol. Pooled elutes were concentrated and hydrolyzed with 2 N HCl (∼5 volumes). Hydrolysates were purified using C18 Sep-Pak cartridges (Waters, Milford, MA, USA). Anthocyanidins and other polyphenols were eluted with acidified (0.01% HCl) methanol. The enriched extracts were dried under reduced pressure using a Savant Speed-Vac (Thermo Scientific, USA) and stored at −20°C until use. The enriched extracts were dissolved in acidified water, and anthocyanidins were selectively extracted in isoamyl alcohol and dried under vacuum. 16 The extracted anthocyanidins were further purified by loading on the C18 cartridges per the manufacturer's guidelines. Diet. AIN-93 M diet supplemented with blueberry powder at 5% (w/w) was prepared in pellet form by Harlan-Teklad (Madison, WI, USA) and stored at 4°C in the dark in vacuum-sealed bags until use. Animal Study. Animal experiments were performed in agreement with an approved protocol from the Institutional Animal Care and Use Committee of the University of Louisville. 
Female athymic nude mice (5−6 weeks old) were purchased from Harlan Sprague−Dawley, Inc. (Indianapolis, IN, USA). Two animal studies were conducted to test different routes of administration. In study 1, after a week of acclimatization, animals were randomized into two groups (n = 4) and provided either AIN-93 M diet or diet supplemented with 5% blueberry powder (w/w). Animals received diet and water ad libitum. The diet was changed every other day, and the food intake was monitored. No difference was found in food consumption in control and experimental groups. The animals had free access to food and water until euthanasia. After 10 days of treatment, animals were euthanized by CO 2 asphyxiation, and lung was collected, snap frozen, and stored at −80°C until use. Blood was collected by cardiac puncture; plasma was separated and stored at −80°C. In study 2, animals were treated with a bolus dose of the native mixture of anthocyanidins isolated from bilberry (10 mg/mouse) by gavage in 10% dimethyl sulfoxide. Two hours after the treatment, animals were euthanized; lung tissue was collected, snap frozen, and stored at −80°C until use. Blood was collected by cardiac puncture; plasma was separated and stored at −80°C. Effect of Acids/Solvents on Anthocyanidin Stability. To test the stability of anthocyanidins in various solvents, cyanidin chloride (100 μg/mL) was dissolved in (i) acetonitrile, (ii) methanol, and (iii) methanol acidified with 0.1% HCl. The solutions were immediately analyzed by ultraperformance liquid chromatography (UPLC). Samples maintained at room temperature were analyzed at 1 h intervals for 5 h. The concentration was plotted against percent of cyanidin chloride at initial time point. Because methanol provided complete stability as described under Results, all of the analyses from tissue were done by using methanol containing 0.1% HCl. Recovery of Anthocyanins from Biological Matrix. Anthocyanins in biological samples (see below) were analyzed at anthocyanidin level. Conversion of anthocyanins to anthocyanidins in the presence of biological matrix was optimized as follows: To 500 μL of plasma was added 100 μL of bilberry extract (1 mg/mL), the mixture was acidified with 0.1% formic acid, acetic acid, trifluoroacetic acid, phosphoric acid, or hydrochloric acid. Samples were incubated for 15 min at 37°C and then extracted with 5 volumes of acetonitrile. The sample was centrifuged at 10000g, and the supernatant was collected and evaporated to dryness under vacuum (Savant Speed-Vac). The dried extracts were dissolved in acidified (0.1% HCl) methanol and analyzed by UPLC. Extraction of Anthocyanidins from Tissues. Parameters such as the stability of anthocyanidins in different buffers and acid environment, extraction efficiency, and selectivity in different solvents, etc., were established before tissue extractions. First, we analyzed the recovery of anthocyanidins by spiking lung tissues collected from untreated rats from another study. Methods described previously 14 were used to detect berry anthocyanins/anthocyanidins, 14 except for the following modifications: extraction of anthocyanins in acidified (0.1% HCl) acetonitrile, evaporation of solvent, and reconstitution in 50% methanol containing 2 N HCl followed by acid hydrolysis and selective extraction of anthocyanidins in isoamyl alcohol without any use of solid−liquid chromatography. 
Furthermore, UPLC separation method was developed by identifying solvents to separate reference anthocyanidins and protocatechuic acid (PCA), a bioactive metabolite of Cy, 17 and spiking the tissue homogenate with Dp, Cy, and PCA. 14 Briefly, after the optimization, lung tissue from two mice was pooled (two pools per group) in both animal studies and homogenized in 400 μL of 1.15% KCl; anthocyanins were extracted in acetonitrile containing 0.1% HCl as described above. The supernatant was evaporated and reconstituted in 50% methanol containing 2 N HCl. The extract was then hydrolyzed (100°C/1 h) to convert anthocyanins to anthocyanidins, and the latter were selectively extracted in isoamyl alcohol. 16 Finally, the samples were dried under reduced pressure (Savant Speed-Vac) and reconstituted in 40 μL of acidified (0.1% HCl) methanol just before injection, and 10 μL was analyzed by UPLC. The limits of detection for the various anthocyanidins (Cy, Dp, Pt, Pe, and Mv) (0.3−0.75 ng) and PCA (0.2 ng) were established. UPLC Analysis. Anthocyanins and anthocyanidins were analyzed on a Shimadzu UPLC system composed of two LC-20AD-XR pumps, an SIL-20A-XR autosampler, and an SPD-M20A photodiode array detector (PDA) controlled by Class VP software (ver 7.4, SP3) attached to a Shim-pack XR-ODS-II column (3.0 × 150 mm; 2.2 μm). A linear gradient of 3.5% phosphoric acid (solvent A) and acetonitrile (solvent B) with a flow rate of 0.75 mL/min was used. In the gradient, solvent B was initially 15% for 2 min and increased to 20% by 3 min. Solvent B was further increased to 60% from 3 to 10 min, held for 1 min, and returned to 15% by 12 min. LC-MS Analysis. Reversed-phase chromatography of anthocyanidins was performed on a Thermo Scientific (San Jose, CA, USA) Accela LC system. The mobile phases consisted of buffer A, water/ formic acid (100:0.1, v/v), and buffer B, acetonitrile/formic acid (100:0.1, v/v). Five microliters of sample was injected onto a Hypersil GOLD C 18 column (50 × 2.1 mm, 1.9 μm, 175 Å) from Thermo Scientific. A step gradient at a flow rate of 100 μL/min was used to elute the compounds. The gradient started at 5% buffer B and increased to 40% buffer B in 10 min, then increased to 90% buffer B in 3 min, and was maintained at 90% buffer B for 7 min. Elute from the LC was directed to an LTQ-Orbitrap XL mass spectrometer (Thermo Scientific). The compounds were ionized by electrospray ionization and detected by Orbitrap at 30000 mass resolution (full scan, m/z 220−1000) or by multiple reaction monitoring (MRM). The spray voltage was 4.0 kV, and the capillary temperature was 250°C. The sheath, auxiliary, and sweep gas flows were set to 15, 5, and 0, respectively. In MRM, molecular ions of anthocyanidins were selected with 3.0 m/z isolation window and fragmented by collision-induced dissociation (CID). For CID, collision energy, activation Q, and activation time (mS) were 35, 0.25, and 30, respectively. Full scan MS/ MS spectra of the compounds (m/z 100−400) were acquired by Orbitrap at 7500 mass resolution. The transitions used for anthocyanidin detection are shown in panel C of Figure 4. The limits of detection by LC-MS for Dp and Cy were >0.5 and 0.25 ng, respectively, whereas those for Pt, Pe, and Mv were 2.5 pg. Isolation of Blueberry Phytochemicals and Their Characterization. 
Blueberry extract was applied onto the XAD-761/HP-20 column (1:1) for the enrichment of anthocyanins and other polyphenols, and the enriched extract was hydrolyzed to convert the glycones to aglycones (i.e., anthocyanins to anthocyanidins; Figure 1). The enriched anthocyanidins were extracted in isoamyl alcohol and further purified by C18 column chromatography. The final extract contained highly pure (>94%) anthocyanidins ( Figure 2). When analyzed by UPLC, the purified extract showed five anthocyanidins in the following descending order: Dp (33%), Cy (28%), Pt (16%), Ma (16%), and Pe (7%); a small amount of quercetin was also found in the sample (Figure 2). Effect of Solvent on Stability of Anthocyanins. The cyanidin was highly unstable in acetonitrile. The rate of degradation was slow initially, but nearly 60% was degraded in 5 h (Supporting Information Figure S1). The stability was better in methanol, where there was only a slight decline and 80% of the compound was found intact. Interestingly, acidified methanol provided complete stability to cyanidin, and no degradation or loss was observed even after 5 h at room temperature. Hence, all of the analyses from tissues were done by using methanol containing 0.1% HCl. Recovery of Anthocyanins and Anthocyanidins from Biomatrix. The recovery of anthocyanins and anthocyanidins was variable depending on the type of acid used. Formic acid and acetic acid did not provide any recovery of anthocyanins/ anthocyanidins from PBS, whereas trifluoroacetic acid, phosphoric acid, and hydrochloric acid gave good recoveries. When these three acids were tested for recovery of anthocyanins/anthocyanidins from plasma, the recovery was in the following order: hydrochloric acid > phosphoric acid > trifluoroacetic acid (Table 1). However, hydrochloric acid extraction also resulted in conversion of some anthocyanins to anthocyanidins. Nearly 55−95% of anthocyanins and 63−100% of anthocyanidins were recovered from plasma ( Table 1). Quantification of Pe and Mv was skewed due to the lack of separation of these two peaks in the UPLC conditions used. For measurement from biological matrices from animal experiments, the hydrochloric acid treatment was extended in a boiling water bath for 60 min to convert all anthocyanins to anthocyanidins and then quantified by UPLC-PDA. Extractions and Detection of Anthocyanidins in Vivo. Blueberry has been reported to contain a variety of anthocyanins. In this study, we demonstrated the presence of anthocyanins in the blueberry-fed rodent tissues following conversion to anthocyanidins. Sensitivity of detection of the individual reference anthocyanidins was determined by LC-MS and UPLC-PDA and ranged from 0.25 ng to 2.5 pg and from 0.3 to 0.75 ng, respectively. The detection limit for PCA, a metabolite of Cy, was 0.2 ng. Cy was readily detected by UPLC (Figure 3) in the samples extracted from mouse lung with a limit of detection of 0.3 ng, whereas the other anthocyanidins were undetectable due to higher detection limits. A small peak was detected at the retention time for Dp; however, it was not confirmed by MS. However, with LC-MS analysis by specific ion monitoring, Pe, Pt, and Mv were readily detectable below ≤0.25 ng; Pe could be detected in the picogram range. No peak corresponding to anthocyanidins was detected in the lung of control animals by UPLC or LC-MS. Several anthocyanidin peaks were found in the samples extracted from the lungs of mice fed blueberry diet by LTQ-MRM (low mass resolution). 
To confirm the presence of anthocyanidins in lung samples, an MRM with Orbitrap (high mass resolution) method was set up and used to reanalyze the extracted samples. Ion chromatograms of standards from Orbitrap MRM are shown in Figure 4, with a m/z window width of 0.06 that provides better selectivity. Dp and Cy were not detected in lung samples by Orbitrap MRM, but other anthocyanidins (Pe, Pt, and Mv) were detected by MRM ( Figure 5) and further confirmed by MS/MS. ■ DISCUSSION Several studies in recent years have focused attention on anthocyanin bioavailability both in humans and in experimental animals. Most of these studies reported that anthocyanins were poorly absorbed and were excreted unmetabolized. 18−20 Few studies have also reported that anthocyanins are bioavailable when delivered at very high doses; however, most of the bioavailability studies are focused on individual anthocyanidins, particularly Cy. 14,21−24 The plausible reasons for lack of anthocyanins' detection include (i) instability of anthocyanins, (ii) metabolism by gut microflora, (iii) high rate of excretion, and (iv) unavailability of suitable analytical techniques. The aim of this work was to develop a method to assess the stability of anthocyanins/ anthocyanidins, their selective extraction, and the impact on tissue bioavailability of anthocyanins. There are several reports on the solubility and stability of anthocyanins in acidic environment. 25,26 Acidified methanol enhances the stability of anthocyanidins compared to acetonitrile. In methanol, the degradation of anthocyanins was very slow, with >80% of anthocyanins remaining intact after 5 h, whereas after acidification no degradation occurred. These results are consistent with published data demonstrating higher stability of anthocyanins in acidic pH. 25,26 Several methods of anthocyanin extraction are available describing varying recovery of anthocyanins from a biological matrix. 27−30 These methods involve solid phase extraction using Water's Oasis HLB and Sep-Pak C18 cartridges. We standardized the extraction of anthocyanins from tissues of rats administered dietary blueberry by converting them to anthocyanidins, which reduces the number of compounds to five, and allowed us their detection due to their higher amounts. In complex matrices, where interference in peak separation was observed, use of isoamyl alcohol significantly improved the selectivity of anthocyanidins, albeit with a slight loss of recovery (6−7%, data not shown). In this study, first, we determined stability and extractability of anthocyanidins isolated from bilberry and then determined their tissue bioavailability. Bilberry contains 15 anthocyanins containing galactose, glucose, and arabinose derivatives of Dp, Cy, Pt, Pe, and Mv. These anthocyanins were converted to five anthocyanidins upon acid hydrolysis. Glycones of the same anthocyanidins are also reported in blueberry. 13 In the present work, we initially failed to detect anthocyanins in lung tissue from rats fed for 10 days with a 5% blueberry diet. The dietary route provides a slow ingestion of the berry phytochemicals. Under these conditions, the lung anthocyanin levels were presumably too low to be detected. Anthocyanins were also not detected in the lung tissue following a bolus dose (10 mg/mouse) by gavage. 
On the other hand, when the tissue anthocyanins were converted to their respective aglycons by acid hydrolysis and selectively extracted in isoamyl alcohol, Dp and Cy were detected in the lung tissue, indicating that anthocyanins can reach and exert their effects beyond the GI tract. The possibility of bioavailable proanthocyanins being converted to anthocyanidins during acid hydrolysis cannot be ruled out. This presence of anthocyanins in lung tissue could be explained by their ability to permeate the gastrointestinal barrier. 31 These data demonstrate that the conversion of anthocyanins to anthocyanidins prior to analysis is an effective method for detecting anthocyanins in tissues. However, conversion of proanthocyanidins to anthocyanidins in vivo could also enhance the levels on anthocyanidins in animal tissue. PCA, which is produced by degradation of Cy, 32 was also detected in lung tissue, suggesting that lung PCA and anthocyanins along with some unknown metabolites are presumably bioactives responsible for the known anticarcinogenic potential of blueberry in the lung. 33 A series of papers have shown the bioavailability of anthocyanins in brain tissue, 21,22 and a few other studies report the presence of anthocyanins in other organs including the lung following treatment of rodents with individual anthocyanin at high doses. 14,23,24 Our study is the first demonstration indicating the presence of anthocyanidins in lung tissue following low-dose dietary blueberry. The blueberry dose used in this study is the same or just 2-fold higher as used in our previous study in which it was found to inhibit estrogenmediated mammary tumorigenesis. 8,13 The results from our other study also showed the efficacy of the native mixture of anthocyanidins from bilberry against lung cancer xenografts. 3 In summary, we show for the first time that anthocyanins are bioavailable in the lung following a low dose of dietary blueberry powder. The detection has been possible by conversion of anthocyanins to their native anthocyanidins, followed by extraction in isoamyl alcohol. This technique can be utilized to demonstrate bioavailability of anthocyanins in different tissues and correlate their levels with disease inhibition. Notes The authors declare no competing financial interest.
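The solvent-stability comparison reported above (percent of the initial cyanidin signal remaining over 5 h in acetonitrile, methanol and acidified methanol) reduces to a simple normalization of the hourly peak responses to the time-zero value. The sketch below is illustrative only; the numbers are invented placeholders roughly consistent with the reported trends (about 60% loss in acetonitrile, about 20% loss in methanol, no loss in acidified methanol) and are not the study's measurements.

```python
# Illustrative normalization of hourly UPLC responses to the time-zero value,
# mirroring the "percent of initial cyanidin" stability comparison.
# All numbers are placeholders, not the study's data.

hourly_response = {          # solvent: responses at t = 0..5 h (arbitrary units)
    "acetonitrile":          [100, 95, 85, 70, 55, 42],
    "methanol":              [100, 97, 93, 89, 84, 80],
    "methanol + 0.1% HCl":   [100, 100, 99, 100, 100, 100],
}

def percent_remaining(series):
    """Express each time point as percent of the t = 0 response."""
    t0 = series[0]
    return [round(100.0 * x / t0, 1) for x in series]

for solvent, series in hourly_response.items():
    profile = percent_remaining(series)
    print(f"{solvent:22s} 5 h remaining: {profile[-1]:5.1f}%  profile: {profile}")
```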
2016-05-04T20:20:58.661Z
2014-03-21T00:00:00.000
{ "year": 2014, "sha1": "8784a228eecafaa5ea069de22d902d30551916a1", "oa_license": "acs-specific: authorchoice/editors choice usage agreement", "oa_url": "https://doi.org/10.1021/jf500467b", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8784a228eecafaa5ea069de22d902d30551916a1", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
18363921
pes2o/s2orc
v3-fos-license
Birth Weight Reference for Triples in Korea An estimation of the baseline value of birth weight depending on gestational age is helpful for reducing morbidity and mortality following the early diagnosis and treatment of intrauterine growth retardation. In Korea, there are established baseline values for singletons and twins. But no definite criteria exist for triplets yet. Given the above background, we obtained the baseline value of birth weight depending on the gestational age in triplets with a gestational age of 27-38 weeks using a raw data about birth records which had been obtained during a 10-yr period from 1998 to 2007. This baseline value was compared with those of singletons and twins. During the 10-yr period, the total number of newborns who were born between gestational age 27 and 38 was 1,330,822. Of these, the number of singletons, twins and triplets was 1,330,822, 90,245, and 840, respectively. A mean gestational age was 37.3±1.5 weeks, 36.0±2.0 weeks and 33.3±2.4 weeks in the corresponding order. A mean birth weight was 3,071±490 g, 2,414±455 g, and 1,836±454 g in the corresponding order. A comparison of the birth weight depending on the gestational age of triplets was made with the normal value of singletons and twins. According to this, in the overall gestational age ranging from weeks 27 to 38, it was relatively smaller as compared with the birth weight of twins and singletons. The current study was of significance in that it first obtained the normal value of birth weight of triplets in the overall gestational age ranging from weeks 27 to 38, whose results are expected to be helpful for studies or treatments of triplets. INTRODUCTION With the introduction of ovulation-inducing agents in the late 1960s and assisted-reproductive techniques in the 1970s, the frequency of multiple pregnancies and multiple birth have been increasing (1)(2)(3)(4)(5). In Korea, the frequency of twins has recently been increased by three times for the past 20 yr (6). To date, however, no studies have been conducted to examine the triplets. In cases of multiples, as compared with singletons, the morbidity and mortality of fetus or newborns were relatively high. This has therefore been of increasing interest (5)(6)(7)(8). Particularly in cases of multiples with the intrauterine growth retardation, the morbidity and mortality have been reported to be relatively high (9,10). According to Blickstein (9,10) there were differences in the intrauterine growth pattern between singletons, twins and triplets. The author therefore noted that the differential criteria for birth weight depending on the gestational age should be established. In addition, criteria for birth weight depending on the gestational age vary depending on coun-try, ethnicity and sex. Accordingly, the differential criteria for birth weight depending on the gestational age should be established. In Korea, there are established baseline values for singletons and twins (11)(12)(13)(14). But no definite criteria exist for triplets yet. We conducted this study to establish the normal value of birth weight depending on the gestational age in Korean triplets. MATERIALS AND METHODS Of the data about the population status, which was collected by The Korean National Statistics Office, we used a raw data about birth records of a 10-yr period from January 1 1998 to December 31 2007. The total number of newborns who were born during this period was 5,278,646. 
Of these, excluding 20,519 newborns (0.4%) with unknown gestational age, birth weight or plurality, 840 triplets, aged between gestational weeks 27 and 38, were finally enrolled in the current study. The number of triplets whose gestational age was shorter than 27 weeks was extremely small; accordingly, these triplets were not enrolled in the current study. Triplets whose gestational age was longer than 38 weeks were also not enrolled. We obtained the mean birth weight, standard deviation and 10th, 25th, 50th, 75th, and 90th percentile values for each gestational age group in one-week increments. Then, we investigated the birth weight distribution of each gestational age group using the normal Gaussian model. To establish the final standard values of the birth weight distribution by gestational age, we used the finite mixture model to eliminate erroneous birth weights for each gestational age (15-18). We then constructed percentile curves of the birth weight distribution by gestation for triplets. The baseline value of birth weight depending on gestational age in triplets, established as described herein, was compared with the 10th, 50th, and 90th percentile values in singletons and twins (12). A comparison was also made with criteria for triplets established in other countries (19,20). The statistical analysis was performed using STATA 8.0E (Stata Corp., College Station, TX, USA) for the analysis and estimation of the finite mixture model. RESULTS During the 10-yr period from 1998 to 2007, the total number of newborns who were born between gestational weeks 27 and 38 was 1,330,822. Of these, the number of singletons, twins and triplets was 1,330,822, 90,245, and 840, respectively. A mean gestational age was 37.3±1.5 weeks, 36.0±2.0 weeks, and 33.3±2.4 weeks in the corresponding order.
A mean birth weight was 3,071±490 g, 2,414±455 g, and 1,836±454 g in the corresponding order. These results indicate that gestational age and birth weight decreased depending on the plurality (Table 1). Mean age of mothers was 29.7 yr in cases of singletons, 30.2 yr in cases of twins and 30.5 yr in cases of triplets. These results indicate that age of mother increased significantly depending on the plurality ( Table 1). The birth rate of triplets was increased from 0.06% in 1998 to 0.1% in 2007. Table 2 and Fig.1 presents birth weight percentiles for gestational age in triplets. A mean gestational age of male triplets was 33.3±2.6 weeks and that of female triplets was 33.3± 2.3 weeks. This difference did not reach a statistical significance (P=0.388). However, mean birth weight of male babies was 1,872±495 g and that of female babies was 1,799±407 g. This difference reached a statistical significance (P=0.001). In addition, birth weight percentile for gestational age was compared between male and female triplets. This showed that male triplets had a significantly greater birth weight percentile for the gestational age as compared with female triplets in overall gestational age (Table 3). Birth weight percentile for gestational age in triplets was compared with the normal value which we obtained using the same methods in singletons and twins in our previous study (12). According to this, in the overall gestational age ranging from gestational week 27 to 38, it was relatively smaller as compared with birth weight obtained from twins and sin- (Table 4, Fig. 2). And we compared 50th percentile curve of Korean triplets with those of USA and Norway, it was smaller than those in the USA and Norway (Fig. 3). DISCUSSION In Korea, the birth rate has been annually decreasing. However, with the advancement of assisted reproductive techniques such as ovulation-inducing agents, the birth of multiples has been increased. With the well-trained neonatal intensive care unit personnel treating newborns and the well-equipped facility, the survival rate of premature birth has also been increased (6,21). Multiples show a higher degree of the mortality of newborns as compared with singletons. They also are associated with such problems as premature birth, intrauterine growth retardation and low birth weight (5)(6)(7)(8). Although multiples have been of increasing interest, few studies have been conducted to examine triplets in Korea. The current study was conducted using a raw data about birth records which was collected during the recent 10-yr period by the Korean National Statistics Office. And it would be of significance in that it first obtained the normal value of birth weight depending on the changes in the plurality of triplets, mean gestational age, mean birth weight, mean age of mothers and gestational age. According to a review of English literature, triplets have been abruptly increased during the recent 20-to 30-yr period (1,(3)(4)(5). According to the current study, however, it was shown that triplets were increased by approximately two times during a 10-yr period from 1998 to 2007 in Korea. Other studies have shown that the birth rate of multiples was increased as mothers' age was increased (1,4). In Korea, it has also been shown that mean age of mothers of singletons, twins and triplets was increased. These reports suggest that the increased birth rate of triplets originated from the increased age of mothers and the development of assisted reproduction technology. 
In the present study, birth weight was significantly greater in male triplets when compared with female triplets at the same gestational age. These results were in agreement with the previous reports (22,23). This implies that an intrauterine growth pattern may vary genetically between male and female triplets. Triplets had a lower birth weight as compared with singletons or twins, which was also in agreement with the previous reports (22,(24)(25)(26) In particular, from gestational week 32 on, there was a great different from singletons (Table 3). In regard to this, other authors noted that no growth acceleration occurred during the third trimester of pregnancy due to the limitation of intrauterine space (9,10,22,27). Other authors noted, however, that there was no significant differ-ence in the birth weight between singletons, twins and triplets prior to the third trimester of pregnancy. According to the present study, however, there was a significant difference in mean birth weight from gestational week 27. This deserves further studies. When 50th percentile of birth weight of Korean triplets was compared with those of USA and Norway (19,20), it was smaller than those in the USA and Norway during a period from gestational week 28 to 36. This may suggest that an ethnic difference is present in triplets. It might be due to an insufficient amount of the data, however, that the normal value was relatively greater during a period ranging from gestational week 37 to 38 in Korea. In cases of triplets, however, the proportion of cases in which a delivery occurred at gestational week 36 or later was approximately 10% (28). This data could be applied to a clinical setting. Limitations of the current study are as follows: 1) In a statistical analysis of the birth records which were collected during a 10-yr period in Korea, the number of triplets aged gestational week 27 or earlier was extremely small. Accordingly, there was a lack of the data about these triplets. 2) The data which was collected from triplets aged gestational week 37 or later was also problematic. Despite these limitations, the current results are of significance in that it first obtained the normal value of birth weight during a period ranging from gestational week 27 to 38, which is clinically important, in Korean triplets. It is expected that the current results would be of help for the studies and treatment of triplets.
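As an illustration of the approach described in the Materials and Methods, a simplified computational sketch of the per-week percentile tabulation is given below. It is written in Python rather than STATA, the column names (ga_weeks, bw_g) are hypothetical, and the two-component Gaussian mixture used to flag implausible birth weights is only a stand-in for the finite mixture model actually fitted in the study.

```python
# Illustrative sketch only: per-gestational-week percentile tabulation with a
# simple two-component Gaussian mixture used to flag implausible birth weights.
# Column names (ga_weeks, bw_g) are hypothetical, and this mixture step is a
# stand-in for the finite mixture model fitted in STATA in the study itself.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

def percentile_table(records: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for week, grp in records.groupby("ga_weeks"):
        bw = grp["bw_g"].to_numpy().reshape(-1, 1)
        if len(bw) >= 10:
            # Keep the dominant mixture component; drop points better explained
            # by the minor component (treated here as erroneous records).
            gm = GaussianMixture(n_components=2, random_state=0).fit(bw)
            keep = gm.predict(bw) == int(np.argmax(gm.weights_))
            bw = bw[keep]
        cleaned = bw.ravel()
        p10, p25, p50, p75, p90 = np.percentile(cleaned, [10, 25, 50, 75, 90])
        rows.append({"ga_weeks": week, "n": cleaned.size,
                     "mean_g": cleaned.mean(), "sd_g": cleaned.std(ddof=1),
                     "p10": p10, "p25": p25, "p50": p50, "p75": p75, "p90": p90})
    return pd.DataFrame(rows).sort_values("ga_weeks")

# e.g. table = percentile_table(births[(births.ga_weeks >= 27) & (births.ga_weeks <= 38)])
```

In practice the trimming rule and the number of mixture components would be chosen to reproduce the published specification rather than the defaults used here.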
2014-10-01T00:00:00.000Z
2010-05-24T00:00:00.000
{ "year": 2010, "sha1": "c63daa661d8ef13941c71e053735eaa0eaf01ff8", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3346/jkms.2010.25.6.900", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c63daa661d8ef13941c71e053735eaa0eaf01ff8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
225062369
pes2o/s2orc
v3-fos-license
Modeling the US-China trade conflict: a utility theory approach This paper models the US-China trade conflict and attempts to analyze the (optimal) strategic choices. In contrast to the existing literature on the topic, we employ the expected utility theory and examine the conflict mathematically. In both perfect information and incomplete information games, we show that expected net gains diminish as the utility of winning increases because of the costs incurred during the struggle. We find that the best response function exists for China but not for the US during the conflict. We argue that the less the US coerces China to change its existing trade practices, the higher the US expected net gains. China's best choice is to maintain the status quo, and any further aggression in its policy and behavior will aggravate the situation. Introduction The trade conflict between the world's two largest economies-the United States (US) and China-initiated in 2018 has captured massive attention among academics and decision-makers. Major strands of literature discuss the direct economic costs of tariffs (Amiti et al. 2019, Itakura 2019) and repercussions on the international trading system (Lawrence 2018). Some scholars examine the causes of the friction and point out the crux of the issue involves not only the balance of payments but also technological competition (Zhang 2018, Chen et al. 2019). A few researchers analyze the issue through the lens of game theory, yet their studies are rather qualitative (Yin andHamilton 2019, Jiang et al. 2020). This paper attempts to model the bilateral conflict and understand the US' and China's strategic choices in mathematical terms. It presents an extension to the literature on the expected utility theory. The high-level theoretical framework designed in this paper can be applied not only to the economic conflict but to the technology friction between the two powers. Models in this paper capture both players' calculations on expected net benefits from fighting and consider two different situations with perfect information and incomplete information. Fighting in this paper does not mean a sequential trade conflict; instead, it refers to economic coercion and backfires. The paper solves inequality constrained optimization and proves the existence or non-existence of the extrema of expected net gains. Three findings are particularly worth noting. First, when each player's utility of winning increases, their expected net gains diminish. Second, the US best response function exists neither in the perfect nor the incomplete information games. Third, the best response for China is to preserve the status quo. Models In the trade conflict, each player has a chance to either win or lose. For the US, winning means China will be forced to change its trade practice and technology policy while losing indicates that, with the US coercion (e.g., impose massive tariffs on imported Chinese goods), China will not change its existing policies (aka the status quo) or its policies become even more aggressive (e.g., depreciate its currency and enlarge its current account surplus or slash imports from the US). On the flip side, winning and losing outcomes for China are the exact opposite of those of the US. The probabilities of winning and losing are measured in terms of relative capabilities (Cao 2013). In our context, the capacities of the US and China are substantial, constant values, denoted as C i , i = {US, CN}. US capability is more significant than China's. 
1 Hence, the probability that the US wins can be written as α = C US C US +C CN > 0.5, and the probability that China wins can be expressed as β = C CN C US +C CN < 0.5. Because the US and China are rational players, the objective for each is to maximize their expected net gains of fighting. The net gains of player i consist of three components-the utility of fighting, cost, and the utility of not fighting (status quo), which will be defined in the sections below. Perfect information Utilities of winning and losing are denoted by W i and L i , respectively, where W represents winning and L means losing. When player i wins, the cardinal utility will be greater than the one for losing, that is, W i > L i . The cost of fighting is a function of the adversary's incentive to retaliate economically and its capacity. Under perfect information, both players have full knowledge of the opponent's incentive. Let I( k US ) denote the incentive function for China to blowback (e.g., add tariffs on the US goods or devalue the Chinese currency), where k = {W, L}. We assume I is an increasing odd function and I ∈ (−1,1). Additionally, I's first-order derivative, ′ , is bounded. Specifically, we assume is a very small positive value. After scaled by China's capacity, the cost to the US is C CN I( k US ). Similarly, define ( k CN ) as the US incentive function to fight back (e.g., add more tariffs on Chinese goods to threaten China to accede to the US demand). is also an increasing odd function and ∈ (−1,1). Moreover, Lastly, the utility of the status quo is denoted S i . The expected net gains are respectively As mentioned earlier, if the US cannot alter China's policies and behaviors, the state is defined as "lose". If the US can change China's, the state is defined as a "win". Also, the US utility of winning is China's payoff of losing, and the US payoff of losing is China's utility of winning, so we have W CN = S CN = − L US = − S US and W US = − L CN . (2) are simplified to Incomplete information To make our analysis more engrossing and realistic, we consider uncertainty in the incentive functions. For instance, factors such as internal economic problems might impact the commitment to the game effort. The opponent only knows the probability distribution of those factors. Here we define the incentive function for China to fight back as I( k US , ) where z ∈ [0,1] is a random variable (e.g., the impact of a pandemic on the domestic economy) that is independent of k US . f(z) is the probability density function. Likewise, the incentive function for the US is ( k CN , ) where random variable ε ∈ [0,1] and g(ε) is the probability density function. We adopt the same concept of expected net gains in 2.1. We have Analytical results We explore our models and calculate the extrema of i through mathematical analysis. The following propositions state our main analytical results. Proof Notice that the set defined by the three conditions, denoted D, is a pre-compact set in R 2 . Hence, the closure of D, say D ̅ ⊆R 2 , is a compact subset. Actually, D ̅ = {0 ≤ W US ≤ c}⋂{− ≤ S US ≤ −c}. Since US is continuous in D ̅ , so both maximum and minimum exist. We will first find the extreme values of US in D ̅ , then show that the maximum occurs in D ̅ − D and the minimum occurs inside D. We only consider US as a function of W US as S US is determined by China in the game. The partial derivative is US By the assumption of Because we consider C CN is much larger than c, so the minimum value is less than 0. 
On the other hand, US has a maximum in D ̅ − D when W US = 0, so maximum for US does not exist in D. Proposition 2 Consider CN in Eq. (4) subject to There exists a maximum for CN when W CN = c = − L CN , that is, China maintains the status quo, and policies are unchanged. At this point, China's strategy produces the most favorable outcome. The minimum exists when W CN = c , that is, China behaves to a certain level that is worse than the status quo despite the US' coercion. The maximum value is We only consider E CN as function of W CN as L CN is determined by the US. The partial derivative Hence, CN W CN < 0. Given conditions i. and ii., the maximum occurs when W CN = c = − L CN , the best response function exists, and the minimum occurs when W CN = c. Because we consider c is much larger than c and less than C US , the maximum value is greater than 0, and the minimum value is likely less than 0. Proposition 3 Under incomplete information, assume US in Eq. (5) is subject to the same conditions in Proposition 1. By the assumption that ( W US , ) = ( W US ) − ( ) and
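To make the structure of the expected net gains more concrete, the following numerical sketch evaluates a payoff of the form "probability-weighted utilities of winning and losing, minus a retaliation cost scaled by the opponent's capacity and an incentive function, minus the status-quo utility". The closed forms of equations (1)-(5) are not shown above, so the functional form, the tanh incentive function, and all parameter values below are illustrative assumptions rather than the authors' specification.

```python
# Hedged numerical illustration of the expected-net-gain structure described in
# the Models section. The exact form, the tanh incentive function and every
# number below are assumptions for illustration, not the authors' equations.
import numpy as np

C_US, C_CN = 1.5, 1.0                  # assumed capacities, with C_US > C_CN
alpha = C_US / (C_US + C_CN)           # P(US wins) = 0.6 > 0.5
beta = 1.0 - alpha                     # P(China wins)

def incentive(x, scale=1.0):
    """One admissible incentive function: increasing, odd, bounded in (-1, 1)."""
    return np.tanh(x / scale)

def expected_net_gain_us(w_us, s_us=0.0):
    l_us = -w_us                       # simplifying assumption: losing mirrors winning
    cost = C_CN * incentive(w_us)      # China's backlash, scaled by its capacity
    return alpha * w_us + beta * l_us - cost - s_us

for w in (0.25, 0.5, 1.0, 1.5):
    print(f"W_US = {w:>4}: expected net gain = {expected_net_gain_us(w):+.3f}")
```

With these assumed values the marginal retaliation cost outweighs the marginal expected prize over the range swept, so the expected net gain declines as the utility of winning rises, which is the qualitative pattern argued for above.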
2020-10-26T01:00:15.183Z
2020-10-23T00:00:00.000
{ "year": 2020, "sha1": "ccd9a470d349d2d305076ca913c5bd9f5bec364b", "oa_license": null, "oa_url": "http://www.hillpublisher.com/UpFile/202105/20210513172006.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "ccd9a470d349d2d305076ca913c5bd9f5bec364b", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
14342779
pes2o/s2orc
v3-fos-license
Co-association of methotrexate and SPIONs into anti-CD64 antibody-conjugated PLGA nanoparticles for theranostic application Background Rheumatoid arthritis (RA) is an autoimmune disease with severe consequences for the quality of life of sufferers. Regrettably, the inflammatory process involved remains unclear, and finding successful therapies as well as new means for its early diagnosis have proved to be daunting tasks. As macrophages are strongly associated with RA inflammation, effective diagnosis and therapy may encompass the ability to target these cells. In this work, a new approach for targeted therapy and imaging of RA was developed based on the use of multifunctional polymeric nanoparticles. Methods Poly(lactic-co-glycolic acid) nanoparticles were prepared using a single emulsion-evaporation method and comprisaed the co-association of superparamagnetic iron oxide nanoparticles (SPIONs) and methotrexate. The nanoparticles were further functionalized with an antibody against the macrophage-specific receptor, CD64, which is overexpressed at sites of RA. The devised nanoparticles were characterized for mean particle size, polydispersity index, zeta potential, and morphology, as well as the association of SPIONs, methotrexate, and the anti-CD64 antibody. Lastly, the cytotoxicity of the developed nanoparticles was assessed in RAW 264.7 cells using standard MTT and LDH assays. Results The nanoparticles had a mean diameter in the range of 130–200 nm and zeta potential values ranging from −32 mV to −16 mV. Association with either methotrexate or SPIONs did not significantly affect the properties of the nanoparticles. Conjugation with the anti-CD64 antibody, in turn, caused a slight increase in size and surface charge. Transmission electron microscopy confirmed the association of SPIONs within the poly(lactic-co-glycolic acid) matrix. Both anti-CD64 and methotrexate association were confirmed by Fourier transform infrared spectroscopy, and quantified yielding values as high as 36% and 79%, respectively. In vitro toxicity studies confirmed the methotrexate-loaded nanosystem to be more effective than the free drug. Conclusion Multifunctional anti-CD64-conjugated poly(lactic-co-glycolic acid) nanoparticles for the combined delivery of methotrexate and SPIONs were successfully prepared and characterized. This nanosystem has the potential to provide a new theranostic approach for the management of RA. Introduction Rheumatoid arthritis (RA) is one of the most common and severe autoimmune diseases affecting the joints. This chronic inflammatory disease, in which the immune system attacks healthy tissue lining the joints, leads to functional disability and reduced quality of life, as a result of bone and cartilage destruction, joint swelling, and pain. RA is a widely prevalent systemic disease and affects 1% of the population around the globe. [1][2][3] Since the RA inflammatory process remains unclear, finding effective therapies and tools for early diagnosis has been extremely challenging and remain non-existent or with limited efficacy. [1][2][3] Diagnosis of RA can be a demanding task, considering that the disease may occur even before symptoms start to manifest themselves. Additionally, confirmation of the presence of this autoimmune disease requires use of several different criteria to establish a definite diagnosis, leading to a high risk of overtreatment. 
4 Magnetic resonance imaging (MRI) has been attracting considerable medical interest for early disease detection and drug therapy monitoring. 5,6 Superparamagnetic iron oxide nanoparticles (SPIONs) have emerged as highly effective contrast agents for MRI, 6 but active targeting strategies are required in order to increase their accumulation at tissues of interest while decreasing nonspecific biodistribution in order to reduce background interference. 7 Currently, the gold standard for RA therapy is methotrexate (MTX), a drug approved by US Food and Drug Administration. 8 This drug is usually administered together with other disease-modifying antirheumatic drugs, and sometimes in combination with short-term, low-dose glucocorticoids or tumor necrosis factor inhibitors. 9 However, due to the lack of targeting ability using the intravenous formulations available, this therapeutic strategy does not allow specific distribution of MTX to the affected joints, and leads to drug accumulation in healthy tissues, causing harmful side effects. 1,3,10 Therefore, additional research is required in order to develop novel strategies for achieving effective and major long-term approaches for RA therapies, aiming to prevent joint destruction and associated comorbidities. In the particular case of RA, recent studies have proposed that insufficient apoptosis of synovial inflammatory cells, especially macrophages, may contribute to persistence of the disease. Since macrophages play a pivotal role in progression of the disease, effective imaging and therapy systems may rely on the ability to target these cells. 3 Bearing this in mind, a new approach for RA theranostics may take advantage of the vast potential of nanomedicine. A new wave of medical innovation is emerging due to the possibility of multifunctionalization in nanomedicine-based strategies, since nanoparticles (NPs) may have the ability to: carry therapeutic agents; be conjugated to specific ligands, namely antibodies, to target a specific tissue or organ; and amplify imaging signals, by coencapsulating contrast enhancers; among other possibilities. 10,11 This study aimed to develop a nanoparticulate system that can actively target macrophages for RA imaging and therapy by intravenous administration. The work consists of the association of SPIONs as a contrast agent for MRI, and MTX for RA therapy into poly(lactic-co-glycolic acid) (PLGA) NPs. Combining these two agents in a single platform, it may be possible to simultaneously monitor and provide therapy for RA. In addition, the work comprises functionalization of PLGA NPs with a monoclonal antibody against the macrophage-specific cell surface receptor, CD64, which is overexpressed in RA. 12 Different PLGA-based NPs were prepared in order to compare the effects of each component (ie, MTX, SPIONs, and anti-CD64 antibody) on the properties of the NPs. Preparation of nanoparticles Formulations containing PLGA were prepared using a solvent emulsification-evaporation method based on an oil in water (o/w) single emulsion technique. 13 Following the standard procedure, 200 mg of PLGA were dissolved in 2 mL of ethyl acetate, and then added to 8 mL of a 2% (w/v) poly(vinyl alcohol) aqueous solution. The emulsion formed was homogenized using a sonicator (VibraCell VCX 130 equipped with a VC 18 probe, Sonics & Materials Inc., Newtown, CT, USA) at 70% amplitude for 30 seconds. 
The previous emulsion was then added to 15 mL of a 0.2% (w/v) poly(vinyl alcohol) aqueous solution and the organic solvent was removed by evaporation using a rotavapor for 90 minutes (300 hPa, 35°C). NPs were then recovered by centrifugation (21,000× g, 10 minutes, 4°C) and washed three times with 20 mL of water. After final redispersion, the NPs were transferred to 20 mL aluminum-sealed screw-neck vials (La-Pha-Pack ® GmbH, Langerwehe, Germany) and stored at 4°C until further analysis. The same method was used to associate MTX and SPIONs into PLGA NPs, both separate and simultaneously, by adding the previous components (40 µL of SPIONs and/ or 20 mg of MTX) to the organic phase. 14 A schematic of the preparation process is shown in Figure 1. conjugation of anti-cD64 antibody to nanoparticles Considering that CD64 is a FcγRI receptor, 12 the anti-CD64 antibody should be linked through the F ab fragment, leaving the F c region available for macrophage recognition. Therefore, the coupling reaction was carried out in the presence of EDC and NHS, allowing the carboxyl-terminated NPs to react with the primary amine of the antibody present at the F ab region, and yielding an amide bond. Figure 2 summarizes the conjugation process. Following an adapted protocol, 15 10 mL of purified NPs were centrifuged (21,000× g, 10 minutes, 4°C) and redispersed in 10 mL of MES buffer (pH 5.0). The pH was maintained at 5.0 in order to maximize the attachment of EDC to the PLGA carboxyl groups. 15 Activation was achieved by adding 1 mL of 0.1 M EDC and 1 mL of 0.7 M NHS (both dissolved in MES buffer, pH 5.0) to the NP suspension, which was kept at room temperature under moderate stirring for 1 hour. To remove the remaining reagents, the activated NPs were centrifuged (21,000× g, 10 minutes, 4°C) and redispersed in phosphate-buffered saline, yielding a final concentration of 1.0 mg/mL. To conjugate the antibody to the activated NPs, 10 µL of the anti-CD64 antibody solution were added to 1 mL of activated NP suspension. After homogenizing with a vortex mixer, the suspensions were incubated at 4°C for 24 hours. The conjugated NPs were again centrifuged (21,000× g, 10 minutes, 4°C) to remove excess unconjugated antibody and remaining reagents. The supernatant was stored for further antibody quantification analysis. Particle size and zeta potential measurements The produced NPs were characterized for particle size, size distribution (polydispersity index), and zeta potential. Mean hydrodynamic diameter and polydispersity index were assessed by dynamic light scattering using a 90 Plus particle size analyzer (Brookhaven Instruments Corporation, Holtsville, NY, USA) and zeta potential was determined by phase analysis light scattering using a ZetaPALS zeta potential analyzer (Brookhaven) at 660 nm, with a detection angle of 90° at 25°C. All samples were diluted in water to a suitable scattering intensity and measurements were performed with three independent batches of NPs (six runs, ten cycles each). scanning electron microscopy In order to evaluate the surface morphology of the NPs, scanning electron microscopy was performed using a high resolution Quanta™ 400 scanning electron microscope (FEI Company, Hillsboro, OR, USA). Samples were mounted on metal stubs and coated with a gold/palladium thin film by sputtering for 60 seconds, with a 15 mA current, using a SPI Transmission electron microscopy The morphological features of the developed NPs and the presence of SPIONs were assessed by transmission electron microscopy. 
Samples were prepared by placing 10 µL of NP dispersion on a copper-mesh grid and, after 2 minutes, excess water was removed by capillarity using filter paper. For contrasting, 10 µL of 0.75% uranyl acetate solution was added and left at room temperature for 30 seconds. The grids were then observed using a JEM-1400 transmission electron microscope (JEOL Ltd., Tokyo, Japan), with an acceleration voltage of 80 kV. Methotrexate association efficiency The association efficiency of MTX in the NPs was determined by calculating the ratio between the amount of MTX measured in the NPs and the total amount of MTX, quantified both in the NPs and in the three supernatants collected during the purification protocol, as follows: Association efficiency (%) = (MTX in NPs/Total amount of MTX) × 100%. The quantification was performed by high-performance liquid chromatography (HPLC) with ultraviolet detection. The HPLC system comprised an MD-2015 multi-wavelength detector (Jasco, Easton, MD, USA) programmed for peak detection at 302 nm, a high-pressure pump (PU-2089), an autosampler (AS-2057), and a controller (LC-Net II/ADC) mastered by ChromNAV software. A reversed-phase monolithic column, Chromolith RP-18e (100×4.6 mm internal diameter; Merck), connected to a guard column of the same material (5×4.6 mm internal diameter), was used as the stationary phase. Separation conditions were adapted from a previously reported method. 16 Standard MTX solutions were prepared at 1, 3, 6, 10, 25, 50, and 100 µg/mL in mobile phase. To prepare the NP samples for HPLC analyses, 100 µL of NP dispersion or 100 µL of supernatant were added to 900 µL of mobile phase, to a final concentration of 10% (v/v). MTX-free PLGA NPs, namely PLGA NPs and SPIONs-loaded PLGA NPs, were also analyzed and no interference was observed on the chromatograms. Anti-CD64 antibody conjugation efficiency The Bradford assay was performed using the Coomassie Plus™ (Bradford) assay kit to assess the efficiency of anti-CD64 antibody conjugation to the multifunctional NPs, 17 following the manufacturer's instructions. Diluted bovine serum albumin standards were prepared (2.5-25 µg/mL) using the same solvent as used for the samples (supernatant of the centrifugation of activated PLGA NPs). Coomassie Plus™ reagent was added to the supernatant of the centrifugation of anti-CD64-conjugated NPs. The protein concentration for each unknown sample was determined and the conjugation efficiency (%) was assessed as follows: Conjugation efficiency (%) = ((A − P)/A) × 100%, where A is the initial amount of antibody and P is the protein measured in the supernatant. Fourier transform infrared spectroscopy NPs were characterized by Fourier transform infrared spectroscopy (FT-IR), using a Frontier FT-IR spectrometer with a universal attenuated total reflectance sampling accessory (PerkinElmer, Waltham, MA, USA). For each NP spectrum, a 50-scan collection was acquired at 4 cm⁻¹ resolution in the mid-infrared region (3,600-600 cm⁻¹). Effect of nanoparticles on cell viability and cytotoxicity Following exposure to the developed NPs, MTT and LDH assays were performed to measure cell viability and cytotoxicity, respectively. 18 Briefly, RAW 264.7 cells were seeded in 96-well plates at a density of 2.5×10⁴ cells/mL and cultured for 24 hours before use. The following day, the culture medium was removed, and the NP dispersions or free MTX were added at different concentrations (corresponding to 0.01-100 µg/mL MTX).
MTX-free NPs were added at concentrations corresponding to the polymer concentration of the MTX-loaded NPs (approximately 0.15-1,500 µg/mL). Two controls, ie, cells treated with culture medium and cells treated with Triton™ X-100 2% (w/v) in culture medium, were also included. For the MTT assay, 18 after 24 hours of incubation, the culture medium was removed and replaced by 200 µL of MTT diluted in fresh DMEM at 0.5 mg/mL. The plate was incubated for 4 hours at 37°C in the dark. The MTT solution was discarded and formazan crystals were solubilized using 200 µL of dimethyl sulfoxide. The plate was shaken for 10 minutes at room temperature, and absorbance (590 nm, 630 nm) was measured using a Synergy™ HT Multi-mode microplate reader (BioTek Instruments Inc., Winooski, VT, USA). The LDH assay was performed following the instructions. Briefly, after 24 hours of incubation, the plate was centrifuged (250× g, 10 minutes, at room temperature) and 100 µL were collected and transferred to a new 96-well plate. The LDH cytotoxicity detection kit reaction mixture was added, and absorbance (490 nm, 630 nm) was read after 20 minutes of incubation at room temperature in the dark. Cell viability and cytotoxicity were assessed and expressed as a percentage in relation to both controls. statistical analysis Statistical analysis was performed using IBM ® SPSS ® Statistics version 21.0 (IBM Corporation, Armonk, NY, USA). The results are reported as the mean ± standard deviation for a minimum of three independent experiments. The twotailed Student's t-test and one-way analysis of variance were performed to compare two or multiple independent groups, respectively. When the group was significantly different (P0.01), differences between groups were compared with the Tukey's post hoc test. Paired samples were analyzed with the paired-samples two-tailed Student's t-test. Differences were considered to be statistically significant at P0.01. Results and discussion For a successful RA-targeted theranostic approach, it was paramount that all components in the devised PLGA NPs, ie, SPIONs (for imaging diagnosis), MTX (therapeutic drug), and the anti-CD64 antibody (for specific RA macrophage targeting), were effectively integrated in the nanoparticulate system, without significantly altering their known drug delivery characteristics. The NPs were designed as part of an intravenous administration strategy for RA-targeted therapy and imaging. Therefore, the physicochemical properties of the developed NPs, which influence their physical stability and interaction with biological tissues, deserved detailed attention. The NPs were characterized in terms of their particle size, polydispersity index, zeta potential, association with SPIONs and MTX, anti-CD64 antibody conjugation, and their effect on cell viability and cytotoxicity. The mean particle size of the developed NPs, measured by dynamic light scattering, is presented in Table 1 In order to avoid sequestration of NPs in spleen sinusoids and liver fenestrae, the size of a nanosystem should not exceed 200 nm. Further, NPs with a diameter smaller than 6 nm can be excreted by the kidneys, so are rapidly eliminated from the bloodstream. 19 Consequently, the sizes obtained for the developed NPs are suitable for intravenous administration. 
Focusing on the non-conjugated NPs, SPIONs and MTX association did not significantly affect particle size for any of the formulations, suggesting that they do not considerably interfere with NP formation, and that it is possible to create a complex multifunctional nanoparticulate system maintaining the primary properties PLGA NPs. However, conjugation with the anti-CD64 antibody interfered slightly with particle size. NPs underwent a shift in mean particle size, increasing in the order of 30-50 nm (Table 1), which could be explained by the presence of the antibody on the surface of the particles, as well as by a higher water content on hydration of the conjugate. 20 The polydispersity indexes obtained were between 0.1 and 0.3 (Table 1), indicating that well defined and monodispersed nanoparticulate populations were produced with uniform and consistent sizes, and without suffering aggregation. Regarding surface charge, all formulations tested had markedly negative zeta potential values (Table 1). A negative charge is typical of carboxyl-terminated NPs owing to the contribution of the carboxyl groups, which are deprotonated at physiological pH or, in this particular case, at the pH of double-deionized water. 21 Zeta potential values around -30 mV contribute to the stability of hydrophobic particles in aqueous dispersion, avoiding formation of aggregates. 22 The zeta potential values decreased significantly in all formulations after conjugation of anti-CD64 (Table 1). The principle behind the antibody conjugation relies on establishment of a covalent amide bond between the amine and carboxyl termini of the antibody and PLGA, respectively. Consequently, it is expected that partial surface charge shielding occurs due to the depletion of carboxyl groups at the surface as they were involved in the reaction with the amine termini of the antibody. Scanning electron micrographs allowed gathering information about the surface morphology of the NPs. The images show the spherical shape of the NPs as well as their smooth surface, which is devoid of pores ( Figure 3). The NPs had sizes below 200 nm, confirming the results previously obtained by dynamic light scattering. The homogenous and flat surface, not varying between different formulations, suggests that both MTX and SPIONs are entrapped within the PLGA polymeric matrix. Images of the anti-CD64-conjugated PLGA NPs show a more aggregated and gel-like state, possibly due to the functionalization protocol and the presence of the antibody on the NP surface (Figure 3Aii, Bii, Cii and Dii). Transmission electron micrographs (Figure 4) allowed further confirmation of the results obtained by dynamic light scattering and scanning electron microscopy with regard to particle size. The micrographs show a monodispersed population of individual, smooth, and spherical particles with well defined sizes. Association with MTX and SPIONs, as well as anti-CD64 antibody conjugation, did not considerably affect particle shape or overall size. Figure 4Bi-Bii shows SPIONs-loaded NPs in which the SPIONs are evident inside the PLGA NPs as smaller and 4917 Multifunctional nanoparticles for rheumatoid arthritis theranostic electronically denser, well dispersed spots, confirming their efficient association, both alone and when coassociated with MTX (Figure 4Di-Dii). Figure 4Aii, Bii, Cii and Dii shows micrographs of the NP formulations after conjugation with the anti-CD64 antibody. 
A denser and thicker "corona" is apparent surrounding the lighter NP core (Figure 4Aii 4919 Multifunctional nanoparticles for rheumatoid arthritis theranostic A previously described HPLC method was used to quantify the association of MTX in the devised NPs. 16 Given that MTX has a solubility of 0.01 mg/mL in water at 20°C, 8 the MTX-loaded PLGA NPs were prepared using a single emulsion technique in order to achieve elevated values of association efficiency. High efficiency was demonstrated for both MTX-loaded and MTX-and SPIONs-loaded NPs, being 79.1% and 75.5%, respectively (Table 1). These values did not differ significantly between the two different formulations (P=0.32), indicating that a co-association of both agents is possible in a PLGA-based theranostic approach. MTX is a very effective drug against RA but is extremely toxic and has serious side effects, limiting the dose that can be administered, thereby compromising RA therapy. 3,8 A high association between MTX and PLGA in a targeted nanosystem could provide a new opportunity for RA therapy, since the bioavailability, safety, and efficacy of MTX would be improved. The anti-CD64 conjugation efficiency of each formulation is shown in Table 1. The amount of anti-CD64 present in the nanoparticulate system was shown to be approximately 3.5 µg per mg of NPs. Different antibody/NPs ratios were obtained in previous works when conjugating PLGA NPs with different monoclonal antibodies. 15,21,24 However, the conjugation efficiency obtained in this work was significantly higher (31%-37%), because considerably lower amounts of antibody (at least 20-fold less) were used for functionalization of the PLGA NPs. Statistical analysis of the conjugation efficiencies did not show significant differences between the formulations (P=0. 26), indicating that association with both MTX and SPIONs did not considerably affect the main features of the NPs and their interaction with the antibody. FT-IR spectra of the samples were obtained to confirm the association of MTX into PLGA NPs and the functionalization of the NPs with the anti-CD64 antibody. The FT-IR spectra of PLGA NPs and MTX-loaded PLGA NPs were compared to the FT-IR spectrum of free MTX ( Figure 5A). At 1,750 cm -1 , a marked peak indicates the presence of a carbonyl bond (C=O stretching vibration), which is characteristic of PLGA. 25 Further, in the MTX spectrum, it is possible to observe a different peak at 1,638 cm -1 (C=C stretching vibration), which is characteristic of the drug molecule but not of PLGA. 26 The characteristic peak from PLGA was not altered in the MTX-loaded NPs spectrum, and the carbon-carbon double bond typical of MTX is evident in the spectrum, confirming that MTX was successfully associated into the PLGA NPs. The FT-IR spectra of PLGA NPs and anti-CD64 conjugated PLGA NPs were compared with the spectrum of the anti-CD64 antibody ( Figure 5B). At 1,750 cm -1 , the PLGA-characteristic peak is also present in the antibody (C=O stretching bond). 25 26,27 Despite this, in the anti-CD64 spectrum, an additional peak at 1,560 cm -1 stands out, corresponding to the amine groups of the antibody (N-H bending vibrations). 27 In the case of the anti-CD64 conjugated NPs spectrum, this characteristic peak emerges, confirming that anti-CD64 was present in the functionalized PLGA NPs, either covalently linked or physically adsorbed. 
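Before the viability and cytotoxicity results are discussed, it may help to show how such control-normalized percentages are typically derived. The sketch below assumes the common convention of anchoring the medium-only control at 100% viability (0% cytotoxicity) and the Triton X-100 control at 0% viability (100% cytotoxicity); the study only states that results are expressed relative to both controls, and the absorbance values below are hypothetical.

```python
# Sketch of control-normalized MTT viability and LDH cytotoxicity percentages.
# The anchoring of the two controls (medium only vs Triton X-100) follows a
# common convention and is assumed here; absorbance values are hypothetical.
import numpy as np

def mtt_viability(a_sample, a_medium, a_triton):
    """Viability (%): medium-only control = 100 %, Triton X-100 control = 0 %."""
    a_sample = np.asarray(a_sample, dtype=float)
    lo, hi = np.mean(a_triton), np.mean(a_medium)
    return 100.0 * (a_sample - lo) / (hi - lo)

def ldh_cytotoxicity(a_sample, a_medium, a_triton):
    """Cytotoxicity (%): medium-only control = 0 %, Triton X-100 (full lysis) = 100 %."""
    a_sample = np.asarray(a_sample, dtype=float)
    lo, hi = np.mean(a_medium), np.mean(a_triton)
    return 100.0 * (a_sample - lo) / (hi - lo)

# Hypothetical background-corrected readings (e.g. A590 - A630 for MTT):
print(np.round(mtt_viability([0.62, 0.55, 0.40], [0.70, 0.72, 0.69], [0.05, 0.06, 0.05]), 1))
print(np.round(ldh_cytotoxicity([0.35, 0.60, 0.95], [0.20, 0.22, 0.21], [1.10, 1.05, 1.08]), 1))
```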
The effect of NPs on cell viability and cytotoxicity after 24 hours of incubation was studied in vitro on RAW 264.7 cells, performing MTT and LDH assays, respectively, as a function of the devised formulations and the different concentrations of MTX (0.01-100 µg/mL). Both MTT and LDH assays reveal the same tendency and allow similar conclusions to be drawn. All formulations displayed a concentration-dependent effect, with toxicity increasing proportional to the concentration ( Figure 6). The results also demonstrated that the toxicity of MTXloaded NPs was greater than of the free drug, suggesting that a future approach for RA therapy based on these NPs may enhance the therapeutic efficiency of MTX. Additionally, with the exception of the highest concentration, MTX-free NPs did not significantly affect cell viability, confirming the safety profile of the devised nanosystem ( Figure 6). Anti-CD64-conjugated NPs did not originate higher cytotoxicity when compared with non-conjugated NPs. This is justified by the fact that RAW 264.7 macrophages, being a mouse cell line, do not express the human CD64 receptor, and these cells were used in this assay as a cell model that does not express this receptor. In Figure 6, it is apparent that anti-CD64-conjugated MTX-loaded NPs are not as toxic as non-conjugated MTX-loaded NPs, demonstrating that this system may work as a targeted approach. Future work using cell models that express or are modified for overexpressing the CD64 receptor will allow studying of the targeting ability of the anti-CD64-conjugated NPs, aiming for the envisioned theranostic application. In this work, MTX, SPIONs, and anti-CD64 antibody were successfully co-associated into PLGA NPs for the management of RA. The physicochemical features of the devised NPs, ie, their size, zeta potential, morphology, high MTX association efficiency, association with SPIONs, anti-CD64 4921 Multifunctional nanoparticles for rheumatoid arthritis theranostic functionalization, and in vitro safety profile are key elements for a future biomedical and pharmaceutical approach. This new design for a targeted RA theranostic strategy could be considered and studied in order to find new means for RA therapy and also work as an enhanced imaging tool for techniques such as MRI. Conclusion Multifunctional PLGA-based nanocarriers for drug targeting and in vivo imaging are of particular interest due to their biodegradability and biocompatibility. In this work, by effectively co-associating MTX and SPIONs into PLGA NPs, and successfully functionalizing them with an anti-CD64 antibody, a novel attempt was made to achieve targeted therapy and imaging for RA. Overall, the association of both MTX and SPIONs did not significantly affect the properties of the PLGA NPs. The NPs had a reduced particle size and were stable in solution, which are paramount requisites for their application as drug delivery systems. Consequently, the proposed nanoparticulate system may potentiate the action of MTX without injuring healthy tissues and organs, simultaneously providing a non-invasive and specific imaging tool for RA. After their development and thorough characterization in this study, these NPs are now ready for further in vitro studies aiming for the assessment of their performance in targeting RA macrophages and reducing inflammation at sites of RA.
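As a worked complement to the Methods, a minimal sketch of the association- and conjugation-efficiency calculations is given below; the input amounts are placeholders rather than measurements from this study.

```python
# Minimal sketch of the efficiency formulas given in the Methods; the numbers
# below are placeholders, not measurements from the study.
def association_efficiency(mtx_in_nps_ug, mtx_in_supernatants_ug):
    """AE (%) = MTX in NPs / (MTX in NPs + MTX in purification supernatants) x 100."""
    total = mtx_in_nps_ug + mtx_in_supernatants_ug
    return 100.0 * mtx_in_nps_ug / total

def conjugation_efficiency(antibody_added_ug, protein_in_supernatant_ug):
    """CE (%) = (A - P) / A x 100, with A the antibody added and P the unbound protein."""
    return 100.0 * (antibody_added_ug - protein_in_supernatant_ug) / antibody_added_ug

print(association_efficiency(790.0, 210.0))   # -> 79.0 %
print(conjugation_efficiency(10.0, 6.5))      # -> 35.0 %
```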
2016-05-12T22:15:10.714Z
2014-10-23T00:00:00.000
{ "year": 2014, "sha1": "8e36542a742ff9c7fad113c4fa68ddec764a1f57", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=22145", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "53a9a8a99b938bac7e299b0740d0c0d4387ee219", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
265696353
pes2o/s2orc
v3-fos-license
Assessing wineries' performance in managing critical control points for arsenic, lead, and cadmium contamination risk in the wine-making industry: A survey-based analysis utilizing performance indicators as a results tool Human health hazards appear in wine production. Wineries have implemented food safety management systems to control food hazards through Hazard Analysis Critical Control Point (HACCP). Wine-making industry applies HACCP by evaluating Critical Control Points (CCPs). One of the CCPs that exhibits inadequate control is the potential contamination risk of arsenic, cadmium, and lead throughout the winemaking procedure. Wineries performance level about controlling CCPs related to contamination risk by arsenic, cadmium and lead in the winemaking were analyzed. A sixteen-question questionnaire was made to achieve this research. Three indicators were calculated for training, legislation, and analysis performance components in CCPs control. Results revealed that wineries fault in analysis and legislation components. Identification and updating of legislation about As, Cd and Pb contamination risk is in starting performance level for wineries that produce less than 250,000 L/year wineries. Analysis performance level is even lower than legislation. Only one out of every three wineries possess information regarding the concentrations of arsenic, cadmium, and lead in the soils of vineyards where grapes are cultivated. Furthermore, the availability of data on their available concentrations in the soil solution is even more limited. Those wineries that controlled As, Cd and Pb concentrations make it according to official recommendations using techniques based on atomic absorption spectrometry. However, there is a lack of this spectrometry equipment in the wineries own laboratories. Introduction Food hazards in winemaking arise from various sources, including improper practices by winery staff, equipment and infrastructure used in winemaking, and environmental factors.Cross-contamination and allergens have been identified as the primary causes of food safety incident [1].The accumulation of residues, which can have physical, chemical, or microbiological origins, is a major concern in wine production.In this context, studies have shown that excessive intake of ions such as arsenic, lead and cadmium can be toxics for human health [2,3]. Metals and metalloids present in grapes primarily come from the soil and the application of fertilizers, pesticides, and fungicides containing substances like cadmium, copper, manganese, lead, or zinc [4].Spanish soils naturally have elevated concentrations of these elements, particularly arsenic [5][6][7].High arsenic concentrations in vineyard soils in central Spain have been reported by Jimenez-Ballesta (2023) [8].Andersson and Nilsson found that chemical elements like cadmium, lead and arsenic from sewage sludge fertilization (84 t/ha) remained in the top 20 cm of the soil for twelve years [9].However, vines and grapes are not hyperaccumulators of potentially toxic soil elements (PTEs), with higher concentrations of PTEs found on the outer parts (leaf and petiole) compared to the inner parts of the grape (skin, pulp, and seed) [10]. Controlling metals and metalloids food hazards is achieved by addressing associated food risks.Wineries utilize Food Safety Management Systems (FSMS) as to manage these risks.Good manufacturing practices (GMP) in winery operations establish hygienic conditions aimed at preventing the presence of hazardous agents [11]. 
FSMS typically include Prerequisites Programmes (PRPs) [12] and a Hazard Analysis Critical Control Point (HACCP) [13,14] in accordance with the reference regulatory framework in the European Union [15].PRPs ensure appropriated environmental and operational conditions necessary for producing safe and healthy food.PRPs address issues to the supply and use of sanitary water, equipment and facility cleaning and disinfection, pest control and prevention, good manufacturing practices or staff knowledge of food safety, allergens and food traceability [16]. HACCP is a globally recognized and standardized methodology for ensuring food safety.It is based on seven principles that focus on identifying and controlling food safety hazards.The first three principles involve hazard identification, determining CCPs and establishing of critical limits of these CCPs to ensure food safety [17,18].Also, a HACCP plan encompasses control measures that can be employed to proactively prevent, mitigate, or eliminate potential hazards.Regular monitoring and verification procedures, including Table 1 Main CCPs and oPRPs in the production of young wine.rigorous testing, serve to validate the efficacy of CCPs, with swift corrective actions promptly implemented upon detecting any contamination.Meticulous documentation and comprehensive record-keeping throughout the entire HACCP process stand as imperative requirements for ensuring adherence to regulatory standards and facilitating traceability. A critical control point (CCP) is a point in a step or procedure at which a control must be applied and is essential to prevent or eliminate a food safety hazard or reduce it to an acceptable level [1].Conducting a hazard analysis of CCPs enables the identification, evaluation, and control of significant CCPs throughout the food production process.Once potential hazards are identified, their status as a CCP is assessed, and reference limits or critical limits are established through the implementation of preventive measures to prevent deviations from these limits [13,16,19].Identification of CCPs is executed by employing the decision tree framework as outlined by Codex Alimentarius, which was further tailored with ISO 22000: 2018 (E) criteria [20].This procedure helps determine whether the hazards can be effectively addressed as Operational Prerequisite Programme (oPRPs) or whether specific operational protocols, including defined measures, are required for the management of CCPs [20,21].Inadequate control of CCPs can lead to contamination of grapes and wines with microorganisms, residues from phytosanitary products, or traces of heavy metal or metalloids from soils. Christaki, T. ( 2002) identified main CCPs in red wine production [22].Benito, S. ( 2019) conducted a study on the identification and control of CCPs during winemaking to mitigate the levels of various compounds, including biogenic amines, ethyl carbamate, ochratoxin A, and sulfur dioxide [23].Martinez-Rodriguez (2009) studied CCPs associated with microbial safety during wine production, with a particular focus on Ochratoxin A [24].Table 1 shows main CCPs and oPRPs in the production of young wine based on previous research [20,[22][23][24][25]. Fig. 1 shows a flow diagram of wine-making process and corresponding CCPs to each stage of this process based on technical document [26] and Table 1. Lopez-Santiago (2022) demonstrated that the presence of traces of heavy metals and metalloids in grapes and wines were inadequately controlled CCPs in wineries [27]. 
These metals and metalloids contamination hazards correspond to CCP 3.1, which involves controlling the presence of metals and metalloids (Cd, Pb, As) in grapes at the Harvest reception stage in the winery, and CCP 9.1, which focuses on controlling the limit concentrations of metals (traces of As, Cu, Pb) in red wine during the Cold stabilization stage in the flow diagram of the young winemaking process as shown in Fig. 1. According to Lopez-Santiago, fifty percent of the wineries exhibited a complete lack of control over the contamination hazards of arsenic, lead and cadmium in grapes and wines [27]. Herce-Pagliai determined that the concentration of arsenic in Spanish wines ranged from 2.1 to 14.6 μg/L, and the average total arsenic concentration was similar across all wine samples [28].A review study by the Organisation Internationale de la Vigne et du Vin (OIV) showed that Spanish wines consistently comply with lead concentration limits.The study analyzed sixty-five white and red wines obtaining that lead concentration was below 0.05 mg/L [29]. International and national regulations establish maximum allowable levels of heavy metals and metalloids in grapes and wines to prevent toxicity issues for consumers.The European legislative framework sets maximum permitted concentrations levels for arsenic, lead and cadmium, along with guidelines for monitoring these levels [30][31][32][33][34].According to OIV, the maximum acceptable limits for certain metals in wine are 0.2 mg/L for arsenic, 0.01 mg/L for cadmium, 0.15 mg/L for lead [35].German legislation sets 0.1 mg/L for arsenic, 0.01 mg/L for cadmium, and 0.3 mg/L for lead, while Italian legislation sets it at 0.3 mg/L [36]. The control of metals and metalloids in grapes and wines is achieved through analytical methods recommended by the OIV, primarily based on atomic absorption spectrometry due to its selectivity, sensitivity, and ability to directly measure these elements.Graphite furnace atomic absorption spectrometry (GFAAS) or electrothermal atomization (ETAAS) are used for arsenic, cadmium, and lead analysis.GFAAS allows detection limits to be lowered to the parts per billion (ppb) range with relative simplicity and eliminating the need for prior extraction techniques [37].Table 2 presents the OIV recommended methods for determining Arsenic [38], Cadmium [39] and Lead [40] in wines and must. Within this framework, the primary aim of this research is to assess the effectiveness of wineries in managing Critical Control Points (CCPs) associated with contamination risks posed by arsenic, cadmium, and lead in grapes and wines.Additionally, the study aims to develop a methodology for evaluating their advancement, incorporating the use of performance indicators within the HACCP plan to highlight the element of training as a corrective action.Furthermore, the study aims to identify the challenges that impede achieving adequate control. Table 2 International methods to determine arsenic, cadmium, and lead in wines and must recommended by the OIV. Study design The research study design was proposed by conducting a survey and its later analysis using right statistical methods.The sample was selected among Spanish wineries from different wine regions.During last half of 2022, the survey was conducted, and then, SPSS Windows software SPSS was used to analyse the data (IBM Corp. 
Released 2020.IBM SPSS Statistics for Windows, Version 27.0.Armonk, NY, USA: IBM Corp.).The calculated statistics were frequencies and central position values.Non-parametric tests were estimated obtaining Spearman correlation coefficient (ρ) and Kendall's Tau coefficient (τ) for nonparametric data, with a significance level of p < 0.01.Non-parametric Mann-Whitney U Test for two independent samples were applied, with a significance level of p < 0.05. Sample selection Spain had approximately 4133 wineries in 2020 [41] and one hundred and one Wine Protected Designation of Origin (WPDO) [42].One hundred-thirty-nine wineries were selected from different WPDOs for this research.The sampling methodology selected was the non-probabilistic method [43,44].Researchers used previous information to make the sample selection, instead of random selection [45].The criteria for configuring the sample were that there was diversity in wineries′ sizes according to their annual wine production, wineries must belong to a WPDO, and has been HACCP implemented.Wineries were asked about their performance in controlling the risk of arsenic, lead, and cadmium contamination critical control point in the winemaking. The questionnaire was sent twice to all the wineries in the sample, and additionally, the questionnaire was sent once again to fifty of them through the 'Contact' section of their website.Thirty-two wineries answered the questionnaire, which represents 23 % of the wineries sampled. Survey preparation The survey design consisted of a questionnaire divided into four sections, with a total of fifteen closed-ended questions.The questions were developed based on previous research studies [2,3,10,19,29,46,47].The questionnaire can be found in Appendix A. This type of questionnaire is commonly used in causal, descriptive, and conclusive research [48].qualitative scale and a quantitative variable represented by a Likert scale (ranging from 0.33 to 1).Variable V G6 indicated the level of workers trained in CCPs, with values of 0.33 representing "No workers have training in control and monitoring of CCPs," 0.66 representing "More than half of workers have training in control and monitoring of CCPs," and 1 representing "All workers have training in control and monitoring of CCPs."Question ID.1 was a multiple-choice question assessing winery performance in identifying legislation related to arsenic, cadmium, and lead.It was coded using a Likert scale, with the variable V ID1 (ranging from 0.33 to 1) representing winery performance regarding legislation identification.Question ID.2 was a dichotomous (yes/no) question asking about winery identification of updated information from the Spanish Agency for Food Safety and Nutrition (AESAN) on heavy metals and metalloids food risk.Variable V ID2 took values of 0 for "No" and 1 for "Yes." Question CS1 was a dichotomous (yes/no) question about winery information regarding vineyard soil physical and chemical analysis.Variable V CS1 took values of 0 for "No" and 1 for "Yes."Question CS2 was a dichotomous (yes/no) question about winery information regarding fertilizer use in vineyard soils.Variable V CS2 took values of 0 for "No" and 1 for "Yes."Question CS3 was a multiple-choice question with eight options regarding winery information on soil chemical properties.Variable V CS3 calculated the cumulative value (0.125) assigned to each selected option, indicating the level of winery knowledge regarding specific soil chemical properties. 
Question V CB1 was a dichotomous (yes/no) question about whether the winery had its own laboratory for chemical analyses. Variable V CB1 took values of 0 for "No" and 1 for "Yes". Question CB2 was a dichotomous (yes/no) question about whether the winery used an external laboratory for chemical analyses. Variable V CB2 took values of 0 for "No" and 1 for "Yes". Question CB3 was a dichotomous (yes/no) question about whether the winery had its own atomic absorption spectrometer and staff for metal analyses. Variable V CB3 took values of 0 for "No" and 1 for "Yes". Question CB4 was a dichotomous (yes/no) question about whether the winery used an external laboratory for metal trace analyses. Variable V CB4 took values of 0 for "No" and 1 for "Yes". Finally, there was a multiple-choice question about the job position of the survey respondent. We have formulated three hypotheses to be examined in this study. Hypothesis one (H1) proposed that workers who received adequate GMP and CCP training demonstrate a satisfactory level of CCP control in the wineries. Hypothesis two (H2) stated that the legislation performance component, regarding its identification and updating with respect to the contamination risk posed by arsenic, cadmium, and lead, has reached a mature level in the wineries. Hypothesis three (H3) proposed that wineries possess information about the concentrations of arsenic, cadmium, and lead in the vineyard soils from which the grapes (raw material) are harvested, and that wineries have adequate spectrometric equipment for their identification. Results Fig. 3 shows the distribution of the wineries in five groups according to their answers about their yearly wine production. Regarding FSMS implementation, 96.9% of the wineries had implemented PRPs according to food hygiene legislation, and 93.8% of the wineries had implemented a HACCP plan. Performance of wineries in relation to food safety training component Table 3 shows the results for the percentage of workers trained in good manufacturing practices in winemaking (GMP), the percentage of workers trained in the monitoring and control of CCPs, and the median and arithmetic mean number of workers by type of winery. All workers in wineries with annual production over 250,001 L/year have received GMP training, and all workers in wineries with annual production over 500,000 L/year have received CCPs training. Although some wineries producing between 25,001-250,000 L/year have all workers without GMP or CCPs training, it can be said that most of the wineries have some GMP-trained and CCPs-trained workers. The Spearman correlation coefficient (ρ) is 0.686 and Kendall's Tau coefficient (τ) is 0.653 between variables V G5 and V G6. This shows there is a positive correlation between GMP worker training and CCPs worker training. Results show that as the winery gets bigger according to its yearly wine production, it has more workers trained in GMP and CCPs. However, the percentage of trained workers is also high in smaller wineries. This is because the number of workers in this winery group ranges between two and three, so training a single worker already yields values of fifty percent. This finding is in agreement with the study conducted by Lee J.C. et al. [49], which identified significant increases in the application of GMP, GHP, and equipment design prerequisites, as well as all HACCP systems, in European companies.
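The correlation coefficients quoted in this and the following sections follow the nonparametric analysis outlined in the study design. A minimal sketch of the variable coding and of the Spearman/Kendall computation, written in Python rather than SPSS and using hypothetical responses, is given below.

```python
# Sketch (in Python rather than SPSS) of the variable coding and nonparametric
# correlations described above; the response data are hypothetical.
import pandas as pd
from scipy.stats import spearmanr, kendalltau

likert3 = {"none": 0.33, "more_than_half": 0.66, "all": 1.0}   # V_G5, V_G6 coding
yes_no = {"no": 0, "yes": 1}                                   # e.g. V_ID2, V_CS1 coding

responses = pd.DataFrame({
    "gmp_training": ["all", "more_than_half", "none", "all", "more_than_half"],
    "ccp_training": ["all", "more_than_half", "none", "more_than_half", "none"],
})
v_g5 = responses["gmp_training"].map(likert3)
v_g6 = responses["ccp_training"].map(likert3)

rho, p_rho = spearmanr(v_g5, v_g6)
tau, p_tau = kendalltau(v_g5, v_g6)
print(f"Spearman rho={rho:.3f} (p={p_rho:.3f}), Kendall tau={tau:.3f} (p={p_tau:.3f})")
```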
In general, wineries train a higher percentage of workers in GMP than in CCPs. One in two wineries has half of its workers untrained in CCPs. This is a difficulty for the identification and control of the CCP related to the risk of contamination by arsenic, cadmium, and lead during the winemaking process. Wineries that provide training to their workers do so in both GMP and CCPs.

A quantitative analysis of food safety worker training (FSWT) was performed based on an indicator defined by equation (1) [50,51]:

W fswt = (V G5 + V G6) / n (1)

where:
• W fswt is the aggregated FSWT variable for each winery,
• V G5 is the variable that stands for the level of workers trained in GMP, and takes values 0.33, 0.66 or 1,
• V G6 is the variable that represents the level of workers trained in CCPs, and takes values 0.33, 0.66 or 1,
• n is the number of variables that have been aggregated, and its value is 2.

The FSWT indicator (I fswt) for each group of wineries, according to their yearly wine production size, is obtained by equation (2):

I fswt = (Σ W fswt) / m, where the sum runs over the wineries of the group (2)

where:
• I fswt is the FSWT indicator for each group of wineries according to yearly wine production,
• W fswt is the FSWT variable for each winery,
• m is the number of wineries of the related group.

This indicator is dimensionless and expresses the grade of progress achieved regarding food safety worker training in each of the winery groups, according to their annual production. The grades of progress are defined as Star (I fswt between 0 and 0.33), In progress (I fswt between 0.34 and 0.67), and Maturity (I fswt between 0.67 and 1). I fswt values for each winery size group according to their annual wine production are shown in Table 6.

Performance of wineries in relation to the heavy metal and metalloids food contamination risk legislation component

The second section results are collected in Table 4. This table shows the heavy metal and metalloids food contamination risk (HMFCR) legislation identification and the HMFCR legislation updating through the National Agency (AESAN), by type of winery.

A low percentage of wineries have identified the arsenic, cadmium, and lead contamination risk legislation. Rates are higher in wineries with wine production over 250,000 L/year; in this case, two out of three wineries have identified the arsenic, cadmium, and lead legislation. Wineries between 100,001 and 250,000 L/year have the lowest rate (28.6 %).
The National Agency that integrates and performs the functions related to food safety within the competence framework of the General Administration of Spain is the Agencia Española de Seguridad Alimentaria y Nutrición (AESAN) [52]. Information on the applicable arsenic, cadmium and lead food contamination risk legislation is available and kept updated on the AESAN website [53]. However, AESAN information is used by only one-third of small to medium-sized wineries (up to 250,000 L/year). Bigger wineries (over 250,001 L/year) have a better rate (66.7 %), but it is still insufficient. HMFCR legislation is not clearly identified in wineries, and this occurs especially in wineries with annual wine production under 250,001 L/year. The Spearman correlation coefficient (ρ) is 0.894 and Kendall's tau coefficient (τ) is 0.873 between variables V ID1 and V ID2. This shows a strong positive relationship between the identification of HMFCR legislation and its updating through AESAN. In addition, correlation coefficients were calculated between the CCP training and legislation identification variables (V G6 and V ID1): the Spearman correlation coefficient (ρ) is 0.413 and Kendall's tau coefficient (τ) is 0.393. A positive correlation exists between the workers' CCP training and HMFCR legislation identification in the wineries.

A quantitative analysis of legislation identification and updating (LIU) was performed based on an indicator defined by equation (3):

W liu = (V ID1 + V ID2) / n (3)

where:
• W liu is the aggregated LIU variable for each winery,
• V ID1 is the variable that stands for winery performance in identifying the legislation about arsenic, cadmium, and lead, and takes values 0.33, 0.66 or 1,
• V ID2 is the variable that represents winery performance in updating the legislation information through AESAN, and takes values 0 or 1,
• n is the number of variables that have been aggregated, and its value is 2.

The LIU indicator (I liu) for each group of wineries, according to their yearly wine production size, is obtained by equation (4):

I liu = (Σ W liu) / m, where the sum runs over the wineries of the group (4)

where:
• I liu is the LIU indicator for each group of wineries according to yearly wine production,
• W liu is the LIU variable for each winery,
• m is the number of wineries of the related group.

This indicator is dimensionless and expresses the grade of progress achieved regarding legislation identification and updating in each of the winery groups, according to their annual production. The grades of progress are defined as Star (I liu between 0 and 0.33), In progress (I liu between 0.34 and 0.67), and Maturity (I liu between 0.67 and 1). I liu values for each winery size group according to their annual wine production are shown in Table 6.

Performance of wineries in relation to the chemical analysis of the vineyards regarding arsenic, cadmium, and lead

Most wineries have data about the vineyard soils' physical and chemical analysis, and information about the fertilizers used in the vineyard soils. The Spearman correlation coefficient (ρ) between V CS1 and V CS2 is 0.686. This positive correlation demonstrates that wineries that possess information about the physical and chemical analysis of the vineyard soil tend also to have information about the fertilizers used in those vineyard soils. The percentage of wineries that have information related to the soil chemical analysis of the vineyards where the grapes come from is shown in Fig. 4.
However, this information is mostly about soil pH (87.5 %) and soil electrical conductivity (71.9 %). The number of wineries that have chemical information about arsenic, cadmium, and lead concentrations decreases considerably. Table 5 shows data on the percentage of wineries, segmented according to their level of annual wine production, which have information regarding arsenic, cadmium, and lead total/available concentrations.

One out of every three wineries possesses data regarding the cumulative levels of arsenic, cadmium, and lead concentrations in the soil. Moreover, the percentage of wineries decreases when the information concerns soil solution concentrations of arsenic, cadmium, and lead: one in ten possesses data concerning the presence of arsenic in vineyard soils, while two in ten wineries have information pertaining to the concentrations of cadmium and lead in the same soil samples.

Total cadmium and lead concentrations are the data that wineries most often have, especially the largest wineries (over 250,001 L/year). By contrast, information about the total arsenic concentration is held by an extremely low percentage of wineries in all groups. The percentages of wineries holding information about the concentrations of arsenic, cadmium, and lead in the soil solution are lower than those relating to the total concentration. The information related to the concentration of cadmium available in the soil solution presents the highest percentage, being 66.7 % in the wineries between 250,001 and 500,000 L/year and 28.6 % in the wineries between 100,001 and 250,000 L/year. The lack of information on arsenic, cadmium, and lead soil concentrations is a difficulty for CCP control, as it impedes an adequate assessment of the risk that grapes used for winemaking may have been contaminated during cultivation or harvest.

A high percentage of wineries (78.1 %) have their own laboratory in their facilities to perform chemical analyses of grapes and wines. Two out of every ten wineries do not have their own laboratory; in this case, most of them (92.3 %) use an external laboratory service to perform chemical analyses of grapes and wines. In this context, control of arsenic, cadmium, and lead in grapes and wines is carried out by analytical techniques based on atomic absorption spectrometry. Only one out of every ten wineries possesses the necessary technological equipment and qualified staff capable of conducting heavy metal analysis using atomic absorption spectrometry; a high percentage of wineries (93.8 %) do not have them. Of the wineries without atomic absorption spectrometry equipment and qualified staff, 35.2 % use external laboratories to analyse heavy metal and metalloid concentrations in grapes and wines. This result shows that the lack of technical means for qualitative and quantitative analysis in wineries is a barrier to good performance in controlling the arsenic, cadmium and lead contamination risk in grapes and wines.

A quantitative analysis of the arsenic, cadmium, and lead critical control point chemical analysis performance (CCP-MCHEM) was evaluated based on an indicator defined by equation (5):

W ccp-Mchem = (V CS3 + V CBrx + V CBry) / n (5)

where:
• W ccp-Mchem is the aggregated CCP-MCHEM variable for each winery,
• V CS3 is the variable that stands for the chemical information about arsenic, cadmium, and lead concentrations in soil that the winery had, computed as
V CS3 = Σ a j (j = 1, ..., 8), where a j is each item of this multiple-choice question (yes = 0.125, no = 0),
• V CBrx is the variable that represents the winery's capacity to carry out chemical analyses by its own or external means,
• V CBry is the variable that stands for the winery's capacity to carry out arsenic, cadmium and lead chemical analyses by its own or external means,
• n is the number of variables that have been aggregated, and its value is 3.

Table 7. Relationship questions among winery performance variables, the variables they involve, and MWU results:
• Did the grade of progress of the training component differ according to whether wineries conducted identification and updating of the arsenic, cadmium, and lead contamination risk legislation available in AESAN, or did not? W fswt as the dependent variable, V ID2 as the independent (grouping) variable (V ID2 = 0, V ID2 = 1); Z = −2.673, bilateral significance = 0.008, exact significance = 0.013, Monte Carlo significance = 0.009 (lower limit = 0.007, upper limit = 0.011), p < 0.05. Null hypothesis rejected (differed).
• Did the grade of progress of the training component differ according to the wineries' capacity to do arsenic, cadmium and lead chemical analysis by their own or external means, or did not? Null hypothesis accepted (did not differ).
• Did the grade of progress of the legislation component differ according to the wineries' capacity to do arsenic, cadmium and lead chemical analysis by their own or external means, or did not? Null hypothesis rejected (differed).
• Did the grade of progress of the legislation component differ according to whether wineries conducted identification and updating of the arsenic, cadmium, and lead contamination risk legislation available in AESAN, or did not? Null hypothesis rejected (differed).
• Did the grade of progress of the analysis component differ according to whether the winery had information related to the vineyard soils' physical and chemical analysis, or did not? Null hypothesis rejected (differed).
• Did the grade of progress of the analysis component differ according to whether the winery had information on the fertilizers used in the vineyard soils, or did not? Null hypothesis rejected (differed).

The CCP-MCHEM indicator (I ccp-Mchem) for each group of wineries, according to their yearly wine production size, is obtained by equation (6):

I ccp-Mchem = (Σ W ccp-Mchem) / m, where the sum runs over the wineries of the group (6)

where:
• I ccp-Mchem is the CCP-MCHEM indicator for each group of wineries according to yearly wine production,
• m is the number of wineries of the related group.

This indicator is dimensionless and expresses the grade of progress achieved regarding the arsenic, cadmium, and lead critical control point chemical analysis performance in each of the winery groups, according to their annual production. The grades of progress are defined as Star (I ccp-Mchem between 0 and 0.33), In progress (I ccp-Mchem between 0.34 and 0.67), and Maturity (I ccp-Mchem between 0.67 and 1). I ccp-Mchem values for each winery size group according to their annual wine production are shown in Table 6.

Matrix and graph of the grade of progress in wineries generated by the performance indicators

The three performance indicators make it possible to determine the degree of progress that each group of wineries has reached in the control that they conduct on the contamination risk of arsenic, cadmium, and lead in grapes and wines. Table 6 shows the values of I fswt, I liu and I ccp-Mchem by winery size group and grade of progress.

Effectiveness in conducting risk control is divided into three components: the training component, the legislation component, and the analysis component.
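A compact sketch of how the three indicators and their grades of progress can be computed per production group is given below; the winery records and group labels are made up for illustration, and the aggregation is the averaged form of equations (1)-(6).

```python
import statistics

# Hypothetical per-winery records: production group and coded survey variables
# (coded as described in the text: 0, 0.33, 0.66, 1, or sums of 0.125 for V_CS3).
wineries = [
    {"group": "<=25,000 L/year",        "V_G5": 1.00, "V_G6": 0.66, "V_ID1": 0.33, "V_ID2": 0, "V_CS3": 0.25, "V_CBrx": 1, "V_CBry": 0},
    {"group": "<=25,000 L/year",        "V_G5": 0.66, "V_G6": 0.33, "V_ID1": 0.66, "V_ID2": 1, "V_CS3": 0.50, "V_CBrx": 1, "V_CBry": 1},
    {"group": "250,001-500,000 L/year", "V_G5": 1.00, "V_G6": 1.00, "V_ID1": 1.00, "V_ID2": 1, "V_CS3": 0.75, "V_CBrx": 1, "V_CBry": 1},
]

def aggregate(winery, keys):
    """W variable for one winery: average of the aggregated V_* variables (equations (1), (3), (5))."""
    return sum(winery[k] for k in keys) / len(keys)

def grade(value):
    """Map an indicator value to the grade of progress used in Table 6."""
    return "Star" if value <= 0.33 else ("In progress" if value <= 0.67 else "Maturity")

components = {
    "FSWT (training)":      ["V_G5", "V_G6"],
    "LIU (legislation)":    ["V_ID1", "V_ID2"],
    "CCP-MCHEM (analysis)": ["V_CS3", "V_CBrx", "V_CBry"],
}

groups = sorted({w["group"] for w in wineries})
for name, keys in components.items():
    for g in groups:
        members = [w for w in wineries if w["group"] == g]
        indicator = statistics.mean(aggregate(w, keys) for w in members)  # equations (2), (4), (6)
        print(f"{name:22s} {g:26s} I = {indicator:.2f} -> {grade(indicator)}")
```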
Each indicator stands for the degree of progress on a component.I f swt is performance indicator that shows progress in the training component.I liu is the indicator that shows progress in the legislation component and I ccp− Mchem is the indicator that shows progress in analysis component.Components performance level by each winery sizes group on the contamination risk by arsenic, cadmium and lead in grapes and wines in Fig. 5. Training is the component with the greatest maturity while analysis is the component with the least maturity regarding to this contamination risk control.Wineries below 250,000 L/year are in the starting performance level of the analysis component, and wineries between 25,001 and 250,000 L/year are in the starting performance level of the legislation component. Non-parametric Mann-Whitney U Tests (MWU) were applied to find relationships among wineries performance variables.Table 7 shows relationship questions, their imply variables and MWU results.Grade of progress of training component differed according to whether wineries conducted identification and updating the legislation related to risk of arsenic, cadmium, and lead contamination available in AESAN, but it was not different regarding to whether wineries had capacity to do arsenic, cadmium and lead chemical analysis by any means.Grade of progress of the legislation component was different according whether wineries had capacity to do arsenic, cadmium and lead chemical analysis by any means and differed regarding to if wineries conducted identification and updating arsenic, cadmium, and lead contamination risk legislation available in AESAN or did not. Grade of progress of the analysis component differed according if winery had information regarding to vineyards soil physicalchemical analysis and, according to if winery had fertilizer information used in vineyard soils or had not. Discussion An evaluation of how wineries are managing critical control point about controlling the contamination risk by arsenic, cadmium and lead in their winemaking processes is to prevent poisoning and diseases in consumers.Studies have identified health problems caused by As, Cd, and Pb [54,55]. Fertilizers or the environment are sources of arsenic, cadmium and lead that can contaminate grapes and wines used in wine production [9,55,56]. GMP and CCP workers training FSMS are tools to accomplish with food hazards control as heavy metals and metalloids traces in grapes and wines.There is an important level of implementation of FSMS in wineries.96.9 % of wineries have a Prerequisites Programmes and 93.8 % have HACCP.Besides, workers trained in GMP and CCPs have a satisfactory level for all wineries. The study reveals that hypothesis 1 (H1) is true.The training component has reached an adequate maturity level of performance.65.6 % of wineries have all their workers trained in GMP and 53.1 % of wineries have all their workers trained in CCPs.These percentages arise to 100 % in wineries with annual production over 500,001 L/year.Results show that wineries′ workers who receive training in GMP, also receive training in CCPs.Similarly, other related studies focusing on controlling OTA mycotoxin contamination in wines have also demonstrated that training in GMP and CCPs has been identified as a significant contributing factor to the successful prevention of such contamination [24].Our findings are consistent with the research conducted by Lee, J.C. et al. 
[49], wherein the importance of aspects related to food safety culture is underscored, particularly about human factors and specialized training. Knowledge and application of food safety legislation Hypothesis 2 (H2) is not fulfilled.Legislation component is still in progress level regarding performance maturity in four up five wineries groups.European legislation on food safety related to contamination control risk from arsenic, cadmium and lead is accessible.Study shows that only one out three wineries with annual production less than 250,000 L/year has identified and updated the HMCR legislation.The HCMR legislation identification percentage is 66.7 % in wineries with annual production over 250,001 L/year.Two out of three wineries that have the HCMR legislation identified conduct its update through the information available in the Food Safety Spanish Agency (AESAN).Mere publication of food risk information on the AESAN website is not enough for wineries to incorporate it into their control system.Two out of three wineries do not have updated legislation and do not use the information provided by AESAN. Most of the wineries that have identified the HCMR legislation, also have their staff trained in CCPs.Identification of food risk control legislation is different according to the wine production level of the wineries and their workers′ knowledge.The public administration does not provide sufficient references and means for wineries to properly develop and implement the control of CCP related to the risk contamination of arsenic, cadmium, and lead.Our result is aligned with Vela A.R. & Fernández [56], that demonstrated public administrations received a low score from companies regarding to the references provided by the administration (reports, books, and articles) to facilitate the development and implementation of FSMS.Matches with the alcoholic beverage industry's weakness in compliance with food safety legislation demonstrated by Kourtis L.K [57]. Charlotte Yap [58] found that knowledge about the general principles of food safety and its requirements is low in small and medium-sized agrifood businesses.This often leads to regulatory requirements being underestimated and not considered.This lack of knowledge regarding the risk of heavy metal and metalloid contamination in food is a weakness observed in wineries' control over their winemaking processes. Our findings also align with the study conducted by Allam, M. et al. [59], which identified areas where organic food producers and processors in several European countries require further guidance and support in food safety, particularly in their proficiency in performing hazard analysis and creating documents and records following HACCP principles. 
Analysis and availability of chemical data related to the control of contamination by As, Cd, and Pb Hypothesis 3 (H3) is not accomplished.Analysis is the lower performance component for all winery groups, except 100,000 to 250,000 L/year.We found that many wineries have not information about arsenic, cadmium, and lead concentrations in the vineyards soils, and the wineries percentage that owns this information is different according to their wine annual production level.Nine of ten wineries have information about soil physical and chemical characteristics, such as pH and EC, and information about the fertilizers used in the vineyards.However, these values strongly decrease about the arsenic, cadmium, and lead concentrations in these soils.The proportion of wineries equipped with data on the cumulative concentrations of arsenic, cadmium, and lead is remarkably low.However, these percentages become even more diminished when considering the availability of information concerning specific arsenic, cadmium, and lead available concentrations in the soil.Out of the ten wineries, two wineries have this chemical concentration information about cadmium and lead, and one winery has this chemical concentration information about arsenic.A deficient control of these the arsenic, cadmium, and lead in soils implies that food hazards as the appearance of heavy metal traces in grapes or wine may occur [60,61].The wineries use techniques based on atomic absorption spectrometry for the identification of the arsenic, cadmium, and lead in the grapes and wines.This technique is recommended by the OIV.However, there is a lack of this spectrometry equipment in the wineries own laboratories, so they need to use external analytical services.Results show that half of the wineries (48.59 %) use atomic absorption spectrometry analysis to detect the presence of arsenic, cadmium, and lead in the grapes and wines through external analytical services.Courtney K. Tanabe (2019) identified the absence of on-site chemical analytical tools as a factor that influences the arsenic content in grapes and wines [62]. Limitation and strength of the study One of the principal weaknesses of this study lies in the low response rate obtained from the surveyed wineries, potentially impacting the representativeness and generalizability of the data to the entire wine industry.However, a fundamental strength of this research is rooted in the meticulous development of performance indicators based on the questionnaire methodology.This tool provides a robust foundation for data analysis and enables the evaluation of wineries' effectiveness in managing Critical Control Points (CCPs) associated with contamination risks posed by arsenic, cadmium, and lead in grapes and wines. Conclusions This research shows that although most wineries have FSMS implemented, the CCP identification and control related to arsenic, cadmium and lead contamination risk needs to be improved. Wineries must adequately control the risk of arsenic, cadmium, and lead contamination in the wine production process.To do it, first, wineries must be aware of the need to know, update and implement European legislation.European legislation sets the guidelines to prevent health risks that may arise from the intake of arsenic, cadmium, or lead by setting limits on the admissible concentration of these metals in wines.In addition, wineries′ workers′ training in GMP and CCPs control is a success factor in preventing contamination by arsenic, cadmium, and lead in wines. 
A strength in the performance of the CCP control aimed at assessing the risk of arsenic, cadmium and lead contamination is the high training level of workers on this topic. However, the wineries' performance regarding identification of the applicable legislation on the heavy metal contamination risk is very low. This constitutes a difficulty for good performance of the CCP control relative to the risk of contamination by arsenic, cadmium, and lead. In addition, another difficulty for this performance is the lack of information regarding the sources of contamination by arsenic, cadmium, and lead in grapes. Soil analysis data about pH and EC are available to wineries, but data about the concentrations of As, Cd and Pb in the soil are often missing.

Another barrier to good performance of the CCP related to contamination of grapes by arsenic, cadmium, and lead is the lack of spectrometry equipment in the laboratories of the wineries. Even when external services are hired for spectrometric analysis, the percentage of wineries that perform it is very low. Providing wineries with laboratory spectrometry equipment and human resources to carry out complete chemical analyses of soils, grapes and wines would allow adequate control of the CCP related to the risk of contamination of their grapes and wines by heavy metals and metalloids.

Fig. 1. Flow diagram of the young wine making process and the corresponding CCPs at each stage of this process.
Fig. 2. Structure of the questionnaire encompassing four sections with its specific content.
Fig. 4. Percentage of wineries that have information related to the soil chemical analysis of the vineyards regarding arsenic, cadmium and lead concentrations, soil pH, and EC.
Fig. 5. Performance level in each component by annual winery production.
Table 3. GMP workers training and CCPs workers training by type of winery.
Table 4. Heavy Metal Food Contamination Risk (HMFCR) legislation identification and HMFCR legislation updating through the National Agency (AESAN), by type of winery.
Table 5. Percentage of wineries that have information related to the soil chemical analysis of the vineyards regarding arsenic, cadmium, and lead concentrations, by type of winery.
Table 6. Wineries performance indicators and grade of progress, by type of winery.
Table 7. Relationship questions among winery performance variables, the variables they involve, and MWU results.

Appendix A. Questionnaire

Information available on arsenic, cadmium and lead in the raw material (Critical Control Point)
ID 1. The winery has identified the legislation relating to food contamination by: (check all those you consider). Arsenic/Cadmium/Lead.
ID 2. The winery uses the updated information available from the Spanish Agency for Food Safety and Nutrition (AESAN) on heavy metals food risk.* (Mark only one). Yes/No.
CS 1. Does the winery have information related to the physico-chemical analysis of the soil where the grapes used in winemaking come from?* (Mark only one). Yes (Skip to question 11)/No.
CS 2. Does the winery have information on the fertilizers used in the fertilization of the soil from which the grapes used in winemaking come from?* (Mark only one). Yes/No.

Information available on the concentration of arsenic, cadmium, and lead in the soil
CS 3.
The available information on the analysis of vineyard arable soil holds data on: Select all that apply.Total Arsenic Concentration in Soil.Concentration of Arsenic Available in Soil.Total Cadmium Concentration in Soil.Cadmium Concentration Available in Soil.Total Lead Concentration in Soil.Lead Concentration Available in Soil.Soil pH.Electrical conductivity of the soil.Control of the raw material (analysis procedures in the winery) CB 1.Does the winery have its own laboratory to perform chemical analysis of grapes and wines?* (Mark only one).Yes/No.CB 2. If you do NOT have your own laboratory, do you use an external laboratory to perform chemical analysis of grapes and wines?* (Mark only one).Yes/No.CB 3. Does the warehouse have the technology and personnel to perform metal analysis using atomic absorption spectrometry?* (Mark only one).Yes/No.CB 4. If you do NOT have your own laboratory, do you use an external laboratory to perform metal analysis using atomic absorption spectrometry on grapes and wines?* Yes/No.Professional profile that performs the survey You can tell us about your job inside the winery.(Mark only one).
2023-12-07T16:06:57.540Z
2023-12-01T00:00:00.000
{ "year": 2023, "sha1": "dc525ef84cb56c472b256b6c8c3a41c2fbcf7d46", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.heliyon.2023.e22962", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4e22b9e980219fcc682416005dc7af132b3707bc", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
253276925
pes2o/s2orc
v3-fos-license
Saccharomyces boulardii as an effective probiotic against Salmonella typhimurium in mice

This study was designed to investigate the protective role of Saccharomyces boulardii on the intestinal sections of mice infected with Salmonella typhimurium. Mice were divided into four groups. A control group was uninfected with bacteria and represented the negative control; a second group was infected with S. typhimurium only, 0.1 ml (2.5 × 10^7 cfu/ml), and represented the positive control; the third group received an oral dose of S. boulardii only, 0.1 ml (1 × 10^9 cfu/ml); and the fourth (treated) group received S. boulardii (1 × 10^9 cfu/ml) orally for 7 days, followed by Salmonella infection. At the end of the experimental period, the histological results showed that administration of S. typhimurium alone resulted in necrosis, degenerative changes and inflammatory cell infiltration in intestinal sections, as compared with normal sections taken from uninfected mice, while pretreatment with S. boulardii ameliorated this effect.

Introduction: In humans, Salmonella spp. are responsible for over one billion infections annually, with consequences ranging from self-limiting gastroenteritis to typhoid fever [1]. To initiate disease, Salmonella first adheres to and then induces its own uptake into intestinal epithelial cells through a specialized mechanism involving injection of virulence factors into host cells by a type III protein secretion system (TTSS) [2]. Gram-negative Salmonella sp. is a common bacterial enteropathogen and is widely used in laboratory studies aimed toward understanding the basis of mucosal immune responses and diseases such as gastroenteritis and typhoid. Most laboratory studies are carried out using S. Typhimurium in mice, where a disseminated infection with some similarities to human typhoid is observed. Typhoid fever affects more than 20 million individuals and causes more than 220,000 deaths annually [3,4]. In recent years, worldwide interest in the use of functional foods containing probiotic microorganisms for health promotion and disease prevention has increased significantly and, according to the Food and Agriculture Organization and the World Health Organization, a probiotic is "a live microorganism which, when administered in adequate amounts, confers a health benefit to the host" [14]. This study was carried out to evaluate the treatment role of Saccharomyces boulardii in mice infected with Salmonella typhimurium.

Materials and Methods: Microbial isolates: S. typhimurium was supplied by the Central Public Health Laboratory, where it had previously been isolated from a stool sample of an infant suffering from diarrhea; S. boulardii was bought as a commercial lyophilized yeast (Ultra-Levure®, BIOCODEX, France).

Bacterial culture: The S. typhimurium strain was grown overnight at 37°C in brain heart infusion broth. This activated culture was centrifuged at 3,000 rpm for 5 min, washed with sterile phosphate-buffered saline (PBS, pH 7.4), and resuspended in PBS to a final concentration of 2.5 × 10^7 bacteria/ml [15].

Determination of S. typhimurium susceptibility to antibiotics: The disk diffusion test was used for testing the susceptibility of the isolate to different antibiotics. Ten ml of nutrient broth medium were inoculated with the bacterial isolate, and the cultures were incubated at 37°C to mid-log phase (18 hrs). 100 µl of inoculated broth were then transferred to Mueller-Hinton agar plates. A sterile cotton swab was used in three different planes to obtain an even distribution for inoculating triplicate plates [16].
With sterile forceps, the selected antibiotic disks were placed on the inoculated plate.All plates were incubated at 37°C for 24 hrs in an inverted position. Then diameter of inhibition zone was noted and measure by a ruler in mm. Experimental design: Twenty adults albino male mice were randomly divided into four groups designated as 1, 2, 3, and 4. Each group consists of 5mice, and subjected to the following treatments according to [17]. Group1: This group was used as a negative control. Group2: This group was dosed with 0.1ml. of 2.5 × 107cfu/ml S.typhimurium culture and considered as positive control. Group3: This group was dosed with 0.1ml of 1 × 109cfu/ml S. boulardii culture. Group4: This group was dosed with 0.1ml of 1 × 109cfu/ml S. boulardii culture, and infected with 0.1ml of 2.5 × 107 cfu/ml culture of S.typhimurium. Mice were dosed with a single dose( 0.1 ml) of 1 × 109cfu/ml S. boulardii culture daily by oral administration for 7 consecutive days. After 7 days treatment, at the 8thday of experiment period, each mouse was given 0.1 ml of 2.5×107 Salmonella culture by oral administration. After 6th day of infection with S.typhimurium, mice were sacrificed by cervical dislocation and collected to evaluate histological effect Pieces were taken from intestine and put in petridishes containing physiological salty solution to remove the fatty tissues and sticky bundles, then the organ was kept in test tubes containing 10 % formalin solution then wash, dehydrated, embedded in paraffin, sectioned at 4-5 micron thickness preparation [18]. The staining method was performed by using hematotoxilin and eosin as a routin work for histological studies [19]. Results and Discussion: Susceptibility to different antibiotics as shown in table 1 revealed that S. typhimurium was sensitive to Carbencillin, Ciprophloxacin and Rifampicin and resistant to other antibiotics. Results indicated that mice intestinal sections taken from the control group showed normal structure appearance of villi without any pathological changes as shown in figure 1A . While, in intestinal sections taken from mice infected with S. typhimurium showed shedding and necrosis of intestinal mucosa and villi inside the lumen of the intestine figure 1 B Mice fed with S.boulardii showed normal villi appearance, while mice infected with S. typhimurium and pretreated with S.boulardii revealed shortening of the intestinal villi with thinning of the intestinal mucosa and ameliorate cytotoxic effect of Salmonella as shown in figure 1 C and D. Susceptibility of S.typhimurium to different antibiotics exhibited their resistance to different antibiotics used in this study. These results were close to those of CDC [20] who found that S. typhimurium isolates resisted chloramphenicol, ampicillin, streptomycin, and tetracycline the drug resistance genes can be transferred among many species of enteric bacteria. Results showed that intestinal section of mice infected with Salmonella resulted in widening and odema in the villi with slight exist of inflammatory cells. This might be attributed to its ability to destroy M cells found within pyer's patches. It's known that S. typhimurium grows primarily inside the macrophages P-ISSN 1991-8941 E-ISSN 2706-6703 Journal of University of Anbar for Pure Science (JUAPS) Open Access 2013,(7), (3 ) :20-24 of liver and spleen. It has been shown that within 30 min of infection, invasive S. typhimurium entered M cells found within the follicle-associated epithelium (FAE) of Peyer's patches. 
At 60 min, internalized bacteria were cytotoxic for the M cells, and the dead cells formed a gap in the FAE, which allowed bacteria to invade adjacent enterocytes or rapidly migrate to a number of sites in the body, including the spleen and liver, where they replicated inside phagocytic cells [21].

Results indicated that pretreatment with S. boulardii reduced the effect of Salmonella on the intestinal pattern of mice. This protective effect could be due to immunomodulation or competition for nutrients or adhesion sites. The antagonistic effect of Saccharomyces against Salmonella and E. coli was mentioned by Gedek [22], who reported that the mannose in the cell wall may cause the yeast to act as a decoy for the attachment of pathogens: the yeast acts as a pathogen-adherent microflora (PAM) and binds organisms such as Salmonella that may enter the gastrointestinal tract before the bacteria can attach to the chicken's intestinal wall. Rodrigues et al. [23] reported that the immunomodulatory effect of S. boulardii has been shown to increase intestinal secretory IgA production. sIgA is thought to inhibit the close association of pathogenic bacteria with the mucosal epithelium, and so to reduce bacterial penetration and allow more efficient clearance. The protective effect of S. boulardii against E. coli B41 in mice was correlated with earlier production of IFN-γ and IL-12, which are involved in the T-helper 1 response.
2022-11-04T19:38:19.472Z
2013-12-01T00:00:00.000
{ "year": 2013, "sha1": "f1b5a1e69efb416a25be10bf5515fb2584418b63", "oa_license": "CCBY", "oa_url": "https://juaps.uoanbar.edu.iq/article_104796_e70ec364a4c1782c2a49087ba28d7cc6.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "1654beec26d76bd934e892a58c7309dd965e8e48", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
220267273
pes2o/s2orc
v3-fos-license
Understanding Interdependencies between Mechanical Velocity and Electrical Voltage in Electromagnetic Micromixers Micromixers are critical components in the lab-on-a-chip or micro total analysis systems technology found in micro-electro-mechanical systems. In general, the mixing performance of the micromixers is determined by characterising the mixing time of a system, for example the time or number of circulations and vibrations guided by tracers (i.e., fluorescent dyes). Our previous study showed that the mixing performance could be detected solely from the electrical measurement. In this paper, we employ electromagnetic micromixers to investigate the correlation between electrical and mechanical behaviours in the mixer system. This work contemplates the “anti-reciprocity” concept by providing a theoretical insight into the measurement of the mixer system; the work explains the data interdependence between the electrical point impedance (voltage per unit current) and the mechanical velocity. This study puts the electromagnetic micromixer theory on a firm theoretical and empirical basis. Introduction Fluid mixing techniques are ubiquitous in microfluidic applications. The spectrum of use has been broadened in many areas, such as biological, medical, and chemical research and industries [1,2]. Micromixers are typical devices for mixing a micro amount of fluids with various mixing principals. There are two types of micromixers, active and passive. The passive micromixers do not need power sources; they use pressure to guide fluid blending. However, active micromixers require actuators, which use an external source of energy in the form of acoustic, electrokinetic, electrowetting, magnetic, electromagnetic, etc. [3]. Electromagnetic mixers utilise the Lorentz force; alternating voltages are applied to positive and negative electrical terminals of the devices to induce the alternating current. The fluctuating electrical motions cause complicated fluid motions, which generate multi-micro streams and enhance the mixing efficiency [4]. An electromagnet was used to generate transient interactive flows to enhance micromixing by Chih-Yung Wen and Lung-Ming [5]; the authors observed interesting miscible maze-like patterns. They presented a ferrohydrodynamic micromixer that utilises low-voltage and low-frequency properties. Numerical studies on electromagnetic mixers were also performed to characterise magneto-hydrodynamic micro-mixers [4,6,7]. Yiping Chen and Kim [7] introduced an electromagnetic fluidic device to enable both mixing and pumping functions by the Lorentz force, which was induced by the current and applied magnetic field. Pengwang et al. [8] reviewed and compared many types of actuators to be used in the micro-electromechanical system (MEMS) area, including electromagnetic actuators. Although the main objective of the article focuses on MEMS-based scanning micromirrors, general actuator principles apply the same in the MEMS area. Compared to other types of actuators, the electromagnetic actuator requires lower driving voltage, but can achieve a more significant driving force. However, they may need external magnets, which can increase the system size and create electromagnetic interference. The acoustic mixing techniques provide highly efficient and controlled mixing results with little restrictions on the types of fluids used. The acoustic micromixers blend fluids using devices that induce micro-streams. 
They use various mediums, such as trapped micro air bubbles [9], a thin solid plate [10], or a piezoelectric membrane [11][12][13][14][15] to transfer the acoustic energy. Chan et al. [16] demonstrated controlling acoustic energy propagation in a microfluidic chip via frequency selectivity. They employed a voice coil as an electromagnetic force-driven acoustic actuator. The electromagnetic and acoustic domains were coupled by the "moving-coil" component. Kim et al. [17] presented a microfluidic mixer, which consisted of a chamber and an acoustic actuator. The voice coil actuator was electromagnetically and mechanically coupled to the cylindrical chamber. It converted a periodic input electrical signal at an optimum operating frequency to vibratory mechanical stress entering the chamber. They also demonstrated that an optimum operating frequency of the input electrical signal could be determined by sweeping frequencies, measuring the corresponding impedance in the frequency range, and selecting the sweet-spot operating frequency based on the impedance-frequency plot. Commonly, the mixing efficiency of micromixers is analysed by mixing parameters such as time, length, and the mixing index [18,19], and these parameters depend on the ability of the transducer system (which includes the energy source, transducers, and membrane/transmission materials). To optimise transducer efficiency, there are generally a few approaches. As a theoretical approach, system modelling is performed for numerical simulations using various software such as ANSYS/CFX, FLUENT, and COMSOL Multiphysics or written codes, for instance the lattice Boltzmann technique [18]. The computing simulation is critical to estimating and optimizing all mixing parameters, especially for the experiments' initial conditions. As a direct practical approach, a laser Doppler vibrometer or a micro-force sensor is employed. However, these approaches require additional costs such as computational resources or system installation. With an indirect approach, the efficiency of the transducer is combined with the other parameters influencing the mixing efficiency, and measurement using a tracer (such as a colour dye) combined with an image processing technique is used to estimate the transducer efficiency [17]. Although mixing parameters based on the experiment would predict a sufficiently accurate result, dead zones may exist; the image-based tracer analysis can be a suitable method for ascertaining the degree of mixing. External image monitoring devices can be considered as add-ons to support mixing efficiency analysis; however, it adds costs to the system installation [16,18]. Moreover, more trial and error are required to optimise both the transducer and the mixing system together. One unique approach introduced by Kim et al. [17] was to use input impedance measurement from electrical terminals of the systems to measure the velocity response of the membrane. They demonstrated the mixing degree detection from the electrical impedance measurement. From a practical point of view, the experimental installation was simple and affordable compared to the conventional methods (of tracking the mixing degree) discussed above. In this paper, we elucidate the physical foundation underlying the micromixer's electromagnetic coupling by adopting the principle of a component called a gyrator. 
Starting by introducing two electrical coupling devices, a transformer and a gyrator, we derive a correlation between the electrical driving point impedance and mechanical velocity in a two port electromagnetic system, the acoustic voice-coil micromixer. We also discuss experimentally measured results to support the theoretical concept. Theories and Methods A systematic way to understand different fields in engineering or physics begins by generalising the areas. Fields (or domains or modalities, i.e., electric, acoustic, mechanic, etc.) are analogous to each other. Two conjugate variables exist in pairs in each field, a generalised force and a flow. The two variables are used to characterise a modality by their product and their ratio. They could be either a vector (in bold) or a scalar and also can vary spatially. A product of these two variables defines the power, while a ratio establishes the impedance, which is usually determined in the frequency domain. Some examples of the conjugate variables in each modality are described in Table 1. One can define a system using a single modality (one port system) or a combination of them. Typical examples of the combined modality systems are electroacoustics and electromechanics, speakers, and motors, respectively. An electromagnetic system is under a subcategory of the electro-mechanical systems. It couples electrical and mechanical domains through the electromagnetic field (Table 1). To combine the modalities, the port network concept and coupling components are required. Regardless of the modalities, there is a law that applies to the every system, the law of the conservation of energy [20,21]: where the total delivered energy e(t), which is an integration of the power over time, should be greater than (or equal to) zero, and power(t) is work done per unit time defined as a potential multiplied by a net flow. In other words, Equation 1 means we cannot have more energy than we supply. A transformer and a gyrator are two standard modality-coupling units. Both of them are defined as electrical components having transmission (ABCD) matrices; cascading ABCD matrices of circuit components are a popular tool to analyse circuits [22][23][24][25]. The transformer is a typical element that links one modality to another in a one-to-one manner; a flow in one modality is linear to the flow in the other field. Ideal and non-ideal cases of the transformer's theory and the ABCD matrices are established well in classic works in the literature [23]. In the non-ideal case, mutual coupling factors between two adjacent circuits are considered [20,23,26]. McMillan [27] introduced a two port network that violates reciprocity; two years later, Tellegen [28] defined an ideal gyrator as the 5 th electrical component to support the anti-reciprocal characteristics of the system. A unique thing about this gyrator is that it can model an electromagnetic (EM) network without using the mobility analogy, which is effective mathematically, but an anti-intuitive method to describe an EM system. The variables of each modality can be represented without modification as they have a gyrator as an EM coupler between the electrical and mechanical terminals. The mobility (dual) analogy must be appreciated to model an EM system with a transformer. Figure 1 explains the dual method; the two circuit representations are equivalent to each other. 
If the gyrator is not used in the circuit, a series combination of the mass, damping, and stiffness on the mechanical side becomes a parallel combination, with each variable anti-reciprocated [29]. Furthermore, the flow and potential in one domain (i.e., U, F) become a potential and a flow on the other side, respectively (i.e., Φ, I). Despite the usefulness of using a gyrator in modelling EM systems, this component has not become mainstream in EM circuit modelling and analysis. Other than [25], only a few approaches have been made to EM system modelling with gyrators, mainly limited to power electronics. Yan and Lehman [30] demonstrated the benefit of using a gyrator in the EM modelling approach. In their work, a simplified modelling method was introduced for DC/DC converters using an extended capacitor-gyrator (C-G) modelling technique. They showed the feasibility of their modelling for complex core and winding structures. Young et al. [31] used the C-G approach to model a continuously variable series reactor (CVSR), which requires prudent planning to design. The authors also highlighted the convenience of using gyrators in their system modelling. Zhang et al. [32] demonstrated an improved C-G modelling method in power systems by taking the eddy current effect into account. They claimed that classical C-G modelling was not suitable; the eddy current impact must be considered in EM modelling. This point has also been demonstrated by Kim and Allen [25].

A two-port network, such as an electro-mechanical system, has Φ, I, F, and U as the system's variables. A gyrator exists to couple the electric and mechanical sides. To specify this property, the impedance matrix of an ideal gyrator is employed; in matrix form,

[Φ; F] = [0, −G; G, 0] [I; U], (2)

where G = B0 l is the gyration coefficient, B0 is the DC magnetic field, and l is the length of the wire. Thus:

Φ = −G U and F = G I, (3)

namely,

Φ = −B0 l U and F = B0 l I. (4)

Taking the ratio of the two equations in Equation (4), an impedance is derived:

Z ≡ Φ/I = −B0 l U/I = −(B0 l)² U/F. (5)

When defining an impedance, the flow direction is defined as into the terminals; thus, U is defined as going into the network. Therefore, the minus sign of U in Equation (4) follows from Lenz's law. Note that Equation (4) describes an ideal gyrator, considering only a DC magnetic field. Appreciating that the impedance is a concept in the frequency domain, the angular frequency symbol, ω, is omitted from this part onward. Considering only a single ideal gyrator element, we can obtain the velocity response via measuring the electrical impedance while performing a constant current sweep across frequencies,

U = −Z I / (B0 l). (6)

The Non-Ideal Gyrator

The non-ideal case of Equation (4) is considered from the basics of electromagnetism. Ulaby [33] and Kim and Allen [34] described the induced emf (voltage φ(t)) as the sum of a transformer component (φ tr(t)) and a motional component (φ mot(t)), namely,

φ(t) = φ tr(t) + φ mot(t). (7)

The transformer voltage is

φ tr(t) = dψ(t)/dt, (8)

where ψ(t) is the magnetic flux. The voltage has an opposite direction from the emf, emf ≡ ∫ E · dl (from a to b) = −φ(t). In the static case, the time-varying term is zero. φ mot represents the motional part of the electrical voltage [33]. This voltage is associated with the motion of the other port (i.e., mechanical); in other words, the signal is observed from the mechanical side (a motional voltage due to U). Note that this concept can be applied only in two-port (or higher order) systems.
Derivation of φ mot starts from the Lorentz magnetic force (f m) acting on a moving charge q inside a magnetic field B with a velocity U,

f m = q U × B. (9)

The magnetic force can then be expressed in terms of a motional electric field E mot as f m = q E mot. The unit of q is the coulomb (C) and E mot is in V/m = N/C, since 1 V ≡ 1 J/C and 1 N = 1 J/m; therefore, qE stands for a force with a unit of N. A positive charge (q > 0, a proton) carries 1.602 × 10^−19 C; thus, the charge of an electron (a negative charge) is −1.602 × 10^−19 C. One coulomb is the charge that flows through a 120-watt bulb (at 120 V) in one second. Therefore,

E mot = U × B, (10)

where E mot is the motional electric field seen by the charged particle q, and its direction is perpendicular to both U and B. Thus, the voltage Φ mot is defined as the line integral of the corresponding electric field, which is E mot in this case,

Φ mot = ∫ E mot · dl = ∫ (U × B) · dl. (11)

This term has been considered in the ideal gyrator. Finally, the total voltage becomes

φ(t) = dψ(t)/dt + ∫ (U × B) · dl. (12)

In the frequency domain, Equation (12) is rewritten as

Φ(ω) = sΨ − B0 l U(ω) = s L e I(ω) − B0 l U(ω), (13)

where s = jω, L e is a leakage inductance due to the leakage flux of a self-inductance on the electrical side, and Ψ = L e I. Assuming only a static DC magnetic field (B0), sΨ = 0 and we recover the ideal gyrator definition Φ = Φ mot = −U B0 l (Equation (4)). The frequency-dependent term shown in Equation (13) (jωΨ, i.e., jωL e I) is a non-quasistatic (dynamic) term that is not considered in an ideal gyrator. The minus sign of the other term, −U B0 l, is related to Lenz's law. Two types of magnetic field exist in an electro-mechanical network: one is the DC magnetic field, and the other is the AC magnetic field. In the ideal gyrator formula, only the motional parts (the DC magnetic field contribution) of the variables (voltage and force) are considered, and these usually dominate an EM system. The two modalities in the network (electrical and mechanical) share this DC magnetic field, which appears in the motional part of each variable. For the non-ideal case, the transduction parts (the AC magnetic field contribution) of the variables must be considered along with the motional parts.

For a non-ideal gyrator, we rewrite the mixing system model (from Equation (5)) with the transformer voltage and the mechanical impedance (membrane response) Z m,

Φ(ω) = jωL e I(ω) − B0 l U(ω), with Z m U(ω) = −B0 l I(ω) + F ext(ω). (14)

When the transformer impedance is not small (i.e., |ωL e I(ω)| ≈ |B0 l U(ω)|), we have to use a very small constant current, and the question might be raised of whether the velocity response under a reasonable operating condition can be reflected in the electrical impedance response. Therefore, we formulate the relationship between the velocity response and the electrical impedance with another approach.

Velocity Frequency Response with Constant Voltage

From Equation (14),

I(ω) = (Φ(ω) + B0 l U(ω)) / (jωL e). (15)

During a constant voltage frequency sweep (i.e., Φ(ω) = Φ), the driving point impedance is

Z(ω) = Φ / I(ω), (16)

and the velocity response follows from Equation (14) as

U(ω) = (jωL e I(ω) − Φ) / (B0 l). (17)

As ωL e is of the same order as Z(ω), small changes in the electrical impedance Z(ω) will be reflected in the velocity response and vice versa.
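Before the detailed derivation below, a short numerical sketch of this interdependence is given. The lumped parameters (B0 l, L e, membrane mass, damping, stiffness, and the extra mass added by a water load) are illustrative assumptions rather than measured values of the device, and the velocity is taken per unit drive current.

```python
import numpy as np

# Assumed (illustrative) parameters -- not measured values from the mixer.
B0l = 2.0                         # gyration coefficient B0*l, T*m
Le  = 1.0e-4                      # leakage inductance of the coil, H
m0, r, k = 2.0e-3, 0.5, 1.0e3     # membrane mass (kg), damping (N*s/m), stiffness (N/m)
dm_water = 1.0e-3                 # extra effective mass added by a water load (kg), assumed

f = np.linspace(50, 1000, 4000)
w = 2 * np.pi * f

def responses(m):
    Zm = 1j * w * m + r + k / (1j * w)   # mechanical impedance of the (loaded) membrane
    # Electrical driving-point impedance: leakage inductance plus the mechanical
    # load reflected through B0*l (no external force acting on the membrane).
    Z = 1j * w * Le + B0l**2 / Zm
    U = B0l / Zm                         # velocity per unit drive current (constant-current sweep)
    return Z, U

for label, m in [("empty chamber", m0), ("with water load", m0 + dm_water)]:
    Z, U = responses(m)
    fz = f[np.argmax(np.abs(Z))]
    fu = f[np.argmax(np.abs(U))]
    print(f"{label:16s}: |Z| peak at {fz:6.1f} Hz, |U| peak at {fu:6.1f} Hz")
```

With these toy numbers the impedance peak and the velocity peak sit at essentially the same frequency, and both move to a lower frequency once the extra fluid mass is added, which is the behaviour exploited in the measurements discussed later.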
To illustrate this concept in more detail, we substitute I(ω) via Equations (15) into (16) to obtain the impedance response, Without external force (i.e., no other mixing source in the vicinity or the clamped mechanical system that cannot move), the electrical impedance response in terms of the mechanical impedance is: We also substitute Equations (20) into (17), to obtain the (constant voltage) velocity response in terms of the mechanical impedance, When the mechanical impedance is much larger than the gyrator coefficient (B 0 l), and Then, Equation (22) can be rewritten as: Mechanical Impedance From the forced vibration equation, the force due to the thin membrane [35], where ζ is the membrane properties function, ρ m , φ is the mode shape of the vibration, h, c, and Y are the membrane density, thickness, viscous solid damping, and Young's modulus, ω r is the resonant frequencies of the membrane, and F m is the membrane force. Assuming quasi-static (considering the large time-scale between vibrations and mixing) conditions, the force on the membrane due to the fluid vibration can be approximated [36,37] by solving the Navier-Stokes equation. The scalar potential flow field, ϕ, has a solution in the form, where k represents the penetration of the vibration, Ki is the amplitude associated with the mode shape, h is the height of the fluid, z is the distance from the membrane, µ is the dynamic viscosity, and A f is the geometrical area coefficient of the membrane. The pressure P, therefore, has the following form, where ρ f is the density of the fluid. The force due to the fluid on the membrane, F f , can be obtained by integrating the pressure across the membrane area, A, where h f is the equivalent height of the fluid on top of the membrane. The mechanical impedance can be formulated by the summation of the fluid and membrane force (Equations (25) and (30)), Equation (34) shows that mixing degrees, which can be reflected as a change in fluid density ρ f and/or equivalent height h f near the membrane would alter the value of the mechanical impedance. This change would then be reflected in the electrical impedance response, as shown in Kim et al. [17]. Experimental Setup For the empirical study of the anti-reciprocal property via the gyrator, we designed a simple electro-mechanical system having a permanent magnet, a voice coil, and a loading chamber. The design process was adopted by Kim et al. [17]. Voice coils are thin and the wires lightweight, and a typical application of the voice coil is a dynamic speaker. With the American Wire Gauge (AWG) standards, AWG 30 (cross-section area = 0.0510 (mm 2 ), resistance = 338 (Ω/km)) or higher is commonly used in the loudspeaker industry. Note that a higher AWG number indicates a thinner wire. A typical material for the voice coil is copper with an insulation coating (i.e., enamel). The main application of the voice coil is a low-power personal gadget (i.e., an earphone); thus, it can be used as a safe and low-cost application. Such a delicate character of the voice coil is suitable for microfluidic applications. Once a signal is applied to the coil, the AC magnetic field is generated around the voice coil-loop. Due to the permanent magnet (DC magnetic field), the voice coil moves as it alternates the AC electromagnetic polarities. The coil attached to the polydimethylsiloxane (PDMS) membrane vibrates due to the electromagnetic force at the coil, as depicted in Figure 2. 
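As a quick arithmetic aside on the coil itself, the AWG 30 figures quoted above fix the wire length and DC resistance once a turn count and mean coil diameter are chosen; the sketch below uses hypothetical values for both, not the dimensions of the device.

```python
import math

# AWG 30 copper wire figures quoted above.
area_mm2 = 0.0510          # cross-sectional area, mm^2
ohm_per_km = 338.0         # resistance per kilometre, ohm/km

# Hypothetical coil geometry -- illustrative values only.
turns = 50
coil_diameter_m = 0.010    # 10 mm mean coil diameter

wire_length_m = turns * math.pi * coil_diameter_m      # total wire length l
dc_resistance = ohm_per_km * wire_length_m / 1000.0    # coil DC resistance, ohm

print(f"wire length l ~ {wire_length_m:.2f} m, DC resistance ~ {dc_resistance:.2f} ohm")
# The wire length l is the same l that enters the gyration coefficient B0*l above.
```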
The chamber is used as a fluid container to simulate mechanical loads' variation. A hollow, cylindrical chamber is connected to form a mixer unit. Input signals are driving from the electrical terminals (two ends of the voice coil), and loads vary in mechanical terminals. As shown in the previous study [17], the mechanical loads' variation, including mixing performance, can be captured at the electrical input terminals. Results and Discussion Based on Ohm's law, the electrical impedance is defined as voltage over current in Ω or V/A, volts per unit current. It can be interpreted as a one-port network's transfer function given an output over an input. The voltage is a potential variable in the electrical field; therefore, in electro-mechanical systems, the velocity (flow in the mechanical side) across a gyrator is linked to the voltage term, as shown in Equation (4) (ideal case). Loads in the mechanical front are receptive to the electrical impedance measured; the mechanical loads' variation should be reflected in the electrical impedance in the electromagnetic systems. The non-ideal case (Equation (13)) contemplates additional AC effects; an EM system is affected by both the induced (AC) and the permanent (DC) magnetic fields. However, when the DC magnetic field governs (i.e., an EM system with a strong magnet and a relatively weak moving coil part), the relationship between the voltage and the velocity shows more similar patterns to each other. In this case, the motional voltage term in Equation (13) dominates the total voltage, Φ, and the electrical impedance response represents the velocity response in a constant current frequency sweep. However, when the transformer impedance becomes more substantial, which is generally the case when we drive the system to its full capability to overcome a sizeable mechanical impedance, we require a constant voltage frequency sweep in order for the velocity response to be represented by the electrical impedance response (see Equation (24)). Taking a case when the permanent DC magnetic field is influential and the transformer impedance is not negligible, we perform both electrical and mechanical experiments and electrical point impedance and mechanical velocity measurements. Figure 3 represents our experimental materials, setting, and methods. Figure 3. A schematic illustration and a picture to explain the experimental concept. The signal (i.e., sweep frequencies generated by a function generator) is applied to the coil's terminals. Then, due to the permanent magnet, the electromagnetic force drives the coil to move. An LCR meter is used to take the electrical driving point impedance of the system, and a laser vibrometer light is focused on the membrane to measure the membrane's velocity. Electrical Impedance Measurement There are several ways to gauge electrical impedance. One of the methods is to use an LCR meter. For example, the Agilent E4980A Precision LCR Meter was used in this study. Every physically realizable circuit, such as resistors, capacitors, and inductors, has free-loading components. These include undesired resistance in capacitors, capacitance in inductors, and inductance in resistors. Thus, complex impedance representation is required to model a system precisely. The electrical impedance (Z) has a real part and an imaginary part. The Z can be written in rectangular form as resistance and reactance or in polar style as magnitude and phase. With the LCR meter, one may choose a way to analyse the complex impedance of a physical system. 
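A minimal post-processing sketch of such a sweep is shown below; the frequency, resistance, and reactance values are made up rather than exported from the E4980A, and the peak of the impedance magnitude is reported as a candidate operating frequency.

```python
import numpy as np

# Hypothetical LCR-meter sweep: frequency (Hz), resistance R (ohm), reactance X (ohm).
freq = np.array([100, 150, 200, 250, 300, 350, 400, 450, 500], dtype=float)
R    = np.array([3.1, 3.4, 4.2, 6.0, 9.8, 7.1, 5.0, 4.1, 3.6])
X    = np.array([0.8, 1.1, 1.9, 3.2, 0.4, -2.6, -1.8, -1.1, -0.7])

Z = R + 1j * X                        # rectangular form
mag = np.abs(Z)                       # polar form: magnitude ...
phase_deg = np.degrees(np.angle(Z))   # ... and phase

i_peak = int(np.argmax(mag))
print("f (Hz)  |Z| (ohm)  phase (deg)")
for f_, m_, p_ in zip(freq, mag, phase_deg):
    print(f"{f_:6.0f}  {m_:9.2f}  {p_:10.1f}")
print(f"Impedance peak near {freq[i_peak]:.0f} Hz -> candidate operating frequency")
```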
Mechanical Velocity Measurement The Polytec CLV3000 3D laser vibrometer (https://www.polytec.com/int/) was used for measuring the mechanical velocity of the electromagnetic system. There are four components driving the laser system: a laser unit, a data acquisition box (Polytec VIB-E-400 junction box), a laser controller, and a management computer with the software. Figure 4a,b compares electrical impedance and mechanical velocity data obtained from the same electro-mechanical device introduced in Figure 3. Different amounts of water were loaded into the chamber, and the results from the two types of experiment were then compared. There were four experimental conditions in the electrical impedance data (Figure 4a): a device with an empty chamber, and the same device with the chamber loaded with three different amounts of water (changing the equivalent fluid height). The same system was tested mechanically. In Figure 4b, the magnitudes of the mechanical velocity (measured with the laser) under two conditions are shown. The analogy between the two modalities (electrical and mechanical, with the electromagnetic coupling) was discussed in Section 2. Figure 4 carries the empirical evidence: the peak frequency location and overall shape of the first two data sets of each subfigure correspond to each other. A water-filled chamber shifted the damping as well as the mass of the system, and this effect markedly changed the characteristic frequency. The result was reflected well in the electrical impedance data (Figure 4a; the resonance moved to a lower frequency as the mechanical load increased), while in the laser data (Figure 4b) the effect was disturbed by laser-light deflection. Maintaining a clear focus was essential for the laser experiments. However, on account of the electromagnetic system's dynamic nature, keeping a good focal point of the laser light was laborious, as the chamber's membrane was oscillating. Analysing the electrical-side data was more straightforward, since it avoided extra effort such as filtering out noise and interference. For example, to maximize movement (vibration) of the electromagnetic system, we used a 0.15-0.2 mL water loading and could drive a 300-400 Hz AC signal, as supported by Figure 4a. As an extension of our previous project, the current study focused on providing theoretical insight into the physical system, our electromagnetic micromixer. Despite the excellent series of results, there are also many exciting challenges for our future work, primarily focusing on mixing technologies, enhancing micromixers' performance, and design aided by physical simulation. Hosseini Kakavandi et al. [38] investigated mass transfer characteristics in micromixers by varying the junctions and channel shapes of the mixers. Their study demonstrated that the T-shaped mixer's junction shape and pit diameter critically affected the mass transfer coefficients, as chaotic advection was generated by the modification of the mixing channel shape. Kumar and Nguyen [39] developed a numerical model of magnetic nanoparticles and fluorescent dye under a nonuniform magnetic field. They performed a parametric analysis of the mass transfer process to scrutinize the effects of magnetic field strength and nanoparticle size in a magnetofluidic micromixer. Their simulation demonstrated that the core stream spread into the upper sheath stream due to magnetoconvection, and their experimental results supported their model simulation.
Their work inspired us to investigate our mixer's mass transfer coefficients based on the current design concept, especially the effect of the voice-coil attachment to the membrane (i.e., its position and size relative to the chamber). Conclusion In electromagnetic systems, the magnetic field Ḣ links the electrical and mechanical sides owing to the anti-reciprocal characteristics. The Ḣ (induced by the conduction current in the coil) is affected by the permanent magnet and changes its polarity (direction). The induced magnetic field and the constant magnetic field define the net force on the coil [34]. Thus, the movement of the coil follows the net force's direction. In this study, the electro-mechanic (or electromagnetic) two-port system was examined. Starting with electromagnetic theories such as Maxwell's equations and the Lorentz force law, we investigated a gyrator's non-ideal formulation. It represented an anti-reciprocal characteristic of the electro-mechanic network. The theory was further explored with the mobility (or dual) analogy: one may choose "the dual analogy with a transformer" or "the impedance analogy with a gyrator" to model an electromagnetic system. An essential benefit of using a gyrator was that it kept us away from the mystifying dual analogy: it enabled an intuitive analysis of the EM network. To our knowledge, this was the first attempt to derive a non-ideal gyrator since the gyrator was introduced by Tellegen [28], who suggested it as the fifth circuit element (alongside the resistor, capacitor, inductor, and transformer). This study expounded on the reason why the electrical impedance data were analogous to the mechanical velocity data. An electromagnetic micromixer was designed and tested to explain the anti-reciprocal nature. Additionally, a benefit of electrical impedance measurement was highlighted. Focusing a laser beam is not easy, especially when the point of focus is the fluid's surface, which may be sensitive to the surrounding environment (light, vibration, noise, etc.). These uncontrolled factors can interfere with the laser light, making it challenging to keep the light focused on a fixed position. The electrical experiment can be used to overcome this problem, providing more precise data to characterise the electro-mechanic device.
2020-06-29T18:47:02.314Z
2020-06-29T00:00:00.000
{ "year": 2020, "sha1": "faf820c4223c82e9a565035f1e0b91e6015fc7d1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/11/7/636/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "faf820c4223c82e9a565035f1e0b91e6015fc7d1", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
229176298
pes2o/s2orc
v3-fos-license
Stochastic representation decision theory: How probabilities and values are entangled dual characteristics in cognitive processes Humans are notoriously bad at understanding probabilities, exhibiting a host of biases and distortions that are context dependent. This has serious consequences on how we assess risks and make decisions. Several theories have been developed to replace the normative rational expectation theory at the foundation of economics. These approaches essentially assume that (subjective) probabilities weight multiplicatively the utilities of the alternatives offered to the decision maker, although evidence suggest that probability weights and utilities are often not separable in the mind of the decision maker. In this context, we introduce a simple and efficient framework on how to describe the inherently probabilistic human decision-making process, based on a representation of the deliberation activity leading to a choice through stochastic processes, the simplest of which is a random walk. Our model leads naturally to the hypothesis that probabilities and utilities are entangled dual characteristics of the real human decision making process. It predicts the famous fourfold pattern of risk preferences. Through the analysis of choice probabilities, it is possible to identify two previously postulated features of prospect theory: the inverse S-shaped subjective probability as a function of the objective probability and risk-seeking behavior in the loss domain. It also predicts observed violations of stochastic dominance, while it does not when the dominance is “evident”. Extending the model to account for human finite deliberation time and the effect of time pressure on choice, it provides other sound predictions: inverse relation between choice probability and response time, preference reversal with time pressure, and an inverse double-S-shaped probability weighting function. Our theory, which offers many more predictions for future tests, has strong implications for psychology, economics and artificial intelligence. Introduction Randomness is a fundamental component in most human affairs, from economics and politics to medicine and sports. Yet, people often make poor and inconsistent decisions when confronted with it. The rational normative recipe of Expected Utility Theory [1] has shown major limitations in accounting for how strongly people misperceive probabilities and uncertainty [2][3][4], leading to the notion of bounded rationality [5] and a long list of behavioral biases and fallacies. Several attempts [6][7][8][9][10][11][12] have been made to explain such fallacies, replacing the objective probabilities of events with "decision weights", but still retaining a sort of expectation principle, where the attractiveness of an event is decomposed into the product of (subjective) probability and (subjective) value. Numerous evidence [13][14][15][16][17][18] however suggest that the two are not independent; for example, people tend to overestimate the probability of an event if the associated outcome is bad. Rank-Dependent Theories [9][10][11] partially take into account the effect of value on probability, such that decision makers tend to overweight only events with 'extreme' consequences. However, their axiomatic structure prevents them to account for observed violations of stochastic dominance [19]. 
Operationally, estimating the subjective probability and utility as two separate entities is subject to the joint-hypothesis problem [20], leading to severe limitations for real-life applications. The above-cited frameworks are deterministic in nature, postulating that the best option will always be chosen. When tested against empirical data, a probabilistic component is needed [21] to account for observed "noise" and "inconsistencies" [22,23]. We can distinguish two classes of probabilistic theories of decision-making: random utility maximization (RUM) models and stochastic decision processes. The former, introduced by Thurstone [24], assumes that the "perceived" utility of an option is a random variable, written as the sum of a "true" fixed utility and a random disturbance, encoding the deviation from rational behaviour. Debreu [25] proves the existence of a utility function representing a stochastic preference relation with a minimal set of assumptions. McFadden [26] describes the evolution of RUM models over the past decades, linking it to the Luce choice axiom (LCA) [27], a very useful assumption enforcing desirable properties such as independence from irrelevant alternatives and strong stochastic transitivity. However, several empirical studies [28][29][30][31][32][33] show how humans do not always conform to such a structure, the most famous example being the "red bus/blue bus" problem [34]. The second class of models assumes that the utility of alternatives is fixed, but the process leading to a decision is inherently stochastic. Regarding choice in uncertain environments, the most famous model is decision field theory (DFT) [35], where a stochastic process (Brownian motion) is assumed to mimic the fuzzy and hesitant deliberation activity of the human mind. The theory takes inspiration from the Ratcliff drift-diffusion models (DDMs) [36][37][38], which have been shown to describe well choices and reaction times in perceptual decision-making tasks (for example, discriminating a motion direction). Differently from RUM models, these theories can account for the fact that human decision making happens in finite time, and explain how such deliberation time, as well as time pressure, affects choice probability. In almost all theories, the "true" utility of a gamble or its index of worth is obtained by combining probabilities and outcomes in a (subjective) expectation, irrespective of the probabilistic model adopted (additive random disturbance or drift-diffusion process). As a result, all these frameworks carry some problematic aspects of expected utility theory, the most prominent (and oldest) being embodied in the St. Petersburg paradox [39], where an infinite expectation value of the gamble would imply infinite willingness to pay, while in reality many people would pay at most a small amount [40]. Although Expected Utility was designed to solve this paradox, a simple modification of the gamble, often called the Super St. Petersburg paradox [41], reintroduces the problem: if the lottery provides outcome $2^{2^n}$, rather than $2^n$, with probability $1/2^n$ ($n = 1, 2, \dots$), the Bernoullian expected utility of the gamble (logarithmic utility function) diverges again. For any unbounded utility function, there will always be a Super Super St. Petersburg paradox.
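As a quick check of the divergence just mentioned (a standard calculation, with the gamble indexed by $n = 1, 2, \dots$ as assumed above), the Bernoullian logarithmic expected utility of the modified gamble is

```latex
\mathbb{E}[u] \;=\; \sum_{n=1}^{\infty} \frac{1}{2^{n}} \log\!\bigl(2^{2^{n}}\bigr)
            \;=\; \sum_{n=1}^{\infty} \frac{2^{n}}{2^{n}} \log 2
            \;=\; \sum_{n=1}^{\infty} \log 2 \;=\; \infty ,
```

so a logarithmic (or any unbounded) utility no longer caps the willingness to pay, which is exactly the point of the Super St. Petersburg construction.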
When shifting from the normative perspective of decision theory (telling what people should do)-where expected utility proves best-to a descriptive perspective (reporting what people actually do), it is worthwhile to investigate alternative mechanisms of value formation in human mind that are different from the class of generalized expectation approaches mentioned above. Indeed, from a semantic perspective, separating probability and outcome seems quite odd, since any probabilistic statement must contain explicitly or implicitly information about "value". In other words, a probability number quantifies the likelihood of a concrete event that is specified, and this event carries an explicit value or implicit assessment of worth or impact. For instance, when conceiving the likelihood of a natural disaster, one cannot help not thinking of the potential associated destruction and losses of lives, which are therefore implicitly connected to a cost. When thinking of the probability of the election of some political candidate, one cannot avoid envisaging the social, economic and financial consequences, which carry an implicit value judgement. Generally, whatever the event, it carries either a direct value or an indirect value assessment, even if not fully formalized in the mind of the probability assessor. Therefore, the way in which outcomes and probabilities interact in human mind seems to be much more entangled than represented by the simple factorization prevalent in utility theories and their generalizations in behavioral economics and psychology. The intermingled nature of probabilities and values have been reported by Lopes [42] and is highlighted in the above-cited experiments [13][14][15][16][17][18], which demonstrate the effect of outcomes on perceived probability. Our contribution Here we propose a new framework for describing human decisions under risk, based on a representative stochastic process-in the same spirit of drift-diffusion models-but with a notable difference: outcomes and probabilities are not merely multiplied to form an index of worth, rather they combine in a non-symmetric and non-separable way, as dual characteristics of an event. The difference will be evident when presenting the model into details, but the core concept is the following. In drift-diffusion models, as in DFT and race models, outcomes and associated probabilities of gambles are combined in a unique entity, a mathematical expectation, that then plays the role of a drift component of the stochastic process representing the decision-maker. The decision is triggered when the process reaches a threshold, called decision criterion, usually related to the time available for making a choice (the closer the threshold to the starting point, the faster the process will reach it). In our framework, probabilities and outcomes play a structurally different role; a decision occurs when the diffusive particle is absorbed at the end-point of an interval associated with a given event, whose distance is solely determined by the event's probability. The existence of n events is thus represented by n absorbing end-points at the end of n arms in a starfish configuration along which the Brownian particle diffuses. The n arms have different lengths controlled by the probabilities of the associated events. 
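A simple special case shows why encoding probabilities in the arm lengths is natural. For just two arms of lengths $a$ and $b$ meeting at the starting point, and no drift, the textbook splitting probability of a Brownian particle is $b/(a+b)$; taking the lengths inversely proportional to the event probabilities (with a common constant $\kappa$, used here purely for illustration) gives

```latex
P(\text{absorbed at the end of arm } A)
  \;=\; \frac{b}{a+b}
  \;=\; \frac{\kappa/p_B}{\kappa/p_A + \kappa/p_B}
  \;=\; p_A
  \qquad (a = \kappa/p_A,\; b = \kappa/p_B,\; p_A + p_B = 1),
```

i.e., in the absence of any value-induced bias the representation reproduces the objective probabilities exactly; the drifts introduced next are what distort this picture.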
In this representation, it is natural to conceptualize the values or utilities of the outcomes by adding drifts characterizing each arm of the starfish, the larger the value of an event, the larger the drift that biases the random walk towards the corresponding end-point. Notice that this mapping respects the positivity of the probabilities associated with the arms' lengths, while the drifts can be attractive or repulsive to reflect gains and losses, respectively. More concretely, consider several outcomes A,B,C. . ., each understood to occur with probabilities p A ,p B ,p C ,. . . Our key idea is that the mind imagines consciously or unconsciously some bundles of random paths wandering around in some abstract space, where the alternative outcomes A,B,C. . . are identified as distinct domains (absorbing boundaries) in this space. The distance between domain representing outcome, say, A and the initial position of the particle is inversely proportional to p A , while the bias responsible for the attraction of the particle to the boundary, is proportional to the outcome A. The probability for the diffusing particle to be absorbed by a particular domain is then primarily interpreted as a measure of attractiveness of the associated event, as in DFT; at the same time, a conditional absorption probability can be interpreted as a subjective value-distorted probability, as we will see below. Thanks to the mutual interaction between perceived probabilities and perceived value of outcomes embedded in the starfish geometry with drifts, our model predicts the famous fourfold pattern of risk preferences [43]. To get an intuition on why this is the case, we derive two previously postulated features of prospect theory [43]: the inverse S-shaped subjective probability as a function of the objective probability and risk-seeking behaviour in the loss domain. However, these two entities are not exactly those described by prospect theory, because they are not separable. Rather, they can be inferred and rationalized by studying how the predicted choice probability depends on events' outcomes and probabilities. Without added assumptions, our model conforms naturally to Luce choice axiom [27], enforcing strong stochastic transitivity for pairwise choices. It also predicts violations (as well as observance) of stochastic dominance, in agreement with empirical data [44]. Moreover, generalizing the model to account for time pressure and finite decision times, it provides other empirically confirmed predictions: the inverse relation between choice probability and response time [45], preference reversal through time pressure [46,47], and an inverse double-S-shaped probability weighting function [48]. Also, note that while usual driftdiffusion models have non-trivial and somehow artificial generalizations beyond binary choices [49], our representation remains essentially locally uni-dimensional for an arbitrary number of available options. Notwithstanding its predictive power, given its simplicity, the present version of our model has limitations. Because of Luce choice structure: i) it would predict a non-deterministic choice for a decision between two simple sure outcomes (thus we restrict our choice set, as Luce does); ii) it cannot predict observed violations of independence from irrelevant alternatives [31][32][33] (similarity effect, attraction effect, compromise effect). 
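A small Monte Carlo caricature of this construction is sketched below, in the spirit of the discrete random-walk representation mentioned later (S2 Fig): each outcome-probability pair becomes an arm whose lattice length scales like the inverse probability and whose outward step bias grows with the outcome value. The branching rule at the junction, the mapping constants, and the example lotteries are all illustrative assumptions, not the calibrated model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def starfish_choice(outcome_prob_pairs, kappa=10, value_bias=0.003, max_steps=200_000):
    """One deliberation: a biased discrete walk on a star graph.

    Arm k has length ~ kappa / p_k (longer for rarer outcomes) and an outward
    step probability 0.5 + value_bias * outcome_k (more attractive for larger
    outcomes).  Returns the index of the arm whose absorbing end is hit first.
    """
    lengths = [max(1, round(kappa / p)) for _, p in outcome_prob_pairs]
    biases = [min(0.95, max(0.05, 0.5 + value_bias * o)) for o, _ in outcome_prob_pairs]
    arm, pos = None, 0                       # start at the junction
    for _ in range(max_steps):
        if arm is None:                      # at the junction: enter a random arm
            arm, pos = int(rng.integers(len(lengths))), 1
        else:
            pos += 1 if rng.random() < biases[arm] else -1
            if pos == 0:
                arm = None                   # back at the junction
            elif pos == lengths[arm]:
                return arm                   # absorbed at this arm's end
    return arm                               # fallback, rarely reached

# Two binary lotteries, four arms: L1 = (100, 0.4; 0, 0.6), L2 = (35, 0.5; 30, 0.5)
arms = [(100, 0.4), (0, 0.6), (35, 0.5), (30, 0.5)]
hits = [starfish_choice(arms) for _ in range(3000)]
p_L1 = np.mean([h in (0, 1) for h in hits])
print(f"P(choose L1) ~ {p_L1:.2f},  P(choose L2) ~ {1 - p_L1:.2f}")
```

Nothing here multiplies a probability by a utility: lengths and biases shape the same absorption event jointly, which is the sense in which the two ingredients are entangled.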
Furthermore, the proposed stochastic representation is more of an allegory that should not be taken at literally meaning that the human brain imagines all possible random paths wandering around in some abstract space for several outcomes A, B, C. . ., for instance as a result of limited human working memory. Our framework is proposed as a first minimal complexity model or null-model of human risky choice, which provides the baseline for further elaboration and improvements. Indeed, our present model is characterized essentially by only two tuning parameters (compared for instance to the seven parameters of DFT). In the future, we will present extensions of the model obtained by relaxing some assumptions. In summary, motivated by: i) empirical evidence for "interaction" between probability and value [13][14][15][16][17][18]; ii) empirical evidence for intrinsically probabilistic human choice [50]; iii) success of drift-diffusion models in describing human behaviour in several tasks, we present a new probabilistic decision theory that combines probability and value in a non-separable way. Despite its simplicity, it provides straightforward derivations at a more microscopic level of several known structures that have been documented empirically in human decision theory. The rest of the article is structured as follows: in Section Model we introduce the theoretical model, first without time constraints and then generalizing. In Section Results, we outline the main predictions of our theory. Section Discussion summarizes and concludes. Stage 1: "Infinite time" Stochastic Representation Decision Theory (SRDT) This sub-section presents the simplest version of our model, i.e. without considering the role of (finite) time for human decision-making. Formulation of the stochastic representation of lotteries. In the simplest possible situation, a decision maker (DM) has to make a choice between playing two binary lotteries: If the DM chooses lottery L 1 (resp. L 2 ), she will receive amount o A (resp. o C ) with probability p (resp. q), and o B (resp. o D ) with probability 1-p (resp. 1-q). The amounts can be negative, corresponding to losses. As mentioned in the introduction, our model is conceptually analogous to drift-diffusion models, including decision field theory (DFT), i.e. a stochastic process is assumed to represent the human deliberation activity leading to a decision; choice is triggered when the process reaches a certain threshold. Fig 1 shows how the above binary choice is represented in DFT: if the process (Brownian particle in the simplest case) reaches the upper boundary (resp. lower boundary) first, then lottery L 1 (resp. L 2 ) is chosen. The drift component of the motion is related to the difference between expected-like utilities of the lotteries where u and π are the so-called utility and probability functions, respectively. In our framework, an alternative way of value formation is assumed, keeping in mind the numerous evidence [13][14][15][16][17][18] showing relevant interaction between probability and value perception. We start from a plausible representation of the lotteries' objective probabilities, as perceived by the decision maker. Typically, humans find easier to understand probabilities in terms of frequencies [51]. Therefore, we propose to model their cognition via the occurrence of favorable random walk paths that hit some target, an absorbing boundary in this case. 
In other words, we view the cognitive processes leading to the "feeling" or "understanding" of probability as imagining a bundle of random walkers wandering about, and the perception of the actual occurrence of the event as the arrival of random walkers in some boundaries or some domains. This representation allows one to give substance and meaning to what is the perception of probability, equal to the fraction of "successful" paths, in the standard frequentist approach of probability theory [52]. Once the lotteries' objective probabilities are encoded into some absorption probabilities, we introduce lotteries' outcomes and account for: i) their intrinsic utility; ii) their effect on perceived probability. The simplest incarnation of this twofold effect is to introduce an outcomedependent force (derived from a potential energy) that biases the random walk, producing a value-distorted understanding of probability. This construction leads to an effective influence between probabilities and outcomes; such reciprocal interaction will result in a distorted perception of these two entities by the decision-maker, that in turn determines her decision preferences. Put differently, instead of compressing all the lottery information into an expectation-like index of worth, we "unpack" a lottery by introducing an absorbing branch for each of its outcome-probability pairs. As a consequence, the topology of the space where the stochastic process wanders will depend on the specific choice setup. This condition resonates with the fact that, in many situations, utility maximization is computationally intractable [53]. Operationally, we represent choosing between L 1 and L 2 with a Brownian particle undergoing a continuous random walk [54] that starts at the crossing (taken as the origin) between 4 segments, 2 per lottery, as shown in Fig 2 (to be compared with one segment used in DFT, as shown in Fig 1 along the y-axis, while the x-axis is the time of deliberation). Pictorially, the decision-maker is identified with the Brownian particle itself, whose stochastic path simulates the deliberation act taking place while evaluating the possible alternatives. Each branch encodes information about one lottery outcome-through a potential energy tilting the branch-and its associated probability of occurrence, through the branch length ending with an absorbing boundary. A (perhaps more intuitive) analogous discrete random walk representation is shown in S2 Fig. Fig 1. DFT-representation of binary choice. If the process reaches the upper boundary (resp. lower boundary) first, then lottery L 1 (resp. L 2 ) is chosen. The drift component of the motion is related to the difference between expectedlike utilities of the lotteries. Time elapsed along the x-axis ("number of sample" denotes "time"), leading to directed paths along it. https://doi.org/10.1371/journal.pone.0243661.g001 When the process is restricted to represent only one lottery, the probability to be absorbed at the end of one branch can be interpreted as the value-distorted subjective probability of the associated outcome (see sub-section "Subjective Probability"). In the presence of two (or more) lotteries, the probability to be absorbed at the end of one branch of a given lottery gives a contribution to the total probability that this lottery is chosen. The probability of choosing lottery L 1 (resp. L 2 )-denoted by P(L 1 ) (resp. 
P(L 2 ))-is thus given by the sum of the corresponding absorption probabilities, where $P(k_{\eta_k})$ denotes the probability for the particle to be absorbed by the wall located at distance $\eta_k$ on branch $k_{\eta_k}$, for k = 1,2, with $\eta_{k=1} \in \{a,b\}$ and $\eta_{k=2} \in \{c,d\}$. In words, the probability of choosing, say, lottery L 1 is given by the sum of two terms: the probability of being absorbed along branch 1 a , representing (o A , p), plus the probability of being absorbed along branch 1 b . To quantify the meaning of an outcome o A , we assume the existence of a preference or value function u(o A ), endowed with the minimal standard properties of being non-decreasing and concave on the gain side to represent risk aversion (see sub-section "Risk-seeking behavior for losses" for the loss side). Then, the form of the potential energy acting on the Brownian particle along a branch with outcome o A is taken as linear, with a slope proportional to u(o A ), as represented by dashed lines in Fig 2. This corresponds to a constant force acting on the Brownian particle along each segment. The sign of the energy potential is such that the greater an outcome is, the higher is the attraction toward the corresponding branch end point. This representation has the advantage of remaining essentially one-dimensional, the motion on each segment being governed by a simple partial differential equation. For example, the probability density p(x,t) of the particle at position x and time t on branch 1 a (of length a) evolves according to the following Fokker-Planck equation [55,56]: $\partial p(x,t)/\partial t = u(o_A)\,\partial p(x,t)/\partial x + (D/2)\,\partial^2 p(x,t)/\partial x^2$, with $p(a,t) = 0$ for all t (absorbing boundary), where u(o A ) is the constant drift acting on the particle, D is the so-called diffusion coefficient, and the two boundary conditions account respectively for the absorbing wall at distance a from the origin and for f(t), the probability of the random walker incoming at the origin from the other branches. Note that Eq (4) is just one possible way to look at the problem, i.e. solving a diffusion process on each branch independently and then matching the flux to ensure conservation of probability mass. However, as shown in S1 Appendix, we did not proceed this way: rather, we first solve the absorption problem in the case of only two branches (i.e. a one-dimensional Brownian motion between two absorbing walls), and then show how it can be generalized to an arbitrary number of branches. Simple dimensional analysis of Eq (4) shows that D sets the scale for the impact of the outcome values compared with the probabilities in the value formation process: (i) taking very large D's amounts to neglecting the influence of outcome values; (ii) small D's make outcome values dominant in the construction of preferences. Explicit expressions for the decision probabilities. As shown in Fig 2, the probability P(L 1 ) (resp. P(L 2 )) for the decision maker to choose lottery L 1 (resp. L 2 ) is obtained by solving Eq (4) for each of the four branches with the matching condition of the conservation of the probability of presence of the Brownian particle when crossing the junction point at the origin. Using the theory of random walks and diffusion processes [57], we obtain the choice probabilities, Eq (5), together with the effective utilities, Eqs (6) and (7) (see S1 Appendix for derivation). Expression (5) recovers the ratio scale representation of Luce's choice axiom for binary choice [27], with effective utilities given by (6) and (7).
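For reference, the two-branch problem mentioned above (a one-dimensional Brownian motion between two absorbing walls) has a standard closed-form splitting probability. Under illustrative conventions of arm lengths $a$ and $b$, constant drifts $u_A$ and $u_B$ each pointing toward its own absorbing end, and diffusion coefficient $D$, the textbook scale-function result for a particle started at the junction is

```latex
P(\text{absorbed at the end of arm } a)
  \;=\;
  \frac{\bigl(1-e^{-2u_B b/D}\bigr)/(2u_B)}
       {\bigl(1-e^{-2u_B b/D}\bigr)/(2u_B) \;+\; \bigl(1-e^{-2u_A a/D}\bigr)/(2u_A)} ,
```

which reduces to $b/(a+b)$ when the drifts vanish. This is quoted only to show how lengths (probabilities) and drifts (values) enter non-separably in a Luce-type ratio; it is not the paper's four-branch result, and the grouping of terms in Eqs (5)-(7) (derived in S1 Appendix) need not coincide with this two-wall expression. The ratio structure of Eq (5) itself is what matters for the transitivity property discussed next.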
This implies the so-called strong stochastic transitivity for pairwise choices: $P_{\{L_1,L_2\}}(L_1) \ge 0.5$ and $P_{\{L_2,L_3\}}(L_2) \ge 0.5$ imply that $P_{\{L_1,L_3\}}(L_1) \ge \max\bigl[P_{\{L_1,L_2\}}(L_1),\, P_{\{L_2,L_3\}}(L_2)\bigr]$. Note that the solution of Eq (4) for N alternatives generalizes into the ratio form $P_N(L_j) = \tilde U(L_j)/\sum_{i=1}^{N} \tilde U(L_i)$, where $P_N(L_j)$ is the probability of choosing lottery L j among the N available lotteries and the $\tilde U(L_i)$'s are generalized utilities given by expressions of the form (6) and (7). As stated in the introduction, because of the Luce choice structure, our theory would predict a non-deterministic decision when the choice is between two sure outcomes (e.g. L 1 = {9,1} vs L 2 = {10,1}). Therefore, following Luce [27], we assume that no such task is present in the choice set. As can be seen from (6) and (7), the utility $\tilde U(L)$ of a given lottery is given by the sum of two terms, each representing the attractiveness of an outcome-probability pair, which cannot be decomposed into a simple product of utility and subjective probability, as in expected utility theories. In contrast, probabilities and utilities combine and interact in a non-trivial way, with D quantifying the relative importance of value with respect to probability assigned by the DM. This becomes evident when taking the asymptotic limits of, e.g., P(L 1 ) (in the presence of another lottery L 2 offered as the second option). In our framework, a decision maker characterized by D→0 (resp. D→∞) is influenced only by outcome values (resp. probabilities), while for finite D her decision derives from an entangled mixture of both. Note that the limit for D→0 depends on the sign of the utilities: for example, if u(o B ) and u(o D ) are negative, the asymptotic behavior of the choice probability changes accordingly. This shows an intrinsic difference in perception between gains and losses, an asymmetry that we discuss further in sub-section "Probability-distorted effective utility". Stage 2: Finite-time SRDT Rationale. Many empirical studies (see [58]) have shown how people do not always choose the best option, but the one that gives a fair trade-off between utility and "cost". A decision is in general a stressful operation, and humans have finite computational resources, so even when there is no explicit time constraint for making a choice, low-effort heuristics become attractive as soon as they provide satisfactory outcomes. Thus, the time dimension in decision-making cannot be neglected, contrary to what static theories of decision-making (including RUM models) assume. The next sub-section extends the previously presented model to account for finite-time deliberation. Theoretical extension. Eq (5) provided the choice probabilities P(L 1 ) and P(L 2 ) for a binary choice between lotteries L 1 and L 2 assuming infinite available time to make a decision. We are now interested in calculating the choice probability, say P(L 1 ), conditioned on the decision occurring at some time t ≤ T, denoted by P(L 1 |T). In other words, P(L 1 |T) is the probability to be absorbed by one of the outcomes of L 1 , given that the particle is absorbed somewhere before time T. This condition mimics either an explicit time limit (time pressure) or an implicit one, due to the accuracy-effort trade-off. Formally, for the binary choice representation in Fig 2, P(L 1 |T) is given by Eq (11), where J η (x,t) is the probability current on branch η at position x and time t. Given the structure of the problem, a closed-form expression of J η (x,t) is hard to obtain.
However, a very good approximation of the integrals in (11) is given by the Laplace transform $\tilde J(x,s)$ of the probability current J, where s is the variable conjugate to time. Therefore, combining Eqs (11) and (12), P(L 1 |T) is approximately given by Eq (13) (see S1 Appendix for derivation). It is easy to check that, when there is no time constraint (T→∞), Eq (13) retrieves the usual asymptotic choice probabilities in (5). Fourfold pattern of risk preferences The fourfold pattern of risk preferences [43] is one prominent example of the inadequacy of Expected Utility to describe observed human behaviors. It is experimentally observed that people are: i) risk-averse when gains have moderate probabilities or losses have small probabilities; ii) risk-seeking when losses have moderate probabilities or gains have small probabilities. In Table 1, we report an example of such behavior. Prospect theory, thanks to the interplay of value function and probability weighting, is able to describe it. In Fig 3, referring to the example in Eq (15), we show the predicted probability of choosing L 1 in task (A) (Fig 3A) and L 3 in task (B) (Fig 3B) as a function of p, for fixed diffusion coefficient D and different values of r. The fourfold pattern is correctly predicted: in Fig 3A, P(L 1 (p)) ≥ 0.5 for small p (risk-seeking, possibility effect) and P(L 1 (p)) ≤ 0.5 for large p (risk-averse, certainty effect). The situation is reversed in Fig 3B. Note that, despite the fact that our model is structurally different from Expected Utility, a more concave utility function, i.e. higher r, leads to greater risk-aversion in the gain domain. However, r is not an absolute indicator of risk-aversion as in EU, since the ultimate choice probabilities will depend also on the values of D and T (when time constraints are considered). Interestingly, our model also predicts that greater risk-aversion for gains corresponds to greater risk-loving for losses, suggesting a positive correlation (some evidence for such a correlation is reported in [59]). By further inspection of Fig 3A, we see that something unexpected happens: the choice probability P(L 1 (p)) does not go to 0.5 as p→0. But this should actually be expected, because L 1 and L 2 in Eq (15) become identical in this limit. This is due to the fact that the contribution to the choice probability from outcome 100, $\tilde U_p(100)$, does not go to 0 as p goes to 0. Thus, there is still a probability to be absorbed along that branch. Conversely, when p is exactly 0, as in L 2 , there is no branch corresponding to such an outcome. As the next subsection will explain, this amounts to an infinite overweighting of small probabilities. Technically, this problem is known as a singular perturbation limit [60], where, informally, "the solutions of the problem at a limiting value of the parameter are different in character from the limit of the solutions of the general problem" [61]. In this case, the singular perturbation is characterised by the inequality in Eq (17). Such a singularity is removed once we include finite time constraints, i.e. by imposing that the decision cannot take an infinite time. Indeed, replacing the effective utilities in Eq (6) with the time-constrained ones in Eq (14), the contribution to the choice probability of the probability-p outcome satisfies the limit in Eq (18): when the probability of an outcome goes to 0, the corresponding probability to be absorbed along that branch also goes to 0, and does not contribute to the choice probability of the related lottery.
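The singularity-removal argument can be illustrated numerically by conditioning a simulated walk on being absorbed before a deadline T: for finite T, the very long arm attached to a vanishing probability p is essentially never reached in time, so its contribution disappears, unlike in the T→∞ case. The sketch below does this for a single lottery with outcomes 100 (probability p) and 0 (probability 1−p); the lattice mapping and all constants are again illustrative assumptions, not the quantities entering Eqs (13)-(19).

```python
import numpy as np

rng = np.random.default_rng(2)

def absorbed_branch_before_T(p, T, kappa=10, value_bias=0.003):
    """Walk on two arms (outcome 100 w.p. p, outcome 0 w.p. 1-p).

    Returns 0 or 1 (index of the absorbing arm) if absorption happens within
    T steps, and None otherwise.  Arm length ~ 1/probability, outward step
    bias ~ outcome value; all mappings are illustrative.
    """
    lengths = [max(1, round(kappa / p)), max(1, round(kappa / (1 - p)))]
    biases = [0.5 + value_bias * 100, 0.5]           # toward the arm ends
    arm, pos = None, 0
    for _ in range(int(T)):
        if arm is None:
            arm, pos = int(rng.integers(2)), 1
        else:
            pos += 1 if rng.random() < biases[arm] else -1
            if pos == 0:
                arm = None
            elif pos == lengths[arm]:
                return arm
    return None                                      # no decision within T

for p in (0.3, 0.05, 0.01):
    for T in (200, 10_000):
        hits = [absorbed_branch_before_T(p, T) for _ in range(1000)]
        decided = [h for h in hits if h is not None]
        share = np.mean([h == 0 for h in decided]) if decided else float("nan")
        print(f"p = {p:<4}  T = {T:<6}  share of the '100' branch among decisions: {share:.2f}")
```

With a tight deadline the rare outcome's branch is effectively ignored (its share drops toward 0 as p shrinks), whereas with a generous deadline it keeps a finite share even for tiny p, mirroring the overweighting discussed above.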
Let us stress that we do not impose a "small" value of T to get rid of the singularity; T can be arbitrarily large, but finite. The probability of choosing L 1 (p) in Eq (15), given that the decision occurs before T < ∞, reads $P(L_1(p)|T) = \dfrac{\tilde U_p(100|T) + \tilde U_{1-p}(0|T)}{\tilde U_p(100|T) + \tilde U_{1-p}(0|T) + \tilde U_1(100p|T)}$ (19). Let us now investigate the role of the other parameters, the diffusion coefficient D and the time constraint T. As said in Section "Explicit expressions for the decision probabilities", D is a kind of "utility-numeraire", determining the relative impact of the outcome values compared with the probabilities in the value formation process. The role of T is more subtle: as we will see in the next subsection, a smaller T implies more underweighting (resp. overweighting) of small (resp. high) outcome probabilities. Figs 5 and 6 show the choice probabilities in Eq (16) for different values of D and T, respectively. On the gain side (Fig 5A), as D decreases, the strength of preferences increases and the preference reversal point between the risky and the safe lottery shifts to the right (risk-seeking for a wider range of p's). On the loss side (Fig 5B), decreasing D shifts the curve upward and leftward, implying stronger risk-seeking preferences for a wider range of p's. Focusing now on Fig 6, we see that, through the underweighting (resp. overweighting) of small (resp. high) probabilities, a smaller T destroys both the possibility effect on the gain side (no risk-seeking behavior for low p) and the certainty effect on the loss side (no risk-seeking behavior for high p). In general, a smaller T implies greater risk-aversion, as we will discuss in subsection "Preference reversal with time pressure". As stated at the beginning of the Section, in prospect theory the fourfold pattern of risk preferences is usually explained in terms of the combined effects of probability weighting and a convex-concave value function. The next three subsections show how, although in our model these two constructs are not separable, it is still possible to identify them as consequences rather than postulates, offering an additional intuition on why our theory can explain such patterns. Specifically, the study of how value and time constraints affect probability perception is discussed in subsections "Subjective probability without time constraints" and "Realistic inverse double-S-shaped probability weighting function". Conversely, the effect of probability on value perception is analyzed in subsection "Probability-distorted effective utility". Subjective probability without time constraints. Eq (5), together with (6) and (7), shows the resulting form of the decision probabilities without time constraints for the binary risky choice (1). From here, we now focus on studying the predicted probability perception of the decision-maker (DM), say of outcome o A of lottery L 1 . A convenient way to extract such information is to look at the probability of absorption along branch 1 a , conditional on being absorbed along any branch pertaining to L 1 ; this conditional probability, π(p) in Eq (20), can be interpreted as the amount of attention devoted to outcome o A when the DM is looking at lottery L 1 . Several authors [62,63] have established connections between subjective probability and similar psychological notions.
Indeed, the fact that π(p) defined in (20) represents a meaningful measure of subjective probability is supported by its asymptotic limits as a function of D. For D→∞, outcome values (potential energies) become negligible compared with the stochastic component and the probability perception is unaltered, so that the subjective probability is equal to the objective one. In contrast, for D→0, the decision maker does not pay attention to the probabilities and focuses solely on the payoffs, interpreting their likelihood only as a function of their magnitude. As for Eq (9), a simple interpretation of the D→0 limit is possible only when both utilities are positive. For u(o A ) and u(o B ) negative, the expression becomes that of Eq (22), which implies that when negative utilities are involved the decision maker, even in the D→0 limit, takes into account the event probabilities. For finite non-zero D, an interesting value distortion of probability perception arises: Fig 7 shows π(p) vs p for different $u(o_B)/u(o_A)$ ratios and D values. Our theory thus derives the empirical inverse S-shape of subjective probability as a function of objective probability, for instance used in standard Prospect Theory by Tversky and Kahneman [11], indicating that human beings tend to overestimate rare events and underestimate high-probability events. More specifically, π(p) ≥ p (resp. π(p) < p) for p ≤ p* (resp. p > p*), where p* is the inflection point of π(p). Our theory predicts that the asymmetry in the distortion of π(p) for p→0 and p→1 is controlled by $u(o_A)/u(o_B)$: the larger this ratio is, the larger the subjective distortion for small p's compared with large p's. There is empirical evidence that changing lottery payoffs changes inflection points. In [64], for each individual, the authors elicit two probability weighting functions $\pi_S^-(p)$ and $\pi_L^-(p)$ for gambles involving small and large losses. The idea is that, when considering lotteries like L = {−1000€, 0.1; −10€, 0.9} and a comparison lottery L′ whose losses have a different magnitude, it is possible that the probability 0.1 is not weighted in the same way, because of the different magnitude of the consequences and because of the "distance" between the lottery outcomes. Note that rank-dependent models (e.g. CPT) predict that π(0.1) is the same in both lotteries, since L and L′ are comonotonic. On average, the authors find that small probabilities (below about 0.33 for small losses and about 0.5 for large losses) are overweighted (indicating pessimism). The usual inverse-S shape thus holds over both small and large losses, but the inflection point shifts to the right over large losses. While deriving or recovering the empirical inverse S-shape, our formulation of the subjective probability is fundamentally different from those used in existing decision theories, such as the Prelec II weighting function [65], parametrized to account for some assumed probability distortion which is supposed to be intrinsic to the DM and which can be determined by calibration of the results of a number of standard tests and questions presented to the DM [66]. These subjective probabilities are considered independent of the values of the outcomes to which the probabilities are associated. We have previously argued, and also referred to empirical evidence, that there is no such thing as an outcome without value. Even a question as remote from everyday life as the probability of life on Mars, say, carries, depending on the DM, religious, scientific, and cultural values, and possibly more.
In our framework, the subjective probability (20) is influenced by the outcome values and represents the contribution of each outcome in a lottery to the choice of that lottery by the DM. Thus, our theory suggests that it is ill-conceived to attempt to characterize the subjective probabilities of the DM independently of the outcome values. Our approach allows us to formulate the general hypothesis that subjective probabilities are value-dependent, which deserves empirical investigation. In existing decision theories, the subjective probabilities are multiplied by the utilities of the associated events to form a measure of worth, and then the choice probability "layer" is added on top. In our theory, subjective probabilities are instead encapsulated into decision probabilities, the former determining the latter. Our model can thus be viewed as a natural generalisation beyond the standard factorisation of probabilities and values to form value preferences. At this stage, the definition of subjective probability as a relative absorption probability (Eq (20)) may seem somewhat counterintuitive, notwithstanding the fact that it correctly retrieves the objective event probability in the D→∞ limit. We would like to stress that our mathematical formulation of the subjective probability is fundamentally different from that in expected utility theories. In Expected Utility, as described by Savage [67], the assumption of separation between preferences and beliefs is crucial for the elicitation of subjective probability. However, as stated in the introduction, the simultaneous estimation of utility and subjective probability is subject to the joint-hypothesis problem [20], and many methods have been devised to circumvent such issues [68,69]. Here, in contrast, the subjective estimation of the likelihood of an event depends on the associated magnitude. Consequently, in our model, the subjective probability is actually implied by the utility function, and thus two separate functions cannot really be identified. Our definition of subjective probability should be treated as a way to extract how the choice probability depends on the outcome probabilities, and to get an intuition on why our model is able to explain the fourfold pattern of risk preferences. Concretely, one would just need to estimate the utility function (together with the parameters D and T), and the corresponding "belief function" comes as a result. The next subsection presents an analysis of the subjective probability when time or "energy" constraints are considered. "Realistic" inverse double-S-shaped probability weighting function. Al-Nowaihi and Dhami [48] report that a theory of choice should be able to describe the following two stylized facts: i) overweighting low-probability events and underweighting high-probability ones; ii) neglecting extremely low-probability events and treating extremely probable events as certain. The first fact is essentially captured by an inverse S-shaped probability weighting function, as derived in Eq (20). The second one is referred to by Kahneman and Tversky [43] as an editing phase: "the simplification of prospects can lead the individual to discard events of extremely low probability and to treat events of extremely high probability as if they were certain". Clearly, this resonates with the idea that the DM has limited computational resources and that, even when there is no explicit time limit, the processing cost acts as such.
To account for both patterns i) and ii), Al-Nowaihi and Dhami axiomatically construct a composite probability weighting function, shown in Fig 8, obtained by the concatenation of three different Prelec functions, for a total of six parameters (see Eq 6.2 in [48]). A DM with such a probability function underweights (ignores) very-low-probability events, p ∈ [0, p 1 ], and overweights (considers as certain) extremely probable events, p ∈ [p 3 , 1], reflecting stylized fact ii). Within the middle range p ∈ [p 1 , p 3 ], the function has an inverse-S shape, addressing stylized fact i). Although the proposed probability weighting function addresses the previously mentioned stylized facts, it has six parameters and may seem ad hoc and artificial. Our framework, on the other hand, predicts the desired shape, resulting from the superposition of two effects: finite-time deliberation and value distortion. Indeed, referring to the previously derived value-distorted subjective probability (20) for outcome o A of lottery L 1 in (1), the time-dependent generalization π(p|T) is obtained (approximately) by replacing the effective utilities with the time-constrained ones, $\tilde U_p(o|T)$ of Eq (14). In Fig 9, we plot π(p|T) for different values of T, fixing u(o A ) = u(o B ) = D = 10 for illustrative purposes. We can see how the value distortion and the finite-time deliberation have opposite effects: for high values of T, one observes an inverse S-shape, due to the influence of value on probability perception (as in Fig 7). For low values of T, the influence of time pressure becomes dominant, resulting in an S-shaped probability weighting. For intermediate values of T (in the example T = 0.3, black star-dotted line), the superposition of these two "forces" results in an inverse double-S-shaped probability weighting, similar to the one in Fig 8. In summary, our framework predicts the probability weighting function postulated in [48] with only two parameters, D and T, and offers a more microscopic explanation for the observed behavior, in terms of a competition between value distortion and the finiteness of computational resources. Let us stress that the time constraint in our model is not necessarily meant as an external time pressure; it can also be conceived as an internal time pressure, arising from energy constraints and the efficiency-accuracy tradeoff. Therefore, at this stage, we are not claiming that an explicit time pressure is needed to recover an inverse double-S curve. "Probability-distorted" effective utility. In the previous sub-section, we have studied how the outcome values alter the probability perception of the DM. Here, we study the effect of probability on value perception. Eq (5) has introduced the effective utilities $\tilde U_p(\cdot)$, which are obtained from the utilities u(·) via a non-trivial nonlinear operation involving the outcome probabilities. This is the dual of the value-distorted probability π(p) given in expression (20): a value perception $\tilde U_p(\cdot)$ influenced by probability. Fig 10 shows the transformed utility function $\tilde U_p(\cdot)$ as a function of the original one u(·) for different values of $D_p := pD/2$. The interaction between probabilities and values transforms an initially risk-averse (concave) utility function u(·) into a convex, risk-seeking utility on a sub-interval of the loss domain, predicting the existence of a reference point discriminating between behavior toward gains and behavior toward losses, as postulated in prospect theory.
We stress here that this comes as another prediction of the theory, without any parameter adjustment or added ingredients. In particular, it is not a phenomenological assumption put into the theory, as is done for instance in Prospect Theory. To illustrate this effect quantitatively, let us consider again the utility function $u(o) = \frac{1 - e^{-ro}}{r}$ for o ∈ ℝ. The corresponding transformed utility function $\tilde U_p(o)$ given by expression (20) then follows; expressions (20) and (25) predict risk-taking behavior on the loss side even when starting with a utility function that is everywhere concave, in agreement with the outlined predictions on the fourfold pattern of risk preferences [43]. Decision tasks like those in Eq (15) are classic examples where the weak risk-aversion relation, denoted by R w , can be applied, meaning that L 2 is riskier than L 1 . More general relations [70] have been suggested to formalize risk aversion, such as the so-called strong risk aversion (or second-order stochastic dominance) R s . Within expected utility, these two definitions of risk aversion coincide [70,71], but, in general, when departing from the expectation structure, the two relations differ [72] and need to be studied separately. An example for strong risk aversion is the pair of lotteries in Eq (27): according to (27), L 5 is riskier than L 6 , and our framework, using for example r = 1 as before, predicts the correct pattern for the majority of D's. In summary, without added assumptions, our theory predicts what has been postulated for instance by prospect theory, with a concave part of the value function for gains and a convex part for losses. These properties derive naturally from the stochastic representation of probabilities in the presence of values. However, the way in which the above-presented transformed utility determines choice preferences differs from usual decision models; in expected utility, the concavity of the utility function implies risk aversion through Jensen's inequality [73]. Here, due to the non-linear form of $\tilde U_p(o)$, it is in general not easy to derive analogous simple constraints for the model parameters. More generally, we stick with the notion of an (initially concave) utility for the following reason: utility is a well-defined concept in choice under certainty [74], where diminishing marginal utility indicates less and less increase in "happiness" as wealth increases. In Expected Utility, the concepts of diminishing marginal utility and risk aversion are fundamentally entangled: there cannot be one without the other. On the other hand, in generalized theories like Rank-Dependent Utility Theory, as shown in [75], it is possible for a decision-maker to be risk-seeking with a concave utility function, provided the probability weighting is sufficiently "optimistic". Therefore, the concepts of diminishing marginal utility and risk aversion are decoupled to some extent. Analogously, our model's hypothesis is that the utility function, in the context of choice under certainty, has some form (e.g. the CARA function used in this manuscript), expressing (or not) diminishing marginal utility. Then, as soon as there is some uncertainty, due to the interaction between probability and value, the utility becomes "distorted" and assumes a form like the one in Eq (25), allowing risk-seeking behavior for losses to emerge. Stochastic dominance First-order stochastic dominance [76] is a property that decision theorists are usually not willing to give up, as it essentially encodes the reasonable behaviour that "more is better".
A random variable (gamble) L 1 has first-order stochastic dominance over gamble L 2 if $P(L_1 \ge o) \ge P(L_2 \ge o)$ for all o and $P(L_1 \ge o) > P(L_2 \ge o)$ for some o, where {o} is the set of possible outcomes. However, people often violate it when presented with choices like the one in Eq (30): even though L 1 stochastically dominates L 2 , most people choose L 2 . Popular decision models like rank-dependent utility theory [10] and cumulative prospect theory [11] cannot account for this pattern. Within our framework, this is explained when DMs exhibit relatively low values of D, such that the decision is "value-oriented" and the DM does not pay sufficient attention to the probabilities. For this particular gamble, assuming a linear utility function, P(L 2 ) ≈ 0.65 for small values of D, quite close to the fraction of 70% of people choosing L 2 experimentally found by Birnbaum and Navarrete [77]. Note that our model does not predict any violation when the dominance is "evident", as in the following examples: (A) L 1 = {1€, 0.5; 3€, 0.5} vs L 2 = {1€, 0.5; 2€, 0.5}, for which P(L 1 ) ≥ P(L 2 ) for all D; (B) L 3 = {11€, 0.5; 12€, 0.5} vs L 4 = {10€, 1; 0€, 0}, for which P(L 3 ) ≥ P(L 4 ) for all D (Eq (31)). It is clear that L 1 dominates L 2 in (A) and L 3 dominates L 4 in (B), and people choose accordingly. Fig 12 shows the predicted probability of choosing L 1 in task (A) (Eq (31)) for different values of D and T. As expected, for small values of the diffusion coefficient D, the choice becomes more deterministic. The (explicit or implicit) time constraint T plays a similar role. Several descriptive theories [78,79] allowing violations in cases like (30) predicted unreasonable behaviour in tasks like (31), essentially because two outcomes with the same objective probability were forced to have the same subjective one [80]. Our framework, thanks to the non-separable form of the lotteries' attractiveness, avoids this problem and confirms its significant predictive power. Predictions from finite-time SRDT This section presents further predictions of our theory when it is generalized to account for finite decision time. Inverse relation between choice probability and response time. Several studies (e.g. [45]) report that there is an inverse relation between the probability of choosing an option and the (mean) decision time to choose that option (see Fig 13). Intuitively, the more "difficult" the choice (e.g. two lotteries with similar expected values), the more time it will take to decide, and the choice probability will be around .5, because the optimal decision is not obvious. By construction, our theory predicts such a phenomenon. Preference reversal with time pressure. In [46], the main result is that the fraction of subjects choosing low-risk gambles increased from below .50 (low time pressure) to above .50 (high time pressure). Therefore, subjects essentially became more risk-averse as the time available to make a decision decreased. Our framework is able to predict such a pattern. Consider a choice between lotteries with E[L 1 ] < E[L 2 ] but Var(L 1 ) < Var(L 2 ), so that L 2 gives on average a greater payoff but is riskier. Assume for simplicity a linear utility function u(x) = x. In Fig 14 we can clearly see that P(L 1 |T) goes from below .5 to above .5 as time pressure increases (i.e. as the available time decreases), reflecting the tendency found in [46] of increasing risk aversion as a function of time pressure.
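Returning to the first-order stochastic dominance criterion defined at the start of this section, a small helper for checking it between two discrete lotteries is sketched below. It is a direct transcription of the definition; the example pairs are the "evident" cases of Eq (31), and the function itself is generic rather than anything specific to the model.

```python
def dominates(l1, l2, tol=1e-12):
    """First-order stochastic dominance of lottery l1 over lottery l2.

    Each lottery is a list of (outcome, probability) pairs.  l1 dominates l2
    iff P(l1 >= o) >= P(l2 >= o) for every outcome o, with strict inequality
    for at least one o (checking the combined support is enough for discrete
    lotteries).
    """
    def tail(lottery, o):                    # P(lottery >= o)
        return sum(pr for out, pr in lottery if out >= o)
    support = sorted({o for o, _ in l1} | {o for o, _ in l2})
    diffs = [tail(l1, o) - tail(l2, o) for o in support]
    return all(d >= -tol for d in diffs) and any(d > tol for d in diffs)

L1 = [(1, 0.5), (3, 0.5)]          # Eq (31), task (A)
L2 = [(1, 0.5), (2, 0.5)]
L3 = [(11, 0.5), (12, 0.5)]        # Eq (31), task (B)
L4 = [(10, 1.0)]
print(dominates(L1, L2), dominates(L3, L4))   # expected: True True
```

The point of the section is precisely that real choices (and the model, for small D) can deviate from this normative check in less transparent cases such as Eq (30).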
Note that while Decision Field Theory needs to assume an asymmetric starting point for the random walk in order to capture a preference reversal, our theory essentially predicts this pattern without additional adjustable parameters.

Discussion

We have presented a simple and efficient "stochastic representation" framework that describes the human decision-making process as inherently probabilistic. It is based on a representation of the deliberation process leading to a choice through stochastic processes, the simplest of which is a random walk. Unlike random utility theory (external noise added to the rational utility, with the probability representation serving as a calibration procedure), our stochastic representation framework relies on a plausible description of the (assumed) intrinsic stochasticity of the human choice process. Our proposed approach does not disentangle probability and value as in expected utility theories; rather, it allows them to interact in a non-trivial way. Despite its simplicity, the model provides straightforward, more microscopic-level derivations of several known structures that have been documented empirically in human decision theory. Our theory also provides a number of novel predictions. Here, only structural properties have been presented through simple examples, which are not sufficient to falsify the theory. At this stage, the parsimony of its formulation and the wealth of obtained properties, which are in qualitative or semi-quantitative agreement with empirical observations, make our theory worth exploring further. We plan to use more sophisticated procedures to test our model against the major decision theories, based on cross-validation methods: parameters are first estimated from one part of an experiment, and then these same parameters are applied to a separate part of the experiment and the predictions are evaluated. Note that we cannot use the usual Wilks likelihood-ratio test [84] because in general the models will not be nested, but other methods are possible, such as the Vuong test [85] and Information Criteria (AIC [86] and BIC [87]). The current formulation is not meant to be "the" definitive framework (if such a thing exists), since, as already mentioned, it presents some limitations, such as those deriving from the Luce choice axiom, but rather a baseline on which to construct more elaborate models, keeping in mind the trade-off between parsimony and explanatory power. In general, we are aware that testing alternative ways of value formation is very difficult because of the measurement problem in economics [88]. Indeed, we cannot really measure the "degree of happiness" of the decision-maker; we have to infer it, adopting one particular model, through her choices. This adds an additional layer of complexity with respect to other hard sciences, such as physics or chemistry. On the other hand, contributions of this type may help to devise more effective ways to elicit preferences, deepening our understanding of decision processes. In addition, further theoretical and empirical work may lead to modifications of the presented theory, in which the expected utility hypothesis (separability of probability and value) can be seen as a particular case of a more complex structure where probability and value do interact to some extent in the decision maker's mind.
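To make the model-comparison plan sketched in the Discussion concrete, the snippet below (added here as an illustration; the fitted values are made up) shows the standard AIC and BIC formulas mentioned above for comparing non-nested models on the basis of their log-likelihoods.

```python
import math

def aic(log_lik: float, n_params: int) -> float:
    """Akaike Information Criterion: lower is better."""
    return 2 * n_params - 2 * log_lik

def bic(log_lik: float, n_params: int, n_obs: int) -> float:
    """Bayesian Information Criterion: lower is better."""
    return n_params * math.log(n_obs) - 2 * log_lik

# Hypothetical fits on held-out choice data (placeholder numbers):
candidates = {
    "stochastic representation": (-512.3, 2),
    "cumulative prospect theory": (-508.9, 5),
}
n_obs = 400
for name, (ll, k) in candidates.items():
    print(f"{name}: AIC={aic(ll, k):.1f}  BIC={bic(ll, k, n_obs):.1f}")
```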
2020-12-15T21:59:30.662Z
2020-04-21T00:00:00.000
{ "year": 2020, "sha1": "25d3be0efbd8d0021e95897a48ffd5f59c19eee5", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0243661&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ba9a3664a7393c32226ba1f5759acd3d27311df0", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
238795692
pes2o/s2orc
v3-fos-license
Infant Oral Health Care Concerning Education of Mothers – Part 2

Infant oral health care is essential because it provides the solid foundation on which a healthy oral environment is built. Dental caries is perhaps the most infectious and prevalent disease seen in the current scenario; it is 5 times more common than asthma and 7 times more common than hay fever in children. Decayed primary teeth can affect children's development, lead to malocclusion, and result in severe pain and potentially dangerous swelling. To prevent caries in young children, high-risk individuals should be identified at an early age (ideally high-risk mothers during prenatal care), and preventive strategies should be adopted, such as anticipatory guidance and behaviour modification (oral hygiene and feeding practices). On establishment of a Dental Home, mothers should be told about the preventive measures to take during teething and how to administer oral care and proper cleaning of an infant's teeth.

BACKGROUND

Infant oral health (IOH) is the bedrock on which preventive education and dental care are built to maximize the chance of a lifetime free of oral diseases that can otherwise be avoided. 1 Parents are the primary decision-makers for their children's wellbeing and medical care, and they play a vital part in achieving the best oral health outcomes for their children. Considering the importance of the parents' role in the overall care of young children, it is crucial to examine their knowledge, attitudes, and beliefs, as these affect the dental care that children receive at home and their access to professional dental services. 2 Dental caries, even in the earliest phase of life, can have a severe impact on the child's long-term health and well-being if left untreated. A higher incidence of caries in childhood has been linked to lower body weight and to school time lost to frequent dental visits. Dentists, however, can make a difference by incorporating the infant oral health examination / age-one visit into their practice. This will help prevent early childhood caries and go far towards guaranteeing ideal oral health services for a lifetime. Very little attention has been given to infant oral healthcare to date. The history of oral health care for infants underlines the need to move from the treatment-focused approach of managing oral disease to the concept of essential care directly from the perinatal period.
American Academy of Paediatric Dentistry (AAPD), in 1986, adopted the first infant oral healthcare policy statement approach. 3 Early dental care is an essential aspect of establishing and maintaining optimal oral health in children. Parents and guardians need to begin to standardize dental care at infancy and continue it through adolescence and into adulthood. As seen in the previous part, we came to know the importance of prenatal oral health counselling, anticipatory guidance for mothers and infant feeding practices, which are preventive measures. In this article we are going to look into the oral health risk assessment of an infant, how to time the first dental visit and establishment of a dental home, measures to be taken while teething, what oral hygiene measures are needed for maintenance and incorporation of fluorides into oral health routine. O R A L H E A L T H R I S K A S S E S S M E N T All children should get oral wellbeing risk evaluations by six months of age by a certified paediatrician or certified paediatric health care professional. Recording dental history from a new mother can help the dentists perform a risk evaluation to recognize guardians and babies with a high inclination to dental caries. 4 Questions directed at dietary practices, fluoride exposure, oral hygiene, the number, and area of the mother's dental fillings can give an overall notion of the mother's baseline rot potential. Children born earlier have a lower risk of caries than the late-born when the mother has a mild to moderate high caries rate. However, due to the lack of accessibility to longitudinal dental databases, these observations are not epidemiologically confirmed. 5,6 When risk assessment is done it promotes the treatment of the disease cycle as opposed to treating the result of the disease; which thus gives an inside and out comprehension of the disease factors for a particular patient and helps to individualize preventive conversations, chooses, and decides the recurrence of precautionary and therapeutic treatment for a patient; and expects caries progression or decline. 7 Dental caries is a result of an excess of explicit organisms that are essential for naturally occurring human dental flora. 8 Human dental flora is site-specific. Infant oral flora colonization begins at around 6 months to 3 years of age; that is when the eruption of primary dentition begins. 9,10 The vertical transmission of Streptococcus mutans from mother to child is well noted. 5,6 Truth be told, genotypes of Streptococcus mutans in babies seem indistinguishable from those present in moms in around 71 % of mother-new-born child pairs. 11 The essentialness of this data becomes centred while thinking about two focuses. To begin with, higher rate of caries is passed down in families, 12 and pass from mother to younger ones from generation to generation. The offspring of moms with high caries rates are at a greater risk of rot 12,13 Secondly, any changes or variation in a mother's dental flora at the time of the child's colonization can fundamentally influence the caries rate in their younger ones. 14-16 Therefore, an oral health risk evaluation before one year of age allows identifying high-risk patients and giving ideal reference and mediation to the younger ones, in this manner permitting an important occasion to diminish the degree of caries causing organisms in the mothers with past caries risk and during colonization of the new-born child. 
FIRST DENTAL VISIT AND DENTAL HOME

The American Academy of Paediatric Dentistry, whose vision is "Optimum Oral Health for All Children" and which serves as an important resource for dentists and hygienists requiring information on the early treatment of children, suggests that the child's first visit should take place when the first tooth erupts and before the child's first birthday. 17 The concept of the "dental home" is derived from the American Academy of Paediatrics concept of the "medical home." The definition states: "The dental home is the ongoing relationship between the dentist and the patient, inclusive of all aspects of oral health care delivered in a comprehensive, continuously accessible, coordinated, and family-centred way." Dental homes have to be established by 12 months of age and should include referral to dental specialists when appropriate. 18 To build up a dental home, it is necessary to meet the parents / prospective parents early. Gynaecologists, paediatricians and family doctors are the ones who interact with them long before a dentist does. They should establish communication with the parents so that effective and timely referrals are made to the dentist. Likewise, schools, pre-schools and childcare centres can be educated about dental homes.
- A notice such as "Do you know that you can benefit your child's teeth and oral well-being by beginning preventive dental care before delivery?" can draw the attention of expectant parents when placed in a gynaecologist's office. 19 Similarly, these messages can be displayed in the hospitals and clinics of paediatricians, gynaecologists and all other paediatric health care professionals.
- Dental problems can start early. A significant concern is Early Childhood Caries (also called baby bottle tooth decay or nursing caries). Children are at high risk of caries when they use a bottle during naps or at night, or when they nurse continually at the mother's breast.
- The sooner the dental visit, the better the chances of avoiding dental problems. Children with healthy teeth chew food easily, are better able to learn to speak clearly, and smile with confidence, establishing a lifetime of good dental habits.
- Have children drink from a cup more often as they approach their first birthday. Infants should not fall asleep with a bottle. Mothers should avoid night-time breastfeeding after the first primary tooth erupts. When juice is given, it should be in a glass, and drinking it from the bottle should be avoided.
- Parents should ensure that young children use an appropriately sized toothbrush with a small brushing surface and only a pea-sized amount of fluoride toothpaste at each brushing. Children should always be supervised while brushing and taught to spit out rather than swallow the toothpaste. Unless advised to do so by a dental or other health professional, parents should not use fluoride toothpaste for children under two years of age.
- Young children who primarily drink bottled water may not be receiving the necessary amount of fluoride.
- Sore gums are a common problem from age six months to 3 years as the teeth erupt. Many children like a chilled teething ring, a cool spoon, or a cool, wet washcloth. A few parents prefer a chilled ring; others rub the infant's gums with a clean finger.
- Parents and caregivers need to look after their own teeth, so that caries-causing microorganisms are not so easily passed on to their children.
- Giving anticipatory guidance regarding dental and oral development, fluoride status, non-nutritive sucking habits, teething, injury prevention, oral hygiene instruction, and the effect of dietary habits on the teeth is likewise an essential component of the initial visit.

TEETHING

The eruption of the first tooth is the developmental milestone most eagerly anticipated by most parents. For teething, the Latin term "Dentition difficili" was coined, literally meaning difficult dentition. Teething can lead to intermittent localized discomfort around the erupting primary teeth, irritation of the mucosa overlying the tooth, pain, general irritability / malaise, disturbed sleep, drooling, gum rubbing / biting / sucking, bowel upset (ranging from constipation to loose stools and diarrhoea), loss of appetite / altered fluid intake, and ear rubbing on the same side as the erupting tooth; however, many children have no apparent difficulties.

HOW TO CLEAN YOUR CHILD'S MOUTH 22

Even before the teeth start to erupt, the child's mouth should be cleaned at least once every day with clean gauze or a soft cotton pad. Cleaning should be established as a habit. To keep the child's teeth and gums clean:
1. Sit on a sofa or sit with your child's head in your lap. Alternatively, if someone is helping you, place the child's head in your lap with the feet towards your helper. It is important that the child is comfortable and that you can see clearly into his / her mouth.
2. Hold a clean gauze pad or a soft cloth over your finger. Dip the cloth in water so that it is moist but not soaking wet. Wipe your child's teeth and gums gently.
3. As soon as your child's teeth start to appear, begin to use a small, soft toothbrush to brush them. Make sure to brush all surfaces of the teeth, including along the gums.
4. It is not necessary to use toothpaste, but if you do, use a small quantity of fluoride toothpaste (about the size of a pea).
5. Children should brush their teeth on their own by age 11. Up to that point, parents should watch or help, depending on their child's abilities.

FLUORIDE

Fluoride is one of the best ways to prevent cavities. Fluoride can be given in drinking water or as a supplement in drops or tablets, with or without vitamins. Ask your child's dentist or physician about providing your child with fluoride if there is no fluoride in your water. Fluoride administration should begin when your child is around two years of age. Your dentist or dental hygienist applies a fluoride solution to the outside of the child's teeth to give them added protection. Decisions regarding the delivery of fluoride depend on the particular needs of each patient. 24 The use of fluoride for the prevention and control of caries is documented to be both safe and effective. When weighing the risk-benefit of fluoride, the main issue is mild fluorosis versus the prevention of devastating dental disease. In children younger than 2 years with mild or high caries risk, a "smear" of fluoridated toothpaste should be used. In all children aged 2-5 years, a "pea-sized" amount should be used.
Expertly applied topical fluoride; for example, fluoride varnish should be considered for children at risk of caries. 25 Systemically-administered fluoride should be advised for all children at caries risk who drink fluoride deficient water (< 0.6 ppm) subsequent to deciding any remaining dietary wellsprings of fluoride exposure. Cautious monitoring of fluoride is shown in the utilization of fluoride-containing items. Fluorosis has been related to total fluoride consumption during enamel improvement. C O N C L U S I O N S Setting up a dental home implies that a youngster's oral health care is overseen in a far-reaching, ceaselessly open, facilitated and family-focused route by an authorized dental specialist. The dental home upgrades the dental expert's capacity to give excellent oral health care, starting with the age one dental visit for fruitful preventive care and treatment as an aspect of a general oral health care foundation for life. Moreover, the foundation of the dental home guarantees suitable referral to dental specialists when care can't straightforwardly be given inside the dental home. With the mix of legitimate eating regimen, early oral hygiene and systemic and topical fluorides, we can advance a generation of sans caries youngsters, given that the dentalcare program is to be started at or before the eruption of the main primary teeth. Guardians are instructed that milk teeth need dental consideration like permanent teeth. Parental directions about giving non-cariogenic food in the middle of dinner like water/plain milk/new natural products is done to diminish the rate of dental caries. The job of fluoride in caries avoidance is considered as an ideal fluoride exposure which is fundamental for all dentate babies and kids. 26 Preventive dental consideration should begin ahead of schedule during infancy, in the main year of a youngster's life to guarantee fruitful results. In developing nations like India, where an absence of pedodontics and other dental labour forces in country zones are apparent, the arrangement of infant oral health (IOH) care is a difficult issue. To conquer these challenges, it is obligatory to teach the clinical and medical care experts about IOH care. This would improve the way to deal with dental consideration, particularly for poor people and the minority kids who suffer excessively from dental caries and who have restricted admittance to dental consideration. 27 Financial or other competing interests: None. Disclosure forms provided by the authors are available with the full text of this article at jemds.com.
2021-09-27T21:05:08.455Z
2021-08-02T00:00:00.000
{ "year": 2021, "sha1": "7f1bf258c9645e3880a048b3cb093853327f5f22", "oa_license": null, "oa_url": "https://doi.org/10.14260/jemds/2021/521", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4ab235317b44fee0c14c80f68974f04c16444b7a", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
20446283
pes2o/s2orc
v3-fos-license
Atomic Mutagenesis of the Ribosome: Towards a Molecular Understanding of Translation : The multifaceted repertoire of non-protein-coding RNAs (ncRNAs) in organisms of all three domains of life emphasizes their fundamental role in biology. Research in my lab focuses on revealing the regulatory and catalytic function of small and large ncRNAs in different model organisms. In particular we are interested in understanding ncRNA/protein complexes such as the vault complex or the ribosome. The ribosome, the central enzyme of protein biosynthesis, is a multifunctional ribonucleoprotein particle composed of two unequal subunits that translates the genome’s message into all proteins needed for life. The crucial role the translation machinery plays in gene expression is also mirrored by the fact that the ribosome represents the main target for antibiotics. Decades of genetic, biochemical and recent crystallographic studies revealed the ribosome as an RNA-enzyme with roots in the ‘RNA world’. Despite these experimental insights, the catalytic and regulatory mechanisms of the ribosomal RNA are still not fully understood at the molecular level. To unravel the detailed contributions of rRNA nucleotides for protein synthesis we have developed and applied an ‘atomic mutagenesis’ approach. This tool allows the role of specific 23S rRNA functional groups and even individual atoms to be studied during various stages of the ribosomal elongation cycle with thus far unequalled precision. This experimental approach bridges the disciplines of biochemistry and organic chemistry and has recently revealed specific functional 23S rRNA groups involved in peptide bond synthesis Norbert Polacek studied biology/genetics at the University of The ribosome is a multifunctional RNA-protein complex and translates the genome's message into all proteins needed for life. Translation of the genetic information represents the final step in gene expression. The crucial role of protein biosynthesis in gene expression is highlighted by the fact that the ribosome represents the main target for clinically relevant antibiotics. Therefore the knowledge of its functioning is crucial for our understanding of antibiotic resistance and for the future design of new anti-microbial compounds. The ribosome is the largest known RNA enzyme and is regarded as one of the most ancient catalysts in biology. The bacterial ribosome has a molecular weight of 2.6-2.8 MDa with about 2/3 of the mass comprised of ribosomal RNA (rRNA) and 1/3 of ribosomal proteins (r-proteins). The ribosome is a highly dynamic particle and can be regarded as a molecular machine consisting of two unequally sized subunits. The large 50S subunit contains two rRNA molecules (the~2'900 nucleotide-long 23S rRNA and the 120 residue-long 5S rRNA) and about 33 different r-proteins. The small 30S subunit on the other hand carries a single rRNA strand (the~1'500 nucleotide-long 16S rRNA) and approximately 20 r-proteins. All ribosomes have three distinct tRNA binding sites designated the A-site (aminoacyl-tRNA or acceptor site), P-site (peptidyl-tRNA or donor site), and E-site (tRNA exit site). The catalytic center of the ribosome is located on the 50S subunit where the individual amino acid monomers are chemically linked via peptide bonds to form the polypeptide chain. This active site is referred to as the peptidyl transferase center (PTC) and it is located in a funnelshaped deep cleft on the interface side of the 50S subunit. 
Peptide bond synthesis and peptidyl-tRNA (pept-tRNA) hydrolysis are the two central chemical reactions of protein synthesis that take place in the PTC (for reviews see refs. [1,2]). Peptide bond formation involves aminolysis by the α-amino group of the A-site bound aminoacyl-tRNA (aa-tRNA) of the ester ester bond that connects the fully synthesized peptide chain and the tRNA in the PTC P-site. This reaction is the second chemical reaction that is catalyzed in the PTC during protein synthesis. A universally conserved GGQ tri-peptide motif, which is located in a flexible loop of the RF, reaches into the PTC where it contacts and repositions conserved 23S rRNA residues, such as the crucial nucleotide A2602. [15,16] These structural changes and interactions with PTC nucleotides are thought to be critical for the GGQ motif to adopt its functional conformation and allows it to coordinate the hydrolytic water molecule that finally cleaves the pept-tRNA ester bond. [17] Finally, the recycling phase of translation frees the ribosome from the mRNA, the tRNAs and the class I RF. The latter are removed from the ribosome by the release factor RF3, which represents another translational GTPase. Before the ribosome can re-initiate at another mRNA molecule it needs to be dissociated into its subunits. This is established via binding of the ribosomal recycling factor RRF and EF-G GTPase activity. [7] Nucleotide Analog Interference in the Ribosome Using the Atomic Mutagenesis Approach While the crystal structures and cryo-EM images of ribosomal complexes provide a wealth of detailed insights, they do not reveal definite answers about the molecular mechanisms of ribosome-triggered reactions. For example, despite the structural identification of the inner shell 23S rRNA nucleotides that line up the catalytic center of the ribosome, the detailed mechanism how peptide bonds are formed remains controversial and far from being understood at the molecular level. [1,2,[18][19][20][21][22] Mutational studies of PTC nucleotides turned out to be insufficient to unequivocally disclose the contribution of specific 23S rRNA functional groups to translation at the atomic level. [23] It appears that the level of chemical engineering that can be generated via standard mutagenesis is not adequate since it is limited to only three possible nucleobase changes that can be introduced by RNA polymerase. [24] To circumvent these limitations and to be able to fully use the structural information provided by crystallographic and cryo-EM studies, we have established an in vitro genetics approach allowing the site-directed manipulation of 23S rRNA residues in the context of the 50S ribosomal subunit at the functional group or even single atom level. [25,26] This approach permits changing the chemical characteristic of a specific residue within an RNA molecule containing thousands of nucleotides. The key fea-bond that carries the nascent peptide at the terminal ribose C3' position of pept-tRNA. Subsequent to the nucleophilic attack of the α-amino group, a short-lived tetrahedral transition state is formed that breaks down by donating a proton to the leaving oxygen to yield the reaction products deacylated tRNA at the P-site and pept-tRNA (elongated by one amino acid) at the A-site. From an energetic point of view formation of a peptide bond is a downhill reaction since almost 8 kcal/mol are 'stored' in the ester bond of aa-tRNA and only~0.5 kcal/mol are needed for amide bond formation. 
Nevertheless, the uncatalyzed reaction (extrapolated from model reactions) occurs very slowly in solution with less than one bond formed per day. [3] Thus the ribosome accelerates the rate of peptide bond formation approximately 10 7 -fold. [4] The mechanism by which the PTC accelerates peptide bond formation has long been a question of rather intense discussions over decades (reviewed in ref. [1]). Before the discovery of the first RNA enzymes an r-protein-based catalytic mechanism was proposed even though it had already been realized that the function of rRNA inside the ribosome exceeds that of a mere scaffold for the r-proteins. The debates whether or not the ribosome is a ribozyme sustained until the beginning of the new millennium when high-resolution crystallographic structures ended these disputes by revealing the inner shell of the PTC as an all-RNA active site. [5,6] Five universally conserved residues of domain V of 23S rRNA form the catalytic crevice. From an evolutionary point of view it is remarkable that the ribosome, so to say the mother of all contemporary protein enzymes, is an RNA enzyme. Obviously the PTC proved itself as a very efficient RNAbased catalyst and has therefore not been replaced by proteins during the course of evolution. The ribosome can thus be regarded as molecular relic that outlived the transition from the 'RNA world' to contemporary biology. Making Proteins: The Ribosomal Elongation Cycle Protein biosynthesis can be divided into four different steps: initiation, elongation, termination and recycling. [7] During the elongation cycle of translation, the tRNAs have to move along a 100 Å path through the ribosome in steps of 10-30 Å. The initiator tRNA (fMet-tRNA fMet in bacteria) is delivered to the 30S particle as ternary complex with initiation factor 2 (IF2) and GTP, where it binds to the AUG mRNA start codon which is displayed in the 30S P-site with the aid of IF1 and IF3. Subsequently the 50S subunit joins which results in triggering of the GTPase activity of IF2 and subsequent dissociation of the IFs from the 70S initiation complex. During the elongation phase of translation, the next amino acid is delivered to the A-site in form of the ternary complex composed of aa-tRNA, elongation factor Tu (EF-Tu) and GTP. Accuracy of translation is maintained in the 30S decoding center by a sophisticated mechanism relying on monitoring the shape of the codon-anticodon mini-helix as well as the conformation of the tRNA anticodon stem-loop by highly conserved 16S rRNA elements. [8,9] For correct decoding only exact Watson-Crick interactions between the mRNA codon and the tRNA anticodon (primarily monitored at the first two nucleotides of the codon) are allowed and trigger, by a so far not fully understood signaling pathway mechanism, GTP hydrolysis on EF-Tu. EF-Tu•GDP subsequently dissociates from the aa-tRNA. The acceptor end of aa-tRNA subsequently swings into the 50S A-site, a process called accommodation. Also during this accommodation step discrimination between correct and incorrect aa-tRNAs can be achieved, since it has been shown that the rate of accommodation for cognate aa-tRNA is significantly faster than for near-cognate aa-tRNAs and thus the latter can even be rejected at this stage. [10] Once the CCA terminal acceptor end of the aa-tRNA is fully accommodated in the PTC, peptide bond formation occurs immediately. While catalyzing 15 to 20 peptide bonds per second [11,12] the error rate remains in the range of 10 -3 to 10 -4 . 
[10,13,14] During the course of peptide bond formation the peptidyl moiety of P-site located pept-tRNA is transferred to the aa-tRNA in the A-site. This results in the elongation of the growing peptide chain by one amino acid at the C-terminal end. The nascent peptide departs the catalytic center via the so-called peptide exit tunnel, a~100 Å long and rRNA-rich path that spans the entire 50S subunit and has its end at the solvent side. In the final step of the elongation cycle the deacylated tRNA and the pept-tRNA are translocated from the P-and A-sites to the E-and P-sites, respectively. This wellorchestrated tRNA/mRNA movement is promoted by the action of EF-G and GTP hydrolysis, which results in the entrance of a new mRNA codon into the decoding center of the small ribosomal subunit A-site. At the end of the open reading frame an mRNA stop codon (UAA, UAG, or UGA) is moved into the A-site. This results in the establishment of the termination phase of protein biosynthesis. Stop codons are usually not recognized by tRNA anticodons but by specific peptide motifs of class I release factors (RF1 and RF2 in bacteria). Binding of a class I RF to the A-site of the ribosome results in the hydrolysis of the tional GTPases, [29,30] and tRNA translocation. [31] One of the most surprising results of our recent research was the finding that a single rRNA backbone group, namely the ribose 2'-OH of the inner core PTC residue A2451, rather than any of the universally conserved nucleobases is crucial for peptide bond synthesis (Fig. 2). Introducing different modifications at this ribose 2' position with markedly different hydrogen donor and acceptor characteristics demonstrated that the 2'-OH group at A2451 donates a proton during the course of amid bond synthesis. [28] The data obtained so far with this approach has enabled us to put forward a novel catalytic mechanism for peptide bond synthesis (Fig. 3) that embraces also relevant data from other research groups. [32][33][34][35][36][37] Does this mean that the PTC nucleobases are not crucial for ribosome functions? The answer is most likely no, since typically universal conservation indicates functional relevance. While it seems that rRNA backbone groups drive ribosome catalysis [15,28] PTC nucleobases are crucial for tRNA translocation. Employing the atomic mutagenesis approach we were able to identify the importance of a non-Watson-Crick base pair between the two active site residues A2450-C2063 for tRNA translocation. [31] In addition, we identified a functional group at an adenosine at position A2660 of the 23S rRNA to be pivotal for activating the GTPase activity of EF-G (Fig. 2). [29,30] A2660 resides in the so-called sarcin-ricin loop (SRL) which has been shown previously by biochemical and structural studies (reviewed in ref. [30]) to be in immediate proximity to the GTP bound in the active site of translational GTPases such as EF-G or EF-Tu. Our atomic mutagenesis studies highlighted the C6 exocyclic amino group at A2660 as a potential trigger of EF-G GTPase activity (Fig. 2). Since the chemical nature of this C6 exocyclic group does not seem to play a critical role (Fig. 2), we interpreted our data such that this nucleobase 'communicates' with the G-domain of the elongation factor via π-stacking interactions of this adenine with amino acid residues of EF-G. Alternatively, the exocyclic N6 amino group could be necessary to sterically clash into EF-G residues at the G domain. 
These interactions may act as the trigger to induce conformational changes within the G domain of EF-G leading to GTPase activation. Due to the universal conservation of the SRL and the G-domains of translational GTPases, this might be a commonly used GTPase activation mechanism among all translational GTPases, such as initiation factor 2 (IF2), EF-Tu, EF4 (LepA) or release factor 3. Indeed all of these translational GTPases have been shown to equally depend on the presence of this ture of this approach is the use of circularly permuted (cp) 23S rRNA transcripts as the major component for in vitro reconstitution of 50S particles (see Fig. 1 for details). The generation of a cp-23S rRNA is possible without dramatic structural consequences due to the proximity of the natural 5' and 3' ends, which form helix 1 (H1) of native 23S rRNA (Fig 1). The new ends of the cp-23S rRNA are placed such that a short sequence gap is introduced (typically between 25 and 55 nucleotides), which encompasses the 23S rRNA residue under investigation. The missing rRNA segment is chemically synthesized to either contain the wild type sequence, or non-natural nucleoside analogs. During an in vitro reconstitution procedure this synthetic RNA piece is assembled with all the other components of the 50S subunit to form a large ribosomal subunit. Depending on the studied ribosomal function, the chemically engineered 50S subunit can subsequently be joined with native 30S subunits to yield a complete 70S ribosome for functional analyses. Thus with this experimental design, which we dubbed 'atomic mutagenesis', individual functional groups or even single atoms on 23S rRNA residues can be exchanged within the context of the whole 70S ribosome. The atomic mutagenesis technique has so far been successfully applied to study peptide bond formation, [26][27][28] release factor-mediated peptidyl-tRNA hydrolysis, [15] ribosome-triggered activation of transla- Fig. 1. Experimental design of the 'atomic mutagenesis' approach. The circularly permuted (cp) 23S rRNA (schematic secondary structure is shown in the center) is produced by T7 RNA polymerase in vitro transcription from the cp-23S rDNA template. The cp-23S rDNA is generated by PCR (PCR primers are depicted as arrows; the forward primer introduces the T7 RNA polymerase promoter) using a plasmid carrying tandemly repeated 23S rRNA genes from T. aquaticus. The cp-23S rRNA is designed so that the natural ends of 23S rRNA, which form helix 1 (H1) in the native 50S subunit, are covalently connected (bold black line) and new 5' and 3' ends have been introduced. Thereby a short sequence gap is generated encompassing the residue under investigation. The compensating synthetic RNA oligo (grey) that carries the desired non-natural nucleoside (asterisk) is used in combination with the remaining components of the 50S subunit (5S rRNA and the total r-proteins of the large subunit of T. aquaticus) to generate chemically engineered 50S subunits by in vitro reconstitution. C6 exocyclic group at A2660. [30] Structural data suggest an alternate or additional ribosomal trigger to fire GTPase activity on ribosome-bound elongation factors, namely a non-bridging oxygen at the phosphordiester bond between SRL residues G2661 and A2662. [38] Equipped with the atomic mutagenesis approach this model can now be tested experimentally since it enables us to individually manipulate the suspected rRNA backbone groups in that region of the SRL. 
Research Directions and Perspectives Initial proof of principle experiments demonstrated that unprecedented molecular insight can be gained by applying the atomic mutagenesis approach. However in these initial applications of the technique the chemically engineered ribosomes were used in simplified model reactions of protein synthesis primarily under single turnover reaction conditions (e.g. peptide bond formation, pept-tRNA hydrolysis). However, protein biosynthesis by the ribosome is a very dynamic multi-functional and iterative process, thus it is possible that functionally crucial 23S rRNA groups have been overlooked in these initial studies. In order to improve the biological relevance of the assays and to challenge chemically engineered ribosomes in a more physiologically relevant system, we have recently optimized reaction conditions to employ these in vitro generated particles in genuine in vitro translation reactions. And indeed chemically engineered ribosomes turned out to be active in poly(U)-directed poly(Phe) synthesis and, more importantly, also in translation reactions using natural mRNA templates (Fig 4). [25] Currently we use the atomic mutagenesis approach to (i) study the function of the 50S E-site, (ii) to 'retro-evolve' a miniaturized functional ribosome, and (iii) to learn more about the regulatory potential of interactions between the growing peptide chain, as it leaves the PTC, and the nascent peptide exit tunnel wall in the large ribosomal subunit. The experimental evidence gathered to date with the atomic mutagenesis technique open doors for novel avenues of biochemical research and revealed the ribosome, the largest known ribozyme, as being amenable to synthetic biology approaches. [23] It appears possible that the chemical characteristic of the ribosome can be rationally engineered utilizing the 'atomic mutagenesis' approach to generate an in vitro translation system that allows the synthesis of 'designer peptides' carrying, for example, several consecutive unnatural amino acids. Fig. 2. Atomic mutagenesis at A2451 in the peptidyl transferase center (PTC) and at A2660 in the sarcin-ricin-loop (SRL) of 23S rRNA. Secondary structures of the respective active sites are shown whereas the newly introduced endpoints of the cp-23S rRNA constructs are shown (5' and 3', respectively). The chemically synthesized RNA oligos, which compensate for the missing rRNA fragments, are shown in grey and are held in place by Watson-Crick base pairing with the cp-23S rRNAs. The chemical structures of nucleoside analogs that were introduced at A2451 or A2660 are depicted and the relative rates of peptide bond formation [26,28] or EF-G GTPase activity [29] are shown for each nucleoside analog above the structures. The rates were normalized to reconstituted ribosomes containing the synthetic wild-type RNA fragment. Te sted nucleoside analogs: dA: 2'-deoxy-adenosine; c 3 A: 3-deaza-adenosine; 2'-F-A: 2'-fluoro-adenosine; 2,6-DAP: 2,6-diaminopurine; 2'-NH 2 -A: 2'-amino-adenosine; d-aba: 2'-deoxy-ribose-abasic analog; r-aba: riboseabasic analog; Pu: purine; m 6 2 A: N6,N6-dimethyladenosine; I: inosine; m 6 Pu: 6-methylpurine. P-site peptidyl-tRNA A-site aminoacyl-tRNA 23S rRNA A2451 R Fig. 3. Model for the mechanism of peptide bond formation proposing a role for A2451 based on our experimental finding that its hydrogen donor propensity is a stringent requirement for activity. 
[28] The A2451 2'-O-H ··· O(2') A76 hydrogen bond assists in P-site tRNA A76 ribose positioning in its functionally competent confirmation and in suppression of spontaneous intra-molecular transesterification. In this model the nucleophilic attack of the α-amino group of A-site bound aminoacyl-tRNA on the ester carbonyl carbon is accompanied by a concomitant acceptance of a proton from the α-amino group by the A76 2'-O of the peptidyl-tRNA which simultaneously donates its proton to the vicinal 3'-O. Arrows indicate pair-wise electron movement for proton shuttling after the attack of the α-amino nucleophile has established the tetrahedral intermediate. In vitro assembled ribosomes, programmed with mRNAs encoding either r-protein S8 or the enzyme chloramphenicol acetyltransferase, are capable of translating full-length [ 35 S]-labeled proteins (arrow heads). The reaction products were separated via SDS polyacrylamide gel electrophoresis. Results with three different cp-23S rRNA constructs are shown that allow distinct 50S active sites to be investigated by the atomic mutagenesis approach (PTC: peptidyl transferase center; ex. tunnel: nascent peptide exit tunnel; SRL: sarcin-ricin-loop). Also in this assay the requirement for a ribose 2'-OH group at A2451 is evident (top gel). Lanes labeled with 'M' show an aliquot of an in vitro translation reaction using native 70S ribosomes from E. coli and serve as length markers. For abbreviation of nucleoside analogs see Fig. 2.
2017-07-31T19:07:27.605Z
2013-03-25T00:00:00.000
{ "year": 2013, "sha1": "e45387cf8dbd254c6441161a41811381486d3e34", "oa_license": "CCBYNC", "oa_url": "https://chimia.ch/chimia/article/download/5394/4684", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "16ab7fce738a29f8097e42435730441f5f99f6e1", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
225313794
pes2o/s2orc
v3-fos-license
Research of the Daubechies Wavelet spectrum of vibroacoustic signals for diagnostics of diesel engines of combine harvesters

The authors have studied the practice of vibration diagnostics of combine harvester diesel engines and established the prevalence of non-stationary signals, whose statistical properties evolve over time. It was determined that these signals consist of short-term high-frequency elements onto which low-frequency components are superimposed. The short-time Fourier transform was used to analyse such signals, which requires good frequency resolution together with good time resolution. The authors have used wavelets in cases where the result of the analysis of a signal should contain not only a simple enumeration of its characteristic frequencies and scales (spectral analysis), but also data on the local coordinates at which these frequencies manifest themselves. This is exactly the task of identifying the moments at which defects arise in the joints of combine harvester diesel engines.

Introduction

The Fourier transform is the most widely used tool for spectral analysis in vibration diagnostics of diesel engines [1]. It decomposes the signal into orthogonal basis functions (sines and cosines), determining its constituent frequencies [2]. Strictly speaking, this method is mathematically appropriate for stationary signals (bearings, turbines) and inappropriate for non-stationary signals (piston transfer, gas distribution mechanism, nozzles, etc.) [3]. In particular, the Fourier transform does not make it possible to determine whether a certain frequency was present in the signal at all times or appeared there only at a given moment (the occurrence of a defect) [4]. At present, all vibroacoustic signals, even non-stationary ones, are with a certain approximation treated as stationary by dividing them into blocks of conditionally stationary segments [5], whose statistics remain constant over the duration of each segment. In vibrodiagnostic practice, however, non-stationary signals, whose statistical properties change over time, are the most common [6]. Typically, they consist of short-term high-frequency elements [7] accompanied by low-frequency components superimposed on the former [8]. The method used to analyse such signals should therefore provide good frequency resolution along with good time resolution [9]: the former to localize the low-frequency components and the latter to resolve the high-frequency components [10]. Such a method [11] is found in the literature for the analysis of non-stationary signals and is called the Short-Time Fourier Transform (STFT) (figure 1).

Materials and methods

The time interval of the signal is divided into segments and the STFT is applied separately to each segment [12]. In this way a transition is made to a time-frequency (frequency-coordinate) representation of the signal, in which the signal is considered fixed within each segment (window) [13]. The result of the window transform is a family of spectra that characterizes the change of the signal spectrum over the intervals of the window shift [14]. Briefly, the STFT can be characterized by the following algorithm: define an analysis window (e.g., 30 ms for narrowband, 5 ms for broadband analysis); determine the amount of overlap between windows (e.g.
30%); select a window function (e.g., Hann, Gauss, Blackman); create a windowed segment (multiplication of the signal by the window function); apply a fast Fourier transform (FFT) to each windowed segment [15].

Results and discussion

The segment spectra obtained by this algorithm make it possible to select and analyse the features of the non-stationary signal along the coordinate (time) axis. Normally, the size of the support of the window function w(t) should be chosen according to the stationarity interval of the signal. For instance, let us examine the transformation of Eq (1) for a real vibroacoustic signal (figure 2) obtained from the cylinder block of the SMD-31A engine with the accelerometer B&K Type 4333 No 272437 installed in the piston transfer zone of the 3rd cylinder. In Eq (1), the function w(t − bk) is the window of the shift transformation along the coordinate t, where the parameter b specifies fixed shift values; for window shifting over a uniform domain bk, the shifts kb are assumed to be equally spaced. Both the simplest rectangular window and special weighting windows (Blackman, Bartlett, Gauss, etc.) can be used as the transform window; the latter introduce smaller spectral distortions when cutting the signal into window segments. A listing of the window transform example for a non-stationary vibration signal is given in (2), and the result is shown in figure 3 (Figure 3: implementation of the window transform using two types of windows). From the spectrum of the signal S, it is possible to identify harmonic oscillations at more than 6 strongly pronounced frequencies, determine the relationship between the amplitudes of these oscillations, and indicate where in the signal interval the oscillations are localized. The coordinate (time) resolution of the window transform is determined by the width of the window function and is inversely proportional to the frequency resolution. The short-time Fourier transform only allows the spectral composition of a signal to be analysed on an interval that is the same for all frequencies; in particular, the frequency resolution is the same over the entire frequency range. For the processing of complex signals, to which the vibration signals of an internal combustion engine certainly belong, the problem of the short-time window transform is that the window size has to be chosen "once and for all" to analyse the whole signal. However, different parts of the recorded signal may require windows of different duration. For example, if the signal consists of widely separated frequency components, the frequency resolution can be sacrificed in favour of the time resolution, and vice versa. Thus, because it uses fixed-width windows, the STFT is not always appropriate for analysing low- and high-frequency signals at the same time (piston transfer signal, nozzle, timing mechanism). Such an analysis becomes possible with a flexible window that narrows as it passes through the high-frequency components of the signal and widens in the low-frequency region. Wavelet analysis, based on the wavelet transform, makes this possible. The wavelet transform is the decomposition of a signal into components of a particular function called a wavelet. Wavelets have become a necessary mathematical tool in a number of studies.
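Before moving on to wavelets, the STFT recipe just described (window, overlap, window function, FFT per segment) can be written down compactly; the Python sketch below uses scipy.signal.stft with placeholder values (the sampling rate, window length and synthetic test signal are assumptions, not data from the study).

```python
import numpy as np
from scipy import signal

fs = 20_000                      # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
# stand-in for a vibroacoustic record: low-frequency background + short burst
x = np.sin(2 * np.pi * 120 * t)
x[10_000:10_400] += 0.8 * np.sin(2 * np.pi * 3_000 * t[10_000:10_400])

# STFT with a Hann window, ~30 ms segments, 30% overlap
nperseg = int(0.030 * fs)
noverlap = int(0.3 * nperseg)
f, tau, Zxx = signal.stft(x, fs=fs, window="hann",
                          nperseg=nperseg, noverlap=noverlap)

# |Zxx| is the family of segment spectra: rows = frequencies, cols = window positions
print(Zxx.shape, f.max(), tau.max())
```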
Wavelets are used in cases where the result of the analysis of a signal should contain not only a simple list of its characteristic frequencies and scales (spectral analysis), but also information about the local coordinates at which these frequencies manifest themselves. This is precisely the task when it is necessary to determine the moments at which defects arise in ICE joints. The general principle of constructing a wavelet transform basis is the use of scale changes and shifts. The wavelet basis is a function of the argument (t − b)/a, where b is the shift (offset) and a is the scale. In addition, to qualify as a wavelet, the function must have zero area and, moreover, vanishing first, second and higher moments. The Fourier transform of such a function equals zero at ω = 0 and looks like a bandpass filter; for different values of a, it forms a set (bank) of bandpass filters. When the wavelet spectrum of a vibration signal is estimated in discrete form, the sampling step of the parameters b and a is essential for the clarity and readability of the spectrum. The sampling step of b is usually taken equal to the sampling step of the signal being analysed (equal to 1), so that the time scale of the wavelet spectrum corresponds to the time scale of the signal and is convenient for localizing signal features. To allow the size of the basic wavelet to change, a constant scale index d is introduced into its formula (figure 4). For instance, let us perform a wavelet transform in the MathCad system using the fourth-order Daubechies wavelet db4, entering the functions of the direct, S:=wave(s), and inverse, s:=iwave(S), transformation. To visualize the picture of the coefficients, the submatrix is converted into a two-dimensional array, bringing the input signal to a single numerical axis (figure 5) (stretching the coefficients along the offset axis without changing their values). Owing to the dyadic nature of the wavelet transform, the wavelet spectrum preserves all time-frequency features of the signal and, more importantly, allows the signal to be modified (processed) at different decomposition levels and, after processing, the inverse wavelet transform to be applied without loss of information. The number of decomposition levels is M = ln(j)/ln(2) = ln(16380)/ln(2) ≈ 14, where j is the number of samples (points) of the recorded vibration signal. Of the resulting wavelet spectrum data vectors (decomposition levels), seven are the most informative, although the total number may be larger. The wavelet spectrum can be displayed in 2D and 3D form (figure 6). The main purpose of the article is to show the possibility of using wavelets to clean the vibroacoustic signal of noise, which is usually attributed to other sources of oscillation. During engine diagnostics, signals from the fuel equipment, gas distribution mechanism, etc. can act as such noise. Classical filtering can be used for cleaning, but wavelet cleaning is the better option. Some calculations are needed before this cleaning can begin. To prepare the wavelet transform, the number of levels M for complete decomposition of the signal is determined and the array is padded with zeros to the necessary length 2^M. In the dyadic splitting of the spectrum at each decomposition level, the first-level detail coefficients are formed from the high-frequency part of the signal spectrum, from f_N/2 to f_N (on a one-sided physical frequency scale, f_N being the Nyquist frequency). The second part of the spectrum, from 0 to f_N/2, is converted into the approximation coefficients.
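The MathCad calls wave/iwave used above have a close analogue in the PyWavelets package; as an added illustration (with a synthetic stand-in signal rather than the recorded one), the sketch below performs the dyadic db4 decomposition and the lossless inverse transform.

```python
import numpy as np
import pywt

fs = 16_384                     # assumed sampling rate so that 1 s gives 2**14 points
t = np.arange(2**14) / fs
s = np.sin(2 * np.pi * 50 * t) + 0.2 * np.random.randn(t.size)  # stand-in signal

max_level = pywt.dwt_max_level(len(s), pywt.Wavelet("db4"))     # full dyadic depth
coeffs = pywt.wavedec(s, "db4", level=max_level)                # [cA_M, cD_M, ..., cD_1]

# cD_1 covers the upper half of the band (f_N/2 .. f_N), cD_2 the next octave down, etc.
for i, c in enumerate(coeffs[1:], start=1):
    print(f"detail level {max_level - i + 1}: {len(c)} coefficients")

s_rec = pywt.waverec(coeffs, "db4")        # inverse transform, no information loss
print(np.allclose(s, s_rec[:len(s)]))
```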
At the second level, the approximation coefficients are again split in half, converting the range from f_N/4 to f_N/2 into detail coefficients and the range from 0 to f_N/4 into the approximation coefficients of the second decomposition level, and so on up to the full decomposition (in this case 14 levels). This allows the approximate noise limit to be set directly from the frequency spectrum of the signal, according to the decomposition levels in which the noise power is close to or higher than the signal power. When forming the new coefficient vector at each level, coefficients that exceed the established thresholds are retained. It can be seen that the wavelet cleaning has retained characteristic peaks throughout the frequency range, which form periodic jumps in the signal values, something that virtually no linear frequency filter can reproduce. However, combining wavelet cleaning of the signal with linear filtering is quite promising in terms of signal separation (figure 9). Using a 6th-order Butterworth linear bandpass filter (figure 9) even makes it possible to visually evaluate the result of the wavelet cleaning of the signal. Figure 9. Filtering of the original a) and cleaned b) vibration signal. The signal after filtering is well separated and can easily be used in an automated diagnostic system. The main problems for such a software package are the choice of the wavelet shape and the choice of decomposition levels, which, after some further research, may underlie the development of an appropriate automated software package for vibroacoustic diagnosis of ICE.

Conclusions

The use of Daubechies wavelet spectra and the windowed Fourier transform for the analysis of the vibroacoustic signal makes it possible to recognize changes in the state of the engine mechanisms and the location of the source of the change itself. Decomposing the signal over all decomposition levels makes it possible to find the level at which the signal is weaker than the noise, through the approximation coefficients that exceed the set thresholds.
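A compact sketch of the cleaning scheme described above, level-by-level hard thresholding of db4 detail coefficients followed by a 6th-order Butterworth bandpass filter, is given below; it is an added illustration under assumed parameters (thresholds, band edges and sampling rate are placeholders), not the authors' MathCad code.

```python
import numpy as np
import pywt
from scipy import signal

def wavelet_clean(x, wavelet="db4", level=6, k=3.0):
    """Hard-threshold detail coefficients level by level, then reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    cleaned = [coeffs[0]]                          # keep approximation coefficients
    for cD in coeffs[1:]:
        thr = k * np.median(np.abs(cD)) / 0.6745   # robust per-level noise estimate
        cleaned.append(pywt.threshold(cD, thr, mode="hard"))
    return pywt.waverec(cleaned, wavelet)[:len(x)]

fs = 16_384                                         # assumed sampling rate, Hz
x = np.random.randn(fs)                             # stand-in for a recorded signal

y = wavelet_clean(x)

# 6th-order Butterworth bandpass applied after the wavelet cleaning
sos = signal.butter(6, [500, 4000], btype="bandpass", fs=fs, output="sos")
y_filtered = signal.sosfiltfilt(sos, y)
print(y_filtered.shape)
```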
2020-09-03T09:11:29.987Z
2020-09-02T00:00:00.000
{ "year": 2020, "sha1": "6b0e7e38fa15ae55acfc1f82a208bd9f039c348b", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/548/3/032030/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "120e19e6664d9525c93ce226164a1b146a8256b2", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
71297981
pes2o/s2orc
v3-fos-license
MULTIPLE SYSTEM ATROPHY: CLINICAL-RADIOLOGICAL CORRELATION. Report of two cases

Multiple system atrophy (MSA) is a sporadic, neurodegenerative disorder, clinically characterized by parkinsonian, autonomic, cerebellar and pyramidal signs. We describe two patients showing different presentations of the same disease. The patient in case 1 presents features of MSA-C, or olivopontocerebellar atrophy, with the pontine "cross sign" on brain MRI. The second case reports a patient presenting MSA-P, or striatonigral degeneration, whose brain MRI shows signal alteration in the lenticular nucleus. We think that brain MRI might increase the diagnostic accuracy of MSA.

Case 1 - The patient, a man, had presented erectile dysfunction accompanied by urinary urgency since he was 53. Eighteen months later he noted gait and speech alterations. On physical and neurological examination he presented slurred speech, gait ataxia and bilateral appendicular cerebellar ataxia. The deep tendon reflexes were +++/4 and there was a bilateral Babinski sign. Arterial pressure (AP) was 130X80 mmHg and radial pulse frequency 60 bpm with the patient lying down; 100X70 mmHg and 70 bpm when standing up. Brain MRI showed important pontine atrophy with the "cross sign" (Fig 1A) as well as cerebellar and brainstem atrophy (Fig 1B). Six months later he presented with the same complaints and the same alterations on physical examination, except for worsening of the orthostatic hypotension: 120X80 lying down and 80X0 standing up. Treatment with midodrine was started. After six months he presented nystagmus and many episodes of syncope. Fludrocortisone was then started, without success. In the last 6 months, he has required aid to walk and has presented intestinal constipation. His speech, the erectile dysfunction and the urgency incontinence have worsened. There was no family history of neurological disease.

Case 2 - A 64-year-old woman had presented diffuse pain since she was 60. A diagnosis of fibromyalgia was given. Four years later she complained of rigidity of the arms and legs, mainly on the right side. Levodopa was started after a diagnosis of Parkinson's disease was made, but there was no improvement. She reported urinary incontinence and insomnia almost every night over the last 2 years. There was progressive deterioration of daily activities and she required aid to walk. On general physical and neurological evaluation, she presented extrapyramidal rigidity (mainly on her right side), bradykinesia and some periods of akinesia. There was no rest or postural tremor, but her speech was slurred and very difficult to understand. There was hyperreflexia with sinreflexia and a Babinski sign on both sides. AP lying down was 149X90 and seated was 90X50 mmHg. Brain MRI showed signal alteration in the lenticular nucleus (Fig 2A and 2B). She presented progressive deterioration. There was no neurological disease in her family.

DISCUSSION

We report two patients presenting two different forms of MSA: in the first case the cerebellar syndrome was the main feature, and in the second case parkinsonian symptoms were predominant. Both of them presented orthostatic hypotension and urinary incontinence (autonomic system alterations).
In the first case, the disease started when the patient was 53 years old with autonomic dysfunction and progressed with cerebellar signs within one year and six months after onset. As the cerebellar signs clearly predominate, it is classified as MSA-C, or olivopontocerebellar atrophy in the old nomenclature. MRI of the brain showed the pontine "cross sign", one more feature of this type of disease (Fig 1A). In the second case, diffuse pain was the first symptom, probably resulting from extrapyramidal rigidity that soon became evident with clear asymmetry (the right side more intense than the left). Two years later autonomic dysfunction and pyramidal signs appeared. Due to the predominance of parkinsonian signs this case was classified as MSA-P. She presented a more severe clinical picture and functional deterioration, despite the same duration of disease evolution. Watanabe et al. 8, evaluating the progression of MSA in 230 Japanese patients, concluded that patients presenting MSA-P have a more rapid functional deterioration when compared to those with MSA-C. On the other hand, there was no difference in survival time. Slurred speech was the patient's main complaint, limiting his daily activity. This speech disturbance is reported by other authors 9. Kluin et al. 10, in their 46 MSA patients, found dysarthria in all of them, with varying degrees of hypokinetic, ataxic and/or spasmodic components. The same study reported that in patients with MSA-P hypokinetic dysarthria predominates, as do the hypomimic facial features and lip or tongue tremor. In the pathological analysis of 59 living patients who were considered to have MSA, Osaki et al. 12 tried to assess the sensitivity of the neurological evaluation, confronting it with Quinn's criteria and the criteria defined by the consensus. They concluded that both criteria are more sensitive in the early stage of the disease, compared to the clinical assessment, but they have the same accuracy as the neurological clinical assessment in later stages of MSA. The majority of misdiagnosed patients in the study above in fact had supranuclear palsy 4,12. MSA histopathological findings include glial cytoplasmic inclusions and neuron loss that predominates in different areas according to the clinical form of presentation 5,13,14. These inclusions are constituted by alpha-synuclein, ubiquitin and tau protein 6,15,16. MRI is a useful diagnostic tool in the early course of MSA-C and MSA-P. Horimoto et al. 7 report that the pontine "cross sign" and the lenticular nucleus signal alteration appear early in MSA-C and MSA-P, respectively. Both of them appear late in MSA-A. The characteristic T2-hyperintense signal in the pons and middle cerebellar peduncle ("cross sign") reflects pontocerebellar fiber degeneration and, despite being very suggestive of MSA, it can be found in other forms of parkinsonism 17. Asato et al. 18 have shown that the anteroposterior diameter of the inferior portion of the pons in MSA-C patients is lower when compared to patients in the control group or with progressive supranuclear palsy. Putaminal abnormalities may be present on MRI of MSA-P patients; other findings include a hypointense signal of the putamen with a marginal hyperintense signal on T2. Atrophy or hyperintense signal in the pons, middle cerebellar peduncle and cerebellum may be seen. Putaminal atrophy is the most specific finding in MSA-P 17. Our case number 2 presented putaminal hypointensity (Fig 2B) as well as marginal hyperintensity (Fig 2A) on T2 images. It is hard to establish the differential diagnosis with Parkinson's disease, as can be seen in our case. Colosimo et al.
19 found, among 27 pathologically confirmed cases of MSA, 16 within an early stage presenting only with parkinsonism. They concluded that instability due to previous falls, lack of tremor, fast progression of the disease and poor response to levodopa may be the first symptoms of MSA. These same authors refer to an early asymmetric onset of parkinsonism in 43.7% of the patients with MSA, against 25% of the patients with Parkinson's disease, although such asymmetry is not useful information for the differential diagnosis. There is at present no specific treatment for MSA, only symptomatic interventions 20. Our cases are classified as likely MSA according to the consensus criteria, since the diagnosis of MSA is defined only by pathological analysis 6. With these two cases, we try to contribute to a better acknowledgement of the different presentations of MSA. We mainly draw attention to the importance of a good neuroradiological assessment. We conclude that brain MRI changes might increase the diagnostic accuracy of MSA.
Diffusion with Optimal Resetting We consider the mean time to absorption by an absorbing target of a diffusive particle with the addition of a process whereby the particle is reset to its initial position with rate $r$. We consider several generalisations of the model of M. R. Evans and S. N. Majumdar (2011), Diffusion with stochastic resetting, Phys. Rev. Lett. 106, 160601: (i) a space-dependent resetting rate $r(x)$; (ii) resetting to a random position $z$ drawn from a resetting distribution ${\cal P}(z)$; (iii) a spatial distribution for the absorbing target $P_T(x)$. As an example of (i) we show that the introduction of a non-resetting window around the initial position can reduce the mean time to absorption provided that the initial position is sufficiently far from the target. We address the problem of optimal resetting, that is, minimising the mean time to absorption for a given target distribution. For an exponentially decaying target distribution centred at the origin we show that a transition in the optimal resetting distribution occurs as the target distribution narrows. Introduction Search problems occur in a variety of contexts: from animal foraging [1] to the target search of proteins on DNA molecules [2][3][4]; from internet search algorithms to the more mundane matter of locating one's mislaid possessions. Often search strategies involve a mixture of local steps and long-range moves [5][6][7][8][9]. For human searchers at least, a natural tendency is to return to the starting point of the search after the length of time spent searching becomes excessive. In a recent paper [10] we modelled such a strategy as a diffusion process with an additional process of resetting to the starting point x 0 at rate r. Considering the object of the search to be an absorbing target at the origin, the duration of the search becomes the time for the diffusing particle to reach the origin. Statistics such as the mean time to absorption of the process then give a measure of the efficiency of the search strategy, defined by the resetting rate r. Moreover, the model provides a system where the statistics of absorption times can be computed exactly. A related model, where searchers have some probabilistic lifetime after which another searcher will be sent out, has been studied by Gelenbe [11] and mean times to absorption computed. Also, in the mathematical literature the mean first passage time for random walkers that have the option of restarting at the initial position has been considered [12]. In [10] it was shown that the mean first passage time (MFPT) to the origin for a single diffusive searcher becomes finite in the presence of resetting (in contrast to a purely diffusive search where the MFPT diverges). Moreover the MFPT has a minimum value as a function of the resetting rate r to the fixed initial position x 0 . Thus, there is an optimal resetting rate r as a function of the distance to the target x 0 . In this work we address the question of resetting strategies which optimise the MFPT in a wider context. To this end, we make several generalisations of single-particle diffusion with resetting studied in [10]. First, we consider a space-dependent resetting rate r(x). Second, we consider resetting to a random position z (rather than a fixed x 0 ) drawn from a resetting distribution P(z). Finally, we consider a probability distribution for the absorbing target P T (x). The general question we ask is: what are the optimal functions r(x), P(z) that minimise the MFPT for a given P T (x)?
Although we do not propose a general solution, the examples we study turn up some surprising results and illustrate that answers to the problem may be non-trivial. The paper is organised as follows. In section 2 we review the calculation of the mean first passage time for one-dimensional diffusion in the presence of resetting to the initial position with rate r. In section 3 we introduce spatially dependent resetting r(x) and work out the example of a non-resetting window of width a around the initial point. In section 4 we consider the generalisation to a resetting distribution P(z) and to a distribution of the target site P T (x). In section 5 we formulate the general problem of optimising the mean first passage time with respect to the resetting distribution P(z). We consider the example of an exponential target distribution and show that there is a transition in the optimal resetting distribution. We conclude in section 6. First passage time for single particle diffusion with resetting We begin by briefly reviewing the one-dimensional case of diffusion with resetting to the initial position x 0 (see Fig. 1), introduced in Ref. [10]. The Master equation for p(x, t|x 0 ), the probability distribution for the particle at time t having started from initial position x 0 , reads as Eq. (1), with initial condition p(x, 0|x 0 ) = δ(x − x 0 ). In Eq. (1) D is the diffusion constant of the particle and r is the resetting rate to the initial position x 0 . The second term on the right hand side (rhs) of Eq. (1) denotes the loss of probability from the position x due to reset to the initial position x 0 , while the third term denotes the gain of probability at x 0 due to resetting from all other positions. Figure 1. Schematic space-time trajectory of a one-dimensional Brownian motion that starts at x 0 and resets stochastically to its initial position x 0 at rate r. The stationary state of (1) is the solution of (2), which is determined by the elementary Green function technique, which we now recall. The solutions to the homogeneous counterpart of (2) are e ±α 0 x , where α 0 = (r/D) 1/2 . The solution to (2) is constructed from linear combinations of these solutions which satisfy the following boundary conditions: p * → 0 as x → ±∞, and p * is continuous at x = x 0 . Imposing these conditions yields p * (x|x 0 ) = A e −α 0 |x−x 0 | (4). Note that (4) has a cusp at x = x 0 . The constant A is fixed by the discontinuity of the first derivative at x = x 0 , which is determined by integrating (2) over a small region about x = x 0 . Carrying this out yields A = α 0 /2, so that p * (x|x 0 ) = (α 0 /2) e −α 0 |x−x 0 | (6). Alternatively, the constant A in (4) could be fixed by the normalisation of the probability distribution (4). Note that (6) is a non-equilibrium stationary state, by which it is meant that there is circulation of probability even in the one-dimensional geometry. At all points x there is always a diffusive flux of probability in the direction away from x 0 given by −D∂p/∂x, and a nonlocal resetting flux in the opposite direction from all points x ≠ x 0 to x 0 . Mean first passage time We now consider the mean first passage time for the diffusing particle to reach the origin. One can think of an absorbing target at the origin which instantaneously absorbs the particle (see e.g. [13]). A standard approach to first-passage problems is to use the backward Master equation where one treats the initial position as a variable (for a review see Ref. [14]). Let Q(x, t) denote the survival probability of the particle up to time t (i.e.
the probability that the particle has not visited the origin up to time t) starting from the initial position x. The boundary and initial conditions are Q(0, t) = 0, Q(x, 0) = 1 (see e.g. [15] for more general reaction boundary conditions). The backward Master equation (where the variable x is now the initial position) for the survival probability Q(x, t) is given by Eq. (7). Note that Q(x, t) depends implicitly on the resetting position x 0 due to the third term on the right hand side of (7). The second and third terms on the rhs correspond to the resetting of the initial position from x to x 0 , which implies a loss of probability from Q(x, t) and a gain of probability to Q(x 0 , t). Equation (7) may be derived as follows. We consider the survival probability Q(x, t + ∆t) up to time t + ∆t, where ∆t is a small interval of time. We divide the time interval [0, t + ∆t] into two intervals: [0, ∆t] and [∆t, t + ∆t]. In the first interval [0, ∆t], there are two possibilities: (i) with probability r∆t, the particle may be reset to x 0 , and then for the subsequent interval [∆t, t + ∆t] this x 0 will be the new starting position; and (ii) with probability (1 − r∆t), no resetting takes place, but instead the particle diffuses to a new position (x + ξ) in time ∆t, where ξ is a random variable distributed according to a Gaussian distribution P (ξ) = (4πD∆t) −1/2 exp(−ξ 2 /4D∆t). This new position (x + ξ) then becomes the starting position for the subsequent second interval [∆t, t + ∆t]. One then sums over all possible values of ξ drawn from P (ξ). Note that we are implicitly using the Markov property of the process whereby, for the second interval [∆t, t + ∆t], only the end position of the first interval [0, ∆t] matters. Taking into account these two possibilities, expanding to first order in ∆t and taking the limit ∆t → 0 then yields (7). The mean first passage time T to the origin beginning from position x is obtained by noting that − ∂Q(x,t)/∂t dt is the probability of absorption by the target in time t → t + dt. Therefore, on integrating by parts, we have T (x) = ∫ 0 ∞ Q(x, t) dt (assuming that tQ(x, t) → 0 as t → ∞). Integrating (7) with respect to time yields (11), with boundary conditions T (0) = 0 and T (x) finite as x → ∞. To solve for the mean first passage time beginning at the resetting position x = x 0 we first consider the initial position to be at x > 0, different from the resetting position x 0 , then solve (11) with arbitrary x and x 0 . Once we have this solution we set x = x 0 . The general solution to (11) is a sum of a constant particular solution and the homogeneous solutions A e α 0 x + B e −α 0 x , where α 0 = (r/D) 1/2 . The boundary condition that T (x) is finite as x → ∞ implies A = 0 and the boundary condition T (0) = 0 fixes B. Thus, solving for T (x 0 ) self-consistently yields T (x 0 ) = (exp(α 0 x 0 ) − 1)/r (14). Note from (14) that, for fixed x 0 , T is finite for 0 < r < ∞. As a function of r for fixed x 0 , T diverges when r → 0 as T ≃ x 0 /(Dr) 1/2 . This is expected since as r → 0, one should recover the pure diffusive behaviour (no resetting) for which T is divergent, due to the large excursions that take the diffusing particle away from the target at the origin. Also T diverges rapidly as r → ∞, the explanation being that as the reset rate increases the diffusing particle has less time between resets to reach the origin. In other words, the high resetting rate to x 0 cuts off the trajectories that bring the diffusing particle towards the target. We now consider T as a function of r for a given value of x 0 and define the reduced variable z = α 0 x 0 = x 0 (r/D) 1/2 (16). Since T diverges as r → 0 and r → ∞ it is clear that there must be a minimum of T with respect to r (see Fig. 2).
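The existence and location of this minimum can be checked by direct simulation. The following sketch is illustrative only and not part of the original paper: it estimates the MFPT to an absorbing target at the origin using a simple Euler discretisation of the resetting process, and compares it with the closed form of Eq. (14); the parameter values, step size and number of trajectories are arbitrary choices.

```python
import numpy as np

def mfpt_resetting_mc(x0, r, D, dt=1e-3, n_traj=2000, rng=None):
    """Monte Carlo estimate of the MFPT to an absorbing target at the origin
    for 1D diffusion with resetting to x0 at rate r (Euler scheme, sketch only)."""
    rng = np.random.default_rng() if rng is None else rng
    times = np.empty(n_traj)
    for i in range(n_traj):
        x, t = x0, 0.0
        while x > 0.0:                      # absorb once the origin is crossed
            if rng.random() < r * dt:       # resetting event in this time step
                x = x0
            else:                           # diffusive step
                x += np.sqrt(2.0 * D * dt) * rng.standard_normal()
            t += dt
        times[i] = t
    return times.mean()

def mfpt_exact(x0, r, D):
    """Closed form T(x0) = (exp(alpha0*x0) - 1)/r with alpha0 = sqrt(r/D), Eq. (14)."""
    return (np.exp(np.sqrt(r / D) * x0) - 1.0) / r

if __name__ == "__main__":
    D, x0 = 1.0, 1.0
    for r in (0.5, 2.54, 10.0):   # r ~ 2.54*D/x0^2 is close to the optimum derived below
        print(r, mfpt_resetting_mc(x0, r, D), mfpt_exact(x0, r, D))
```

With these (arbitrary) parameters the estimated MFPT is smallest near r ≈ 2.5 D/x 0 2 , bracketed by larger values at small and large r; the time discretisation introduces a bias, so exact agreement with (14) is not expected.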
The condition for the minimum, dT /dr = 0, reduces to the transcendental equation z/2 = 1 − e −z (17), which has a unique non-zero solution z * = 1.59362.... In terms of the resetting rate, this means an optimal resetting rate r * = (z * ) 2 D/x 0 2 = (2.53962 . . .)D/x 0 2 , for which the mean first passage time T (x 0 ) is minimum. The dimensionless variable z (16) is a ratio of two lengths: x 0 , the distance from the resetting point to the target, and (D/r) 1/2 , which is the typical distance diffused between resetting events. Thus, for fixed D and x 0 the mean first passage time of the particle can be minimised by choosing r so that this ratio takes the value z * . Space-dependent resetting rate In this section we generalise the model of section 2 to the case of a space-dependent resetting rate r(x). The master equation for the probability distribution p(x, t|x 0 ) is generalised from (1) accordingly. The third term on the right hand side now represents the flux of probability injected at x 0 through resetting from all points x ≠ x 0 . The stationary distribution p * (x|x 0 ) satisfies the corresponding time-independent equation. In general the stationary state is difficult to determine unless r(x) has some simple form. The equation for the mean first passage time becomes (20). Again, this is difficult to solve generally for arbitrary r(x). In the following we consider a solvable example where r(x) is zero in a window around x 0 and is constant outside this window. Example of a non-resetting window We consider the case of a non-resetting window of width a about x 0 , within which the resetting process does not occur. This choice is a rather natural one in the sense that a typical searcher usually doesn't reset when it is close to its starting point, but rather the resetting event occurs when it diffuses a certain threshold distance a away from its initial position. The Master equation reads accordingly, with initial condition p(x, 0) = δ(x − x 0 ). Here h(t) is the probability that the particle is outside the non-resetting window, i.e., in the resetting zone at time t; the particle is reset to x 0 with a total rate h(t)r. First, we consider the stationary state. One can solve for the stationary probability using the Green function technique of section 2. For |x − x 0 | > a (outside the window), the stationary equation takes the same form as the homogeneous counterpart of (2). The solution should be continuous at x = x 0 , but its derivative must undergo a jump at x = x 0 , and the jump discontinuity can be computed by integrating Eq. (23) across x = x 0 . Thus, noting that the solution should be symmetric about x = x 0 , one obtains a solution containing two constants A and B, where α 0 = (r/D) 1/2 ; A and B are determined by the discontinuity in the derivative of p * (x|x 0 ) at x = x 0 and the continuity of the derivative at |x − x 0 | = a. The resulting solution has a cusp at x = x 0 and a discontinuity in the second derivative at |x − x 0 | = a (see Fig. 3). We now consider an absorbing trap at the origin. The backward equation for T (x), the mean time to absorption beginning from x, is given by Eqs. (29,30). The general solution to (29,30) that does not diverge as x → ∞ contains constants A, B, C, E, F , which are determined by the continuity of T (x) and T ′ (x) at |x − x 0 | = a and the boundary condition T (0) = 0; this yields the result for T (x 0 ). We now consider the reduced parameters z = α 0 x 0 and y = α 0 a, and T as a function of y for z fixed. The allowed values of y are 0 ≤ y ≤ z. At y = z, one can show that dT /dy| y=z > 0. Therefore the minimum of T with respect to y is either at y = 0 or at a non-trivial minimum 0 ≤ y ≤ z.
The condition for a minimum, dT /dy = 0, reduces to the condition (2 + y)/(4 + 5y + 2y 2 ) = tanh(z − y). Therefore the condition for there to be a non-trivial minimum for y > 0 is given by tanh z > 1/2, or equivalently z > (log 3)/2 = 0.5493 . . .. In summary, the analysis of the condition for T (y) to be a minimum reveals that: if z < (log 3)/2 then y = 0 is the minimum of T (y); if z > (log 3)/2 then T (y) has a non-trivial minimum at 0 < y < z. Therefore, when z < (log 3)/2 the introduction of a window around the initial site where resetting does not take place does not reduce the mean time to absorption. A strategy of introducing a non-resetting window is an effective one only when the initial point is sufficiently far from the search target. Otherwise it is advantageous to always reset. Optimal resetting function Having seen in the previous example that non-trivial behaviour emerges for a simple spatially dependent resetting rate r(x), one can ask for the optimal function r(x). The optimisation problem would be to minimise T under certain constraints pertaining to the information available to the searcher. Clearly if there are no constraints, that is, if one can use full information about the target position, the optimal strategy is to reset immediately whenever x > x 0 and not to reset when x < x 0 . This corresponds to the choice r(x) = ∞ for x > x 0 and r(x) = 0 for x < x 0 . In this case problem (20) reduces to the mean first passage time of a diffusive particle with a reflecting barrier at x 0 , the solution of which is T (x 0 ) = x 0 2 /2D (36). Thus, (36) gives the lowest possible mean first passage time for a diffusive process. One can then ask how close simple strategies, such as a spatially constant resetting rate r or a non-resetting window, come to approaching this bound. For example, the case of spatially constant resetting rate r considered in section 2 yields, using (17), a minimum MFPT of 3.0883 . . . × x 0 2 /2D. As noted in section 3.1, the value 3.0883 may be improved upon by considering a non-resetting window around x 0 . However, (36) uses the crucial information of whether the target (at x = 0) is to the right or left of the resetting site x 0 . More realistically, the searcher would not have this information. The relevant optimisation problem is to find the optimal resetting rate r(|x − x 0 |) (constrained to be a function of the distance |x − x 0 | from the resetting site) that minimises T (x 0 ). This remains an open problem. Resetting distribution and target distribution In this section we consider the generalisation to a system with resetting to points distributed according to P(z). We shall also consider a distribution of the target site P T (x). Stationary state We begin by considering again the one-dimensional case of diffusion but this time with resetting to a random position: at rate r the particle is reset to a random position z → z + dz drawn with probability P(z)dz. We refer to P(z) as the reset distribution. For simplicity we take the initial position x 0 to be distributed according to the same distribution as the reset position, p(x 0 , 0) = P(x 0 ). The Master equation for the probability density p(x, t) now reads as Eq. (38). The stationary solution to (38) is simply found using p * (x|x 0 ) given by (6) as the Green function, which yields the superposition p * (x) = ∫ dz P(z) p * (x|z) (39). Mean first passage time The mean first passage time, T (x 0 , x T ), to a target point x T , starting from x 0 with resetting distribution P(z), satisfies Eq. (41) with boundary condition T (x T , x T ) = 0. To solve this equation we let F (x T ) denote the average of T (z, x T ) over the resetting distribution, then write down the general solution to (41) and solve for F (x T ) self-consistently.
The general solution of (41) which is finite as x 0 → ∞ is The boundary condition T (x T , x T ) = 0 implies A = − 1 r + F . Then substituting this expression for A in (43) and integrating we find which yields Inserting this into (43) we obtain As noted above it is convenient to choose the same distribution for x 0 as the resetting distribution. Averaging over x 0 then gives using (39) Equation (47) gives the expression for the mean first passage time to a target positioned at x T . Let us check the case of a single position x 0 to which the particle is reset P(z) = δ(z − x 0 ). In this case (47) becomes which recovers (14) when x T is set to 0. Finally, we average over possible target positions drawn from a target distribution: Equation (49) gives the main result of this section-the MFPT for a resetting distribution P(x 0 ) and averaged over target distribution P T (x T ). Extremisation of mean first passage time Let us now consider the problem of extremising T given by (49), for a given target distribution P T (x), with respect to the resetting distribution P(z). Throughout this section we will assume a symmetric target distribution: P T (x) = P T (−x) and P ′ T (x) = −P ′ T (−x). The problem is to minimize the functional appearing in (49): subject to the constraint dzP(z) = 1. The functional derivative to be satisfied is where λ is a Lagrange multiplier. Condition (51) yields For (52) to hold for all y requires that or fixing λ through the normalisation of p * (x) . (54) Equation (54) implies that to minimise T the stationary probability distribution should be given by the square root of the target distribution. This result has been derived in [16] for the case of searching for the target by sampling a probability distribution P(x). This corresponds to the limit r → ∞ of our model. For r < ∞ we have the additional constraint that the optimal p * should be realisable from a resetting distribution P(z) through formula (50). Equation (50) may be solved for P(z) for a desired p * (x) by taking the Fourier transform and using the convolution theorem to give where P(k) is the Fourier transform of P(x) and p * (k) is the Fourier transform of p * (x). We may invert the Fourier transformation to find However this solution may become negative in which case the solution to the optimisation problem is unphysical. Example of an exponential target distribution As a simple example, we consider an exponentially decaying target distribution peaked at x = 0: We first note that for a delta function resetting distribution P(z) = δ(z − x 0 ) the mean first passage time (49) diverges when α 0 > β. Therefore, for small β (a broad target distribution) one expects an optimal resetting distribution (for fixed α 0 ) that differs from a delta function. For β < 2α 0 , the optimal stationary distribution is from (54) This expression yields from (56) a resetting distribution that is always positive, thus the optimal resetting distribution For β > 2α 0 , (59) always gives negative probabilities due to the first term. Therefore we anticipate that P(x) = δ(x) is at least a locally optimal solution. In fact one can prove this is the case by showing that any distribution of the form P(x) = (1 − ǫ)δ(x) + ǫf (x), where f (x) ≥ 0 and dxf (x) = 1 leads to an increase in (49) at first order in ǫ when β > 2α 0 . (As the proof is straightforward but somewhat tedious we did not include it here.) Thus a transition in the form of the optimal resetting distribution, from a single delta function to (59), occurs at β = 2α 0 . 
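This transition can also be checked numerically. The sketch below is illustrative only (not the authors' code). It uses the real-space form of the inversion, P(x) = p * (x) − p * ′′(x)/α 0 2 , which follows from applying (1 − α 0 −2 d 2 /dx 2 ) to the convolution (50) with the exponential Green function (6); the candidate optimum p * (x) ∝ [P T (x)] 1/2 is sampled on a grid, a local finite-difference Laplacian is used, and the sign of the recovered P(x) is inspected away from the lattice spike that represents the δ-contribution at the origin. Grid size, domain and parameter values are arbitrary.

```python
import numpy as np

def invert_reset_distribution(p_star, dx, alpha0):
    """Recover the resetting distribution from a desired stationary state via
    P(x) = p*(x) - p*''(x)/alpha0^2, using a finite-difference second derivative."""
    lap = np.zeros_like(p_star)
    lap[1:-1] = (p_star[2:] - 2.0 * p_star[1:-1] + p_star[:-2]) / dx**2
    return p_star - lap / alpha0**2

if __name__ == "__main__":
    alpha0, L, n = 1.0, 40.0, 8001
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    for beta in (1.0, 1.9, 2.1, 4.0):        # transition expected at beta = 2*alpha0
        # normalised square root of the exponential target distribution
        p_star = (beta / 4.0) * np.exp(-beta * np.abs(x) / 2.0)
        P = invert_reset_distribution(p_star, dx, alpha0)
        mask = np.abs(x) > 2 * dx            # exclude the lattice delta at x = 0
        print(beta, P[mask].min())           # clearly negative => unphysical P(z)
```

For β < 2α 0 the continuous part of the recovered P(x) stays non-negative (with a positive lattice spike at the origin), while for β > 2α 0 it turns negative, consistent with the transition to a pure delta-function resetting distribution described above.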
Inversion of p * (x) As noted above, the constraint P(x) ≥ 0 means that the optimal p * (x) given by (54) may not be realisable from a physical resetting distribution P(z). We are therefore led to the general question of when a desired stationary distribution (e.g. (54)) which we denote g(x) may be generated from (50) i.e. when can we invert to obtain a physical P(z)? Let us first discuss a sufficient condition for the resetting distribution implied by (60) to be physical. Equation (55) relates the characteristic functions of the two distributions P(x) and g(x) (given there by p * (x)). In terms of the characteristic function, Polya's theorem [17] states that if a function φ(k) satisfies: φ(0) = 1; φ(k) is even; φ(k) is convex for k > 0, and φ(∞) = 0; then φ(k) is the characteristic function of an absolutely continuous symmetric distribution. Polya's theorem therefore gives a sufficient condition for P(x) implied by p * (x) to be physical. The condition for convexity becomes in one dimension If the function P(k) does not satisfy the conditions of Polya's theorem, the solution of (60) is invalid as a probability distribution i.e. the desired g(x) cannot be realised from any resetting probability distribution P(z). In the case where (60) may not be inverted to give a physical P(x), it may be possible to generate the desired form for g(x) on a finite region by choosing a compact support for P(z). Let us assume g(x) to be a symmetric function of x. Then if we choose where λ is a normalising constant, we find provided that y 0 is chosen so that (see Appendix A). The second condition follows from the first by the assumed symmetry of g(x). As an example, we consider the gaussian distribution The inversion of (60) using (56) yields However, choosing a compact support for P(z) according to (67), yields and we find that the resulting distribution (70) is positive for all x. Conclusion In this paper we have considered some generalisations of diffusion with stochastic resetting to the case of spatial-dependent resetting rate and a resetting distribution. We have considered the mean first passage time to a target which may be situated at a fixed point (the origin) or distributed according to a distribution and derived the result (49). The minimisation of this quantity may then be formulated as an optimisation problem of which we have studied some examples. In particular we have seen some perhaps unexpected results. First, the introduction of a non-resetting window around a fixed resetting position reduces the MFPT when the target is sufficiently far away. This suggests that the optimal resetting distribution, in the case where we consider a resetting rate that is symmetric about the restting point, r(|x − x 0 |) may be non-trivial. We have also seen that in the case of an exponentially distributed target (57) the optimal resetting distribution undergoes a transition from (59) to a pure delta function at the origin. Generally, the computation of an optimal resetting distribution is an open problem since the resetting distribution that minimises T may be become negative over some domain and therefore nonphysical. In the case where (56) becomes unphysical, although we do not have a solution to the extremisation problem of minimising T subject to the additional constraint P(x) ≥ 0 we may propose likely candidates for extremal solutions. One possibility for the optimal physical solution is one that has compact support i.e. 
since the constraint for the distribution to be physical is that P(x) ≥ 0, one might expect that the optimal solution lies on the boundary where P(x) = 0 for some regions of x. However, we have no proof that this is the case. A further consideration for optimising mean first passage times in a more realistic search process would be to add a cost to resetting, since in the present model the diffusive particle instantaneously resets to its selected resetting position. This could be implemented by attributing some time penalty to each resetting event, as is the case in the framework of intermittent searching.
A steeply-inclined trajectory for the Chicxulub impact The environmental severity of large impacts on Earth is influenced by their impact trajectory. Impact direction and angle to the target plane affect the volume and depth of origin of vaporized target, as well as the trajectories of ejected material. The asteroid impact that formed the 66 Ma Chicxulub crater had a profound and catastrophic effect on Earth’s environment, but the impact trajectory is debated. Here we show that impact angle and direction can be diagnosed by asymmetries in the subsurface structure of the Chicxulub crater. Comparison of 3D numerical simulations of Chicxulub-scale impacts with geophysical observations suggests that the Chicxulub crater was formed by a steeply-inclined (45–60° to horizontal) impact from the northeast; several lines of evidence rule out a low angle (<30°) impact. A steeply-inclined impact produces a nearly symmetric distribution of ejected rock and releases more climate-changing gases per impactor mass than either a very shallow or near-vertical impact. T he 66 Ma asteroid impact event that formed the Chicxulub crater, Mexico, marks the end of the Mesozoic Era of Earth history and has been attributed as the cause of the contemporaneous mass extinction 1 . Numerical impact simulations combined with geophysical investigation of subsurface structure have constrained the kinetic energy of the impact under the simplifying assumption of a vertical trajectory [2][3][4] . The trajectory angle and direction of the Chicxulub impact are not known, but a near-vertical impact is unlikely. Only one quarter of impacts occur at angles between 60 and the vertical and only 1 in 15 impacts is steeper than 75 . For constant impactor size and speed, a shallower impact angle produces a smaller crater, a more asymmetric dispersal of ejected material 5 and partitions more impact energy at shallower depths 6 . As a result, impact direction and trajectory angle to the target plane are important impact parameters that determine, among other things, the direction of most severe environmental consequences and the volume and depth of origin of vaporised target 7,8 , as well as ejecta 5 and crater asymmetries 9 . Since the discovery of the Chicxulub impact structure based on diagnostic evidence of shock metamorphism and geophysical anomalies 10 , several asymmetries in the geophysical character of the crater have been noted 8,11,12 , which may result from oblique impact 8,11 and/or impact in a heterogeneous target 3,12 . Among the most obvious of these ( Fig. 1) are radially oriented gravity lows to the south and northeast, and a radial gravity high to the northwest 8,13 . However, these large-scale, peripheral features are all likely to be pre-existing features, unrelated to the impact 12 . As models of the subsurface based on potential field data have inherently poor resolution and suffer from non-uniqueness, the most robust evidence of asymmetry comes from seismic reflection and refraction data 14 . High-resolution seismic reflection images along a concentric arc outside the crater rim clearly show that the northeastern gravity low in the offshore half of the crater occurs in an area where Cretaceous and Cenozoic sedimentary rocks are particularly thick, and the northwestern gravity high occurs where this sedimentary sequence is thinnest and basement rocks are closer to surface (ref. 3 , Fig. 3). 
Given the observed correlation between the gravity signature and depth to basement outside the crater, the thickness of the sedimentary sequence is the most likely control on the offshore gravity anomaly. There is therefore no evidence that the azimuthal asymmetry in the outer gravity signature is impact related. A nominal geographical centre of the crater (21.29°N, 89.53°W) is defined by the geometric centre of the crater rim demarcated by both a circular high in horizontal gravity gradient and the prominent cenote ring 11,13 (Fig. 1). Relative to this point, the centre (21.24°N, 89.58°W) of both the central gravity high, attributed to the uplift of dense lower-crustal rocks 11 , and the surrounding annular gravity low, which underlies the inner edge of the peak ring, are shifted several km to the west-southwest ( Fig. 1 and Supplementary Fig. 1). In contrast, three-dimensional (3D) seismic Fig. 1 Asymmetries of the geophysical signature of the Chicxulub crater. Background colourmap shows Bouguer gravity anomaly map in the vicinity of the crater (gravity data courtesy of Hildebrand and Pilkington). The red circle marks the nominal position of the crater centre; the green circle marks the centre of maximum mantle uplift; the blue circle marks the centre of the peak ring (as defined by the annular gravity low surrounding the central high); the white triangle marks the location of the Expedition 364 drill site through the peak ring (Hole M0077A). The coastline is displayed with a thin white line; cenotes and sinkholes with white dots, and the city of Mérida with a white square. The dotted lines offshore mark the approximate location of the inner crater rim and the extent of faulting as imaged by seismic data 14 . Inset depicts the regional setting, with red rectangle outlining the region shown in the gravity map. Adapted from ref. 14 . velocity data indicate that the maximum uplift of the mantle beneath the crater occurs at 21.38°N, 89.52°W, ca. 10 km to the north-northeast of the crater centre 4 (Fig. 1, Supplementary Fig. 1 and ref. 4 , Fig. 4d). The south-westerly offset of the central gravity high relative to the crater centre was previously interpreted as indicating impact from the southwest, on the premise that central uplift motion would be directed uprange 11 . An alternative interpretation, of a trajectory from the southeast, was proposed on the basis of a northwest-southeast elongation of the central gravity high and magnetic anomaly, and the northwest truncation of the annular gravity low 8 . However, seismic reflection and refraction data reveal that the zone of structural uplift is not elongated towards the northwest 14 (see also Supplementary Fig. 1) and that the truncation of the gravity low in the northwest is a pre-impact feature of the regional anomaly caused by the shallow depth to basement in this direction. The short-wavelength component of the magnetic anomaly shows a slight (10%) elongation in the northwest-southeast direction 15,16 , but is also offset to the southwest of the crater centre ( Supplementary Fig. 1). The short wavelength and steep gradients of this anomaly both suggest a shallow source, probably related to the melt sheet and impact breccia and not to structural crater asymmetry 15,16 . On the other hand, the long-wavelength component of the magnetic anomaly is a magnetic high elongated and offset along a direction southwest of the crater centre ( Supplementary Fig. 1), consistent with a zone of uplifted basement rocks southwest of the crater centre 15 . 
Here we use 3D numerical modelling to examine the relationship between impact angle and structural crater asymmetries in a Chicxulub-scale peak-ring crater in a flat-layered target without lateral pre-impact asymmetry. We show that the observed asymmetry in the positions of the central uplift, peak-ring centre and maximum mantle uplift, relative to the crater centre, can be attributed to the angle and azimuth of the impact trajectory. Comparison of our simulation results with geophysically constrained models of the Chicxulub crater structure is used to infer the likely trajectory and angle of the impact. The recent joint International Ocean Discovery Program (IODP) and International Continental Scientific Drilling Program (ICDP) Expedition 364 recovered~600 m of peak-ring rocks from the Chicxulub crater 17 that provide additional constraints to discriminate between impact scenarios. Our simulations also reveal azimuthal variation in peakring material properties, which provide context for IODP-ICDP Expedition 364 core analysis. Results and discussion Numerical simulation results. We performed a series of 3D simulations of impacts that produce a Chicxulub-scale crater, using the iSALE3D shock physics code 18,19 . The simulations assumed a flat, two-layer target comprising crust and mantle and considered four different impact angles (90 (vertical), 60 , 45 and 30 ) and two impact speeds (12 and 20 km/s). Further model details are described in the "Methods" section. Our simulations provide insight into crater asymmetries diagnostic of impact angle and trajectory in the absence of any target asymmetry (Figs. In our vertical impact simulation ( Supplementary Fig. 3), crater formation is axially symmetric and consistent with previous two-dimensional (2D) numerical simulations that employed an axially symmetric geometry 2,3,17 . Collision of the asteroid with the target surface generates a detached shockwave that propagates symmetrically from the impact site. In the first minute after impact, an excavation flow initiated by the shockwave produces a deep, bowl-shaped cavity, often termed the transient crater. The material flow depresses the crustmantle boundary beneath the transient crater, uplifts the crust in the transient crater wall and expels the unvaporized portion of the >3-km-thick sedimentary rock sequence from the transient crater as part of the ejecta curtain (Supplementary Fig. 3 and Supplementary Movie 1). The transient crater is unstable and collapses dramatically to produce a much flatter, broader final crater. In the vertical impact simulation, collapse manifests as uplift of the crater floor and downward and inward collapse of the transient crater rim and a surrounding collar of sedimentary rocks. Floor uplift begins directly beneath the transient crater centre and proceeds vertically upward, overshooting the pre-impact surface to form a large central uplift. At the same time, rim collapse occurs symmetrically at all azimuths, converging towards, and helping to drive up, the central uplift. Finally, the overheightened central uplift of crustal rocks collapses downward and outward, overthrusting the collapsed transient crater rim to form an uplifted ring of crystalline basement, overlying inwardly slumped sedimentary rocks from outside the transient crater. 
Although the spatial resolution of the numerical simulations is insufficient to resolve the characteristic sharp-peaked topography of the inner ring observed in extraterrestrial peak-ring craters, we are able to identify the position and structure of the material that forms the peak ring in the numerical simulations as a 10-km-wide collar around the central uplift ( Supplementary Fig. 2). This model of peak-ring crater formation is supported by geophysical data 20,21 and recent geological drilling 17 at Chicxulub, as well as remote-sensing data from the Schrödinger peak-ring crater on the Moon 22 . Impacts at progressively shallower angles to the horizontal result in an increasingly asymmetric development of the crater, internally ( The impact simulations shown in Figs. 2 and 3 employ an impact speed of 12 km/s, only slightly larger than the minimum possible speed-Earth's escape velocity of 11.2 km/s. While these results are likely to be representative of the~25% of all impacts that occur at speeds below 15 km/s, we also conducted another suite of simulations with a more probable impact speed of 20 km/ s (close to Earth's mean and median asteroid impact speed 23 with Supplementary Figs. 6 and 7), and the same trends in offsets with impact angle (Fig. 5). An important consequence of higher impact speed is enhanced melt production caused by higher shock pressures close to the impact site (e.g., compare Fig. 2a and Supplementary Fig. 6a). The larger melt volume complicates the interpretation of peak-ring structure in the 20 km/s simulations as the dynamics of the melt are not expected to be well captured, given the 500-m spatial resolution of the 3D simulations, and would likely continue long after the simulation end time. Nevertheless, the lateral distribution of the melt material relative to the peak-ring material at the end of the simulations (Supplementary Figs. 8 and 10) suggests that below an impact angle of 45 there is a high concentration (thick sheet) of surficial melt in the downrange quadrant of the crater, which is likely to hinder or prevent formation of a topographic peak ring at these azimuths. Our results therefore support the idea that horse-shoe shaped peak-ring planforms are indicative of shallow-angle impacts, with the gap in the peak ring diagnostic of the downrange direction 24 . Comparison with observations. Asymmetry in crater development produces differences in central crater structure in the uprange and downrange directions. While the centre of the simulated peak ring appears to be consistently offset downrange of the crater centre by~5% of the crater diameter in the three oblique impacts, the centre of the mantle uplift is offset uprange of the crater centre in the 60 impact and, to a lesser extent, the 45 impact; and is offset downrange in the 30 impact (Fig. 5). This pattern of mantle-uplift offset relative to the final crater rim is a consequence of the corresponding change in the offset of the deepest part of the transient crater relative to the centre of the final crater. Geophysical observations at Chicxulub suggest the peakring and mantle-uplift centres are offset in different, approximately opposite directions from the crater centre (Fig. 1). Uncertainty in the precise locations of the centres of the crater, peak ring and mantle uplift ( Supplementary Fig. 1), as well as uncertainty in the crater diameter 25 , contribute to an approximate uncertainty of 26% and 48% for the relative offset of the peak ring and mantle uplift, respectively (grey bands in Fig. 
5). Comparison of these observations with our simulation results suggests that the observed configuration is most similar to the 60 impact simulations (or possibly the 45 impact simulation at 20 km/s; Fig. 5). Tracer particles that track the history of material in the simulation afford analysis of the provenance of peak-ring materials and their variation with azimuth. The mean depth of origin of peak-ring materials is 10-12 km for the 45 , 60 and 90 impacts, only dropping significantly, to~8 km, in the 30 impact. In the 30 scenario a significant fraction of the simulated peak ring originates from the sedimentary sequence in the uprange direction (Fig. 4); the presence of significant amounts of sedimentary material in the simulated peak ring is not consistent with geophysical interpretations or results from Expedition 364 17,26,27 . We also observe a systematic change in the up/downrange difference in subsurface structure of simulated peak rings with impact angle (Fig. 4). Similar to the situation in a vertical impact, at 60 the simulated peak ring is formed of overthrusted granitic crustal rocks from the central uplift above down-slumped sedimentary rocks from the transient crater wall, in all directions. However, the sedimentary rocks are deeper and extend farther beneath the simulated peak ring in the uprange direction compared with the downrange direction (Fig. 4). At 45 and 30 this difference is more pronounced: on the downrange side of the crater, the inwardly slumped sedimentary rocks do not extend under the simulated peak ring (Fig. 4) owing to enhanced transient crater rim uplift in this direction. This downrange configuration is inconsistent with geophysical interpretations at Chicxulub, which suggest sedimentary slump blocks lie beneath the outer portion of the peak ring at all azimuths offshore 12,25 . However, pre-impact asymmetries in sediment thickness, water depth, particularly in the northeast part of the crater (and potentially in the crust), may also affect structure beneath the peak ring 3 . A proposed indicator of shallow-angle impact is the truncation of the peak ring in the downrange direction 24 . Our numerical simulations at typical terrestrial impact speeds (20 km/s) are consistent with the production of a gap in the peak ring in the downrange direction for impact angles shallower than 45°( Supplementary Figs. 8 and 10). However, a prominent gap in the Chicxulub peak ring that might indicate a shallow-angle impact is not supported by the geophysical data. The topographic expression of the peak ring is clearly resolved in all radial seismic reflection lines through the offshore portion of the crater 28 and is particularly prominent in the northwest seismic reflection line Chicx-B 28 , the downrange direction according to shallow-angle impact hypothesis proposed by Schultz and d'Hondt 8 . While the onshore portion of the crater has not been seismically imaged, the annular negative gravity anomaly that has been shown to correlate with peak-ring position offshore is well-pronounced and continuous in this region, with no break that might indicate an abundance of melt or change in the character of the peak ring. The continuity of the geophysical signature of the peak ring therefore also supports a more steeply inclined impact trajectory. In summary, our numerical simulations of oblique Chicxulubscale impacts appear to be most consistent with the internal structure of the Chicxulub crater for a steeply inclined impact angle of 45-60°to the horizontal. 
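For context, a 45-60° trajectory is also common a priori. Under the standard assumption (not restated in this paper) that, for an isotropic impactor flux, the impact angle θ measured from the target plane is distributed as p(θ) = sin 2θ, which reproduces the angle frequencies quoted in the introduction, the probability of an impact steeper than θ0 follows as

P(\theta > \theta_0) = \int_{\theta_0}^{90^\circ} \sin 2\theta \,\mathrm{d}\theta = \cos^2\theta_0, \qquad \cos^2 60^\circ = \tfrac{1}{4}, \quad \cos^2 75^\circ \approx \tfrac{1}{15}, \quad \cos^2 45^\circ - \cos^2 60^\circ = \tfrac{1}{4},

so roughly one impact in four arrives between 45° and 60° to the horizontal; the steep-trajectory interpretation therefore does not require a statistically unusual impact geometry.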
If the observed asymmetries in the Moho uplift, central uplift and peak ring of the Chicxulub impact structure are attributable to impact trajectory, the implied direction of impact is northeast-to-southwest. This is the opposite direction to that proposed by Hildebrand et al. 11 based on the offset of the central uplift relative to the crater centre. Our results indicate that uplift of the crater floor occurs in a downrange rather than uprange direction, consistent with numerical simulations of complex crater formation 19 and geological interpretation of eroded structural uplifts at terrestrial complex craters 9,29 . Analyses of Venusian craters have not shown a clear link between asymmetries in central crater features and direction of impact. A slight tendency for the peak-ring centre to be offset in the downrange direction was observed, but the results were inconclusive, in part owing to the relatively small number of craters used in the study 30 . The magnitude of the offset (0.03-0.07 D) is, however, consistent with our numerical simulation results. In contrast, there is no correlation between impact trajectory direction and the offset from the crater centre of central peaks in small complex craters 31 . While we did not simulate central peak formation in this work, our results provide a possible explanation for the absence of correlation. At steep angles, uplift of the crater floor initiates uprange of the crater centre, while at shallow angles uplift initiates downrange. If central peaks represent frozen central uplifts, offsets in either uprange or downrange direction might therefore be expected at moderately oblique angles 30-60 . Implications of a steeply inclined Chicxulub impact. Impacts that occur at a steep angle of incidence are more efficient at excavating material and driving open a large cavity in the crust than shallow incidence impacts 5,19 . Our preferred impact angle of ca. 60°is close to the most efficient, vertical scenario, which suggests that previous estimates of impactor kinetic energy based on high-resolution 2D vertical impact simulations 2,17 do not need to be revised dramatically based on impact angle. Steeply inclined impacts favour a more symmetric distribution of material ejected from the crater among both proximal and distal ejecta 5 . Asymmetry in the distribution of ejecta was originally used by Schultz and d'Hondt 8 as an argument for a shallow impact angle towards the northwest. This was based on the observation that both the particle size and layer thickness were relatively large in North American K-Pg sites. Subsequent work has shown that number and size of shocked quartz grains present in the global ejecta layer decreases with distance from Chicxulub, and is independent of azimuth [32][33][34] . In addition, the 1-3-cm-thick double layer in North America is also observed to the south and southeast of Chicxulub in Colombia 35 and the Demerara Rise 36 at equivalent paleodistances from Chicxulub. The global K-Pg boundary layer therefore has a more-or-less symmetric ejecta distribution, consistent with our preferred steep impact angle. Impact angle has an important influence on the mass of sedimentary target rocks vaporised by the Chicxulub impact 37 . Recent complementary numerical simulations of impact vapour production in oblique impacts using the SOVA shock physics code showed that a trajectory angle of 30-60°constitutes the worst-case scenario for the high-speed ejection of CO 2 and sulfur by the Chicxulub impact 37 . 
At this range of impact angles, the ejected mass of CO 2 is a factor of two-to-three times greater than in a vertical impact and approximately an order of magnitude greater than a very shallow-angle (15 ) scenario 37 . An absence of evaporites in the IODP-ICDP Expedition 364 drill core is consistent with highly efficient vaporisation of sedimentary rocks at Chicxulub 27 . Our simulations therefore suggest that the Chicxulub impact produced a near-symmetric distribution of ejecta and was among the worst-case scenarios for the lethality of the impact by the production of climatechanging gases. Methods Numerical simulations. The Chicxulub impact was simulated using the iSALE3D shock physics code 18,19 . Tabular equations of state generated using ANEOS 38 with input parameters for dunite 39 and granite 40 were used to describe the thermodynamic response of the mantle and crust, respectively. The impactor was also modelled as a granite sphere, with a density of 2650 kg/m 3 , because of a current limitation of iSALE3D that does not allow for more than one boundary between materials per grid cell. The actual Chicxulub impactor density is not known. Although a carbonaceous chondrite composition has been proposed 41,42 , the bulk porosity of the Chicxulub asteroid prior to impact is undetermined. The Murchison (CM2) carbonaceous chondrite meteorite has a bulk density only 10% less than our impactor density, suggesting that our assumed impactor density is reasonable. Moreover, for a given impactor mass our simulation results are not expected to be sensitive to the assumed impactor density (or other impactor material properties). Material strength was modelled using an approach appropriate for geological materials 43 . The choice of model parameters was based on previous vertical impact simulations using iSALE2D 3,4,17 and the similar SALEB code 44,45 and oblique impact simulations of the early stages of the Chicxulub impact 2,46 . A flat, two-layer target was employed, with a crustal thickness of 33 km. Material number limitations precluded inclusion of a rheologically distinct sedimentary layer in the target; however, Lagrangian tracer particles allowed material at this stratigraphic level to be tracked during the simulation, as well as the peak shock pressure and provenance of peak-ring materials. We considered four impact trajectory angles, measured relative to the target surface: 90 (vertical), 60 , 45 and 30 . Simulations were performed at two impact speeds: 20 and 12 km/s. The slower speed was used for computational expediency and to afford direct comparison of the vertical impact case with previous 2D simulations 3 . The higher impact speed is approximately the average speed that asteroids encounter Earth 23 and is hence more representative of the likely impact speed of the Chicxulub impact. Impactor diameter was increased with decreasing impact angle (and impact speed) to achieve approximately equivalent final crater diameters (<10% difference). The minimum cell size was 500 m, affording spatial resolutions of 16-21 cells per impactor radius, depending on impact angle and speed. The impactor size required to produce a Chicxulub-scale crater in our vertical impact simulations with a speed of 12 km/s is slightly larger (14%) than that used in previous 2D simulations 3 . We attribute this discrepancy to a combination of lower spatial resolution in the 3D simulations as well as the absence of a weak sedimentary layer in the upper 3 km of the target. 
The vertical impact simulations presented here using iSALE3D are consistent with the results of equivalent iSALE2D simulations that employ an equivalent spatial resolution and do not include a separate material layer for the sedimentary rocks. As with previous simulations of the Chicxulub impact, the acoustic fluidization model 47 was invoked to explain the temporary dynamic weakening of the target rocks required to facilitate collapse of the transient crater and formation of a final peak-ring crater consistent with geophysical observations 2,3,17 . Acoustic fluidization is a mechanism that reduces the effective resistance to shear deformation of a rock mass subjected to sustained high-frequency pressure fluctuations. In the context of asteroid impacts, initiation of the pressure fluctuations is attributed to the passage of the shockwave; pressure fluctuations subsequently decay in amplitude until they have negligible effect on the internal friction of the rock mass. To ensure consistent application of the acoustic fluidization model for impactors of different size that produce the same size crater, we used fixed acoustic fluidization parameters (viscosity and decay time) in each impact simulation. A full listing of all input parameters is given in Supplementary Table 1. To analyse the azimuthal variation in peak-ring properties, such as radius and peak shock pressure, it was necessary to identify the Lagrangian tracer particles that track the peak ring. Owing to the relatively low spatial resolution of our 3D simulations, in comparison with previous high-resolution 2D simulations 17 , it was not possible to identify peak-ring materials based on topographic expression within the final crater. Instead, peak-ring material was identified as unmelted (shock pressure <60 GPa) material within a 10-km-wide collar of the central uplift, and above the plane defining the base of the central uplift, at the time of maximum uplift (T 3 min; Supplementary Fig. 2). The centre of mantle uplift was defined as the x-location of the maximum uplift of the mantle material in the simulation. The horizontal peak-ring dimensions were defined based on the final x-y positions of the peak-ring material tracers (see Supplementary Figs. 9 and 10). Two circles, one inscribing and one circumscribing the peak-ring material tracers, were used to define the inner and outer edge of the simulated peak ring; the centre of the simulated peak ring was calculated as the average of the centres of these two circles. All crater metrics are given in Supplementary Table 1. Code availability At present, the iSALE code is not fully open source. It is distributed via a private GitHub repository on a case-by-case basis to academic users in the impact community, strictly for non-commercial use. Scientists interested in using or developing iSALE should see http:// www.isale-code.de for a description of application requirements.
NEW RECORDS FOR THE CICADA FAUNA FROM FOUR CENTRAL AMERICAN COUNTRIES (HEMIPTERA: CICADOIDEA: CICADIDAE)

Abstract

Analysis of museum specimens has added to the cicada fauna of Belize, El Salvador, Guatemala, and Honduras. Information on the cicada fauna reported in the literature as well as the first records of cicada species to the fauna are reported here to provide a more accurate understanding of cicada diversity in each country and the region. The new records represent an increase of 75, 14, 110, and 320%, respectively, to the cicada faunal diversity of each country.

The Central American cicada fauna has received little study since Distant's Biologia Centrali-Americana (Distant 1881, 1883, 1900, 1905). Davis (1919, 1928, 1936, 1941, 1944) described new cicada genera and species, primarily from specimens he received from Mexico. Since that time, most work on Central American cicadas has focused on the ecology of Costa Rican (Young 1972, 1976, 1980, 1981) and Panamanian (Wolda 1984, 1993; Wolda & Ramos 1992) cicadas, with limited work being done on the Mexican fauna (Moore 1962, 1996; Sueur 2000, 2002; Sanborn 2006). The lack of knowledge was illustrated in the paper by Sanborn (2001), who identified the first cicadas to be reported from El Salvador. The taxonomic position of some of the Central American species has been altered (Boulard & Martinelli 1996; Moulds 2003) and the process of describing new species (Sueur 2000; Sanborn et al. 2005) has begun, but there are still many species to be described (Sanborn unpublished). I have come across multiple species in various museum collections that have not been described as being part of the cicada fauna in several Central American countries as published in the Cicadoidea bibliographies (Metcalf 1963a, b, c; Duffels & van der Laan 1985) or more recent literature. I have now identified specimens from several collections and individuals that represent additions to the cicada fauna of Belize, El Salvador, Guatemala, and Honduras. These new additions to the cicada fauna of the region are identified along with a listing of previously identified species from the various countries to provide a current view of the cicada fauna for the region.

MATERIALS AND METHODS

Specimens for this study were found among the undetermined material in the collections of the Florida State Collection of Arthropods (FSCA), the Smithsonian Institution, United States National Museum (USNM), San Diego Natural History Museum (SDMC), Bohart Museum of Entomology at the University of California at Davis (UCDC), Carnegie Museum of Natural History (CMNH), University of Mississippi Insect Collection (UMIC), William R. Enns Entomological Museum, University of Missouri (UMRM), University of Connecticut (UCMS), University of Georgia (UGCA) and three individuals who donated their specimens to the author. Original specimens are housed in the collections above, with vouchers of most species and the specimens donated to the author in the author's collection. The number of species previously attributed to each country was determined from the cicada bibliographies (Metcalf 1963a, b, c; Duffels & van der Laan 1985) and the more recent literature. Original references can be located in these materials.

RESULTS

The regional cicada fauna for Belize, El Salvador, Guatemala, and Honduras is summarized here. Species identified as new to a country include available collection information.
Bibliographic information is provided for species that have been described previously from a country. There are currently four species that have been collected in Belize, one of which is a recently described new species (Sanborn et al. 2005). Three species are added to the cicada fauna with this report. The cicada fauna of El Salvador was unknown until I reported on representatives of seven species collected in the country (Sanborn 2001). One additional species was found in the collection of the USNM. There are currently ten species attributed to Guatemala. Eleven additional species are added to the fauna in this report. There are currently five species reported to inhabit Honduras. One of these is a recently described species (Sanborn et al. 2005).

Proarna insignis Distant, 1881. The FSCA has specimens from Guatemala, Izabal, La Graciosa, 15-IV-1995. The FSCA also has specimens collected at Honduras, El Paraiso, 7 km North of Oropoli, 30-IV-1993 and Atlántida, RVS Cuero y Salado, Salado Barra, 15°46'N 89°59'W, 2-5 m, 22-IV to 1-VIII-2000. There is a specimen from Honduras, El Paraiso, Yuscaran, 1-VI-2003 in the UGCA. The species has been reported in Central America (Metcalf 1963a).

DISCUSSION

This work has added significant numbers of representatives to the cicada fauna of the northern Central American countries (Fig. 1). However, there are probably many additional cicada species present in each country. The distribution of a species may bypass an individual country while being reported from border countries. It may be that insufficient collecting has occurred to produce representatives of these species in some countries; e.g., neighboring Guatemala, Honduras, and Nicaragua have been reported to have 24 species and 13 genera not reported from El Salvador here and in Sanborn (2001). Continued museum study and field work will no doubt result in the identification of new species and additions to the cicada fauna of each country.
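The percentage increases quoted in the abstract follow directly from these counts. The short sketch below is an illustration added here, not part of the original paper; the number of new Honduran records is not stated in this excerpt and is back-calculated from the reported 320% increase, so it should be read as an inference.

# Illustrative arithmetic only (not from the paper): percentage increase in known
# cicada species per country = new records / previously known species * 100.
previously_known = {"Belize": 4, "El Salvador": 7, "Guatemala": 10, "Honduras": 5}
new_records = {"Belize": 3, "El Salvador": 1, "Guatemala": 11,
               "Honduras": 16}  # Honduras value inferred from the stated 320% increase

for country, known in previously_known.items():
    added = new_records[country]
    print(f"{country}: {added}/{known} = {100 * added / known:.0f}% increase")
# Expected output matches the abstract: 75%, 14%, 110%, 320%.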
How the IMF did it – sovereign debt restructuring between 1970 and 1989

... pari passu clause, especially in the wake of the decisions rendered by New York courts and the US Supreme Court in 2013 and 2014 on the Argentine case. 3 These most recent evolutions are striking because they seem to prohibit any majority-based decision rule among creditors, even though since the Middle Ages domestic bankruptcy statutes have almost always endorsed this non-contractual, coercive principle. The underlying trend is however broader: there is arguably no period in modern history when the renegotiation of sovereign debts has been so strongly structured by a rather narrow contractual language. Even during the first global era, before 1914, when no multilateral agency interfered in restructurings, these operations were more ad hoc and took place farther away from the national courts of justice. The contrast is also very strong when comparing current practices with those observed during the debt crisis of the 1980s, which are the object of the present article. While today we observe at best a fledgling and contested regime, the rules that were adopted immediately after the Mexican quasi-default of August 1982 proved remarkably stable and predictable: until 1989 they governed a total of 109 restructurings between 41 debtor states and hundreds of creditor banks. Of course this episode is first remembered today for the long lapse of time (seven years) before creditor countries accepted the principle of large write-offs, instead of simple debt service rescheduling. But that is a problem of substance, not one of procedure. The process in fact worked fine, at least in relative terms, though how this stability was obtained is now largely and curiously forgotten. Here, indeed, is a lesson that was neither entirely drawn nor handed over. The key feature of this procedure was a veto-based decision rule. In case after case, each party had to endorse explicitly a so-called burden-sharing agreement that sought to balance, on an ex ante basis, the financial concessions made by banks, the policy commitments of the debtor country and the lending of the International Monetary Fund (IMF or the Fund). The voluntary dimension that was enshrined in this decision rule was a response to the absence of a proper jurisdictional authority that would have allowed the Fund or another body to adjudicate cases. Still, creditors were coordinated, economic costs and losses were shared, debt contracts were rewritten, and economic policies generally changed course. Moreover, the burden-sharing agreement endowed these accords with a measure of perceived distributive fairness, hence of legitimacy: they could be invoked vis-à-vis third parties, like bank shareholders, electors or representatives of key Member States on the Fund's Board. Lastly, the multilateral character of this regime ensured that a degree of 'comparability of treatment' (in IMF-speak) would apply across cases.
Another remarkable trait of these rules is that they were never formalized in any international agreement, treaty, communiqué or guidelines. Restructuring operations were exclusively founded on bylaws that had been formulated internally by the Fund before being de facto adopted by creditor banks and debtor countries. All alternate hard law regulations or contractual clauses that could have governed or influenced debt renegotiations were systematically ignored or circumvented; even the IMF's own statutes, the 1944 Articles of Agreement, were freely reinterpreted when convenient. An ultra soft law mechanism thus ousted all better established authorities that could have legitimately claimed jurisdiction. The veto rule and widespread informality together beg the question of how the overall procedure was structured and what exact role the IMF played in it. Did it work as the proxy of a court or an arbitration tribunal? Or was the neutral and technical character of its many contributions only a shield, with the actual game boiling down to a more or less sophisticated mix of bargaining and brokering? In this case, outcomes would have been shaped in fact by the brutal political economic forces that are typically unleashed by sovereign defaults. But then what about the rule of mutual veto? Why would the parties have adhered to such a demanding rule, if in fact power struggles ruled the day? The best way to explore this experiment is to start with the nitty-gritty of how debt restructurings and macroeconomic programmes were actually discussed and articulated in relation to each other. The Fund's archives reveal how experimentation, bargaining and raw power politics delivered this ad-hoc assemblage set of rules according to which cases were then processed. We can then recount how the post-1982 procedure resulted in practice from a decade-long process of trial and error, that built in turn on the rules and usages that the Fund had developed since the 1950s. The same archival sources show however that this regime always rested on a constellation of interests and power relationships whose gradual decline after 1987 directly affected its capacity to deliver settlements. Once the banks and the countries involved had recovered from the initial economic shock, the capacity to arm-twist them into the IMF negotiating rooms rapidly declined. Political economic contingencies thus threatened, constrained and shaped altogether the procedure that governed renegotiations. The next section briefly summarizes the existing literature on the debt crisis of the 1980s. Sections 3 and 4 then analyse how the regime that emerged in 1982 resulted from a long process of trial and error that began in 1970 before being shaped by the tensions that rapidly surrounded the post-oil shocks, the so-called recycling strategy. Section 5 then moves on to the specific crisis of the 1980s and how the Fund addressed it, and section 6 analyses how these rules were described and justified in spite of their soft, lightly formalized character. Section 7 is a conclusion. What the literature says The regime for sovereign debt restructuring that was built and operated by the Fund during the 1980s stands out indeed from practically all other experiences. 
Before 1914, these operations resulted generally from direct negotiations between debtor countries, issuing banks and bondholders' associations, without intervention by any third-party, multilateral or not; in the worst cases, gunboats would be sent, but this measure was seen as an exception to the rule. 4 Later, the League of Nations intervened repeatedly in these matters, just as the IMF since 1995, but both proved unable to build a well-accepted, stable set of rules. 5 Critically, all attempts at establishing some kind of bankruptcy court or arbitration panel for sovereign debts failed miserably: first during the pre-1914 era at The Hague, 6 then in the late 1930s in Geneva 7 and finally in the early 2000s in Washington. 8 Against this background, the recent experiences of Argentina and Greece remain as examples of how 'not' to address sovereign insolvency. 9 These repeated failures ultimately reflect the fact that a multilateral forum is not a natural jurisdiction for solving debt disputes, neither between private parties, nor between states and private investors. In fact, national governments have never conceded the authority to adjudicate their debt problems to any such body. One important reason why so little is known about the workings of the 1970s and 1980s regime is that most social scientists who have studied this episode have shared a similar disregard for its legal and procedural dimensions. The pattern applies whether they have focused on the international politics of the debt crisis, 10 on the broader longterm history of international finance 11 or on the macroeconomics of over-indebtedness and balance of payment adjustment. 12 This lapse also extends to later publications, like Reinhart and Rogoff, 13 or Das and others, 14 which cover a lot of ground in matters of sovereign debt without adding much on the specifics of the 1980s cycle. The main exceptions to this broad rule are Cohen, 15 who is still a good primer on the early part of the crisis, Lipson 16 and Guttentag and Herring. 17 However, the extent of the information available to them was substantially less than what is now available, so that they do not entirely reconstruct the link between the macro-side of the crisis and the micro-level legal and contractual dimensions of restructuring operations. In order to better identify the rules that shaped the restructuring procedure and endowed it with remarkable predictability, one should thus look first at the legal literature, like Horn, 18 Buchheit,19 MacCallum, 20 Gold 21 or Hagan. 22 Beyond these, the key sources are the Fund's own publications and archives. The present article is based as suggested, in particular on a systematic exploration of: (i) the Minutes of the Executive Board meetings, which reflect how Member States (represented by Executive Directors) discussed debt issues and tried to guide the organization; (ii) the various proposals and appraisals made by the staff, which capitalize the internal expertise of the Fund; 23 and lastly (iii) a stream of reports and publications by the Fund's Legal Department which tracks its jurisprudence on conditional lending, primarily the Stand-By Arrangement (SBA). How the game started Taking control of the playing field There was nothing inevitable about the emergence of the IMF as the main forum for dealing with sovereign debt problems in the 1970s. 
As its 1944 Articles of Agreement did not envisage any such role, the Executive Board first had to agree that the absence of a formal mandate was not an excuse to stay put, a decision which in fact raised little resistance from the principals. 24 As the American Executive Director casually noted at a Board meeting in April 1971, relying on the expertise of the Fund 'where appropriate . . . was often extremely useful', 25 a statement that could well be interpreted as a green light for wading more deeply into sovereign debt matters. However, the IMF could just as well have acted on Milton Friedman's advice to stop lending and provide only expertise, information and goodwill. 26 Whether this tack would have led, in countries with debt problems, to a less controversial presence is not clear: later during the same decade, banks began to use the Fund's pronouncements as a source of risk assessment, and even started to ask for its private judgement on the evolving situation in given countries, whether a programme was on track or drifting. 27 The Board soon judged that this role might seriously limit its discretion and capacity to adjust its judgements when bargaining with countries' governments. 28 More generally, it was clearly agreed that the Fund should never mimic a rating agency or become the operator of a system of traffic lights for international capital markets. The decision to enter the debt restructuring business also implied the de facto rejection of alternate institutional options. The first, outright default, was seldom considered in practice. In that case, the parties would have followed the initial debt contracts' provisions for renegotiation, and if a private agreement could not be obtained then the parties would have litigated, probably under different jurisdictions. 29 Moreover, in many cases, a 'default event' on one class of debt or another would have extended to all other contracts, after which most creditors would have been forced to post large loan-loss provisions or even declare themselves insolvent. In practice, this was a recipe for disaster. A second option was for privately coordinated creditors to enter into direct negotiations with the government of the debtor country, without any mediation, somewhat as before 1914. Only one experiment of this sort was attempted, in 1976: on its own initiative, Peru negotiated both a financial agreement and an economic programme with a consortium of American banks, but without any input from the Fund in terms of information, expertise or economic monitoring. After less than a year, the programme collapsed and the banks declared that they would never again try to do the Fund's job; they were simply not equipped to enforce conditionality. 30 Of course, the episode was closely monitored by the IMF and became part of its knowledge base. 31 A third option was to restructure debt through the United Nations Conference on Trade and Development (UNCTAD). This body, governed by a one-country, one-vote decision rule, was used during the 1970s by developing countries as a platform for demanding better trade and financial conditions. At its 1979 Conference, in Tanzania, a resolution was adopted that called for the creation of a permanent 'International Debt Commission' explicitly aimed at balancing social fairness and financial commitments; the possibility of deciding on across-the-board debt write-offs was also mentioned.
32 But the developed countries immediately made clear that they were not in agreement, and this proposal barely registered with the Fund. Clearly, the Fund was protected by its close relationship to the financial administrations of its key Member States; after all, it is their creature.

The 'four corners' of the SBAs

Once it had been agreed that addressing debt problems was actually 'appropriate' for the IMF, the next step was to design a strategy to deal with restructurings and make them easier to settle. The tool the Fund immediately mobilized was the Stand-By Arrangement (SBA), ie the standard vehicle for conditional lending which it had been developing through trial and error since the early 1950s. Significantly, the founder of the IMF's legal doctrine, Joseph Gold, had often insisted when addressing Fund officials that the SBA should be envisaged neither as a contract nor as an international treaty. In both cases, he argued, the sanction of a failure to comply would have taken a sharp 'censorious' character, which was deemed unhelpful or even disruptive. 33 The threat of sanction or even expulsion was not an option, Gold argued, because the Fund should always aim at keeping communication lines open so as to allow for a resumption of negotiations and to make a new accord as easy as possible to obtain. Said differently, the priority is 'consultation and collaboration', 34 a dyadic approach that may suggest pragmatism, but also hints at bargaining and arm-twisting. Rather than as a (private law) contract or a (public law) international treaty, the SBA was thus shaped as a shallower 'exchange transaction', made of two 'parallel declarations': 35 a 'Letter of Intent' is first sent by the country's authorities to the Fund's Managing Director, and the Executive Board then gives access to a stated amount of refundable cash which 'stands by' in the Fund's accounts until the country draws on it. This two-track exchange explains why no document ever carries the signatures of both parties at the same time, even though hard bargaining does happen and commitments are entered into. Three significant features further highlight the original, self-sustained legal character of the SBA. First, Gold is adamant that SBAs should not interact with the domestic legal order of the member-countries, which is why the Fund never asks that they be voted into law by national Parliaments. Then, the interpretation of the Letter of Intent (ie the economic programme) on an ex post basis is never left to a third party or an arbitration procedure: the Fund has always preferred discretionary bilateral 'consultations' between the two parties. Lastly, Gold has admitted that, during at least the first decade of practice, the experimental and evolutionary character of the SBA implied that the formal, legal link with the Articles of Agreement was a source of concern. 36 Bringing all these elements together, Gold once summed up in graphic terms this self-contained approach by stating (in an internal presentation at the Fund) that 'our objective has been to set forth our understandings with members to the maximum extent within the four corners of the stand-by arrangement'. 37 What Gold may not have anticipated at the time was the sheer flexibility of this squared legal framework: quite soon after the IMF started addressing sovereign debt crises, it discovered that the best way to handle them was to invite the banks to come in and deal jointly with the Fund and the debtor country within the framework of the SBA.
Therefore all parties would be able to rely upon tested rules for bargaining and decision-making as well as on the practice of ex post conditionality monitoring. In other words, the structure of the SBA and the related expertise that the IMF had accumulated since the 1950s provided after 1970 a platform where the three parties could bargain and exchange credible commitments. Not lending into arrears Technically, banks and their debt baggage entered the four corners of the SBA via one rather narrow door. When a country with a debt problem called for help, the IMF had to consider, among many other policy issues, how to address the arrears to lenders (interests and capital) that had accumulated before that moment. Should the Fund ignore these arrears and therefore act without regard for the commercial banks and more generally for the evolution of the international credit markets? In this case, the Fund may as well 'lend into arrears', ie while ignoring arrears entirely. The Fund converged rapidly, however, towards the opposite, pro-active stance and so it was decided in 1970 that reducing arrears, typically during the duration of a SBA, could be part of a programme and hence of conditionality. As long as the country remained faithful to its commitments in this respect (as on the more traditional macroeconomic targets), the Fund could keep lending to it. 38 In practice, the prohibition to 'lend into arrears', hence the obligation to coordinate one way or another with banks, remained until 1989 a key plank of the Fund's doctrine. Under this rule, no loan could be made available as long as an agreement on arrears was missing (such as rescheduling or capitalization) but also, by extension, as long as a settlement on the general debt problem had not been reached; this settlement might thus include as well the rescheduling of amortizations, grace periods or eventually debt writeoffs. The catch, from a legal perspective, is that any assumption regarding arrears, hence debt more generally, inevitably affected how the 'financing gap' of the country (ie current balance plus amortizations) would be covered over the duration of the SBA. And this arrangement had to be stated ex ante in the Letter of Intent if the Executive Board was to extend a stand-by loan. Thus the policy towards arrears became in practice the conduit that brought the general issue of debt restructuring into the squared structure of the SBA. Macroeconomic conditionality, financial concessions by the banks and multilateral support from the IMF had to become part of a 'burden sharing' accord that would be de facto written into the terms of the stand-by loan. Commercial banks enter the scene: 1977-1982 The run-up to the 1982 crisis Making this broad rule work was not easy. A full decade of trial and error was needed, where the main external driving force was the post-oil-shocks 'recycling' strategy: the process through which the current account surpluses of oil-exporting countries were largely re-lent to countries like Mexico, Argentina, Brazil or Yugoslavia. Critically, private banks did most of the work and relied on large loan syndications as their preferred lending vehicle: many banks small and large, international and local, became a part of the overall scheme and of the eventual payment problems. 
39 In practice, as soon as the Fund had made lending conditional upon the repayment of arrears, the first problem to address was that its loans might end up supporting capital and interests outflows, rather than supporting economic adjustment. In other words, the IMF would indirectly though visibly bail out private creditors, with all the consequences that would come to haunt it in later decades: moral hazard followed by over-lending and eventually more crises. Remarkably, in the still benign environment of the early 1970s, the Fund staff already recommended a 'balanced burden-sharing between creditors and active aid donors, avoiding any impression that new capital inflows would be largely offset by repayments to creditors'. 40 This threat became steadily more serious as the debt problems grew and as the banks' voice became louder inside the four corners of the SBA. Said differently, the IMF was confronted with a standard Prisoner's Dilemma: unless it succeeded in coordinating the parties ex ante and in enforcing ex post the terms of their agreement, it would lose any capacity to control how its own resources were used. And the greater the tensions on debt sustainability, the more difficult it would be to solve this dilemma. Until the late 1970s, the solution that was adopted took the form of a two-track approach: at the same time when the country was negotiating a SBA with the Fund, it would bargain with the banks on arrears specifically, and on debt more generally. This approach came to be known under the codename 'parallel operations' though, for sure, they called for substantial exchanges of information and signals between banks and Fund. 41 By the end of decade, however, financial distress increased further as a consequence of the second, 1979-1980 oil shock, the tightening of US monetary policy after 1979, and the world recession of 1980-1982. Whereas only 10 countries had renegotiated their debt between 1975 and 1979, 20 countries concluded such arrangements between January 1980 and August 1982. A second problem of collective action, this time among banks, then increased further the challenge for the IMF. Already by 1977, the Fund staff noted that banks could occasionally exercise 'considerable bargaining power', but that the main risk was stalemate: 'occasions could arise in which the payments outlooks are so adverse that it will be difficult for banks whose interests may well be divergent to hold together in a common approach'. 42 In 1980, the staff further underlined that debt restructurings now involved 'formidable organizational problems for the banks. The banks generally acknowledge that their interests are best served by avoiding default (. . .) but individual banks have considerable incentives to attempt to reduce exposure to the country concerned'; this situation therefore imposed 'a massive effort by lead banks to ensure coordination and cooperation among all creditor banks'. 43 As the stakes became higher, agreement thus became more difficult to obtain and the risk increased that the Fund would eventually fail to control the use of its own resources, once disbursed. As a consequence, the doctrine of 'parallel lending' evolved towards a more binding relationship. In late 1980, the Fund staff described in an internal report a recent landmark case in which 'it had indicated to [the banks] the level of bank financing, which it considered crucial to the success of a reasonable adjustment effort. After a Fund arrangement [. . .] 
had been worked out, but before it was brought to the Board, the staff participated in a meeting where the banks agreed in principle to provide a certain level of financing'. 44 In other words, in order to reach a viable settlement, the two negotiations could no longer unfold on two separate though parallel routes. The three parties had to meet, and the Fund would not be just a silent observer, or benevolent mediator. The same internal report then continues: 'After the arrangement was approved, the staff assumed an active, if informal, role in helping to ensure that the planned amount of assistance was in fact forthcoming, explaining to the banks that a shortfall would force a failure of the balance of payments test and might require explanation to the Fund Board.' 45 In this most remarkable statement, the venom is in the tail: 'a failure of the balance of payments test' meant that conditionality would be broken as a consequence of uncooperative bank behaviour, and the 'explanation to the Fund Board' implied that the overall programme might therefore be suspended and disbursements halted. In sum, if the banks free ride on the Fund's lending, then the Fund might well retaliate and raise the stakes. This was no longer a world where ships sailed on parallel lines and exchanged cryptic signals. 46 (A 1977 Office memorandum mentions, for instance, that over a six-month period the staff had received nearly 400 inquiries from banks, among which a number were actually 'initiated by banks to help determine their immediate decision on major lending . . .'.) The other significant bifurcation before the Mexican watershed appears in a 1980 review of policies on arrears, where the Fund staff softly suggested that, in the case of large restructurings, interest arrears might be capitalized upfront and immediately made part of the total stock of debt. 47 This proposition was presented as one option among others, but its strategic implications were straightforward: in this case, the banks would not only have to accept an extension over time of interest repayments, they would have to contribute to easing the financial and liquidity position of the debtor country from the very first day the programme was adopted. This is indeed what happened after 1982. And this new twist implied a new interpretation of the rule of 'not lending into arrears': no payment at all on interest arrears would be written into the SBA, because they would be entirely absorbed by banks as de facto new debt.

Guidance from the principals

During this long process of experimentation, the Fund's staff in Washington, its missions in debtor countries and the officials in contact with commercial banks never ceased to design and discuss alternative strategies for addressing sovereign debt crises. Moreover, this was not seen as a minor issue to be left to second-rank officials. Tellingly, from early 1980 onwards the personal papers of the Managing Director, Jacques de Larosière, include many references to meetings (and dinners) with the heads of large banks, like Citibank, Warburg, Lazard or Kuhn Loeb. 48 Clearly, the bankers welcomed informal exchanges of views, including on the 'possible solutions to the major funding problems which many developing countries are likely to face in the eighties'. 49 All parties saw these meetings as new practice, a point that is alluded to in a letter of thanks to Larosière that noted in passing that 'direct contact with the commercial banks is difficult for you politically'.
50 The fact that the IMF as a whole remained alert to the growing tensions in the international financial system stands, however, in sharp contrast with the positions of the Executive Directors. The representatives of member-countries typically backpedalled when discussing the ongoing experimentation, defending multilateral orthodoxy and clear-cut rules of interaction with banks. Developing countries in particular were most adamant on this point, like Brazil for instance, whose Executive Director said most clearly at a 1977 Board meeting that 'the staff should do nothing, either positively or negatively, to provoke or discourage such lending [by the banks] or it would [. . .] be in the business of issuing certificates of creditworthiness or unworthiness'. 51 The Indonesian Director added, 'the Fund was an organization of sovereign countries [. . .]. It should be left to private entrepreneurs to assess the [investment] risks, receiving rewards for a correct appraisal or suffering losses for wrong judgments'. The German Executive Director, always ready to defend a principled position in financial matters, declared 'the Fund should avoid at all costs being drawn into debt negotiations between official lenders and debtor countries'. 52 Again in 1981, at a time when the Fund had already started to ask banks to enter into 'understandings', the US Alternate Director underlined that 'he was concerned about the recent Fund practice [. . .] of assuming that a specific amount of debt relief [by the banks] was involved in program proposals submitted to the Executive Board'. However, a few minutes later he argued that it 'would be important to make certain that public resources were not used to finance reflows to banks'. 53 Yet he did not propose an option that could eliminate the bailout risk without entering into undue relations with the banks. In summing up the discussion that same day, Jacques de Larosière first politely agreed with the Executive Directors: they 'have . . . noted that it is difficult, and in the view of some perhaps not wise, for . . . the Fund to impose on commercial banks a predetermined and detailed set of debt-rescheduling norms for each particular country'. But he then brought the point home: 'when it appears that the continuation of commercial or other private credit is an important element of the integral approach that the Fund has been developing, we shall do our best to see that the banks provide the expected amount of finance'. 54 What is most striking in this (then confidential) statement is the recognition that some strong-arm politics or even coercion might be needed for the Fund to implement its 'integral approach'. In fact, when speaking to its cautious principals, the Managing Director was much more straightforward than the staff dared to be. 1982-1987: from crisis to routine In August 1982, Mexico did not default on its debt in the exact legal sense of the 'default event' that is a suspension of debt service. Rather, it declared itself unable either to refinance its maturing debt or to raise fresh funds, and thus asked for help. The story of how matters unfolded during the following months has already been told in detail and needs only be briefly summarized here. 
55 The first step was to negotiate a so-called bridge loan with the Bank of International Settlements and its main Member States, the G10 countries; 56 this measure reduced the immediate pressure so that over several weeks an economic programme was negotiated between Mexico and the IMF while exploratory meetings were held with the banks. What was new in these circumstances was not the nature of the collective action problem raised by the threat of an outright default: as said previously, neither the lead banks nor the IMF were in unknown territory. But what was entirely unfamiliar was the magnitude of the problem: some 530 banks now had to be party to any agreement made, and with a country that clearly had much more economic and geopolitical leverage than, say, Zaïre or Jamaica. The financial stakes were proportional: $8.2 billion ($16.8 billion in 2015 dollars) was needed just to close Mexico's financing gap till the end of 1983. It soon became clear to all parties that either the banks would be 'bailed in' via some burden-sharing agreement, or the burden would be transferred to multilaterals and to major creditor countries, that is, banks would be 'bailed out'. In practice, the banking community was awash with proposals for banks to (i) launch a mega-loan syndication on behalf of the Fund; (ii) use the money to replenish the IMF's coffers; and (iii) ask the Fund to bail them out from their excessive lending to developing countries. The eventual costs would admittedly have been borne by taxpayers, but for many it was a price worth paying. 57 The threshold was reached on 18 November, when a meeting was held at the New York Federal Reserve Bank between the commercial banks and the Managing Director of the Fund, Jacques de Larosière. On the basis of financing commitments made by the Fund ($1.2 billion) and the official lenders ($2 billion), he asked the banks to commit the balance of $5 billion. He then added that the proposed Mexican SBA 'would not be sent to the Board [for approval and disbursement] if the coverage of the deficit gap were not known'; that is, if no binding understanding was obtained from the banks as a whole. 58 Said differently, he attempted in full daylight to arm-twist the international banking community. That same afternoon, however, Paul Volcker (then head of the Federal Reserve) stated that as long as American banks agreed with the Fund's demands, 'new credits should not be subjected to supervisory criticism'. 59 Banks would not have to augment their loan-loss reserves or reveal their possible underlying insolvency, 60 making this negotiation a two-way game. During the following weeks the main challenge was to maintain coordination among the banks themselves: between (i) the largest institutions, which had lent the most to developing countries, and which also maintained a major stake in the continued expansion of international capital markets; and (ii) the local and regional banks, which were only marginally exposed to the debt crisis and thus had the incentive simply to absorb their losses and withdraw to their home market. This conflict of interest would mark the political economy of the debt crisis until 1989. 61 On 23 December, when the Fund Executive Directors met to discuss the Mexican loan, the lead banks had been able to obtain commitments for only $4.32 billion (of the $5 billion requested). 
The Managing Director proposed that the operation proceed, but he also brought a list that detailed how much each member-country's banks still had to contribute: the Japanese banks were $92 million short; the Italians, $122 million; the Germans, $47 million; and so forth. He then 'invited Executive Directors from countries whose commercial banks had not yet contributed their full share to consult with the authorities concerned on the best means of securing the additional funds'. 62 In short, if large banks were unable to coerce the smaller ones to step up (via market power), then national domestic regulators would step in: a domestic, hierarchical relation would be mobilized in support of international coordination. Whether this strategy should be viewed as coercion or more charitably as moral suasion is a matter of judgment. 63 More important is the illustration proposed here of how national regimes of regulated finance typical of post-1945 settlements became instrumental in addressing an international crisis, via a multilateral agency. Governments de facto intervened in private property rights at the domestic level in order to support a multilateral response to the crisis. Of course, the Fund's Board did not miss the extraordinary character of this method: in the words of the Belgian Executive Director, 'the generalization of the method used by the Chairman in this instance would depend on the supervising authorities in the various countries setting up appropriate arrangements'. 64 With some irony, Jacques Polak, the Dutch Executive Director (and long-time Director of the Fund's Research Department), could not resist the temptation 'to compliment the US financial authorities for their willingness to assume duties with respect to banks, which they would certainly have dismissed as totally inappropriate not long previously'.

Deformalization and justification

The decision-making process assembled in late 1982 and confirmed in early 1983 to address the case of Mexico was relied upon 109 times by 1989, sometimes with the explicit intervention of national regulators and more often under the shadow of that possibility. 66 First, the debtor country negotiated a macroeconomic programme with a visiting IMF mission. In most cases this programme was sanctioned by a Letter of Intent to the IMF Managing Director, which described the economic strategy and how the financing gap would be covered; the sending of the Letter signalled the Fund's seal of approval and its implicit commitment to extend a loan. Secondly, the country negotiated a debt restructuring accord with the banks' advisory committee, the so-called London Club; this accord typically included a frontloaded capitalization of arrears, a rescheduling of capital amortization and some 'new money'. Lastly, once this financial settlement had been reached, the IMF Board would formally announce a Stand-By loan and allow disbursements. The old bilateral 'exchange transaction' that had been developed since the 1950s had now been transformed into a three-party transaction structured by a rule of mutual veto. If, for instance, the banks considered the economic programme too weak, they could indeed reject the whole package; and, similarly, the IMF could reject a financial settlement that it considered insufficient. Hence, debt restructuring remained a voluntary decision, not a jurisdictional one.
Furthermore, the new process extended over time the bond between the three parties, thanks primarily to the enforcement guarantees that came with policy monitoring by the Fund. The ad hoc, non-contractual machinery that had been built around the SBA was thus instrumental in allowing the parties to commit themselves anew, after the first contracts had failed. This new process was obtained despite the fact that neither the banks nor the debtor country ever formally delegated authority over their contractual engagements to the IMF. Expediency Until 1987 success rested on one further condition: outside the squared framework of the SBA, all pre-existing statutes, laws, regulations, guidelines or contractual clauses that could have affected debt renegotiations were de facto suspended, circumvented or put on hold; if not, the fragile envelope of the SBA might have been punctured by this harder legal material and soon torn to pieces. This condition first applied to international public law, specifically to the elements of a doctrine on restructuring that had developed since the early twentieth century: for example, on issues of sovereign immunity, hierarchy among creditors or the pari passu clause. These principles were discussed by international lawyers in the 1970s and early 1980s, 67 but they never surfaced in discussions at the or in its written statements so that lawyers struggled to account for the Fund's actions, which they insisted implied no notion of legal responsibility. In fact, the Fund 'carefully avoid[ed] reliance upon traditional legal remedies'. 68 MacCallum also concluded 'sovereign debt restructuring takes place in the absence of a legal framework'. 69 Norbert Horn noted that 'the service a lawyer can render in the restructuring of loans appears to be modest: he can provide the patterns and formulae for solutions dictated by economic necessities and achieved in negotiations where non-legal questions prevail'. 70 In fact, more than 'the absence of a legal framework', the procedure was marked by a strong pattern of 'deformalization', which does not imply that legal rules have been entirely voided. 71 Rather, hard law was substituted by soft, ad hoc, short-lived rules that delivered predictability and finality for a certain period before being shelved. Expediency gained precedence over legal consistency. At least as significant is how the debt contracts that structured relationships between banks evolved over time. Each initial loan syndication listed some fiduciary responsibilities that the agent bank organizing the initial lending operation kept afterwards. Yet these responsibilities were generally loosely defined and, critically, they did not allow for a binding collective arrangement in case of a restructuring. 72 From the 1970s on and most clearly after 1982, agent banks were however substituted by ad-hoc Steering committees, which coordinated all creditor banks regardless of the syndicated loan(s) to which they had initially subscribed. These committees (often known as London Clubs) had no formal mandate or legal standing of their own. The debtor country chose a lead bank when announcing its intention to restructure; the latter would then convene a Steering committee, made up of 5-10 other banks. But all creditor banks never endorsed collectively this committee, its mandate or its initiatives; instead, each individual bank would sign a separate agreement with the country and thus formalize ex post the terms arranged by the Steering committee. 
73 The signing of these agreements marked the moment when restructurings became legally binding and impacted the banks' balance sheets and tax liabilities. This was, most clearly, when 'moral suasion' was applied. The relationships between creditor banks also changed substantially over the course of the restructuring cycle. 74 As underlined by Buchheit and Reisner, 75 these operations typically did not abolish the initial loan contracts; they changed the payment conditions and at the same time called for the forbearance of many pre-existing legal rights that stemmed from the initial contracts. But what proved most problematic was not the rescheduling of maturities on existing obligations, but the extension of 'new money' loans in which no bank was legally bound to participate. Moreover, new money loans were not formalized as an extension of the first generation of debt contracts, but as de novo contracts to which all banks involved in the negotiation adhered and which worked thereafter as a 'master syndicate'. 76 Solidarity among creditor banks was then formalized inter alia by Cross default clauses and Sharing clauses, the latter requesting that banks that obtained some side-payments share these proceeds with all others. 77 The overall effect was to make disruptive strategies by holdout investors more difficult, so that the collective of banks would act as a 'rather uneasy confederation', loosely governed by the Steering committee. 78 Lastly, the preference for expediency over legal consistency is also observed within the Fund. The most distant origin of this pattern is in the rule, adopted at Bretton Woods, whereby the Fund is the sole interpreter of the Articles of Agreement, ie its own constitution 79 -no judicial forum or dispute resolution procedure can be called upon. Given the predominant role of the Executive Board over the Board of Governors, which meets only twice a year, the result is that the Fund is very much a self-governed organization: the Executive Board can adopt with little external constraints the rules and decisions it sees fit. Moreover, the procedure for interpreting the Articles of Agreements that was envisaged at Bretton Woods (Article XVIII) has been substituted by an even more casual practice: interpretations typically take the form of a Decision that is the least constraining form of rule that the organization can adopt. 80 The only tangible backstop to this remarkable agency structure is not legal, in fact, as it rests on the principle of a 'continuous session' of the Executive Board. Representatives of key Member States, each with close links to their home governments, meet at least once a week and make or endorse the day-by-day decisions based on the quota system. Because the principal-agent relationship is very hands-on in practice, there is no need for a complex, formal legal machinery to constrain and monitor the discretion of officials. The Fund's bias towards relative informality is indeed deeply rooted. As already said, the inventors of the SBA explicitly avoided relying on broader Fund rules, like the Articles of Agreement. Hard sanctions against non-compliance had thus been discarded early on and the first 'guidelines' on SBAs were adopted only in 1968. 
Similarly, the prohibition to 'lend into arrears' was adopted as only a low-key Decision in 1970, and no formal act sanctioned the later shift from the soft interpretation of this rule (interest arrears should be paid by the country during the duration of the SBA) to the hard one (they should be capitalized upfront by the banks). Last but not least, there is no mention in the Fund's paper record of its Board voluntarily conditioning its decision to lend on the acquiescence of a representative group of banks with no legal existence. Justification How could an organization like the IMF act within such a weak legal framework? How were its interventions justified at least minimally by its statutes? Even a summary description of its actions must have drawn from an accepted vocabulary that would signal its reliance on a given set of procedural rules. These issues were at stake in March 1983 when the Fund carried out the first review of its experience since the Mexican quasidefault, six months earlier. At this point the formal or legal conditions for routinizing the new method of negotiations should have been discussed. In four large surveys, the staff described the conditions and lessons of the recent negotiations and somewhat obliquely addressed the innovations: 'In a few recent cases [i.e. Mexico, Argentina, Brazil and some others], the Fund has found it necessary to establish a close link between commercial bank debt restructuring arrangements and Fund-supported programs by requesting an explicit commitment from the banks regarding their lending posture.' 81 Yet beyond this single statement there was no further presentation or analysis of these recent cases. Rather than suggesting how the Fund could find a legal basis for conditioning its own decisions to the deliberations of an informal club of bankers, the staff seized on a completely external argument: '. . . the Fund has assumed a more direct coordinating role than has normally been the case'. The reason for this new posture, it was added, was the need to preserve 'the stable functioning of the international financial system'. 82 Another paper, discussed at the same meeting, takes the same consequentialist line and alludes to arguments. 84 This is a line that the Fund would use again and again in the future in order to evade or rewrite its existing rules of engagement. 85 Yet this discursive strategy adopted in spring 1983 would never be challenged; until 1989, the Fund maintained that the Mexican experiment had just been an exceptional foray into unknown territory, it had not created any precedent. As the staff wrote in March 1983, 'it would not be appropriate to seek to formalize any general policy criteria concerning the precise role of the Fund in such situations'. Instead, the priority should be to act 'on an individual case-by-case basis, in close consultation with all parties concerned and keeping the Executive Directors fully informed of the possible alternative courses of action under consideration'. 86 Discretion and informality should thus rule, with the sole (implicit) caveat that core Member States might object at any point. Alternately, it was up to the Executive Directors to take the lead and offer guidance: 'It is recognized that this is an especially sensitive area and difficult decisions will be required regarding the extent and nature of the Fund's involvement.' 87 Yet the Executive Board refused to be that specific. 
When debating the Mexican programme, first in December 1982, then during the March 1983 policy review, the Executive Board endorsed the Fund's recent action and applauded its Managing Director. The US Executive Director commended both management and staff for addressing the recent crisis 'with remarkable care and in an admirable fashion'; the UK director emphasized the 'considerable skill' exhibited during crisis management. 88 One after the other, the Directors followed suit with only minor variations in their balance of compliments and implicit touchiness regarding the Managing Director's recent action. If ever there were an example of delegation to a multilateral agent being affirmed ex post, after it had taken the organization far away from its usual practice, this was it. But the Directors refused to be more specific. The US Director simply noted that it would be important in future cases for the Fund 'to conclude that there is a reasonable degree of certainty that the program is viable from the point of view of financing' and hence it 'must always have reasonable confidence' concerning the contributions of banks. 89 Other than that, it was merely remarked that a 'continued cautious and careful approach will be necessary', and that it would be important to avoid 'unnecessary rigid linkages that could put undue burdens or responsibilities on the Fund'. 90 At the end of the day, Jacques de Larosière concluded softly, 'the relationship between Fund-supported programs and the debt relief . . . should continue to be handled on a case-by-case basis'. 91 In other words, as long as the principals agreed, the Fund had a free hand: the Executive Directors trusted its competence and judgement and, of course, they relied on the 'continuous session' of the Board. Again, informality and expediency rested ultimately on the tight-leash monitoring structure that had been designed in 1944, not on legal rules. In the following years this position was consistently defended, even after the three-step decision procedure had become standard practice and systemic risk had receded. The strategy thus remained wrapped in euphemistic language. In its own writings, the Fund asked the banks only for 'reasonable assurances' in the framework of a 'concerted lending' strategy-that is, 'an organized and collective effort on the part of commercial banks, official creditors, and the Fund to secure commitments to close an ex-ante financing gap for a member country'. 92 Hence, the Fund would not 'condition' its own action on such commitments; it would instead 'have come to expect' them or have 'indicated the Fund's support' and 'encouraged the banks to commit themselves'; 93 or again it would 'advise banks to maintain a minimum level of new lending'. 94 Even a highly informative internal report, which was prepared jointly by the different area departments in 1983-1984, failed to ask why banks would commit themselves to a burden sharing accord. 95 Any suggestion of a coercive relation was carefully avoided, while the hint that this regime was remotely judicial in character was shunned. 96 This omission does not imply that the Fund's officials and their principals were delusional. They knew very well what they were doing, and, at least at the beginning of the crisis, they considered this procedure to be the best option on the table-they even applauded it. The point is that legal uncertainty caused serious problems of justification and legitimacy. 
Since the procedural rules could not be described comprehensively, institutionalization remained impossible: rules must first be put into words if they are to be endowed with any legal authority. If not, as here, the rules may work only as long as all involved parties have no alternative, no exit option.
The decline of a regime
Here was the real fault line in this experiment. On the one hand, the principle that adjustment costs should be shared equitably before the debtor country re-enters the market mimicked the expected outcome of a bankruptcy procedure, which, in practice, was replaced by a rule of mutual veto and some 'third-party services' provided by the Fund (a single forum, information, expertise, etc). On the other hand, the stark divergence of interests between debtors and creditors could be endogenized only as long as the IMF and its core Member States had enough leverage over them-exercising raw power (or its threat) in order to corral the parties and make sure that they reached agreement. The evolution of the power relationships that had initially made collective action possible was a main cause of this regime's demise after 1987. On the one hand, economic adjustment in debtor countries was fairly consistent after 1982, although the rewards in terms of growth and access to capital markets were modest. 97 Attempts to coordinate these countries politically then started to emerge, in particular the so-called Cartagena Group in Latin America. 98 On the other hand, the build-up of loan-loss reserves by banks and the expansion of secondary debt markets steadily increased each bank's room for manoeuvre. 99 The consequence was increasing difficulty in containing wayward strategies by individual players, small and big. Inevitably, this brought direct pressure on the Fund's self-binding commitment not to disburse loans until private financial accords were reached; once it had accepted an economic programme, the Fund could not wait indefinitely for banks to conclude their negotiation before actually lending. In its 1987 annual review of the debt strategy, the Fund recognized for the first time that on future occasions holdout could become problematic. Hence, it suggested that 'the Fund-supported program may become fully effective while relations between the country and the commercial banks remain unresolved [so that] external arrears to creditor banks would likely increase'. 100 In other words, the Fund might break with the jurisprudence it had adopted in 1970 and decide to 'lend into arrears'-that is, to ignore the travail of the private lenders. Yet as the Fund envisaged dismantling its overall strategy, it did not follow its usual line of approach based on expediency. Instead, it grandly stated that the banks' equivocations 'should not appear to prejudice the autonomy of the Fund in deciding its future relations with members, nor conflict with existing Executive Board guidelines'. 101 However, it did not admit that for five years it had done just the contrary. Finally, the decision in March 1989 to accept fully-fledged debt write-offs under the umbrella of the Brady Initiative came together with the (formal) announcement that the (informal) 1982 regime would be shelved. 102 As the Managing Director then said: 'It is clearly the wish of this Board that the Fund discharges in full its central responsibilities in the debt strategy, but without interference in negotiations between debtors and creditors.'
103 A public news release even formalized the ending of the 'no lending into arrears' doctrine, and hence that of the rule of mutual veto. 104
Conclusion
A detailed reading of the IMF Board Minutes and the Fund's Staff Memoranda offers a unique view of how, between 1971 and 1989, this multilateral organization first assembled and then operated a stable and predictable procedure for restructuring sovereign debts. Specifically, these archives document how coordination rules, flows of information, the Fund's role as a muscular broker and the reliance upon the old practice of conditionality enforcement allowed the Fund to address the generic problems that arose from the absence of an authority with a fully-fledged jurisdiction over sovereign debt restructurings. A further feature of this procedure is its remarkable informality, from two sides: the restructuring rules were not themselves institutionalized, while alternate, stronger 'hard-law' regulations were systematically ignored. This double informality exposed the strategy to continuing problems of justification and legitimacy. At the core of this multilateral procedure lay the old structure of the SBA that the Fund had gradually developed since the early 1950s as its standard instrument for lending under macroeconomic conditions to member countries. This original two-track 'exchange transaction', which had been conceived as neither a contract nor a treaty, proved remarkably adaptable as it was gradually opened to commercial banks: between 1982 and 1989 private creditors, debtor countries and the IMF coordinated within this 'transactional vehicle' whose governance was structured by the rule of mutual veto, though, as seen, power relationships were not absent. Once adhesion had been obtained, however, the Fund could mobilize its many policy tools and give credence to the commitments made by the parties, especially regarding the economic policies of the debtor country. This analysis of the procedure that governed debt restructuring during the 1980s further underlines the magnitude of the structural changes observed since then. Usually, analysts first point to how renegotiations were made easier by the relatively stable relationship between a group of dominant international banks and sovereign states. It appears, however, that the success of the regime of the 1980s also hinged on the capacity of national authorities to arm-twist local banks and to adjust their own domestic regulations when expedient. Successful pragmatism at the multilateral level thus rested indirectly on the national regimes of regulated finance typical of the post-World War II Keynesian settlements. By comparison, the present, twenty-first-century non-regime reflects two movements away from this past institutional context. On the one hand, financial deregulation at the domestic level has considerably reinforced private contractual rights, a trend that is associated with the emergence of the national courts of issuing countries as the key regulatory forum in debt matters. During the 1980s courts had no role in the debt game, while today their (domestic) constitutional mandate to interpret and enforce contractual rights has de facto displaced the Fund's mandate. The latter's parallel weakening is therefore the other striking feature of the present situation. Not only are its meeting rooms in Washington no longer where the action takes place; today, the Fund is fighting for a place at the table.
Not surprisingly, the link between economic adjustment and debt settlement that was at the core of the bargain over burden sharing has now become ad hoc, and any notion of comparability of treatment across countries has been essentially lost.
2019-05-18T13:04:14.485Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "2b6730e4691b45b1380db6c2642b0f2c5cec2b88", "oa_license": "CCBYNCSA", "oa_url": "https://hal-sciencespo.archives-ouvertes.fr/hal-03627457/file/sgard-how-the-imf-did-it-cmlj-11-1.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "10850dc97b5e4e499b1f125aa7fa00e47d0eea2c", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
268175108
pes2o/s2orc
v3-fos-license
Modelling gravitational-wave emission and detection on spandex using a high-speed camera

This paper demonstrates a variation of the rubber sheet experiment (Gravitational Waves Work Like This Drill on Spandex) for measuring the properties of modelled gravitational waves. Mechanically induced waves on the rubber sheet are observed by a high-speed camera and the slow-motion videos are analysed with the Tracker program. We describe the theoretical background and the execution of the measurement process. The measured displacements are suitable for modelling real gravitational-wave signals and determining properties of the sources.

Introduction
One hundred years after Albert Einstein's prediction of the existence of gravitational waves [1,2], in 2015 the LIGO-Virgo Collaboration made the first detection [3]. In 2017 three prominent members of the LIGO-Virgo Collaboration-Barry C Barish, Kip S Thorne and Rainer Weiss-were awarded the Nobel Prize in Physics for the discovery of gravitational waves. The topic has been the subject of much educational material but is not yet integrated into school curricula in most countries.

There have been several successful attempts at introducing general relativity [4][5][6] and gravitational-wave astronomy [7][8][9] at lower educational levels. Burko has formulated several ideas for introducing the subject in high school [10]. North developed a complete hands-on workshop on data analysis [11]. Douglas and Ingram developed a lab activity for building a Michelson interferometer [12,13].

The measurement experiment presented here was developed with two high school students, authors of this paper, and shows a new method to demonstrate the detection and data processing of gravitational waves using a student-centred model. The setup provides the opportunity to measure the amplitude and frequency of the modelled waves, two of the most important parameters in the detection of gravitational waves.

Theory
General relativity fundamentally breaks with Newton's conception of gravity. According to this theory, no gravitational force acts between bodies with mass; instead, the geometry of space-time is altered and warped. This means that temporal and spatial distances around masses are distorted, so that other bodies move in the distorted fabric of space-time. We see this as a gravitational force affecting the motion of the bodies.

The curvature of space-time is not static; it is constantly changing due to the movements of masses. Such perturbations, illustrated in figure 1, propagate through space-time as waves, called gravitational waves.
When passing between two points in space, gravitational waves contract and stretch the distance between the points. Being transverse waves in a plane perpendicular to the direction of travel, they alternately cause stretching in a given direction and contraction perpendicular to it. This effect is what can be detected by interferometers. The longer the arms of an interferometer, the greater the change in length caused by a gravitational wave. In the Laser Interferometer Gravitational-Wave Observatory (LIGO) the arms are 4 km long, and the laser beams bounce back and forth in them hundreds of times, increasing the effective arm length [14]. The instrument can thus measure changes in the relative length of the arms as small as a ten-thousandth of the size of an atomic nucleus! Such sensitivity is necessary, since the distorting effect of incoming waves is tiny even for the near-light-speed motions of bodies with masses comparable to that of the Sun. The precision of the measurement makes noise filtering extremely important, since any small ripple in the environment of the detectors can have a stronger effect than a passing gravitational wave. Sources of the waves that can potentially be detected today include, for example, spiralling compact binaries, shown in figure 2 (e.g. two black holes orbiting each other and then merging), non-rotationally symmetric neutron stars and high-energy explosions such as gamma-ray bursts or supernova explosions [15].

Setup and procedure
The rubber sheet is a tool often used to demonstrate the basic idea of general relativity. Although the model has its drawbacks and the rubber table itself does not reproduce the laws of general relativity well [16], it is a good demonstration of the concept and can be used to model phenomena such as gravitational waves. Some of the limitations of the model are: the speed limit of the spandex can be broken [17]; dissipation and edge reflections change the wave pattern; and motions on the spandex fabric are not identical to motions according to Newton's law of gravity or general relativity [16].

The rubber sheet represents space-time, and the waves generated on it represent gravitational waves. The material of the sheet should be flexible; we used spandex with 10% elastane on a table with a diameter of 1.4 m. The table itself was bent from 2 cm diameter Pex-Al-Pex tubing, but different methods of assembly can be found on the internet.

The source of the waves is a drill head consisting of two rollers 10 cm apart attached to a wooden plate, shown in figure 3. Rotated with the drill, this models a compact binary in the constant phase before the merger. The drill was run at 1200 RPM with the sheet lightly pressed against the drill bit. The resulting waves, shown in figure 4, are visible to the naked eye. Choosing the right rotational frequency is essential for a successful experiment. If the speed is too low, waves are not generated, while a rotation that is too fast may rip the fibre. The tightness of the sheet also has a significant effect on the waves. In our set-up, the ideal tightness can be described as that of a fitted sheet stretched as usual.

The set-up can be improved with a bigger table and some reflection damping to make the pattern and the data clearer. Our table was built to be mobile so that we can show it at different locations; a more stable (e.g. wooden or metal) frame makes the measurements easier and more consistent. More details about the set-up can be found in the video of Steve Mould [18].
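As a quick check of the modelled source, the relevant frequencies can be worked out from the drill speed given above. The numbers below are an illustrative back-of-the-envelope calculation based only on values quoted in this paper (1200 RPM, two rollers, and the 13.6 cm wavelength reported in the measurement section):

f_rot = 1200 / 60 s = 20 Hz
f_wave = 2 * f_rot = 40 Hz   (each half-turn of the two-roller head produces a crest, analogous to a binary emitting at twice its orbital frequency)
v = f_wave * lambda = 40 Hz * 0.136 m = 5.4 m/s   (estimated propagation speed on the sheet)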
The resulting waves were analysed in slow motion (see supplementary material 1). For the measurement a high-speed camera is needed. According to the available settings of our camera, we recorded at 500 and 1000 fps. It is worth choosing the higher frame rate, even though it increases the processing time. The waves need to be measured in some way. Since the amplitude is small, it is hard to measure the waves from a side view by observing their height. Therefore, we set the camera in a top-view position and used the principle that an object closer to the camera appears larger in the image. Two points were drawn on the sheet with a marker at a distance of L = 8 cm from each other; this is the object size. This 'detector' was 18 cm away from the centre of mass of the source. As a wave crest reaches these two points, they move closer to the camera, so the apparent size of the distance in the image increases. As a wave trough reaches them, they move away from the camera, and the apparent distance between the two points is at a minimum in the picture. The geometry of this stage and of the one at rest is illustrated in figure 5. The apparent distance i(t) is determined by the quotient of the object size and the size of the field of view, f_0 at rest and f(t) in motion. Thus

i_0 = L / f_0,    (1)
i(t) = L / f(t).    (2)

Because of the similarity of two triangles,

f(t) / f_0 = z(t) / z_0,    (3)

where z(t) is the distance between the camera and the rubber sheet and z_0 is its value at rest. Written into equation (3), equations (1) and (2) give

i(t) / i_0 = z_0 / z(t),    (4)

that is,

z(t) = z_0 i_0 / i(t).    (5)

The displacement is the difference of the instantaneous and original camera distance:

d(t) = z_0 - z(t).    (6)

Compared with equation (5) and doing some transformation,

d(t) = z_0 (1 - i_0 / i(t)).    (7)

This means that by measuring the apparent distances on the frames and knowing the original camera distance, the instantaneous displacement of the rubber sheet at the measuring points can be determined. Measuring the change in the distance between the two points models the length change of one arm of the laser interferometer.

It is important to clarify to the students that the deformation of the rubber sheet is vertical, while the measured length change is horizontal and therefore only a virtual change. It is also worth considering introducing only the apparent-distance change, as a fairer analogy to gravitational waves, without explaining how it is caused by the change of the camera distance; this also makes the theoretical explanation easier. The downside of this approach is that the process becomes less transparent, and we are not measuring the actual amplitude of the tangible wave.

The frequency can then easily be read off the displacement-time graph. Of course, we know the speed of the drill, but students can measure this as well. Figure 6 shows the experimental layout in real life.

The propagation speed was not measured in this set-up; as gravitational waves propagate at the speed of light, this would only be a theoretical check and no new information is expected from the model. That said, it can easily be measured by marking two points in the radial direction. The propagation velocity of a wave crest can be determined from the time it takes the crest to reach the two points in succession, given the distance between the points. Knowing the frequency, the wavelength can be calculated as well. The wavelength is also measurable from a single frame; in our case, it was 13.6 cm.
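As an illustration of equation (7), a minimal Python sketch of the conversion from apparent pixel distances to sheet displacement is given below. The camera distance z0 = 1 m and the 9-pixel change are assumed values chosen only for the example; the rest-frame marker separation of 966 pixels is taken from the sensitivity discussion further below.

# Minimal sketch of equation (7): converting the apparent marker distance i(t),
# measured in pixels on each frame, into the displacement d(t) of the sheet.
# z0 (camera-to-sheet distance at rest) and the example pixel values are assumptions.

z0 = 1.0      # assumed camera distance at rest, in metres
i0 = 966.0    # apparent marker distance at rest, in pixels (value quoted in the paper)

def displacement(i_t):
    """d(t) = z0 * (1 - i0 / i(t)); positive when a crest lifts the markers towards the camera."""
    return z0 * (1.0 - i0 / i_t)

# Example: a crest increases the apparent distance by 9 pixels.
print(displacement(i0 + 9.0))   # about 0.0092 m, i.e. roughly 9 mm for z0 = 1 m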
When choosing the distance between the two measurement points, the wavelength may be a good reference. The LIGO and Virgo detectors are optimized for frequencies between 10 and 10 000 Hz, which corresponds to wavelengths between 30 and 30 000 km [19]. The effective length of the LIGO detector arms is 1600 km, so the ratio of our 8 cm separation to the 13.6 cm wavelength is comparable to that of the real gravitational-wave detections. A smaller separation may be considered, but it would require a better image resolution than we could reach.

Source localisation in LIGO data needs at least two detectors (Hanford and Livingston, separated by 3002 km). If the wave reaches one of the detectors earlier, that means the source is closer to that detector. The model allows a very similar direction determination as well: if the wave reaches one of the measurement points earlier, the source is closer to that point. Nevertheless, this breaks the analogy, since now one point represents one detector, not one end of a detector arm.

Data analysis
The videos were analysed using the Tracker software. By selecting the two measurement points, their positions can be automatically tracked to obtain the x and y coordinates of the two points as a function of time in tabular form. These were exported and analysed in Excel. The distance between the two points can be determined using simple coordinate geometry.

If the wave crest does not reach the two points simultaneously, one of the points' data series must be shifted by as many frames as the delay. We established the shift from the video; it was typically between 0 and 2 frames. Our time shifting is analogous to the way the two (or more) detector datasets are shown time-shifted on the same diagram (see figure 8). The shift can be avoided with some attention if needed, or by choosing a smaller value for L. For the dataset we show in this study, no time shifting was needed.

From the resulting distance data, the variation of the sheet deflection with time is obtained using the previous formula. In a measurement, a time interval of at most one second is analysed, which at 1000 fps means one thousand data points. At 1200 RPM, this means 20 complete rotations a second (since both rollers of the head create a surge, this results in 40 complete wave periods).

Sensitivity
A theoretical sensitivity of the experiment can be determined from the pixel size of L, which was 966 pixels in our case. If we assume that we can measure a displacement of a measurement point as small as 1 pixel, equation (7) gives the corresponding minimum detectable displacement. The largest length change measured was about 18 pixels, which means a 9 pixel displacement of one point.

Results
Figure 7 shows the result of a measurement using the methods described. The data clearly resemble a sine curve. After taking many measurements and eliminating various possibilities for error, we have concluded that there will always be anomalies in the data. For example, the alternating smaller and larger minima are likely caused by unintended eccentric orbits [20]. If we look at the real gravitational-wave detections in figure 8, we also find large variations due to different sources of noise. The pale yellow and blue lines show the theoretically predicted signal shape of a spiralling binary, while the darker orange and blue ones show the first real detection by LIGO's Hanford and Livingston Observatories.
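To make the analysis step reproducible outside Excel, a short Python sketch of the workflow described above is given below. The CSV layout (columns t, x1, y1, x2, y2) and the camera distance z0 are assumptions made for illustration; Tracker's actual export format and the rest distance should be adapted to the specific measurement.

# Sketch of the data analysis described above, assuming the Tracker positions of the two
# markers were exported to a CSV file with hypothetical columns t, x1, y1, x2, y2
# (time in seconds, coordinates in pixels). Adjust names and units to the actual export.
import numpy as np

data = np.genfromtxt("markers.csv", delimiter=",", names=True)
t = data["t"]
# Apparent distance i(t) between the two markers on each frame, in pixels.
i_t = np.hypot(data["x2"] - data["x1"], data["y2"] - data["y1"])

i0 = i_t[0]     # apparent distance at rest (first frame assumed undisturbed)
z0 = 1.0        # assumed camera-to-sheet distance at rest, in metres
disp = z0 * (1.0 - i0 / i_t)     # displacement from equation (7)

# Estimate the wave frequency from the dominant peak of the spectrum;
# at 1200 RPM and two rollers it should come out near 40 Hz.
fps = 1.0 / np.mean(np.diff(t))
spectrum = np.abs(np.fft.rfft(disp - disp.mean()))
freqs = np.fft.rfftfreq(len(disp), d=1.0 / fps)
print("wave frequency ~", freqs[spectrum.argmax()], "Hz")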
We made further measurements to explore the possibilities of the tool for modelling parameter estimation (e.g. source size, roller separation, source distance, source type, asymmetry). The data is currently under evaluation and may be the subject of a forthcoming article focussing on parameter estimation and a student workshop based on this process.

Conclusion
The present paper demonstrates that a simple measurement experiment using elementary mathematics is suitable for modelling the process of gravitational-wave detection. The set-up is suitable for determining the frequency, amplitude and propagation velocity of the waves generated. The results and the data processing can be compared with real processes, bringing the field closer to the students.

András Molnár is a PhD student at the Eötvös Loránd University in Budapest and has taught physics in high school since 2016. He is a member of the LIGO-Virgo-KAGRA Collaboration, which made the first direct detection of gravitational waves in 2015. He is interested in teaching gravitational-wave astronomy at high school level and researching ways to promote physics.

Márk Czura is a student at the Tamási Áron High School in Budapest. He is interested in astrophysics and particle physics.

Bence A. Dercsényi is a student at the Városmajori High School in Budapest. He is a future astrophysics university student.

Figure 2. The theoretical waveform of a gravitational wave generated by a binary black hole. Reproduced from [3]. CC BY 3.0.
Figure 3. The drill head and the drill used in the experiment.
Figure 4. The wave pattern generated by the drill. The reflecting waves visibly interfere with the original waves, making the middle region the best choice for measurements.
Figure 5. Because of the proportionality of the sides in the similar triangles, the camera distance determines the apparent distance.
Figure 6. The rubber sheet with the markers and the camera.
Figure 7. Displacements determined according to the measurements on the rubber sheet. See raw data in supplementary material 2.
Figure 8. The first observed gravitational-wave signal by each LIGO detector. Credit: Caltech/MIT/LIGO Lab.
2024-03-03T16:35:00.155Z
2024-02-28T00:00:00.000
{ "year": 2024, "sha1": "f087aada0b917dd47bd4e2646237bbb0b96ce58e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1361-6552/ad2810", "oa_status": "HYBRID", "pdf_src": "IOP", "pdf_hash": "de645c0f8831cdb66aa836a75dd435ddb21666c9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }