Possibilities of using alizarin and neutral red indicators to determine the neutralized layer of concrete in the field . The article concerns the prospects of using the phenolphthalein test solution in the practice of surveying concrete and reinforced concrete building structures in situ. The problem addressed by the study is the limited applicability of the phenolphthalein test solution with respect to the pH of the tested concrete (the indicator works only at 8 ≤ pH ≤ 10). There is a need to expand the range of application of the phenolphthalein test solution. In order to determine the different pH zones of concrete, the method has to be modernized by adding other indicators to phenolphthalein. The standard phenolphthalein test solution does not allow a high degree of accuracy in determining the boundary zones of concrete that are most sensitive to carbonation. We proposed to modernize the phenolphthalein test solution by using the additional acid-base indicators alizarin and neutral red. We present the results of experiments on measuring the neutralized surface layer of concrete with alcoholic solutions of the acid-base indicators alizarin and neutral red on concrete samples of different ages and sizes. As a result, one of the proposed indicators (neutral red) makes it possible to expand the application range of the phenolphthalein test solution. We compared the results obtained by the traditional and the modified methods; according to this comparison, the proposed modified method is the more accurate one. Introduction Areas of damage to the protective concrete layer are revealed during the inspection of reinforced concrete building structures more than 10-15 years old operated in conditions of open air and atmospheric humidity [1,2]. Having less adhesion, these areas allow the penetration of atmospheric moisture and, consequently, the corrosion of the reinforcement of unprotected concrete. The hydrogen potential (pH) value significantly influences the passivation properties of the protective concrete layer. According to various studies, the pH of concrete at loss of passivation is 8 < pH < 9.5 [2][3][4][5]. Loss of passivation of steel reinforcement in concrete begins at pH < 11 [4]. A neutralized concrete layer is formed by carbon dioxide and water, which are the negative agents of carbonation and wetting. The process is long-term (it takes decades) and involves the following reactions: Ca(OH)2 + CO2 → CaCO3↓ + H2O (at pH > 11), (1) CaCO3 + CO2 + H2O → Ca(HCO3)2 (at pH > 8). (2) By reaction (1), the resulting calcium carbonate is a low-solubility compound; as a result, it precipitates on the concrete surface and in the pores in the form of salt [3,5]. As a consequence, the alkaline environment of the concrete is neutralized. Excess carbon dioxide in a humid environment, by reaction (2), forms calcium hydrogen carbonate, a soluble compound that can be washed out by ambient moisture and removed by atmospheric water [2][3][4][5]. Carbonation decreases the durability of reinforced concrete structures, especially of supporting and bending elements, due to the decrease of concrete pH. Carbonation also degrades the passivating properties of concrete towards the steel reinforcement and reduces the cross-sectional area of the reinforcement as a result of the formation of corrosion products [3,5]. This can be clearly seen when examining structures operated in a humid environment for a long time (Fig. 1a,b) [4]. Figure 1 clearly shows the failure of the concrete protective layer as a result of concrete carbonation; the horizontal prestressed steel reinforcement is bared and covered with corrosion products.
Nowadays, the phenolphthalein test solution is the most widespread method for detecting carbonation and determining its depth in situ [3]. The single working pH-transition colouring interval of the phenolphthalein solution, as well as the absence of consensus on the concrete pH at which concrete begins to lose its passivation properties relative to the steel reinforcement, are its significant disadvantages [4,5]. In spite of the concrete carbonation studies, experts have not reached a consensus on this issue [6][7][8]. The urgent tasks of the study are the modernization of the phenolphthalein test solution, increasing its ergonomic properties, and creating alternative methods of detecting carbonation in situ, along with increasing accuracy and cost-effectiveness when examining building structures. The availability of alternative methods will enhance the capacity of survey specialists in the field. The most significant disadvantages of the phenolphthalein test solution appear in the detection of carbonation in the initial and intermediate stages of the process [4,5], as shown by studies of carbonation at pH > 10 and pH < 8 [4,5,9]. The operating range of the phenolphthalein solution is approximately 8 ≤ pH ≤ 10.1, which initially defines the limits of its application. Thus, in the pH > 10 and pH < 8 ranges the method is not valid: there is no colour change in the indicator (colourless staining). The purpose of the study is to modify the phenolphthalein test method by using additional solutions of acid-base indicators in order to expand its boundaries of application, accuracy and ergonomics. The objective of the study is the theoretical and practical substantiation of the effectiveness of introducing solutions of neutral red and alizarin, by comparing the technical and economic indicators and the accuracy of the modernized method with those of the typical one under given conditions. Materials and methods The objects of the study are 5 concrete cubic samples of 10x10x10 cm, concrete strength grade B30, water resistance grade W6 (Fig. 2, a), and 10 fine-grain concrete cubic samples of 3x3x3 cm of Portland cement CEM I 42.5 with a water-cement ratio of 0.3 (Fig. 2, b). The deviation in the geometric dimensions of the samples did not exceed ±1.5 mm; the age of the 10x10x10 cm samples was 3-5 years and that of the 3x3x3 cm samples did not exceed 180 days. The different sample ages were chosen in order to determine the performance of the studied acid-base indicator solutions more accurately. During the experimental studies, the preparation of the acid-base indicator solutions and the sample preparation, the following instruments and equipment were used: dielcometric moisture meter Testo 606-1, portable electrometric pH meter Testo 206-pH1 (certificate of State Register of the Russian Federation DE.C.31.010.A#43924/1), digital SLR camera with fast shooting capability Canon 1200D, purified and distilled water carbonator Oursson OS1000SK, and battery-powered rotary hammer DeWALT DCH133M1. A newly developed technique was used to create artificial conditions for the neutralisation of "free" calcium hydroxide (approximating the conditions of real structural operation) in order to create a concrete carbonation process and to identify areas of damaged concrete in laboratory conditions [5]. According to this technique, the laboratory samples were pre-moistened with distilled water and immersed into an open plastic vessel. The plastic vessel was then placed in a closed, sealed, larger vessel.
The active corrosive medium, distilled (purified) water saturated with carbon dioxide (carbonic acid solution), was in the vessel. The samples were kept in carbon dioxide for 28 days, after which they were removed and placed directly into a carbonic acid solution for 120 days. The carbonic acid solution was changed every 2 days to maintain the correct pH level of the solution (the solution itself was also acidified), and the moisture content of the concrete samples was checked and controlled daily. Three samples of 10x10x10 cm and six samples of 3x3x3 cm were studied in the aggressive environment. At normal atmospheric pressure, carbon dioxide was released from the carbonic acid solution; its molecules were adsorbed into the pores of the concrete samples, and a slow process of neutralisation of "free" calcium hydroxide began in the humid environment by reactions (1) and (2). The neutralisation of the alkaline medium of the concrete was accelerated after the placement of the samples into the corrosive medium [15,16]. Two samples of 10x10x10 cm and four samples of 3x3x3 cm were kept in the open air without access to the carbonized corrosive medium, to compare the values of the actual depth of the carbonized (neutralized) concrete layer. The samples were dried completely after they had been in the aggressive carbon dioxide environment for the specified period. In order to guarantee the dryness of the samples and to control their moisture, a portable moisture meter operating by the dielcometric method was used [17,18]. A portable pH meter operating on the principle of electrometric pH determination was used to measure the pH value of the aggressive medium (carbonic acid solution) [19,20]. The dried samples were pre-treated by line marking (Fig. 3, a). The 3x3x3 cm samples were split along the median line (Fig. 3, b), and the actual depth of the carbonized layer was determined with acid-base indicator solutions at the nick points (Fig. 3, c, d). The 10x10x10 cm samples were pre-treated by line marking, too. At the intersection of the centrelines, an inspection hole with a diameter of d = 24 mm and a depth of 36 mm was drilled (Fig. 4, 5). The holes were thoroughly cleaned of dust (with distilled water). The acid-base indicator solutions were applied to the cleaned holes. The characteristic colouring of the indicator solution at the carbonized layer locations was recorded with a camera. The alizarin and neutral red indicator solutions were prepared in accordance with the requirements for the preparation of alizarin-containing and other indicators according to the national standards used for the determination of the pH of various media [21][22][23]. 0.1% and 1% solutions of alizarin and neutral red in ethyl alcohol were used. Pure (non-technical) alcohol was used to prepare the solutions in order to avoid possible negative side effects from impurities of the technical product. The 0.1% and 1% solutions of alizarin/neutral red in ethyl alcohol were also applied to the concrete surface at the nicks after the first application of the phenolphthalein solution. Main part The alcoholic alizarin solution has 3 colour transition intervals. It allows the boundaries of zones of high alkalinity (12 ≥ pH ≥ 10.1) and those of neutralised and acidified concrete (pH < 7.5) to be defined. According to theoretical data, the alizarin colour tint is darker, so the areas of carbonised concrete should be visible when the 2 indicators are applied alternately to the surface.
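To make the combined reading of the three indicators concrete, the sketch below encodes the colour-transition intervals stated in this paper (phenolphthalein 8 ≤ pH ≤ 10.1, neutral red 6.8 ≤ pH < 8.0, alizarin pH < 7.5 and 10.1 ≤ pH ≤ 12) and returns the pH bracket implied by which indicator develops colour. It is an illustrative helper, not part of the authors' procedure; the function name, argument names and return labels are ours.

```python
# Illustrative sketch: bracket the pH of a concrete surface from the colour
# response of the three acid-base indicators discussed in the paper. The
# transition intervals are taken from the text; the helper itself is hypothetical.

def ph_bracket(phenolphthalein_colored: bool,
               neutral_red_colored: bool,
               alizarin_colored: bool) -> str:
    """Return a coarse pH bracket implied by the indicator responses.

    Phenolphthalein develops colour at roughly 8 <= pH <= 10.1,
    neutral red changes colour at roughly 6.8 <= pH < 8.0,
    alizarin responds below pH 7.5 and in the 10.1..12 interval.
    """
    if phenolphthalein_colored:
        return "8 <= pH <= 10.1: weakly carbonated / boundary zone"
    if neutral_red_colored:
        return "6.8 <= pH < 8.0: strongly carbonated (neutralized) concrete"
    if alizarin_colored:
        return "pH < 7.5 or 10.1 <= pH <= 12: acidified or highly alkaline zone"
    return "outside all indicator ranges: no conclusion from these indicators"

# Example: phenolphthalein colourless, neutral red coloured -> carbonated zone.
print(ph_bracket(False, True, False))
```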
Consequently, the use of this indicator solution should in theory increase the versatility of the phenolphthalein test solution. The alcoholic solution of neutral red has one colour transition interval, but towards lower concrete pH values, 6.8 ≤ pH < 8.0. The change of the indicator colouration is very distinct, which allows the condition of the concrete surface to be monitored with ease (Fig. 6). The actual depth of carbonation (neutralisation) was determined with acid-base indicator solutions by the visible areas of characteristic staining (Fig. 7). The actual carbonation depth of several concrete samples was also determined using the previously known phenolphthalein method [24][25][26] in order to compare the accuracy and limits of the methods. A 1% solution of phenolphthalein in ethyl alcohol was used. A solution of the additional indicator (neutral red) makes it possible to expand the range of applicability of the phenolphthalein solution, since its colour transition occurs at 6.8 ≤ pH < 8.0. As a result, the ergonomics (convenience of use) of the phenolphthalein test method in the field increases, since the zones of highly carbonized concrete are clearly and contrastingly defined (yellow colour against a crimson background) compared to zones of normal concrete. The experimental results lead to the following conclusions: the modernized phenolphthalein test solution has an increased accuracy (in some cases by more than 8-10%) of detecting areas of carbonised concrete compared with the conventional one; the neutral red solution helps to extend the use of the phenolphthalein test solution to determine areas of concrete carbonation at 6 < pH < 10.5. Experiments showed that it is difficult to use alizarin in addition to the phenolphthalein solution because of the indistinct colouring of the solution (Fig. 7): the detection of strongly alkaline concrete areas is inaccurate (the phenolphthalein colouring overlaps the alizarin colouring). Conclusions Theoretically, both tested indicators were identified as suitable for extending the scope of the phenolphthalein test solution, but in practice the colouring of the alizarin solution proved to be rather dim in both alkaline and neutralized concrete areas. To summarize the experimental results, the phenolphthalein test solution has several disadvantages that may hinder the work of surveyors in situ. There is a need to continue the search for indicators to replace phenolphthalein for these purposes.
A Coupling between Integral Equations and On-Surface Radiation Conditions for Diffraction Problems by Non Convex Scatterers : The aim of this paper is to introduce an original coupling procedure between surface integral equation formulations and on-surface radiation condition (OSRC) methods for solving two-dimensional scattering problems for non convex structures. The key point is that the use of the OSRC introduces a sparse block in the surface operator representation of the wave field, while the integral part leads to an improved accuracy of the OSRC method in the non convex part of the scattering structure. The procedure is given for both the Dirichlet and Neumann scattering problems. Some numerical simulations show the improvement induced by the coupling method. Introduction During the last decades, time-harmonic wave propagation has proved to be central in many key engineering and technological developments based, e.g., on acoustic, electromagnetic or elastic mechanisms. When one wants to simulate the associated boundary-value problem, one of the difficulties is related to the fact that the solution which has to be computed is set in an exterior unbounded domain Ω + defined as the complement of a finite scatterer Ω − . Therefore, to use a standard numerical method, it is necessary to bound the computational domain. One well-known possibility is to use an absorbing boundary condition [1][2][3][4][5][6][7] or a Perfectly Matched Layer [8][9][10][11][12][13] to bound the domain and then solve the resulting problem by the finite element method [14][15][16]. Another widely used alternative is to rewrite equivalently the initial exterior PDE problem as an integral equation over the finite surface Γ of the scatterer Ω − based on the Green's function [17][18][19][20][21][22][23][24][25][26][27][28]. This has the advantage of reducing the dimension of the problem by one. One of the main drawbacks is that, unlike the initial problem which involves partial differential operators, the integral equation is defined by construction through a nonlocal pseudodifferential operator. When a discretization technique such as the boundary element method is then applied, the corresponding discrete version of the integral equation leads to the numerical solution of a highly indefinite complex-valued dense linear system which is particularly difficult to tackle in the high-frequency regime. Many technical aspects are then necessary to make it work correctly for some applications, for example to reduce the storage and to accelerate the solution of the linear system [17,18,29]. Various numerical methods were further developed to propose other ways to solve, at least approximately, the initial scattering problem. One of them is the On-Surface Radiation Condition (OSRC) method introduced in [30] and further developed by many authors (see, e.g., [31]). Without giving too many details now, the OSRC approach also leads to an equation posed on the surface of the scatterer, but defined through local surface partial differential operators. Therefore, after the application of the boundary element method, the linear system is highly sparse, yielding an efficient way to solve the scattering problem. The price to pay is that the method can be considered as a numerical asymptotic method, and therefore can lose some accuracy in some cases, in particular when the scatterer is not convex and includes some concave parts [31] where multiple scattering effects are present.
Considering the problem of getting better accuracy for OSRC based techniques, a coupled formulation between the OSRC method and a two-dimensional finite element technique was proposed in [32] for scattering by single non convex obstacles. For the numerical simulation of multiple scattering problems (then leading to non convex global structures), coupled OSRC/integral equation formulations were presented in [33,34], where the shapes of the single obstacles are convex [33] or slightly non convex [34]. The aim of the present paper is to contribute to the improvement of the OSRC method for non convex single obstacles by proposing a direct simple coupling between the OSRC and the surface integral equation method. Unlike [32], the presented method is naturally set only on the boundary of the scatterer and does not suffer from the pollution error related to the finite element method. The plan of the paper is the following: In Section 2, we introduce the two-dimensional Dirichlet scattering problem and the basic information about the integral equation representations and their numerical approximation. Section 3 presents the notion of OSRC and its numerical discretization after writing the variational formulation. Section 4 develops the original coupling procedure for the Dirichlet problem. In particular, we explain how to formulate the problem and to improve its convergence properties if it is used in a Krylov solver, based on operator preconditioning. The coupling is validated on a simple two-dimensional example. The extension to the Neumann problem is shortly given in Section 5. Finally, Section 6 is a conclusion. The Two-Dimensional Scattering Problem Let us consider Ω − as a scatterer with polygonal boundary Γ := ∂Ω − . The homogeneous isotropic exterior domain of propagation is denoted by Ω + = R 2 \ Ω − . For the sake of conciseness in the presentation, we first assume that the scatterer is acoustically sound-soft (i.e., Dirichlet boundary condition). Nevertheless, the case of a sound-hard scattering problem (Neumann boundary condition) is also treated shortly in Section 5. We now consider a time-harmonic incident plane wave u inc (x) = e ikθ inc ·x (with x = (x 1 , x 2 ) ∈ R 2 ) illuminating Ω − , with an incidence direction θ inc = (cos(θ inc ), sin(θ inc )), for a time dependence e −iωt , setting ω as the wave pulsation and k as the wavenumber. The sound-soft scattering of u inc by Ω − leads to the computation of the scattered wavefield u ∈ C 2 (Ω + ) ∩ C 0 (Ω + ) as the solution to the boundary-value problem (∆ + k 2 )u = 0 in Ω + , u = −u inc on Γ, lim ||x||→+∞ ||x|| 1/2 (∇u · x/||x|| − iku) = 0. (1) We designate by (∆ + k 2 ) the Helmholtz operator, where ∆ = ∂ 2 x 1 + ∂ 2 x 2 is the Laplacian. The gradient operator is ∇ and ||x|| = √(x · x), where x · y is the scalar product of two vectors x and y of R 2 . The last equation of (1) is the well-known Sommerfeld radiation condition at infinity that ensures the uniqueness of the scattered wave field u. The outwardly directed unit normal vector to Ω − is n. A schematic representation of the problem is given in Figure 1. The existence and uniqueness of the solution to this BVP is well-known and detailed in [20], Theorem 2.12. From standard arguments connecting formulations in classical and Sobolev spaces ([20], p. 107), u ∈ H 1 loc (Ω + ) and u is C ∞ up to Γ, excluding the corners of the polygonal curves ([20], Lemma 2.35). Following [20], Theorem 2.12, it can also be proved that ∂u/∂n ∈ L 2 (Γ).
Integral Operators for Scattering Let us define G as the two-dimensional free-space Green's kernel, G(x, y) = (i/4) H 0 (1) (k||x − y||), where H 0 (1) denotes the first-kind Hankel function of order zero. Proposition 1. If v ∈ C 2 (Ω + ) ∩ H 1 loc (Ω + ) is a solution to the Helmholtz equation in an unbounded domain Ω + which also satisfies the Sommerfeld radiation condition, then v admits the integral representation (3) over Γ in terms of its Cauchy data. In addition, let us assume that v − ∈ C 2 (Ω − ) ∩ H 1 (Ω − ) is a solution to the Helmholtz equation in the bounded domain Ω − . One can then write the analogous interior representation (4). Let us now introduce the volume single- and double-layer integral operators, respectively denoted by L and M, defined for ρ ∈ L 2 (Γ) by (5). The wave fields v and v − (see (3) and (4)) can be expressed through these operators. Furthermore, the single- and double-layer integral operators provide outgoing solutions to the Helmholtz equation [20]. Direct Boundary Integral Equations for the Dirichlet Problem The aim of this section is to provide, without any detail, the standard integral equation formulations for solving the two-dimensional scattering problem with Dirichlet boundary condition that will be used later. More details can be found in [17,20,28] for the derivation and properties of these integral equations (well-posedness, existence of resonant modes, ...). Let us introduce the usual boundary integral operators. Then, based on the expressions of the trace and normal derivative trace of the volume single- and double-layer potentials (see, e.g., [20] for further details), a first formulation is based on the trace of the single-layer operator, L ρ = −u inc on Γ, (6) where the unknown density ρ is an element of L 2 (Γ). The equation is well-posed and equivalent to the exterior scattering problem (1) as soon as k is not an irregular interior frequency of the associated Dirichlet boundary-value problem [17,20,28]. This integral equation is called the Electric Field Integral Equation (EFIE) in electromagnetism. In the sequel, this formulation will be denoted by Single-Layer Integral Equation (SLIE). In the case of a closed boundary Γ, which is the situation in the paper, it can be proved that the spurious internal modes do not radiate. Therefore, there is no pollution in the far-field computation, which justifies that the SLIE can be considered as a reference solution. In addition, for an open surface Γ, the SLIE is the only possible integral equation that can be written. Another surface integral formulation is given by (8) (see [20] for the functional framework associated with the integral equations). It is also well-posed and equivalent to the exterior scattering problem (1) if k is not an interior Neumann resonance [17,28]. This formulation is often designated as the Magnetic Field Integral Equation (MFIE). Nevertheless, this integral equation is not recommended in practice since the spurious modes radiate and introduce some errors when computing the far-field pattern. To avoid the interior resonance problem, Burton and Miller [17,19,28] proposed to rather use a linear combination of the EFIE and MFIE. If δ is a real-valued parameter such that 0 < δ < 1 and if η is a complex parameter with ℑ(η) ≠ 0, where ℑ(η) is the imaginary part of η, then one gets the Combined Field Integral Equation (CFIE) [17,23,28]. This integral equation is well-posed for any wavenumber k but can only be applied to closed surfaces.
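For reference, the layer potentials L and M in (5) take the standard form below. This is a reconstruction from the usual definitions, since the display is lost in this copy; the sign convention follows the outward normal n introduced earlier.

```latex
(L\rho)(x) = \int_{\Gamma} G(x,y)\,\rho(y)\,d\Gamma(y),
\qquad
(M\rho)(x) = \int_{\Gamma} \frac{\partial G}{\partial n(y)}(x,y)\,\rho(y)\,d\Gamma(y),
\qquad x \in \mathbb{R}^2 \setminus \Gamma,
\quad G(x,y) = \frac{i}{4} H_0^{(1)}(k\|x-y\|).
```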
An important point is that all these integral equations are based on the single-layer representation (9), where the unknown surface field ρ is the physical quantity defined by the jump (set as the difference between the interior and exterior traces) of the normal derivative trace. To compute the far-field pattern, let us recall that we have u = L ρ + M λ, where ρ and λ are two unknown densities. In the polar coordinates system (r, θ), the use of asymptotic expansions when r → +∞ leads to relation (10) [22] for all θ, where a L and a M are the radiated far-fields for the single- and double-layer potentials, respectively, defined for any angle θ of [0, 2π] by (11), with θ := (cos(θ), sin(θ)). In addition, the Sonar Cross Section (SCS) (in dB) is defined from the far-field pattern. When using the single-layer representation (9), only a L is needed while a M is set to zero. In the paper, we will focus on the SLIE. Numerical Approximation To numerically solve Equation (6) (or (8)), we first introduce a polygonal interpolating surface Γ h which approximates Γ. The triangulation of Γ h is built by using n K linear segments K j of size h. Therefore, we have Γ h = ∪ j=1..n K K j . In the numerical examples, we take about 10 elements per wavelength, which is enough to get a suitable accuracy for the Boundary Element Method (BEM). The notation P ℓ designates the space of complex-valued polynomials of order ℓ. In (6) (or (8)), no derivative operator is applied to the unknown ρ. However, as seen below, the OSRC method and the integral equation technique for the Neumann problem introduce some tangential derivatives that apply to the surface fields. For this reason, we propose to use the linear BEM all along the paper, where the conformal finite element space V h of P 1 piecewise continuous functions is defined by (12). For the Dirichlet problem, a P 0 (constant per segment) BEM could also be applied. For the numerical approximation, we naturally use the weak form of the integral Equation (6), which leads to [27]: for all q ∈ L 2 (Γ), ∫ Γ ∫ Γ G(x, y) ρ(y) q(x) dΓ(y) dΓ(x) = −∫ Γ u inc (x) q(x) dΓ(x). (13) Let us consider now that the BEM is applied. To compute the elementary interaction between two segments K 1 and K 2 of Γ h , a standard semi-analytical integration formula is used. This means that there is a first outer integration in (13) with respect to x on K 1 , based on a two-point Gauss-Legendre quadrature. Next, for the inner integral with respect to y on K 2 , a two-point Gauss-Legendre quadrature is again applied if K 1 ≠ K 2 , while an exact integration formula [35] is used when K 1 = K 2 . By using the interpolating surface Γ h and the linear approximations of both ρ and q in V h , the discrete form of (13) yields the linear system L ρ = −M u inc , (14) where L is the complex-valued matrix associated with the single-layer operator and M is the surface mass matrix. If n P is the number of points of the curve Γ h , then all the matrices are elements of the space M n P (C) of complex-valued matrices of size n P × n P . Indeed, assuming that Γ is a closed boundary, the number of degrees of freedom of the linear boundary element method, i.e., the number of points n P := dim(V h ), is equal to the number of segments: n P = n K . In addition, the unknown complex-valued nodal vector ρ and the nodal incident vector u inc are in C n P . For computing the coefficients of L, some semi-analytical quadrature formulas are used to integrate the kernel singularity. If one rather uses the BWIE (8), the discrete form leads to (15), where ∂ n u inc is the nodal complex-valued vector related to the normal trace of the incident field. Finally, we have ρ = −(∂ n u + ∂ n u inc ).
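As an illustration of the assembly described above, the following sketch builds the dense single-layer matrix for a P0 discretization of a closed polygonal curve, using the 2D kernel G(x, y) = (i/4)H0^(1)(k||x − y||) and a two-point Gauss-Legendre rule for the non-singular interactions. It is a minimal sketch, not the authors' code: the P0 Galerkin treatment and the small-argument self-term approximation replace the paper's P1 elements and the exact self-interaction formula of [35].

```python
import numpy as np
from scipy.special import hankel1

def assemble_single_layer(nodes, k):
    """Dense P0 single-layer BEM matrix on a closed polygon (minimal sketch).

    nodes: (n, 2) array of consecutive boundary points (closed curve).
    Off-diagonal entries use a 2-point Gauss-Legendre rule in each variable;
    the log-singular diagonal uses the small-argument expansion of H0^(1)
    integrated over the segment, instead of the exact formula of [35].
    """
    n = len(nodes)
    a, b = nodes, np.roll(nodes, -1, axis=0)          # segment endpoints
    lengths = np.linalg.norm(b - a, axis=1)
    gauss = np.array([0.5 - 0.5 / np.sqrt(3), 0.5 + 0.5 / np.sqrt(3)])
    pts = a[:, None, :] + gauss[None, :, None] * (b - a)[:, None, :]  # (n, 2, 2)
    L = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            if i == j:
                h = lengths[i]
                # Integral of (i/4)*H0^(1)(k|s-t|) over the segment squared,
                # using H0^(1)(z) ~ 1 + (2i/pi)*(ln(z/2) + gamma) for small z.
                L[i, j] = (1j / 4) * h**2 * (1 + (2j / np.pi) *
                          (np.log(k * h / 2) + np.euler_gamma - 1.5))
                continue
            # Tensorized 2-point rule: equal weights, hence the mean over 4 points.
            r = np.linalg.norm(pts[i][:, None, :] - pts[j][None, :, :], axis=-1)
            L[i, j] = (1j / 4) * np.mean(hankel1(0, k * r)) * lengths[i] * lengths[j]
    return L

# Example: unit circle approximated by 64 segments, k = 2.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
nodes = np.column_stack([np.cos(t), np.sin(t)])
L = assemble_single_layer(nodes, k=2.0)
print(L.shape, np.allclose(L, L.T))   # the single-layer matrix is symmetric
```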
First and Second-Order OSRCs The on-surface radiation condition (OSRC) method was introduced in the middle of the eighties by Kriegsmann, Taflove and Umashankar [30]. At that time, the main idea was to develop an approximate but efficient and low-memory numerical solution for scattering problems, most particularly in the high-frequency regime. Starting from local approximations of the Dirichlet-to-Neumann (DtN) operator, they were able to propose the computation of the scattered field by two-dimensional simple obstacles. Since then, the OSRC method has received much attention from many researchers and many improvements and extensions have been proposed (see, e.g., [31]). For the sake of conciseness, we restrict our presentation to the so-called first- and second-order Bayliss-Turkel-like radiation conditions [36]. The first-order OSRC (that we denote by OSRC 1 ) is given by ∂ n u 1 + (κ/2 − ik) u 1 = 0 on Γ. (17) In the above equation, (u 1 , ∂ n u 1 ) is the OSRC Cauchy data that approximates the exact Cauchy data (u, ∂ n u) on Γ. Here, let us remark that (u 1 , ∂ n u 1 ) must be understood as a notation, since the OSRC is an approximate DtN map, which means that we do not a priori know whether ∂ n u 1 is the normal derivative trace of a function u 1 . The function κ := κ(s) is the curvature at a point s of the surface, where s is the curvilinear abscissa counterclockwise directed along Γ. Equation (17) can be seen as a simple impedance boundary condition for the exterior domain Ω + . Various ways of deriving (17) are available in the literature; let us mention for example [1], where formal and rigorous approaches are developed. The first-order OSRC (17) can be improved, leading to the so-called symmetrical second-order Bayliss-Turkel condition (18) (denoted by OSRC 2 in the sequel), which adds second-order tangential derivative terms to (17). In (18), the curvilinear derivative operator is written ∂ s u = ∇u · τ, where τ is the tangent vector to Γ (n · τ = 0). In the same spirit, an Engquist-Majda-like OSRC can be derived [1,31]. However, some numerical simulations in various papers show that the boundary condition (18) provides a higher accuracy. Therefore, in the present paper, we restrict our study to (18). Numerical Approximation To solve (18), we introduce its weak formulation (19) for suitable test-functions v in H 1 (Γ). The pair of approximate Cauchy data (u 2 , ∂ n u 2 ) is discretized in the finite element space V h × V h . When the BEM is introduced, the discretization of (19) can be rewritten at the matrix level as (20), where the functions α and β are defined for OSRC 2 from the wavenumber and the curvature. The matrices S α and M β are, respectively, the (sparse) n P × n P generalized stiffness and mass matrices associated with the functions α and β. In addition, the vectors u 2 and ∂ n u 2 are complex-valued vector fields of C n K . Finally, the second-order OSRC method requires the following stable approximation scheme for the curvature [36]: let K = (a 1 a 2 a 3 ) be a triangle whose vertices a j , j = 1, 2, 3, are points on Γ h ; then the curvature at the vertex a 2 can be approximated by κ(a 2 ) ≈ 4A K / (ℓ 1 ℓ 2 ℓ 3 ), (21) where A K is the area of K and ℓ j , for j = 1, . . . , 3, are the lengths of the edges of K ordered with respect to increasing size. This formula is directly applied on the triangles associated with the surface mesh and built on two adjacent segments. Using Formula (21) implies that the curvature is not equal to zero at the corners of the square, since the triangle used to compute the numerical curvature is not flat. For a uniform surface mesh with size h, the numerical curvature at a right-angle corner is √2/h.
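The curvature scheme (21) is easy to check numerically. The sketch below (our illustration, with hypothetical function names) computes κ(a 2 ) = 4A K /(ℓ 1 ℓ 2 ℓ 3 ) from three consecutive mesh points and reproduces the √2/h value quoted above for a right-angle corner of a uniformly meshed square.

```python
import numpy as np

def vertex_curvature(a1, a2, a3):
    """Curvature estimate at a2 from three consecutive points on the curve:
    kappa = 4*A / (l1*l2*l3), the inverse circumradius of triangle (a1, a2, a3),
    where A is the triangle area and l1, l2, l3 are its edge lengths."""
    a1, a2, a3 = map(np.asarray, (a1, a2, a3))
    e1, e2, e3 = a2 - a1, a3 - a2, a1 - a3
    l1, l2, l3 = (np.linalg.norm(e) for e in (e1, e2, e3))
    area = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])   # 2D triangle area
    return 4.0 * area / (l1 * l2 * l3)

h = 0.1
# Right-angle corner of a square meshed with segments of size h: ~ sqrt(2)/h.
print(vertex_curvature([-h, 0.0], [0.0, 0.0], [0.0, h]), np.sqrt(2) / h)
# Three collinear points (flat part of the boundary) give zero curvature.
print(vertex_curvature([0.0, 0.0], [h, 0.0], [2 * h, 0.0]))
```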
This allows us to reproduce the scattering phenomenon arising at the corner (see also [36]). Since we are solving a scattering problem with a given Dirichlet boundary condition, u 2 is given by −u inc . This means that ∂ n u 2 is simply obtained through the solution of a complex-valued sparse linear system defined by the mass matrix. Once this unknown is obtained, one can compute an approximation of the jump of the normal derivative trace by the relation ρ 2 = −(∂ n u 2 + ∂ n u inc ). Another way of writing this relation is that ρ 2 is the solution to a sparse linear system Mρ 2 = g 2 involving the mass matrix. The far-field can be directly obtained through expressions (11). Even if the boundary condition (17) can a priori be applied to any scatterer Ω − , we will see during the numerical simulations that a serious loss of accuracy is observed when the scatterer presents some concave parts (see [36] and Figures 1-3). Indeed, in the concavity, the presence of multiply bounced rays cannot be modeled by local differential operators since the nature of this phenomenon is nonlocal. Therefore, the aim of the next sections is to provide an original way to directly couple the OSRC formulation in the convex part of the scatterer to the SLIE restricted to the concave part, to improve the accuracy. Weak Coupling Procedure and Boundary Element Approximation Since the OSRC solution is locally inaccurate in the non convex part of the domain, we propose to build a solution which is first computed by the OSRC method and then improved thanks to the integral equation where the quality of the OSRC approximation deteriorates. To this end, let us assume that the geometry Γ can be decomposed into two non-overlapping parts Γ := Γ 1 ∪ Γ 2 , with Γ 1 ∩ Γ 2 = {a 1 ; a 2 }. The geometry Γ 1 is related to the part of Γ which is convex; the complementary part (where we will use an integral equation) is Γ 2 (see Figure 1). The proposed procedure uses (1) a global computation of an approximate surface density ρ 2 through the OSRC on Γ and (2) an injection of the restriction of this approximation to Γ 1 into the global integral equation formulation on Γ to obtain an approximate density on Γ 2 . This way of partitioning the computation allows us to solve a smaller boundary element system and can be seen as a "one shot" computation. In the following, if A is a global matrix on Γ, we denote by A jℓ the extracted matrix related to the interaction between the part Γ j of the boundary and Γ ℓ , j, ℓ = 1, 2. We then have the block decomposition (25). In a similar way, the restriction of a nodal complex-valued vector z to the part of the boundary Γ ℓ is written z ℓ , for ℓ = 1, 2. Then, we propose the following algorithm for the SLIE formulation: compute ρ̂ 2 := (ρ 2 1 , ρ̂ 2 2 ) ∈ C n P such that (1) OSRC: extract ρ 2 1 = (ρ 2 ) 1 ∈ C n P1 −2 from the computation of Mρ 2 = g 2 ; (2) SLIE: compute ρ̂ 2 2 ∈ C n P2 as the solution to L 22 ρ̂ 2 2 = −(Mu inc ) 2 − L 21 ρ 2 1 . (26) Even if other coupling possibilities are available, this one has the advantage that no modification of an existing code is required. Let us denote by n Kj (respectively, n Pj ) the number of segments (respectively, points) of the boundary Γ j , j = 1, 2, based on the uniform mesh used for the full BEM on Γ h . Then, we have n K = n K1 + n K2 and n P = n P1 + n P2 − 2 (the two junction points a 1 and a 2 must be counted once). In step (1), we only retain the values of the OSRC solution that do not lie in the cavity; the two junction points a 1 and a 2 are counted with the cavity unknowns.
This first step needs O(n K ) operations to solve the sparse complex-valued linear system (by using an LU factorization or a preconditioned iterative Krylov solver [37]), while the memory storage is O(n K1 ). In the second step, the solution of a complex-valued full linear system is necessary. If a full storage is considered, then the linear system solution needs O(n 3 K2 ) elementary operations and the memory storage is O(n 2 K2 ). Usually, in integral equation solvers, most particularly for high-frequency scattering, it is preferable to use a Krylov subspace solver [17,37,38] in conjunction with a fast matrix-vector product algorithm, like for example the Multilevel Fast Multipole Method (FMM) [24], high-order solvers [18] or the recent direct Adaptive Cross Approximation (ACA) techniques [29]. In this case, the memory requirement is O(n K2 ) for a computational cost O(n K2 log n K2 ). In addition, it is well-known that preconditioning is a requirement to get a fast convergence. The second equation of (26) is defined through the single-layer integral operator. Hence, the integral equation is a first-kind Fredholm equation, which is known to be badly conditioned. To improve this, we propose a preconditioned version of the algorithm (26) based on the Calderón relations [17], well-adapted to fast iterative solvers but without modifying the solution of the initial coupling procedure. Let D be the discrete version of the normal derivative trace operator D and D 22 its restriction to Γ 2 . Then, the preconditioned version of (26) is given by: find ρ̂ 2 := (ρ 2 1 , ρ̂ 2 2 ) ∈ C n P such that (1) compute ρ 2 1 = (ρ 2 ) 1 ∈ C n P1 −2 as the solution of Mρ 2 = g 2 ; (2) obtain ρ̂ 2 2 ∈ C n P2 as the solution to D 22 L 22 ρ̂ 2 2 = D 22 (−(Mu inc ) 2 − L 21 ρ 2 1 ). (27) The second equation of (27) is defined by a second-kind integral equation formulation which has better clustering properties for the convergence of Krylov subspace solvers. The price to pay is that each iteration of a Krylov solver requires applying L 22 first and next D 22 . The preconditioned Calderón SLIE coupling procedure (27) is called SLIE-OSRC 2 . Once ρ̂ 2 := (ρ 2 1 , ρ̂ 2 2 ) ∈ C n P has been computed, all the usual quantities of interest can be obtained, like the SCS given by (11). The SLIE-OSRC 2 formulation may suffer from the existence of interior resonances. Even if the spurious modes do not radiate for the SLIE, it can be better to have access to a stable formulation. To this end, the following BWIE-OSRC 2 coupling procedure could alternatively be used: compute ρ̂ 2 := (ρ 2 1 , ρ̂ 2 2 ) ∈ C n P by replacing the SLIE in step (2) with the combined Burton-Miller equation (28). Unlike the SLIE-OSRC 2 formulation, the BWIE-OSRC 2 approach is defined by a second-kind Fredholm integral equation and does not really need to be preconditioned when δ and η are chosen correctly (see, e.g., [17]). An extra computational cost and memory storage are required since the evaluations of the double-layer potentials N 22 and N 21 are needed. In the following examples, we will report only the results related to the SLIE-OSRC 2 formulation, since no resonance was met and we always plot the SCS. In addition, we do not use any iterative solver for the resulting linear systems because we are considering toy problems for a proof of concept. However, for three-dimensional problems, where the method directly extends, the difference between the formulations may be important in terms of convergence rate. Here, we rather focus on the accuracy improvement. Further investigations are therefore needed, as well as improved formulations and implementations of higher order OSRCs.
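The two-step coupling (26) is short to express in code. The sketch below assumes the global matrices have already been assembled (for instance with a routine like `assemble_single_layer` above) and uses hypothetical index arrays `idx1` and `idx2` for the degrees of freedom on Γ 1 and Γ 2 ; the right-hand side convention L ρ = −M u inc follows (14), and the block update mirrors step (2). This is our illustration of the procedure, not the authors' implementation.

```python
import numpy as np

def slie_osrc_coupling(L, M_mass, u_inc, rho_osrc, idx1, idx2):
    """One-shot OSRC/SLIE coupling, illustrative sketch of algorithm (26).

    L        : dense single-layer BEM matrix on the whole boundary (n x n)
    M_mass   : surface mass matrix (n x n)
    u_inc    : nodal values of the incident field (n,)
    rho_osrc : OSRC density rho^2 on the whole boundary, from M rho^2 = g^2
    idx1     : indices of the points kept on the convex part Gamma_1
    idx2     : indices of the points recomputed on the cavity part Gamma_2
    """
    b = -M_mass @ u_inc                      # discrete SLIE right-hand side, cf. (14)
    rho1 = rho_osrc[idx1]                    # step (1): keep the OSRC values on Gamma_1
    L22 = L[np.ix_(idx2, idx2)]              # cavity/cavity block
    L21 = L[np.ix_(idx2, idx1)]              # cavity/convex-part block
    rho2_hat = np.linalg.solve(L22, b[idx2] - L21 @ rho1)   # step (2)
    rho = np.empty(len(u_inc), dtype=complex)
    rho[idx1], rho[idx2] = rho1, rho2_hat
    return rho
```

Only the n P2 × n P2 block L 22 ever has to be factorized, which is where the cost reduction over a full SLIE solve comes from.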
A Numerical Example-Validation of the Procedure As previously said, we use the SLIE-OSRC 2 formulation. The model toy problem is the following. We consider the obstacle Ω − composed of the square cylinder centered at the origin with side length 2, with an inner square cavity defined by the corners (1/3, −1/3), (1, −1/3), (1, 1/3) and (1/3, 1/3). The boundary Γ 2 is then defined as the boundary internal to the cavity (blue curve in Figure 1) and Γ 1 is the complementary boundary of Γ (red curve in Figure 1). For an incident plane wave, the reference solution is given by the SLIE (14) discretized by the BEM (with 10 points per wavelength). The results are clearly improved compared to the pure OSRC approach for wavenumbers k ≤ 15, and this independently of the angle of attack θ inc . As can be observed in the first picture of Figure 2 (Case 1, for k = 2), we almost obtain the reference solution by using the SLIE-OSRC 2 approach, while a pure OSRC approximation gives a poor accuracy which deteriorates as k increases. For a higher wavenumber (Case 2, k = 8), the results are still accurate (see Figure 2). By increasing the frequency, we still have an acceptable solution, as seen in the third example of Figure 2 (Case 3, k = 14). For both the integral equations and the OSRCs, since we consider a mesh with ten elements per wavelength, the size of the matrix L 22 is about n K2 = 2.5k, which is more than four times smaller than for a pure SLIE solution. The Neumann Scattering Problem The Neumann scattering problem, i.e., problem (1) but with the boundary condition ∂ n u = −∂ n u inc on Γ, can be treated similarly with the EFIE [17] based on the normal derivative trace of the double-layer potential written in its weak form (called DLIE). A hyper-singular kernel must then be integrated carefully by semi-analytical integration techniques. For the OSRC, the method applies quite similarly and leads to the solution of a sparse linear system. In general, the coupling procedure (called DLIE-OSRC 2 ) has proved to be less accurate than for the Dirichlet problem (i.e., than SLIE-OSRC 2 ). We think that this is due to the fact that the OSRC approach requires a higher order operator to provide a suitably fast solution during the first step of the method. We recommend to rather use the square-root OSRC developed in [2]. The resulting coupling technique will be analyzed in a forthcoming work. For low and moderate wavenumbers (k ≤ 4), we still get quite acceptable results, having always in mind the lower computational cost of the DLIE-OSRC 2 method. However, the accuracy is moderate for wavenumbers k ≥ 10, as seen in the two cases reported in Figure 3. Nevertheless, this must be counterbalanced by the lower cost of the procedure and the possibility to increase the OSRC accuracy in the convex part. In addition, we always obtain a good prediction of the main lobes where the energy is mostly radiated. As a general concluding remark, the coupled DLIE-OSRC 2 algorithm provides an interesting alternative to the DLIE. Furthermore, the method is always more accurate than a classical OSRC approach and offers a possibility of extension to non-convex obstacles. Conclusions In this paper, we introduced a simple and original algorithm coupling the surface integral equation method and the OSRC technique for solving the scattering problem by non-convex scatterers.
While simple, the coupling method leads to an improved accuracy compared with the pure OSRC approach and reduces the computational cost of a direct integral equation formulation. In addition, we explain how operator preconditioning can be directly included in the formulations. The method is validated on a simple two-dimensional problem as a proof of concept. In particular, the method is accurate for the Dirichlet problem, but still needs to be further investigated by using high-order OSRCs in the case of the Neumann problem. We expect that the formulation can be useful for three-dimensional high-frequency scattering problems solved iteratively by Krylov solvers with acceleration algorithms. Author Contributions: Conceptualization, software, formal analysis, investigation, writing-original draft preparation, writing-review and editing, S.M.A., X.A. and C.C.; project administration, funding acquisition, S.M.A. and C.C. All authors have read and agreed to the published version of the manuscript. Funding: The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant code 18-SCI-1-01-0017. Informed Consent Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest.
Spaceborne GNSS-R for Sea Ice Classification Using Machine Learning Classifiers : The knowledge of Arctic sea ice coverage is of particular importance in studies of climate change. This study develops a new sea ice classification approach based on machine learning (ML) classifiers through analyzing spaceborne GNSS-R features derived from the TechDemoSat-1 (TDS-1) data collected over open water (OW), first-year ice (FYI), and multi-year ice (MYI). A total of eight features extracted from GNSS-R observables collected over five months are applied to classify OW, FYI, and MYI using the ML classifiers of random forest (RF) and support vector machine (SVM) in a two-step strategy. Firstly, a randomly selected 30% of the samples of the whole dataset is used as a training set to build classifiers for discriminating OW from sea ice. The performance is evaluated using the remaining 70% of the samples through validation against the sea ice type from the Special Sensor Microwave Imager Sounder (SSMIS) data provided by the Ocean and Sea Ice Satellite Application Facility (OSISAF). The overall accuracies of the RF and SVM classifiers are 98.83% and 98.60%, respectively, for distinguishing OW from sea ice. Then, the samples of sea ice, including FYI and MYI, are randomly split into training and test datasets. The features of the training set are used as input variables to train the FYI-MYI classifiers, which achieve overall accuracies of 84.82% and 71.71% with the RF and SVM classifiers, respectively. Finally, the features of each month are used as training and testing sets in turn to cross-validate the performance of the proposed classifier. The results indicate the strong sensitivity of GNSS signals to sea ice types and the great potential of ML classifiers for GNSS-R applications. This study demonstrates the great potential of GNSS-R for classifying sea ice types, which can be an effective and complementary approach for remotely sensing sea ice. In future studies, more GNSS-R features, ML algorithms, and environmental effects (such as ocean wind) should be investigated to improve the accuracy of classifying FYI and MYI. Introduction Arctic sea ice is one of the most significant components in studies of climate change [1]. The knowledge of sea ice information is useful for shipping route planning and offshore oil/gas exploration. As one of the most important sea ice parameters, sea ice type is of particular interest since the characteristics of first-year ice (FYI) and multi-year ice (MYI) are different [2]. Compared to FYI, MYI has a greater thickness and a higher albedo, which is critical for energy exchange at the air-sea interface. Some previous studies indicated that the Arctic sea ice has reduced in extent and a part of the ice cover is becoming thinner, changing from thicker MYI to thinner FYI [3]. The surface roughness and dielectric constant of different sea ice types change at different stages of ice growth. It is well known that the surface of FYI is usually smoother than that of MYI. In addition, the salinity of FYI is higher than that of MYI. The FYI around the floe edges tends to undergo deformation when it collides with thicker ice.
In general, ice that survives at least one summer is regarded as MYI, which retains low salinity values and an undulating surface. These characteristics of different sea ice types are the basis for classification. A wide variety of techniques has been applied to characterize changes in sea ice. Sea ice can be monitored from different platforms, such as buoys [4], ships [5], aircraft [6], and satellites [7]. Among them, satellite-based microwave remote sensing has been regarded as the most effective tool for monitoring sea ice [7]. In recent years, Global Navigation Satellite System (GNSS) Reflectometry (GNSS-R) has emerged as a powerful tool for sensing bio-geophysical features using L-band signals scattered from the Earth's surface [8]. GNSS-R was initially proposed for ocean altimetry in 1993 [9], after the concept of GNSS-R was proposed in 1988 [10]. Subsequently, the scope of the applications of GNSS-R has been extended to various fields, such as wind speed retrieval [11], snow depth estimation [12], soil moisture sensing [13], ocean altimetry [14], and sea ice detection [15]. Most GNSS-R studies were carried out using reflected L-band data collected on ground-based, airborne, and spaceborne platforms, and the latter is regarded as the future trend due to its global coverage and high mobility [16]. The United Kingdom (UK) TechDemoSat-1 (TDS-1) [16] and NASA Cyclone GNSS (CYGNSS) [17] missions, launched in 2014 and 2016 respectively, have promoted the research on spaceborne GNSS-R since their data are publicly available. In particular, the TDS-1 data can be used for polar research thanks to its global coverage, while the CYGNSS data can only be applied in middle- and lower-latitude regions since its coverage of interest is the oceans within latitudes of ±38°. Recently, some other spaceborne GNSS-R missions have been successfully carried out one after another, such as the Chinese BuFeng-1 A/B [18] and Fengyun 3E [19]. Many applications of GNSS-R make use of the specular scattering geometry, since GNSS signals reflected from the Earth's surface have the greatest amplitude at specular scattering. The Delay-Doppler Map (DDM), which is one of the most important GNSS-R observables, represents the power scattered from the reflecting surface as a function of time delay and Doppler shift (Figure 1). The reflection over rough surfaces, such as open water (OW), is usually regarded as incoherent, which results in the "horseshoe" shape of the DDM presented in Figure 1a. The specular scattering is considered coherent when the reflecting surface is relatively smooth. A typical coherent DDM is shown in Figure 1b. The TDS-1 data have been successfully used to sense sea ice parameters over the past few years. Yan et al. [20] first explored the sensitivity of the TDS-1 Delay-Doppler Map (DDM) to sea ice presence using the number of DDM pixels with a signal-to-noise ratio (SNR) above a threshold. Similarly, Zhu et al. [21] proposed a differential DDM observable, which was used to recognize ice-water, water-water, water-ice, and ice-ice transitions. Schiavulli et al. [22] reconstructed a radar image based on the DDM to distinguish sea ice from water. Alonso-Arroyo et al. [15] applied a matched-filter ice detection approach with a probability of detection of 98.5%, a probability of false alarm of 3.6%, and a probability of error of 2.5%. Afterward, another study proved that spaceborne GNSS-R can be effective for sea ice discrimination, with a success rate of up to 98.22% compared to collocated passive microwave sea ice data [23].
Cartwright et al. [24] combined two features extracted from DDMs to detect sea ice, with agreements of 98.4% and 96.6% in the Antarctic and Arctic regions, by comparison with the European Space Agency Climate Change Initiative sea ice concentration product. On the other hand, TDS-1 data were extended to altimetry applications [25,26]. Ice sheet melt was also investigated in [27] using TDS-1 data. Furthermore, it has been shown that spaceborne GNSS-R is also useful for retrieving sea ice thickness [28], sea ice concentration [29], and sea ice type [30]. The GNSS-R observables derived from DDMs were used to classify Arctic sea ice in [30], where the TDS-1 sea ice types were compared with sea ice type maps derived from Synthetic Aperture Radar (SAR) measurements. In recent years, machine learning (ML) based methods have been widely used in geosciences and remote sensing applications [31]. ML has been proven powerful for applications in various remote sensing fields, such as classification [32,33], object detection [34], and parameter estimation [35]. Yan et al. [36] first adopted the neural network (NN) method to detect sea ice using spaceborne GNSS-R DDMs from TDS-1. This study demonstrated the potential of an NN-based approach for sea ice detection and sea ice concentration (SIC) estimation, which was further explored through the convolutional neural network (CNN) algorithm [37]. The DDMs were directly used as input variables in the CNN-based approach. Compared to NN and CNN, the support vector machine (SVM) achieved the best performance in sea ice detection [38]. These three studies utilized the original DDM and the values derived from DDMs as input features for sensing sea ice. Zhu et al. [39] employed feature sequences that depict the characteristics of DDMs as input parameters to monitor sea ice using the decision tree (DT) and random forest (RF) algorithms. The RF-aided method can discriminate sea ice from water with a success rate of up to 98.03%, validated with the collocated sea ice edge maps from the Special Sensor Microwave Imager Sounder (SSMIS) data provided by the Ocean and Sea Ice Satellite Application Facility (OSISAF). Llaveria et al. [40] applied the NN algorithm for sea ice concentration and sea ice extent sensing using GNSS-R data from the FSSCat mission [40]. Rodriguez-Alvarez et al. [30] first exploited the implementation of the classification and regression tree (CART) algorithm for sea ice classification using GNSS-R observables derived from the GNSS-R DDM. The results showed that FYI and MYI can be classified with accuracies of 70% and 82.34%, respectively. In order to illustrate ML for sea ice sensing based on GNSS-R, relevant information about the above-mentioned studies is presented in Table 1. As one of the most powerful ML algorithms, RF has been widely applied to remote sensing image classification [33,42,43]. However, RF has not been considered for classifying FYI and MYI using spaceborne GNSS-R data. In addition, the SVM-based method showed great potential in sea ice detection and classification in some previous studies [33,38]. Although SVM has been applied to sea ice classification using SAR images [33], the application of SVM to GNSS-R sea ice classification has not been investigated. Therefore, RF and SVM classifiers are adopted in this study to develop algorithms for sea ice classification. The purpose of this research is to demonstrate the feasibility of spaceborne GNSS-R for classifying sea ice types using ML classifiers.
The spaceborne GNSS-R dataset and the reference sea ice type data from the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) OSISAF [44] used in this study are first described in Section 2. Then, the theoretical basis and the proposed method for sea ice classification are described in detail in Section 3. The sea ice classification results are presented in Section 4 and discussed in Section 5. The conclusions are finally given in Section 6. TDS-1 Mission and Dataset The TDS-1 satellite began its data acquisition in September 2014 after its launch in July 2014. As one of eight instruments placed on the TDS-1 satellite, the Space GNSS Receiver-Remote Sensing Instrument (SGR-ReSI) was turned on for only two days of an eight-day cycle until January 2018 [45]. The SGR-ReSI started full-time operation in February 2018, which came to an end in December 2018. The TDS-1 has provided a large amount of data on a global scale, as the satellite runs on a quasi-Sun-synchronous orbit with an inclination of 98.4°. Currently, the TDS-1 data are freely available on the Measurement of Earth Reflected Radio-navigation Signals by Satellite (MERRBYS) website (www.merrbys.co.uk, accessed on 11 September 2021). The TDS-1 data are processed into three levels, including Level 0 (L0), Level 1 (L1), and Level 2 (L2). Among them, the L1 data are usually adopted in scientific research. L0 refers to the raw data, which are not accessible except for a small amount of sample data. L2 includes wind speed, mean square slope, and sea ice products. One of the most important GNSS-R observables is the DDM, which is generated by the SGR-ReSI through the cross-correlation between the scattered signals and locally generated code replicas with different delays and Doppler shifts. The orbit and instrument specifications of the TDS-1 mission are presented in Table 2 [45]. Reference Sea Ice Data The sea ice type (SIT) provided by the OSISAF [44] is adopted as the reference data to train the sea ice classification model and validate the results. The OSISAF provides daily SIT maps with a spatial resolution of 10 km in the polar stereographic projection. The SIT products discriminate OW, FYI, and MYI through analyzing multi-sensor data. Data with a confidence level above 3 are adopted to avoid using low-quality data [44]. In addition, the SIC products generated from the SSMIS measurements provided by OSISAF are also used as reference data to analyze the impacts of SIC on sea ice classification. The reference SIC products are mapped on a grid with a cell size of 10 km × 10 km in the polar stereographic projection. The TDS-1 data are matched with the SIC maps through the SP location and the data collection date, which are available in the TDS-1 dataset. Figure 2a presents the GNSS-R ground tracks over OW, FYI, and MYI on 26 November 2015. The TDS-1 DDMs over the OW-FYI and FYI-MYI transitions are shown in the figure. Meanwhile, the GNSS-R ground tracks are mapped against the SIC map in Figure 2b. In general, the SIC of MYI is higher than that of FYI. Surface Reflectivity The GNSS-R instrument (e.g., SGR-ReSI) receives signals directly transmitted from GNSS constellations and scattered from the Earth's surface. For the TDS-1 mission, the observed signals are the L1 C/A codes with a center frequency of 1.575 GHz. The L-band signal is useful for remote sensing applications due to its insensitivity to precipitation and the atmosphere. The specular reflections are dominant when the surface is relatively flat and smooth. As demonstrated in [15], the GNSS-R signal received over sea ice is usually coherent due to its smooth surface. Thus, the coherent surface reflectivity SR coh at the specular point (SP) can be modeled as [46,47] SR coh = (4π) 2 P coh (R t + R r ) 2 / (P t G t G r λ 2 ), (1) where P coh is the coherently received power reflected by the surface, R r and R t are the distances from the receiver and the transmitter to the SP respectively, G r and G t are the antenna gains of the receiver and the transmitter, λ is the GNSS signal wavelength, and P t is the power transmitted by the GNSS satellites. Most of the variables in (1) can be obtained from the TDS-1 L1b data. R r and R t can be easily calculated according to the positions of the transmitter, receiver, and SP, which are stored in the metadata. G r at the SP can be directly extracted from the metadata. The noise floor is determined using the average value of the first four rows of the DDM [23]. The transmitted signal power can be derived using [48] P t G t = P d (4πR d ) 2 / (G d λ 2 ), (2) where P t G t is also termed the effective isotropic radiated power (EIRP), P d is the direct power, R d is the distance from the transmitter to the receiver, and G d is the zenith antenna gain, which is set to 4 dB in this study according to the MERRBYS documentation [45,49]. As demonstrated in [28], the received power originates from a region surrounding the SP, which is usually the first Fresnel zone. As demonstrated in [50], the Fresnel reflectivity SR coh can also be modeled as SR coh = |R(θ i )| 2 exp(−(2kσ cos θ i ) 2 ), (3) where θ i stands for the incidence angle, k is the signal wavenumber, and σ is the surface root mean square (RMS) height. The second term in Equation (3) represents the surface roughness. R represents the Fresnel reflection coefficient, which can be derived through Equations (4)-(6), where θ i is the incidence angle over the sea ice and ε i is the permittivity of sea ice, which is related to the sea ice type. In this study, data with an incidence angle below 45° are used. Data marked by the quality flags eclipse or direct signal in DDM [51] have also been removed in order to retain only reliable data.
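As a numerical illustration of Equations (1) and (2), the sketch below computes the coherent reflectivity in dB from TDS-1-style L1b quantities. It is a minimal sketch under the reconstructed forms of (1) and (2) given above; the function and variable names are ours, and the example values are purely illustrative rather than actual TDS-1 measurements.

```python
import numpy as np

GPS_L1_WAVELENGTH = 0.1903  # m, wavelength of the 1.575 GHz L1 carrier

def eirp_from_direct(p_d, r_d, g_d, lam=GPS_L1_WAVELENGTH):
    """EIRP P_t*G_t from the direct channel, Eq. (2): P_d*(4*pi*R_d)^2/(G_d*lam^2)."""
    return p_d * (4 * np.pi * r_d) ** 2 / (g_d * lam ** 2)

def coherent_reflectivity_db(p_coh, r_t, r_r, g_r, eirp, lam=GPS_L1_WAVELENGTH):
    """Coherent surface reflectivity, Eq. (1), returned in dB."""
    sr = (4 * np.pi) ** 2 * p_coh * (r_t + r_r) ** 2 / (eirp * g_r * lam ** 2)
    return 10 * np.log10(sr)

# Illustrative numbers only (not real TDS-1 data): a transmitter ~20,000 km
# and a receiver ~700 km from the specular point; G_d = 4 dB as in the text.
eirp = eirp_from_direct(p_d=1e-16, r_d=2.2e7, g_d=10 ** (4 / 10))
print(coherent_reflectivity_db(p_coh=1e-19, r_t=2.0e7, r_r=7.0e5,
                               g_r=10 ** (12 / 10), eirp=eirp))
```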
The TDS-1 surface reflectivity and the corresponding incidence-angle distribution in Arctic regions over 5 days (from 11 to 15 February 2018) are presented in Figure 3.

Features Derived from DDM

Besides the surface reflectivity, seven further GNSS-R observables are chosen for sea ice classification. GNSS-R observables are defined as characteristics derived from the DDMs that describe their power and shape. The DDM average (DDMA), which represents the mean signal-to-noise ratio (SNR) value around the point of peak SNR (Figure 4a), is applied in this study to classify sea ice types. As shown in Figure 4a, the DDMA spans 3 delay bins and 3 Doppler bins centered at the peak SNR point. The distribution of DDMA over 5 days (from 11 to 15 February 2018) is presented in Figure 4b and compared with the OSISAF SIT map. As illustrated in [23,39], the integrated delay waveform (IDW) is defined as the summation of 20 delay waveforms, which are the cross-sections of the DDM at 20 different Doppler shifts. The cross-section at zero Doppler shift is defined as the central delay waveform (CDW) of the DDM. The degree of difference between the IDW and the CDW is described by the differential delay waveform (DDW), which is depicted as follows:

DDW = NIDW − NCDW

where NIDW and NCDW represent the normalized IDW and CDW respectively. The NIDW is related to the power-spreading characteristics caused by surface roughness.
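The DDMA and DDW definitions above translate directly into array operations. The sketch below assumes a DDM stored as a Doppler × delay NumPy array in SNR units, with 20 Doppler rows as in the text; the function name and the zero_doppler_row argument are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def ddm_features(ddm, zero_doppler_row):
    """Sketch of DDMA and DDW extraction from a DDM (Doppler x delay array)."""
    # DDMA: mean over a 3 delay-bin x 3 Doppler-bin window centred at the peak
    i, j = np.unravel_index(np.argmax(ddm), ddm.shape)
    ddma = ddm[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].mean()

    # IDW: summation of the 20 delay waveforms (one per Doppler row);
    # CDW: the zero-Doppler cross-section
    idw = ddm.sum(axis=0)
    cdw = ddm[zero_doppler_row, :]

    # DDW: difference of the peak-normalised waveforms
    nidw = idw / idw.max()
    ncdw = cdw / cdw.max()
    ddw = nidw - ncdw
    return ddma, nidw, ncdw, ddw
```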
The delay waveforms from delay bins −5 to 20 over OW, FYI, and MYI surfaces are shown in Figure 5. The features extracted from the delay waveforms are described as follows (a sketch of the slope computation follows the list):
• RESC (Right Edge Slope of CDW). The fitting slope of the NCDW over 5 delay bins starting from the zero-delay bin is defined as RESC, which is depicted by the slope of the blue line in Figure 6a.
• RESI (Right Edge Slope of IDW). The fitting slope of the NIDW over 5 delay bins starting from the zero-delay bin is defined as RESI, which is depicted by the slope of the green line in Figure 6a.
• RESD (Right Edge Slope of DDW). The fitting slope of the DDW over 5 delay bins starting from the zero-delay bin is defined as RESD, which is depicted by the slope of the magenta line in Figure 6a.
The distribution of these six features versus the OSISAF SIT maps is illustrated in Figure 7.
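A minimal sketch of the right-edge-slope features: each is a first-order fit over the 5 delay bins starting at the zero-delay bin, applied to the NCDW, NIDW, or DDW. The helper name and the zero_delay_bin argument are our assumptions.

```python
import numpy as np

def right_edge_slope(waveform, zero_delay_bin):
    """Linear-fit slope over 5 delay bins from the zero-delay bin
    (RESC, RESI, or RESD depending on the input waveform)."""
    y = waveform[zero_delay_bin:zero_delay_bin + 5]
    x = np.arange(y.size)
    slope, _ = np.polyfit(x, y, 1)   # first-order fit; the slope is the feature
    return slope

# e.g., resc = right_edge_slope(ncdw, zero_delay_bin)
#       resi = right_edge_slope(nidw, zero_delay_bin)
#       resd = right_edge_slope(ddw,  zero_delay_bin)
```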
Sea Ice Classification Method

Many previous studies indicate that RF and SVM show great potential in classification tasks [33,38,39,42], but they have not yet been used for classifying FYI and MYI from GNSS-R features. Thus, these two classifiers are used in this study to classify sea ice types.

Random Forest (RF)

The RF is one of the ensemble methods that have received increasing interest, since they are more robust and accurate than single classifiers [52,53]. An RF is composed of a set of classifiers, each of which casts a vote for the allocation of the most frequent type to the input vectors. The principle of ensemble classifiers is based on the premise that a variety of classifiers usually outperforms an individual classifier. The advantages of RF for remote sensing applications include: (1) it is efficient on large datasets; (2) it can deal with a large number of input variables without variable deletion; (3) variable importance can be estimated during classification; and (4) it is relatively robust to noise and outliers. The design of a decision tree requires choosing an attribute selection measure and a pruning approach. The Gini index [54] is usually used to measure the impurity of the training samples in RF. For a given training dataset T, the Gini index G_index can be expressed as

G_index(T) = 1 − Σ_{i=1}^{m} p_i²

where m is the number of categories and p_i is the proportion of samples in T that belong to class i. In the case of binary classification problems, the most appropriate characteristic at each node can be identified by the Gini index, which for two classes reduces to

G_index(T) = 1 − p² − (1 − p)² = 2p(1 − p)

The RF can also evaluate the relative importance of the different features in the classification process. This is helpful for selecting the best features and for understanding how each feature influences the classification results [55,56].
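The Gini impurity above can be computed in a few lines; this sketch is generic and not tied to the paper's implementation.

```python
import numpy as np

def gini_index(labels):
    """Gini impurity of a set of training samples: G = 1 - sum_i p_i^2."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

# Binary case: G = 2p(1 - p), maximal (0.5) for a perfectly mixed node
assert abs(gini_index([0, 1, 0, 1]) - 0.5) < 1e-12
```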
Support Vector Machine (SVM)

The SVM is a powerful machine learning algorithm based on statistical learning theory, whose purpose is to determine the location of decision boundaries that produce the optimal separation of classes [57]. It is a supervised classification approach that can deal with linear, nonlinear, and high-dimensional samples, and it achieves good generalization. The SVM is primarily used for binary classification problems, in which the sample data can be expressed as

{(x_i, y_i)}, i = 1, ..., N, with y_i ∈ {−1, +1}

where x_i represents the input samples, which consist of the features for classification, and y_i is the class label of x_i. For linear classification, the SVM classifier satisfies the following rule:

y_i (wᵀx_i + b) ≥ 1

where wᵀx + b = 0 represents a hyperplane, with parameters w and b being the coefficient and bias, respectively. The maximum-margin classification can be formulated as

min_{w,b} (1/2)‖w‖²  subject to  y_i (wᵀx_i + b) ≥ 1

In order to obtain good predictions in nonlinear classification, slack variables are introduced to express the soft-margin optimization problem:

min_{w,b,ξ} (1/2)‖w‖² + C Σ_i ξ_i  subject to  y_i (wᵀφ(x_i) + b) ≥ 1 − ξ_i, ξ_i ≥ 0

where ξ_i represents the ith slack variable, C is a penalty parameter, and φ is a high-dimensional feature projection function related to the kernel function [58].

Sea Ice Classification Based on RF and SVM

In this study, the sea surface is divided into three categories: OW, FYI, and MYI. As a powerful ML classifier, RF can deal with both binary and multi-class problems. Either the one-against-one or the one-against-all strategy can be chosen to address the multi-class problem. The RF classifier can be split into several binary classifiers using a one-against-all strategy [59]. Although the standard RF algorithm can deal with multi-type problems, applying the one-against-all binarization to the RF can achieve better accuracy with smaller forest sizes than the standard RF [60]. In addition, the SVM is usually used for binary classification. Thus, the one-against-all binarization strategy is used in this study to classify OW, FYI, and MYI in two steps. In the first stage, the FYI and MYI are regarded as one category (sea ice), whereas the OW is regarded as the other category. Then, the classification of the sea ice types (FYI and MYI) is carried out in the second stage using a similar method. The Python software package from Scikit-Learn is adopted in this study [61]. The process flow of sea ice classification is shown in Figure 8; a minimal sketch of this two-step pipeline is given below.
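The sketch assumes labels supplied as strings in a NumPy array and uses a plain 30%/70% split in both steps (the modified FYI/MYI sampling actually used is described in the Results section). The function name is ours; swapping the factory for SVC() gives the SVM variant.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# X: n_samples x 8 feature matrix (SR, DDMA, RESC, RESI, RESD, REWC, REWI, REWD)
# y: labels in {"OW", "FYI", "MYI"} matched from the OSISAF SIT maps
def two_step_classify(X, y, clf_factory=lambda: RandomForestClassifier(n_estimators=70)):
    y = np.asarray(y)

    # Step 1: OW vs. sea ice (FYI and MYI merged into one class)
    y_ice = np.where(y == "OW", "OW", "ice")
    X_tr, X_te, y_tr, y_te = train_test_split(X, y_ice, train_size=0.3)
    step1 = clf_factory().fit(X_tr, y_tr)

    # Step 2: FYI vs. MYI on the sea-ice samples only
    ice = y != "OW"
    X2_tr, X2_te, y2_tr, y2_te = train_test_split(X[ice], y[ice], train_size=0.3)
    step2 = clf_factory().fit(X2_tr, y2_tr)
    return step1, step2

# SVM variant: two_step_classify(X, y, clf_factory=lambda: SVC())
```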
The sea ice classification method can be generally divided into four parts:
• TDS-1 data preprocessing and feature extraction. Firstly, the TDS-1 data with a latitude above 55°N and a peak SNR above −3 dB are adopted to extract delay waveforms, which are further normalized to extract features. A total of eight features, namely SR, DDMA, RESC, RESI, RESD, REWC, REWI, and REWD, are extracted from the TDS-1 data. The TDS-1 features are then matched with the OSISAF SIT maps based on the data collection date and the specular point position through a bilinear interpolation approach.
• OW-sea ice classification. In this step, the FYI and MYI are regarded as one category (i.e., sea ice). 30% of the samples are randomly selected as the training set and the remaining 70% of the samples are used to test the OW-sea ice classification results.
• FYI-MYI classification. The sea ice datasets are applied in this step to classify FYI and MYI. As in the OW-sea ice classification, sea ice samples are randomly selected as training and test sets to classify FYI and MYI.
• Performance evaluation of sea ice classification. The classification performance is evaluated using the confusion matrix and several evaluation metrics, which are defined in Figure 9 and Table 3, respectively.

The rows and columns of the confusion matrix represent the actual and predicted types, respectively. As shown in Table 3, TP stands for the number of positive samples that are classified correctly. FN represents the number of positive samples that are incorrectly classified into the negative type. FP is the number of negative samples that are incorrectly classified into the positive type. TN is the number of negative samples that are classified correctly. The evaluation metrics used to assess the classification performance are defined in Table 3. Accuracy represents the ratio of correct classifications among all samples. Precision depicts how many of the samples predicted as positive are classified correctly. Recall measures the completeness of correctly classified samples among all positive samples. The F-value is the harmonic mean of the classification performance for positive samples, where β is usually set to 1; the F-value is then the F1 score, the weighted average of Precision and Recall, so this score takes both false positives and false negatives into account. The G-mean can be used to evaluate the overall classification performance, as it measures the balance between the classification performance on the majority and minority classes. In addition, the kappa coefficient is used to measure inter-rater reliability for classifiers; it measures the agreement between two raters who each classify N items into C mutually exclusive categories.

Figure 9. The definition of the confusion matrix. Table 3. The definition of the evaluation metrics.
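The Table 3 metrics can be computed directly from the four confusion-matrix counts. This sketch follows the definitions above; the dictionary-based interface is our choice, and nonzero counts are assumed.

```python
import numpy as np

def binary_metrics(tp, fn, fp, tn, beta=1.0):
    """Evaluation metrics from confusion-matrix counts (positive-class
    convention as in Figure 9); assumes no denominator is zero."""
    n = tp + fn + fp + tn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                    # true-positive rate
    specificity = tn / (tn + fp)               # true-negative rate
    f_value = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    g_mean = np.sqrt(recall * specificity)
    # Cohen's kappa: observed agreement vs. chance agreement
    p_o = (tp + tn) / n
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_o - p_e) / (1 - p_e)
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                f_value=f_value, g_mean=g_mean, kappa=kappa)
```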
Results

Although the OSISAF SIT products are provided continuously, MYI information is missing from May to October most of the time. In addition, the TDS-1 data are very sparse from 2015 to 2017, as the GNSS-R receiver was working only two days in each eight-day cycle. Therefore, the TDS-1 datasets collected in 2018 are selected according to the availability of MYI information in the OSISAF SIT products. Figure 10a presents the data availability status of both the OSISAF SIT products and the TDS-1 data used in this study. The numbers of OW, FYI, and MYI samples in each month are shown in Figure 10b. According to the statistics, a total of 460,685 samples, including 165,434 OW samples, 266,680 FYI samples, and 28,571 MYI samples, are used in this study.

As illustrated in Section 3, the sea ice classification is implemented using a two-step method. The first step aims to discriminate OW from sea ice using the RF and SVM classifiers. During the RF model training, the number of trees is set to different values ranging from 10 to 200 in order to find the best classifier; the number of estimators is finally set to 70 in this study. After identifying the OW, the sea ice samples are employed to classify FYI and MYI in the second step. It is worth noting that the number of FYI samples is about nine times that of MYI samples. When randomly selecting samples from the whole FYI and MYI dataset as the training set, only a small number of MYI samples are selected. The proportion gap between FYI and MYI samples in the training set would then be too large, and the training model might not grasp the features of MYI. Therefore, 30% of the MYI samples are randomly selected as the training set, and the number of FYI training samples is controlled to be three times that of MYI.
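A sketch of this sampling scheme, assuming idx_fyi and idx_myi are index arrays of the FYI and MYI samples (the names and the fixed seed are ours). With the sample counts above it yields 8,571 MYI and 25,713 FYI training samples, leaving 240,967 FYI and 20,000 MYI samples for testing.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fyi_myi_training(idx_fyi, idx_myi, myi_frac=0.3, fyi_ratio=3):
    """30% of MYI samples for training, plus three times as many
    randomly drawn FYI samples; the rest form the test set."""
    n_myi = int(myi_frac * idx_myi.size)
    train_myi = rng.choice(idx_myi, size=n_myi, replace=False)
    train_fyi = rng.choice(idx_fyi, size=fyi_ratio * n_myi, replace=False)
    test_myi = np.setdiff1d(idx_myi, train_myi)
    test_fyi = np.setdiff1d(idx_fyi, train_fyi)
    return (np.concatenate([train_fyi, train_myi]),
            np.concatenate([test_fyi, test_myi]))
```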
The confusion matrices of the RF and SVM classifiers are presented in Figure 11, which shows the classification results for each class. Table 4 presents the evaluation metrics, computed according to the equations listed in Table 3 and validated against the OSISAF SIT maps using the test dataset. As shown in Figure 11a,b, a total of 322,480 samples, which include 109,559 OW and 212,921 sea ice samples, are applied for testing. The overall accuracy of the RF and SVM classifiers for OW-sea ice classification is 98.83% and 98.60% respectively, which is comparable to the results of previous studies [15,38,40]. Similarly, the FYI-MYI classification results are evaluated using 240,967 FYI samples and 20,000 MYI samples (Figure 11c,d), which results in an overall accuracy of 84.82% for RF and 71.71% for SVM. The performance of RF and SVM for OW-sea ice classification is comparable, whereas RF outperforms SVM significantly in the FYI-MYI classification.

The overall spatial distribution of the classification results is shown in Figure 12, which depicts the distribution of the predicted and reference types. For illustration purposes, parts of the classifications are shown in Figure 13, which demonstrates the predicted types, the reference types, and their comparison. Sea ice and OW are labeled as −1 and 1, respectively, in the OW-sea ice classification, whereas FYI and MYI are labeled as −1 and 1, respectively, in the FYI-MYI classification. The comparison between predicted and reference types is computed as

Diff = Ref − Pre

where Ref represents the value of the reference type, Pre is the value of the predicted type, and Diff is the comparison result. In the OW-sea ice classification, −2 means that sea ice is misclassified as OW (ice-OW), 2 means that OW is misclassified as sea ice (OW-ice), and 0 means that the predicted and reference types are consistent. In the FYI-MYI classification, −2 means that FYI is misidentified as MYI (FYI-MYI), 2 means that MYI is misidentified as FYI (MYI-FYI), and 0 means that the predicted and reference types are consistent.
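Under the labeling convention above, the comparison map is a one-liner; the example values are illustrative.

```python
import numpy as np

# Labels: sea ice = -1, OW = 1 (step 1); FYI = -1, MYI = 1 (step 2)
def comparison_map(ref, pre):
    """Diff = Ref - Pre: 0 marks agreement; -2 and 2 flag the two
    misclassification directions described in the text."""
    return ref - pre

diff = comparison_map(np.array([-1, 1, -1]), np.array([1, 1, -1]))
# -> [-2, 0, 0]: the first sample (sea ice) was misclassified as OW
```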
As shown in Figure 12b,c, most OW and sea ice misclassifications of the RF and SVM classifiers appear similarly at points of OW-sea ice transition; this can also be seen in Figure 13a,b. The confusion matrices in Figure 11c,d show that the number of FYI misclassifications is much larger than that of MYI misclassifications. This may be due to the large gap between the numbers of FYI and MYI samples. The overall accuracy is affected less by the MYI misclassifications, since the number of FYI samples is more than 10 times that of MYI samples. As shown in Figure 12e,f, both the RF and SVM classifiers produce many misclassifications, which are widely distributed. In general, the number of misclassifications of SVM is larger than that of RF, which can also be seen in Figure 13c,d. In addition, the kappa coefficient of the OW-sea ice classification is 0.97 for both RF and SVM, whereas that of the FYI-MYI classification is only 0.39 and 0.23 for RF and SVM, respectively. Moreover, although SVM can achieve accuracy comparable to RF in OW-sea ice classification, the time consumption of SVM is about nine times that of RF for the same test dataset.

In each of panels (a) to (d) of Figure 13, the top plot represents the predicted types of the classifier, the middle one the reference types from the OSISAF SIT, and the bottom one the comparison between predicted and reference types. OW-ice means that OW is misidentified as sea ice, and ice-OW means that sea ice is misclassified as OW. FYI-MYI means that FYI is misidentified as MYI, and MYI-FYI means that MYI is misclassified as FYI.
Discussion

In order to analyze the influence of sea ice growth and melt on sea ice classification, and the robustness of the sea ice classification method, another experiment is implemented. Each month of data is used as a training set individually, and the remaining four months of data are used as the test set. The strategy is similar to the k-fold cross-validation [62] usually applied in ML; however, k-fold cross-validation splits the dataset into k parts randomly, which would eliminate the seasonal effects. Firstly, the data in February are used as the training set, and the data in March, April, November, and December are used as the test set in turn. Thereafter, the data in March, April, November, and December are used as the training set successively. The spatial distribution of the classification results is presented in Appendix A (Figures A1-A10) and Appendix B (Figures A11-A20). The accuracy and kappa coefficients of the OW-sea ice and FYI-MYI classifications are shown in Figures 14 and 15, which indicate that the overall accuracy of the OW-sea ice classification changes less than that of the FYI-MYI classification. This may be caused by the change in the sample distribution: the proportion of OW and sea ice is relatively stable, while that of FYI and MYI changes more readily with ice growth and melt.
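A sketch of this month-wise validation loop; fit and evaluate stand in for the training and scoring steps and are assumptions, as is the month list.

```python
MONTHS = ["Feb", "Mar", "Apr", "Nov", "Dec"]

def month_wise_validation(data_by_month, fit, evaluate):
    """Train on one month, test on the remaining four. Unlike random
    k-fold cross-validation, this keeps the seasonal effects intact."""
    scores = {}
    for train_month in MONTHS:
        model = fit(data_by_month[train_month])
        scores[train_month] = {m: evaluate(model, data_by_month[m])
                               for m in MONTHS if m != train_month}
    return scores
```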
As shown in Figure 14, the accuracy of the OW-sea ice classification changes slightly from February to April, drops from April to November, and then increases more markedly from November to December; the accuracy in November is the lowest among all tested months. Figure 15 demonstrates that the general trend of the accuracy of the FYI-MYI classification is similar to that of the OW-sea ice classification. This can be explained in combination with the sea ice extent trend in 2018, shown in Figure 16. As shown in Figure 16a, the sea ice extent reached its maximum and minimum on 14 March 2018 and 21 September 2018, respectively. The sea ice extent first increases to its peak and then decreases slightly from March to April, while it increases continuously from November to December. Moreover, among the five months used, the ice extent in November is the smallest (Figure 16b). The increase and decrease of sea ice extent can be regarded as the ice growth and melt processes, respectively. The ice extent from February to April changes less than that from November to December, since November is the early stage of winter; the sea ice extent from February to April is relatively stable. From February to March, the FYI grows and the presence of ocean water within the FYI decreases, which results in fewer misclassifications of FYI. During the melting process from March to April, the surface of the FYI can be more easily affected by ocean winds due to the decrease in the sea ice concentration of FYI; this may lead to more misclassifications of FYI as MYI. The overall accuracy of both the OW-sea ice and FYI-MYI classifications is lowest in November. The explanation is that the sea ice extent is relatively low in November, when newly formed sea ice is mostly surrounded by ocean water, which leads to more misclassifications. With the ice growth from November to December, the successful classification rate increases relatively clearly, which is consistent with the sea ice extent trend shown in Figure 16.

Figure 14. The (a) accuracy and (b) kappa coefficient of OW-sea ice classification using the data of each month as the training set in turn. The top and middle plots in (a) represent the accuracy of RF and SVM respectively, while the bottom plot is the accuracy of RF minus that of SVM. The top and middle plots in (b) represent the kappa coefficient of RF and SVM respectively, while the bottom plot is the kappa coefficient of RF minus that of SVM.

Conclusion

This study investigates RF- and SVM-based classifiers for Arctic sea ice classification using one-against-all binarization, which converts a multi-class problem into several binary classification problems. The classification is thus implemented in a two-step way: the first step discriminates OW from sea ice (FYI and MYI), which is further classified in the second step.
The selected data periods include February to April and November to December 2018, during which information on the different surface types (OW, FYI, and MYI) is available and can be used for comparison. Validated against the OSISAF SIT maps, the overall accuracy of RF and SVM for OW-sea ice classification is up to 98.83% and 98.60%, which is comparable to the results of previous studies using TDS-1 data [15,30,38-40]. The FYI-MYI classifier is then modeled in the second step using the sea ice samples, which include FYI and MYI. The overall accuracy of RF and SVM for classifying FYI and MYI is 84.82% and 71.71%, respectively, which is lower than that of the OW-sea ice classifier. Moreover, the influence of ice growth and melt on sea ice classification is evaluated through a cross-validation strategy that applies each month of data as the training set and the remaining four months of data as the test set. To the best of the authors' knowledge, this is the first time that RF and SVM have been used for classifying FYI and MYI from spaceborne GNSS-R data. The results indicate that RF- and SVM-based GNSS-R has great potential in sea ice classification. This study demonstrates the great potential of GNSS-R for classifying sea ice types, which can be an effective and complementary approach for the remote sensing of sea ice. In future studies, more GNSS-R features, ML algorithms, and environmental effects (such as ocean wind) should be investigated to improve the accuracy of classifying FYI and MYI.

Acknowledgments: The authors would like to thank the TechDemoSat-1 team at Surrey Satellite Technology Ltd. (SSTL) for providing the spaceborne GNSS-R data. Our gratitude also goes to the Ocean and Sea Ice Satellite Application Facility for the sea ice edge product used in the comparisons.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Appendix A presents the results of RF and SVM for OW-sea ice classification using each month of data as the training set in turn. The RF classifier results are shown in Figures A1-A5, and the SVM classifier results are presented in Figures A6-A10. In each figure title, RF or SVM denotes the method, followed by the month of the test data (February, March, April, November, or December), and OWseaice denotes OW-sea ice classification.
Figure A10. SVM for OW-sea ice classification using the data in December as the training set. The data in (a) February, (b) March, (c) April, and (d) November are used as the test set in turn.

Appendix B

Appendix B presents the results of RF and SVM for FYI-MYI classification using each month of data as the training set in turn. The RF classifier results are shown in Figures A11-A15, and the SVM classifier results are presented in Figures A16-A20. In each figure title, RF or SVM denotes the method, followed by the month of the test data (February, March, April, November, or December), and FYIMYI denotes FYI-MYI classification.
K21-Antigen: A Molecule Shared by the Microenvironments of the Human Thymus and Germinal Centers

The mouse IgG1 monoclonal antibody (mAb) K21 recognizes a 230-kD molecule (K21-Ag) on Hassall's corpuscles in the human thymus. This mAb also stains cultured thymic epithelial cells as well as other epithelial cell lines, revealing a predominantly intracellular localization. Further analysis with mAb K21 on other lymphoid tissues showed that it also stains cells within the germinal centers of human tonsils, both lymphoid (B) cells and some with the appearance of follicular dendritic cells. Double immunostaining of tonsil sections shows that K21-Ag is not expressed by T cells, whereas staining with anti-CD22 and anti-CD23 mAb revealed some double-positive cells. A subpopulation of the lymphoid cells expresses the K21-Ag much more strongly. This K21++/CD23++ subpopulation of cells is localized in the apical light zone of germinal centers, suggesting that K21-Ag may be an important marker for the selected centrocytes within germinal centers and may play a role in B-cell selection and/or the development of B-cell memory. Flow cytometric analysis showed that K21-Ag is expressed on the surface of a very low percentage of thymocytes, tonsillar lymphocytes, and peripheral blood mononuclear cells. Analysis of purified/separated tonsillar T and B lymphocytes showed that T cells do not express the K21-Ag; in contrast, B cells express low levels of the K21-Ag, which, together with CD23, is upregulated after mitogenic stimulation. Our data therefore raise the possibility that the K21-Ag may play a role in B-lymphocyte activation/selection.

INTRODUCTION

Monoclonal antibodies (mAbs) have provided valuable tools with which to study anatomical and functional niches within the thymic microenvironment (Ritter and Boyd, 1993). Several of these mAbs, raised in our laboratory to both human and mouse thymic epithelial cells, have been shown to detect antigens shared by thymic epithelium and leukocytes (DeMaagd et al., 1985; Kampinga et al., 1989; Imami et al., 1992). One such molecule, gp200-MR6, which is expressed by human thymic cortical epithelium, is thought to be functionally associated with the human interleukin-4 receptor (IL-4R; Larché et al., 1988; Imami et al., 1994; Al-Tubuly et al., 1996). These data, together with our observations that the phenotype of thymic epithelium can be modulated by IL-4, suggested that lymphocyte-derived IL-4 might play a role in maintaining the thymic microenvironment (Freysdóttir and Ritter, 1998). In order to analyze IL-4R structure and function on thymic epithelium further, we attempted to generate mAbs to the human IL-4R following immunization of (CBA/BALB/c)F1 mice with keyhole limpet hemocyanin (KLH)-coupled synthetic peptides based on the published cDNA sequence of the 140-kD chain of the IL-4R (CD124; Galizzi et al., 1990). After fusion and subsequent cloning, several mAbs were identified that were reactive with thymic epithelial cells and lymphocytes, although their distribution did not resemble that of the IL-4R, as revealed by fluoresceinated IL-4 (Mat et al., 1991).
We have now analyzed these mAbs further and show that one of them, mAb K21, which labels thymic Hassall's corpuscles and surrounding patches of medullary epithelium, also recognizes tonsillar epithelium and germinal center cells, both lymphoid cells and some with the appearance of follicular dendritic cells. Further studies with two-color immunohistochemistry, flow cytometry, Western blotting, and mitogen activation have been undertaken to determine the nature of the K21-Ag, and the lineage and functional status of the cells that express it. These studies should help to elucidate the role of the thymic epithelium and its similarity to other lymphoid microenvironments.

Distribution, Expression, and Biochemical Analysis of the K21-Ag

MAb K21 was raised against a cocktail of KLH-coupled synthetic peptides of the IL-4R (Galizzi et al., 1990). It was screened by ELISA using individual peptides (coupled to BSA) and showed strong reactivity with WSXWS, but was not reactive with the other peptides. As this motif is a functional part of all cytokine receptors within the haematopoietin cytokine receptor superfamily, including IL-2Rβ (CD122), IL-4R (CD124), IL-7R (CD127), and the common γ chain (γc), we proceeded with the analysis of this mAb (Bazan, 1990; Miyajima et al., 1992; Sugamura et al., 1996). Additionally, mAb K21 was screened on human thymus and was shown to strongly label some Hassall's corpuscles and occasionally the associated medullary epithelium (cluster of thymic epithelial staining, CTES group V). The K21-positive Hassall's corpuscles are generally those that are medium to large in size (Figure 1). Isotyping showed it to be an IgG1 mAb. Western blotting analysis, using freshly prepared lysates from frozen tonsillar tissue, revealed that K21-Ag comprises one band with an apparent molecular weight of approximately 230 kD (Figure 2). Western blotting carried out under reducing conditions, using either 2-mercaptoethanol (2-ME) or dithiothreitol (DTT), destroyed the epitope recognized by mAb K21. Flow cytometric analysis of thymic epithelial cells from primary cultures (Figure 3) shows that most of the K21-Ag is expressed internally and that this expression increases over 5 to 8 days in culture medium. Neither IFN-γ nor IL-4 had any effect on the expression of this molecule (data not shown). In addition, indirect immunofluorescence staining and flow cytometric analysis of the human epithelial cell line HT29 showed that this molecule is also expressed internally in other epithelial cell lineages, whereas surface molecules are also present on a small percentage of cells (2-5%; Figure 4). Indirect immunoperoxidase staining of these cells in culture confirmed the predominantly intracellular localization of the K21-Ag (Figure 5). When tested on other lymphoid tissue, mAb K21 revealed staining of human tonsils. Single immunoperoxidase staining of human tonsil sections showed that this mAb stains the stratified squamous tonsillar epithelium and cell populations within the germinal centers of the follicles, including the B cells. Flow cytometric analysis of separated tonsillar T and B lymphocytes showed that T cells do not express the K21-Ag, whereas ~14% of B cells are K21-Ag-positive (Table I). Flow cytometric analysis of separated tonsillar B lymphocytes after 24-hr mitogen stimulation (lipopolysaccharide, LPS) raised the percentage of K21-Ag-positive cells from 14% to 62% (the total percentage of B cells was 80%). It therefore appears that K21-Ag expression is upregulated upon B-cell activation.
DISCUSSION

The aim of this study was to raise mAbs to the IL-4R as part of a project exploring the role of cytokine/cytokine receptor interactions between the thymic stromal microenvironment and the maturing T lymphocytes, since interactions such as those occurring between IL-4/IL-4R and IL-7/IL-7R have been shown to be important in the development of both thymocytes and the thymic epithelial component (Peschon et al., 1994; Zlotnik and Moore, 1995; Freysdóttir and Ritter, 1998). Although we immunized with known peptides of the IL-4R, after the fusion, selection, and subsequent cloning, none of the mAbs showed recognition of the IL-4R. However, some of them showed reactivity with thymocytes and/or thymic epithelial cells. K21, which recognized Hassall's corpuscles and associated medullary epithelium, was selected for further study. It is not clear why the methodology used failed to generate mAbs to the IL-4R. Mice were immunized with a mixture of five different synthetic peptides based on the sequence of the human IL-4R. Peptides were selected for their potential immunogenicity and exposure on the surface of the native molecule, and were coupled to KLH to provide a highly immunogenic carrier. The failure to generate anti-IL-4R mAb may result from close conservation of sequence between mouse and man; alternatively, the three-dimensional structure of the peptide-carrier complexes may differ from that of the equivalent sequence in the native molecule. The generation of non-IL-4R mAbs was not unexpected, since similar findings have been observed previously in our laboratory as well as by other investigators (Sharif et al., 1990; Freysdóttir et al., manuscript in preparation). One possibility is that we have selected autoantibody-producing cells that cross-react between species. Thus, human anti-GlcNAc antibodies cross-react with cytokeratin from human skin (Shikhman and Cunningham, 1994), although the 230-kD K21-Ag is too large to be a keratin (Laster et al., 1986). Moreover, the mAb TOTO-1, generated from a patient with systemic lupus erythematosus (SLE), has been shown to be reactive with Hassall's corpuscles, indicating the presence of autoreactive antibodies with specificity for thymic epithelium (Numasaki et al., 1995). Staining of HT29 cells in culture confirmed the predominantly intracellular localization of this antigen. Whether the K21-Ag functions intracellularly, or whether the intracellular molecules provide a pool from which molecules can rapidly be recruited to the cell surface, is not known. MAb K21 also stains tonsillar epithelial cells. This fits well with the data of Perry and Slipka (1993), who studied the formation of the tonsillar corpuscles (epithelial structures found in the vicinity of the crypts) and showed that these structures closely resemble thymic Hassall's corpuscles, a phenomenon explained by the endodermal embryological origin of the primordia of both organs (Perry and Slipka, 1993). K21 also labels patches of cells that resemble follicular dendritic cells in the tonsillar germinal centers, and it is expressed at low levels on tonsillar B cells (identified by their location and dual staining with B-lineage markers). K21-positive cells in the T-cell area are CD5-negative, indicating that T cells do not bear this antigen and that the scattered positive cells are likely to be B cells.
The presence of K21-Ag in/on follicular dendritic cells indicates that in both germinal centers and the thymus (Hassall's corpuscles), the K21-Ag is present on elements of the microenvironment where lymphocyte selection and apoptosis occur (Gray, 1993; Douek and Altmann, manuscript in preparation), suggesting a role for K21-Ag in the interaction of the selected lymphocytes with their microenvironment. Further flow cytometric and functional analysis showed that a subpopulation of B cells is K21-Ag-positive and that this subpopulation expands on activation. Since activation is associated with the events of lymphocyte selection, this raises the possibility that K21-Ag in the tonsil is present both on the selecting microenvironment and on the selected cells, paralleling data obtained with CD23 (Liu et al., 1991; MacLennan et al., 1992; Gray, 1993; MacLennan, 1994). A similar situation may exist in the thymus, since B lymphocytes have been shown to be present in the normal human thymic medulla; these are concentrated around Hassall's corpuscles and often have rosettes of thymocytes around them (Spencer et al., 1992). Various disorders, including infections, therapeutic treatments, irradiation, and immune deficiencies, are associated with thymic involution and loss of medullary epithelium, with or without loss of Hassall's corpuscles. In HIV-related disease in the pediatric population, one of the most common disorders is thymic dysinvolution with loss of Hassall's corpuscles (Mishalani et al., 1995). In contrast, in adults at an early stage of AIDS, the adipose-involuted thymus shows persistence of many Hassall's corpuscles together with lymphoid follicular hyperplasia (Prevot et al., 1992). These differences might reflect differences in the lymphocyte populations involved in the two disease situations, since maintenance of the medullary epithelium is known to be influenced by the surrounding thymocytes. Thus, loss of mature medullary thymocytes, either due to cyclosporin A treatment or in T-cell-deficient mice, leads to loss of the thymic medullary epithelium, including Hassall's corpuscles (Kanariou et al., 1989; van Ewijk, 1991; Palmer et al., 1993). The mAb K21 therefore recognizes a novel 230-kD molecule expressed by microenvironmental cells of the thymus and tonsil that play an important role in lymphocyte selection. Investigations to define more fully the structure and function of this molecule, and to determine its role, if any, in lymphocyte-selection events, are currently in progress.

Source of Tissues, Cells, and Cell Lines

Human thymus and tonsil tissue and tissue sections. Thymus samples were obtained from children undergoing cardiac surgery (Great Ormond Street Hospital for Sick Children, London), and freshly excised tonsils were obtained from children who had undergone tonsillectomy (ENT Clinic, St. Mary's Hospital, London). Six-micron tissue sections cut from liquid-nitrogen-frozen blocks were air dried for 24 hr, fixed in acetone (BDH, Poole, UK) for 10 min, and either used immediately or stored at −20°C until use.

Thymocytes, tonsillar lymphocytes, and PBMC. Fresh thymus or tonsil tissue was thoroughly teased to release the lymphocytes, samples were allowed to sediment for 10 min, and the lymphocyte-containing supernatant was collected and washed three times in HEPES (20 mM)-buffered RPMI 1640 (ICN Flow, Thame, UK). The recovered cell suspension was then centrifuged over Ficoll-Hypaque (688 g, 20 min; Pharmacia Biotech, St. Alban's, UK).
Mononuclear cells were collected from the interface and washed twice in HEPES/RPMI. Medium containing 50 µg/ml gentamycin was used for tonsils. Peripheral blood from healthy donors was collected into heparinized 30-ml universal containers (Bibby Sterilin, Stone, UK) containing 100 U of preservative-free heparin (Leo Laboratories, Princes Risborough, UK). PBMC were isolated by density-gradient centrifugation using Ficoll-Hypaque as described before.

Fresh stromal cells and human thymic epithelial cell cultures. The remaining thymic stroma (sedimenting fragments) was processed by digestion in collagenase, and epithelial cells were further isolated and cultured as described by Freysdóttir and Ritter (1998). Cells were cultured for 8 days in medium only, IFN-γ (100 U/ml), or IL-4 (25 U/ml) (both Genzyme, West Malling, UK), harvested on days 0, 5, and 8, and analyzed by flow cytometry as described in what follows.

Source of Antibodies

K21 is an IgG1 murine mAb, reactive with Hassall's corpuscles of the human thymic epithelium, raised following i.p. immunization of (CBA/BALB/c)F1 mice with synthetic peptides (MKVLQXPTCVSDY, KPSXHVKPR, SXNDPADFRI, WSXWS, and FVSVGPTYMRVS, coupled to KLH) based on the published cDNA sequence of the 140-kD chain of the IL-4R (Galizzi et al., 1990), and fusion of immune spleen cells with NSO myeloma cells (Kearney et al., 1979). Immunization, fusion, HAT selection, and cloning were carried out as described previously (DeMaagd et al., 1985; Ladyman and Ritter, 1995). Screening was carried out by ELISA using immobilized individual peptides and by indirect immunoperoxidase staining of frozen tissue sections of human thymus. All antibodies were titrated before use.

Immunohistochemistry

Single indirect immunohistochemical staining of human thymus and tonsil sections. Tissue sections were incubated with primary antibody for 1 hr at room temperature, washed in PBS, and then incubated for a further hour with either FITC- or HRP-conjugated rabbit anti-mouse Ig at 1/20 in PBS containing 5% normal human serum (NHS). For immunofluorescence, sections were mounted in AF2 mountant (Citifluor, Canterbury, UK). For enzyme staining, sections were developed for 10 min using 6 mg diaminobenzidine (DAB; Sigma, Poole, UK) and 5 µl of 30% H2O2 (Sigma) per 10 ml PBS, counterstained with hematoxylin, and mounted in Kaiser's gelatin-based mountant. No staining was seen when the primary antibody was either omitted or replaced by an irrelevant isotype-matched control.

Double immunoenzyme staining of human tonsil sections. Sections were incubated for 45 min with one set of primary antibodies, followed by HRP-conjugated secondary antibody (45 min), developed as described before, and then incubated for an additional 45 min with the second set of primary antibodies and AP-conjugated rabbit anti-mouse Ig (45 min). AP labeling was developed over 20 min using 0.1% fast blue BB salt with 0.02% naphthol-AS-MX phosphate, 0.2% levamisole (all Sigma), and 2% dimethyl formamide (BDH). All incubations were carried out at room temperature and all washes were with Tris-buffered saline (TBS), pH 7.6. No double staining was seen when either the first or the second primary mAb was omitted or replaced by the isotype-matched control.
Biochemical Analysis of K21-Ag

For Western blotting analysis, 30 sections of frozen human tonsil tissue were cut on a cryostat at a thickness of 10 µm and lysed for 10 min in 1 ml of ice-cold lysis buffer (10 mM Tris/HCl, pH 7.4, containing 150 mM NaCl, 0.5% NP-40, and 1 mM PMSF; all Sigma), and then centrifuged in a 1.5-ml plastic tube (Eppendorf, Hamburg) at 13,000 g for 4 min at 4°C. The cell-free supernatant was mixed with an equal volume of either reducing (10% 2-ME or 300 mM DTT) or nonreducing double-strength Laemmli sample buffer, subjected to SDS-10% PAGE (Laemmli, 1970), and followed by electrophoretic transfer (300 mA for 1 to 1.5 hr; Towbin et al., 1979) onto nylon membranes (Millipore, Bedford, MA). Unoccupied charged sites on the membrane were blocked by immersion in 2.5% skimmed milk powder (Cadbury, Stafford, UK) in PBS for 3 to 24 hr. Membranes were cut into vertical strips, which were incubated for 2 hr with either mAb K21 (1/3), anti-CD23, anti-CD45, anti-CD4 (all 1/10), mAb MR6 (1/3), or an isotype-matched negative control mAb (H17E2; Travers and Bodmer, 1984) at ~10 µg/ml in 0.5% skimmed milk powder in PBS (this buffer was used for subsequent Ab dilutions and washing of strips). Following washing for 15 min in three changes of washing buffer, strips were incubated for 1 hr in a 1/100 dilution of HRP-conjugated rabbit anti-mouse Ig. Strips were then washed and peroxidase activity developed by incubation in 0.6 mg/ml DAB in 10 ml PBS containing 5 µl H2O2.

Indirect Immunofluorescence and Flow Cytometric Analysis: Surface and Internal Staining

For analysis by flow cytometry, 1 × 10^6 cells of each group were incubated with 100 µl (neat supernatant or a 1/10 dilution of Dako mAbs) of the indicated mAb for 1 hr on ice in polystyrene round-bottomed tubes (Falcon, Becton-Dickinson, Lincoln Park, NJ) (Larché et al., 1988). After washing twice in medium, the cells were incubated for an additional hour on ice with 100 µl of a 1/20 dilution of FITC-conjugated rabbit anti-mouse Ig. After three additional washes, stained cells were analyzed using an EPICS XL-MCL flow cytofluorimeter with log amplifier (Coulter, USA). For internal staining, cells were treated with 0.1% saponin in PBS, and this buffer was used in all subsequent washes.

Tonsillar Cell Separation and Mitogen Stimulation

T and B cells were separated by E-rosetting and nylon wool, respectively. For E-rosetting, cells were prepared by mixing tonsillar mononuclear cells with sheep red blood cells (SRBC; TCS, Buckingham, UK; modified from Callard et al., 1987). For nylon wool separation, tonsillar mononuclear cells were resuspended in 2 ml of RPMI medium and loaded onto a prewetted nylon wool (Biotest, Dreieich, Germany) column (0.2 g/2 ml column). The columns were then sealed and incubated at 37°C for 30 min. T cells were eluted by washing the columns twice with 10 ml of warm (37°C) medium, while B cells were collected by applying 10 ml of ice-cold medium and agitating the wool vigorously. A sample of each of these cell preparations was stained by indirect immunofluorescence using both pan-B- and pan-T-cell markers (CD20, CD19, CD22 and CD3, CD5, CD7, respectively) to gauge their purity (by flow cytometry as described before). Tonsillar E-rosetted cells were routinely 80-90% (CD3+) T cells and <10% (CD20+) B cells. Tonsillar nylon-wool-bound cells were routinely 90-95% (CD20+) B cells and <5% (CD3+) T cells. In all experiments, 1% to 3% of the cells were monocytes (CD14+, CD68+).
Separated B cells were cultured in both the presence and absence of lipopolysaccharide (LPS; 10 µg/ml; Sigma) for 24 hr, harvested, and analyzed by flow cytometry as described before.
4,313.4
1996-01-01T00:00:00.000
[ "Biology" ]
Aortic Valve Reconstruction with Ozaki Technique Modern bioprostheses offer a complete and definitive solution to elderly patients who need aortic valve surgery. Nonetheless, the scenario is more demanding when dealing with younger and less fragile patients. In this setting, any prosthetic aortic valve replacement can provide only a suboptimal solution, and its related issues have not been fixed yet. The answer to the needs of this special population is the enhancement and refinement of the surgical technique. The Ozaki technique relies on custom-tailored autologous aortic cusps individually sutured in the aortic position. This approach has shown optimal results when performed after a dedicated training period. INTRODUCTION The treatment of aortic valve diseases is one of the oldest surgical challenges cardiac surgery still must face. Since the implantation of the first Starr-Edwards caged-ball prosthesis in 1960, the evolution and progress of the construction and design of prosthetic valves led first to the biological revolution and, currently, to the transcatheter era. If modern biological solutions offer a complete and definitive path to those elderly patients who need aortic valve surgery, the scenario is more demanding when we deal with younger and less fragile patients. The durability of both surgical bioprostheses and percutaneous valves is a well-known issue in this population, and the burden of either reintervention or patient-prosthesis mismatch following a percutaneous valve-in-valve procedure must be considered when a patient aged 50 years or younger suffers from severe aortic valve disease. In this scenario, a prosthetic aortic valve replacement can provide only a suboptimal solution, and industrial technology has not yet fixed its related issues. The answer to the needs of this special population is the enhancement and refinement of the surgical technique. To date, only two procedures can avoid the drawbacks of long-term anticoagulation or the burden of one or multiple reinterventions, namely the Ross operation and aortic valve neocuspidization (AVNeo, with the Ozaki technique). Both require expert hands, appropriate training, and optimization of the surgical technique. The use of autologous pericardium in cardiac surgery started in the 1960s, when Bjoerk and Hultquist first implanted autologous pericardial leaflets. Subsequently, in 1986, Love et al. reported the immersion of autologous pericardium in 0.6% glutaraldehyde for 10 minutes to eliminate the problems related to its scarring. After that report, Al Halees published a series of aortic valve replacements performed with treated autologous pericardium. TECHNIQUE The Ozaki technique for aortic valve reconstruction [1,2] is based on the independent replacement of the three aortic valve cusps with tailored autologous pericardial neocusps. The preparation and tailoring of the patient's pericardium is therefore one of the cornerstones of this procedure, and it is the first aspect we focus on in the following description. In order to obtain enough tissue for the three cusps, a patch of at least 7×7 cm of pericardium, cleaned of fat and redundant tissue on the outer surface, needs to be excised. However, fresh autologous pericardium can present some relevant problems. Its elastic and twisty properties make handling the patch potentially difficult. In addition, when left untreated, it exhibits a high propensity to develop fibrosis and calcification. Hence, to address this pitfall, the autologous patch is currently and routinely treated with a 0.6% glutaraldehyde solution for 10 minutes and then rinsed three times with saline solution for a total of 20 minutes. Once the aortic valve is exposed and the diseased cusps are excised, an extensive and accurate annular decalcification is crucial. It is paramount to achieve a precise measurement of the cusps after this step. A dedicated Ozaki sizer (Figure 1) is used to measure the distance between each commissure. The autologous pericardium is then tailored with the original Ozaki template (Figure 2). During the trimming procedure, careful attention should be paid to using the thinner part of the pericardial patch for the reconstruction of the smallest cusp, in order to improve its mobility, while the thicker part should be reserved for the biggest, so that tolerance to diastolic stress is ensured. The final part of the reconstruction involves suturing the neocusp to the aortic annulus (Figure 3), usually using a single running 4-0 monofilament stitch. The suture starts at the nadir of the annulus, where the monofilament is tied down. The ends of the suture are then used to advance the reconstruction bidirectionally towards the commissures. It is important during this step to place the cusp with its inner side facing the left ventricular surface. Another critical aspect of the reconstruction at this point is the correct spacing of the bites: the distance between each bite on the autologous pericardium must be regular and fixed, while on the aortic annulus it must differ between the nadir and the commissural zone. Close to the nadir, the distance between the bites on the aortic annulus is shorter than the distance on the cusp, with a 3:1 ratio (Figure 3). Moving towards the commissure, this discrepancy normalizes, with a perfect correspondence between the bites of the running suture at the level of the coaptation zone. In this area, the lateral margin of each cusp is sewn with a small plication of the inner pericardium facing the aortic wall. This fashion warrants the cusp's maximized resistance to stress and, at the same time, optimizes coaptation by drawing the pericardium towards the other cusps. An additional commissuroplasty 4-0 monofilament stitch is placed through the neocusps, above the last bite of the running suture, and then laterally on the edge of the cusp plication, with the aim of securing the coaptation point and fitting the plicated cusp to the aortic wall. Both the running annular suture and the commissural stitch are tied on the outer side of the aorta, with the interposition of a felted pledget (Figure 4). After the resuspension of the three neocusps, a visual check, enhanced with negative pressure applied through the left ventricular vent, is needed to evaluate the degree of coaptation. DISCUSSION The reconstruction of the aortic cusps based on the technique described by Ozaki et al. [1,2] is highly reproducible and shows some peculiar and unique features. Since the pericardium is sewn directly to the aortic annulus, this operation offers a maximized valve orifice area with physiologic transvalvular gradients. At the same time, the absence of a stented frame preserves the aortic root's physiological ability to expand its diameter during systolic ejection, thus maintaining the natural coordination between left ventricle, aortic annulus, sinuses of Valsalva, and ascending aorta [3]. In addition, resuspending the three neocusps up to the level of the sinotubular junction provides an extensive coaptation area during diastole. The extent of this zone, also known as the effective height, is one of the most powerful predictors of long-term valve competence, as already suggested in the literature [4,5]. The large coaptation obtained with the AVNeo operation, combined with the features described above, is indeed one of the reasons for the excellent mid- and long-term results demonstrated in the series by Ozaki et al. and in the literature [6], which are the basis of the hypothetical prolonged durability of this surgical reconstruction. Contrary to what might be thought, this technique is also feasible when managing bicuspid or unicuspid aortic valves, providing their "tricuspidalization". Tricuspidalization of a bicuspid or unicuspid valve with the Ozaki technique restores the typical orientation of the cusps of a normal valve and achieves an appropriate length of the free margin of the leaflets, thus permitting full opening while maintaining the normal valve shape. In a unicuspid valve, the total length of the free edge of the leaflets is significantly shorter than in a tricuspid valve. Furthermore, aortic valve reconstruction can be adopted not only for aortic regurgitation but also in cases of valve stenosis, infective endocarditis, and reoperation after bioprosthesis deterioration. CONCLUSION Surgical valve replacement represents the gold standard for treating aortic valve diseases, but the landscape of possibilities in this field is growing. The need to improve and enhance the performance of the surgical offer is pressing, particularly for those patients who would otherwise be committed to a suboptimal treatment. Different drawbacks can be acknowledged in both mechanical and bioprosthetic solutions. Despite the need for lifelong anticoagulation, a mechanical prosthesis is the recommended option for patients under 60 years, even if thromboembolic events, device malfunction, and spontaneous bleeding in the late decades are considerable disadvantages that usually concern and blur the patient's choice. Similarly, concerns are driven by the limited durability of bioprostheses in young patients. Indeed, small aortic annuli, young age at surgery, and the pediatric population represent a cluster for which durable and tested solutions are lacking. In a recent work by Del Nido et al. [6], the Ozaki procedure also demonstrated promising results in children. In particular, in patients with small annuli undergoing aortic root enlargement and valve reconstruction, the native annuli continued to grow appropriately and remained free from subsequent aortic stenosis. Thus, the refinement of the AVNeo operation could represent a more suitable yet versatile option. In addition, it is also our belief that this procedure can be mastered after a relatively short training period. No financial support. No conflict of interest.
2,165.8
2021-07-07T00:00:00.000
[ "Medicine", "Engineering" ]
Interfacial Charge Transfer Influences Thin-Film Polymorphism The structure and chemical composition are the key parameters influencing the properties of organic thin films deposited on inorganic substrates. Such films often display structures that substantially differ from the bulk, and the substrate has a relevant influence on their polymorphism. In this work, we illuminate the role of the substrate by studying its influence on para-benzoquinone on two different substrates, Ag(111) and graphene. We employ a combination of first-principles calculations and machine learning to identify the energetically most favorable structures on both substrates and study their electronic properties. Our results indicate that for the first layer, similar structures are favorable for both substrates. For the second layer, we find two significantly different structures. Interestingly, graphene favors the one with less, while Ag favors the one with more electronic coupling. We explain this switch in stability as an effect of the different charge transfer on the two substrates. ■ INTRODUCTION Organic thin films are materials of increasing interest, mainly by virtue of their application to the field of organic electronics. In comparison to inorganic alternatives, they present advantages such as mechanical flexibility and low cost. With a thickness ranging from less than a nanometer up to a few micrometers, organic thin films are commonly employed in the construction of organic field-effect transistors (OFETs), 1,2 organic light-emitting diodes (OLEDs), 3 and organic solar cells. 4 Of particular interest are films composed of molecules that form ordered structures with relatively high charge carrier mobilities. 5−7 In fact, the properties of molecular materials, and especially their charge carrier mobilities, depend drastically on the polymorph they assume, i.e., the relative arrangement of individual molecules in the thin film. 8,9 Which polymorph a thin film forms depends not only on the fabrication conditions; 10 the nature of the substrate on which it grows also has a decisive impact. Because the substrate interacts with molecules in the first layer and because it changes the way molecules interact with each other (e.g., because they become charged), the second and subsequent layers can either assume the same structure as the first, 11−13 assume a bulk structure, 14 or form a completely different structure altogether. 15 The decisive role of the substrate is highlighted by reports where even the same molecule forms different structures on different substrates. 16−18 In this work, we shine light on the role of the substrate and tackle the question whether, and why, some substrates are more likely than others to induce polymorphs which are beneficial for organic electronics. To this end, we use a combination of machine learning and first-principles calculations to investigate the structure of thin films of para-benzoquinone adsorbed on Ag(111) and on graphene. ■ COMPUTATIONAL METHODS To simulate the electronic structure of our systems, we performed density functional theory (DFT) calculations using the FHI-aims package, 72 with the Perdew−Burke−Ernzerhof (PBE) 73 exchange−correlation functional and the TS surf correction 74,75 for long-range dispersion interactions. The repeated slab approach was employed, using a dipole correction 76 to electrostatically decouple the periodic replicas in the z direction.
Default "tight" basis sets were used for all chemical elements except Ag, for which a mixed quality numerical basis set (see ref 29 for details) was employed. A unit cell height of >80 Å was selected. Predicting the structure of thin films is far from trivial. To date, a variety of specialized algorithms are available, which predict the structures of molecular crystals, 33−41 their surfaces, 42 and single molecules adsorbing on a surface 43−47 or monolayers of molecules adsorbed on a substrate. 48−53 Here, we use an extended version of the SAMPLE approach, which is specifically designed for inorganic/organic interfaces. 51 When applying the SAMPLE approach, one starts with finding the local adsorption geometries that an isolated molecule could adopt on a surface. These structures act as building blocks for the subsequent structure search. To find all of these single-molecule local adsorption geometries on the surface, a three-step procedure is followed. First, a single molecule is relaxed at an arbitrary position on top of the substrate. Second, a Gaussian process regression tool equivalent to the BOSS approach 46 is used to find all stationary points in the potential energy surface (PES) along three dimensions (translations along X and Y, rotation of the molecule around the axis perpendicular to the surface). Starting from these points, the adsorbate molecules are fully relaxed with the BFGS algorithm until the remaining forces on the atoms are below a threshold of 0.01 eV/Å. During this process, all substrate atoms are kept fixed. These optimized geometries later serve as building blocks. As a second step in the SAMPLE approach, polymorph candidateswith numbers ranging in the millionsare built by assembling all possible combinations of the just obtained single-molecule building blocks in a variety of unit cells. A small subset of these polymorphs is then evaluated with DFT as described above. The resulting energies are used to train an energy model utilizing Bayesian linear regression. The trained energy model allows to predict the energies of all remaining polymorphs with a level of accuracy similar to the underlying electronic structure method. A more detailed explanation of the SAMPLE procedure is given in ref 51. For Ag(111), the geometry of the first layer of benzoquinone was taken from an earlier work. 29 The SAMPLE approach was applied to predict the structure of the second layer. As a substrate for the adsorption of the second layer, we used a geometry that includes Ag atoms and the first layer of benzoquinone. This geometry is shown in Figure 1b. To obtain the single-molecule building blocks, all calculations were executed on a 2 × 2 substrate cell, integrating in k-space on a grid of 3 × 3 points per primitive lattice direction and 1 kpoint in the Z direction. To reduce the computational cost of running geometry optimizations with these systems, the search for local adsorption geometries was conducted on a gas-phase monolayer substrate, in which Ag atoms were removed. The adsorption energy of the adsorption geometries was evaluated, reintroducing the metal atoms for a single-point calculation. The full geometry, with metal atoms, was also used for all of the following stages of the structure search, which entailed working with polymorph candidates generated by assembling the building blocks. At this stage of the work, given the necessity to work with a wide variety of unit cells, the k-space integration was conducted on generalized Monkhorst−Pack grids. 
The grids were built with a sampling density of 11.74 Å, and this same value was used for all other systems as well. Among all polymorphs, a set of 250 was selected, employing the D-optimality criterion. 77 Of these, 200 were randomly selected to compose a training set, while the remaining 50 were used as a test set. In addition, 961 "free-standing" calculations (i.e., polymorph candidates where the metal atoms and first-layer molecules were removed) were used to calculate priors for all intralayer interaction energies. After training with the conditions described before, SAMPLE predicted the adsorption energies of the test set with a root mean square error (RMSE) of 9 meV/nm². Leave-one-out cross-validation (LOOCV) 79 was also applied on the training set and gave an RMSE of 13 meV/nm². We consider an accuracy of approximately 25 meV/molecule at room temperature (i.e., k_B T), which, for the coverages found in our study, corresponds to about 60 meV/nm², as an appropriate threshold to not miss important configurations. For the graphene system, an analogous structure search procedure was applied separately to both the first and the second layers of benzoquinone. For the first layer, the search for local adsorption geometries was conducted on a 5 × 5 graphene cell. This cell size ensured that the interaction of the benzoquinone molecule with its periodic replicas was negligible. At this stage, the k-space integration was conducted on a grid of 6 × 6 points per primitive lattice direction and 1 k-point in the z direction. For the SAMPLE prediction, 100 calculations were used, 60 as a training set and 40 as a test set, together with 1000 "free-standing" calculations for the intralayer prior. At this stage, the k-space integration was conducted on generalized Monkhorst−Pack grids. 78 The prediction resulted in an RMSE of 8 meV/nm² on the test set and a LOOCV-RMSE of 16 meV/nm². For the second layer, the structure shown in Figure 1a was set as a substrate primitive unit cell. The search for local adsorption geometries was conducted on a 2 × 2 substrate cell, and the k-space integration was conducted on a grid of 6 × 6 points. For the SAMPLE prediction, 250 calculations were used, 200 as a training set and 50 as a test set, together with 997 calculations on hypothetical, gas-phase layers for interaction priors. At this stage, the k-space integration was conducted on generalized Monkhorst−Pack grids. 78 The prediction resulted in an RMSE of 24 meV/nm² on the test set and a LOOCV-RMSE of 47 meV/nm². Further details about the results of the structure search procedure can be found in the Supporting Information to this paper. To compare the effect of the two different substrates on the electronic structure of the first adsorbate layer, we performed calculations of the adsorption-induced charge rearrangement Δρ, which is defined as $\Delta\rho = \rho_{\mathrm{system}} - \rho_{\mathrm{sub}} - \rho_{\mathrm{monolayer}}$, where ρ_system, ρ_sub, and ρ_monolayer are the charge densities of the combined system, of the substrate, and of the isolated benzoquinone monolayer, respectively. From this quantity, we can derive an estimate of the net charge transfer from below the substrate to above the substrate by estimating the maximum value of $Q_{\mathrm{bond}}(z) = \int_{-\infty}^{z} \overline{\Delta\rho}(z')\,\mathrm{d}z'$, where the overline denotes Δρ integrated over the xy plane. To compare the energetics of different molecular arrangements, we obtained the pure electrostatic interaction between molecules (see Figure 5b) by applying an energy decomposition scheme. This decomposition scheme combines the electron densities of the isolated fragments to calculate the classical electrostatic energy.
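As a concrete illustration of how max(Q_bond) could be extracted from such a profile, the following minimal sketch integrates a plane-averaged Δρ(z) cumulatively along z; the grid and the toy Gaussian profile are hypothetical placeholders, not the DFT densities used above.

```python
import numpy as np

# Hypothetical plane-averaged charge rearrangement drho(z) in e/Angstrom,
# sampled on a uniform z grid (z = 0 taken as the substrate surface).
z = np.linspace(-10.0, 20.0, 3001)
drho = np.exp(-(z - 2.6) ** 2) - np.exp(-(z + 1.0) ** 2)  # toy accumulation/depletion

# Cumulative integral from below the slab: Q_bond(z) = int_{-inf}^{z} drho(z') dz'
dz = z[1] - z[0]
q_bond = np.cumsum(drho) * dz

# The net charge transferred across the surface is estimated as the
# extremal (largest-magnitude) value of Q_bond(z).
i_ext = np.argmax(np.abs(q_bond))
print(f"max |Q_bond| = {q_bond[i_ext]:+.3f} e at z = {z[i_ext]:.2f} A")
```

For the real systems, drho would be built from the three DFT charge densities defined above rather than the toy Gaussians used here.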
The electrostatic decomposition calculations 65,67−69 were performed with a code designed for periodic systems (see ref 65 for details). To emulate a cluster system, the molecules were placed in a 25 × 25 × 50 Å unit cell. Additional charge was added with a layer of point charges, analogously to the CREST method. 80 To obtain an estimate of the charge carrier mobility for different systems, we performed calculations of electronic coupling terms between molecules and layers with the Löwdin-orthogonalized 81 second version of the projection-operator diabatization method, POD2L, 71 which was recently demonstrated to yield very accurate results for organic molecules. 62 For these calculations, FHI-aims default light basis sets were used in place of the tight basis sets, as the former were found to be more numerically stable under the required block-diagonalization scheme. In the case of molecular dimers, the coupling between the lowest unoccupied molecular orbitals (LUMOs) of the two molecules was calculated. In the case of bilayers on graphene, in which a unit cell of each layer contains one molecule, the coupling between the LUMOs of the two isolated monolayers at the Γ point was calculated. In the case of bilayers on Ag(111), a unit cell of each layer contains two molecules; as a consequence, for each layer, the molecular LUMOs combine to form two orbitals, the LUMO and LUMO+1 of the isolated monolayer, which are almost perfectly degenerate in energy. Interlayer couplings were computed by calculating the couplings between all 4 possible combinations of orbitals (LUMO−LUMO, LUMO−(LUMO+1), (LUMO+1)−LUMO, (LUMO+1)−(LUMO+1)), summing the 4 values, and dividing by 2 to obtain per-molecule values directly comparable to those on graphene. ■ RESULTS AND DISCUSSION Both Ag and graphene are sensible electrode materials in organic electronics. 19−21 At the same time, they show fundamentally different interactions with organic molecules: Ag is a weakly reactive substrate, which readily undergoes charge-transfer reactions and can form weak covalent bonds with organic adsorbates. 22−29 Conversely, graphene hardly forms covalent bonds at all. Benzoquinone was chosen as a model molecule due to its small size (reducing the computational cost) combined with its π-conjugation and carbonyl functionalization. As we have previously shown, the intermolecular interactions of this molecule are qualitatively similar to those of technologically more relevant, larger analogues like 5,12-pentacenequinone. 29−32 Before considering thin-film growth, it is necessary to look at the structure that the first layer of benzoquinone forms on the two substrates. For Ag(111), the polymorph candidates have been obtained in an earlier work, 29 while for graphene, a structure search is performed anew through the SAMPLE approach (see Computational Methods for details about the approach). The best polymorph in the SAMPLE ranking on graphene has one molecule per unit cell and is presented in Figure 1a. In this geometry, molecules adsorb at a height of approximately 3.3 Å and remain almost perfectly flat. For benzoquinone on Ag(111), we find a comparable structure among the energetically best polymorph candidates (details in the Supporting Information). This configuration is shown in Figure 1b. Its unit cell contains two molecules, placed on a top site and on a bridge site of the metal surface. The molecules adsorb at a height of about 2.6 Å and are slightly bent, with the oxygen closer to the metal substrate than the carbon backbone.
The two geometries appear strikingly similar, and in fact, an equivalent cell of the graphene monolayer, with twice the area, is virtually identical (deviations lower than 0.01 Å) to the cell of the monolayer on Ag(111) (dashed green cell and purple cell in Figure 1b). The fact that the first layer on both substrates shows equivalent lattice parameters and molecular alignment means that any subsequent layers will be subjected to identical stress and to equivalent templating effects from the first layer. In other words, we can expect that any differences in the energetics and structure of the second layer stem directly from the (electronic) influence of the substrate. As a first step in describing thin films, we study the second molecular layer, and we invoke two assumptions. First, we assume that the geometry of the first layer only undergoes minor changes when the additional material is deposited; in particular, the unit cell remains fixed. We note that, in practice, this is not always the case, as in some systems the first layer reorients to form a more tightly packed layer. 54−56 However, predicting such reorientations is beyond the scope of the present work. Second, we assume Frank−van der Merwe growth, i.e., each layer does not start forming until the previous layer is full. This assumption is reasonable here because benzoquinone shows strongly attractive intermolecular interactions. Together, these two assumptions allow us to use the SAMPLE approach. For this, we employ the monolayer geometries of benzoquinone (plus metal/graphene) as effective substrate unit cells and search for and combine the local adsorption geometries in the second layer. To obtain accurate energies, after the ranking of the polymorph candidates by SAMPLE, we perform full geometry optimizations for the 10 best structures to allow the molecules in the second layer to assume more favorable orientations toward the first layer. For these optimizations, molecules in the first layer were also allowed to relax; the top 2 layers of Ag were kept free, allowing the surface to partially reconstruct, while the bottom 6 layers were kept fixed; all graphene atoms were kept fixed, since initial tests showed that the graphene substrate relaxed by less than 0.01 Å/atom (with an energy variation of less than 10 meV/nm²). For Ag, the five energetically best bilayer structures are shown in Figure 2a. The ranking is performed according to energy per area, the most sensible measure for the stability of close-packed adsorbate polymorphs. 57 In the energetically most favorable structure, the benzoquinone molecules in the first and the second layer are partly on top of each other, with one (negatively charged) oxygen of one molecule always aligned with the center (i.e., the least negative region) of the ring of a molecule in the other layer. We refer to this alignment, which is shown in Figure 2a by red molecules in the top layer, as molecule-on-molecule (MoM) hereafter. The second-best geometry is already 50 meV/nm² worse in energy. In this geometry, the molecules in the second layer are located above "gaps" of the first layer (marked in orange in Figure 2a). Only the carbonyl groups of the first and the second layer are on top of each other, with oppositely directed dipoles, presumably leading to electrostatic attraction. To distinguish this alignment from the others, we refer to it as molecule-on-gap (MoG) hereafter.
The energetically next-higher lying structures are combinations of MoM and MoG, variations thereof, and structures with lower coverages. On graphene, we also find the MoM and the MoG geometry as energetically favorable structures. However, in salient contrast to the situation on Ag, here the MoG structure is energetically more beneficial than MoM by 20 meV/nm². Only two structures are found that are energetically even better than MoG and MoM. Both of these structures are noticeably more complex than MoG and MoM, featuring five adsorbates per unit cell and several adsorption positions similar to MoM and MoG. For the sake of conciseness and clarity, we will focus the following discussion on the MoM and the MoG structures only. A brief discussion of structures 1 and 2 can be found in the Supporting Information. Since the charge carrier mobility (or, more precisely, the electronic coupling) of a crystal depends on the wave function overlap, 8,58,59 already a visual inspection of the MoM and MoG geometries lets us expect that this property will be very different for the two geometries. The fact that the ordering of the two polymorphs reverses depending on the substrate therefore deserves further scrutiny, and we should attempt to explain the reasons for this switch and its consequences on interlayer electronic coupling. When considering only the second layer, on each substrate, the MoM and MoG polymorphs exhibit the same unit cell vectors and very similar geometries, differing mostly by a translation relative to the first benzoquinone layer. Thus, we expect the switch in the energetic ordering to be caused by a variation in the interlayer interactions between the first and the second layer. To verify that the switch in stability is caused directly by the different substrates, and not by the small geometric differences in the first layer, we examine the variation in adsorption energy that occurs if we keep the geometry of the first layer fixed but remove all graphene or Ag atoms (Figure 3). We find that for the case of graphene, MoM and MoG experience destabilizations that are moderate and fundamentally equivalent, i.e., a graphene substrate does not notably affect the energetic ordering. For Ag(111), when removing the substrate, MoM becomes energetically destabilized with respect to the MoG geometry. This indicates a stronger influence of the substrate on the MoM structure compared to MoG. We can thus conclude that the Ag substrate massively changes the way the first and the second layer interact with each other. Specifically, we find that the Ag substrate significantly stabilizes the MoM geometry, explaining why it is favored on Ag but not on graphene. We now need to ask which underlying mechanism stabilizes the MoM geometries. We can trace the effect back to the charge rearrangements resulting from the contact between the substrate and a molecular layer. To illustrate this, we calculated the adsorption-induced charge rearrangements Δρ and the net charge transfer max(Q_bond) for the benzoquinone monolayers on Ag and on graphene (for details, see Computational Methods). The profiles of Δρ and Q_bond along the z axis are shown in Figure 4a and lead to a value of max(Q_bond) of −0.249 e for benzoquinone on Ag(111) and −0.031 e for benzoquinone on graphene. In other words, for Ag, over the area occupied by one benzoquinone molecule, a charge corresponding to 0.25 electrons is transferred from below the substrate surface to above it.
Conversely, graphene is practically inert, and the electron transfer is negligible. Furthermore, by conducting a comparative molecular-orbital projected density of states (MODOS) analysis, 60,61 detailed in Figure 4b, we find that the LUMO and LUMO+1 of the benzoquinone monolayer (corresponding to the LUMOs of the two benzoquinone molecules in the unit cell) fall largely below the Fermi energy for Ag(111) but remain above it for graphene. As a consequence, the LUMO of the benzoquinone layer gets filled in the case of Ag, reaching an occupation of 1.25 electrons, while in the case of graphene, it remains substantially empty at an occupation of 0.05 electrons. This different charge transfer directly impacts the interaction with the second molecular layer. To analyze the effect of extra charge on interlayer interactions, we use a simple dimer model composed of two stacked benzoquinone molecules. The two molecules are arranged at a distance of 3 Å along the z direction, which is a reasonable approximation of the interlayer distances for our systems. They are then shifted with respect to one another along the long molecular axis. The shifting starts from a position of congruence in x−y coordinates and includes positions corresponding to both the MoM and MoG offsets. For each position, the electronic energy of the system (i.e., the total energy without van der Waals contributions) is evaluated together with the coupling between the LUMOs of the two molecules (Figure 5a), obtained with the methodology described in ref 62. The suitability of this model for describing the interactions of the full monolayers is discussed in the Supporting Information. It has been observed that, in analogous cases, one can find an inverse correlation between stability and the highest occupied molecular orbital (HOMO)−HOMO coupling, as a consequence of Pauli repulsion. 63−65 In our case, as we are interested in the response of the system to the introduction of additional electronic charge, we focus on the coupling between LUMOs. For the neutral system (shown in purple), there is no correlation between the coupling and the energy. This also would not be expected, since the orbitals are completely empty. Rather, the energy of the system decreases systematically as the molecules are shifted away from each other. This can be attributed to a reduction in Pauli pushback, as the wave functions no longer overlap. The situation changes notably when additional charge is introduced. As can be seen, particularly for larger charges, the energy profile now shows an inverse correlation with the LUMO−LUMO coupling, i.e., situations with a large coupling are energetically more favorable than those with a small coupling. The MoM geometry has a significantly larger coupling than the MoG geometry (although both are local maxima) and is therefore more stabilized (up until a charge of two electrons, see below). This is in accordance with what we have observed in the behavior of the configurations in Figure 3. (Figure 3 compares the energies computed on the full substrates with those on gas-phase monolayers having the same geometry as the adsorbed monolayers but with no substrate atoms; the energies are given relative to the value of the most stable geometry for each full-substrate system.) This behavior can be readily rationalized by valence-bond theory. When two identical molecules come in contact, their LUMOs (originally at the same energy) will hybridize and form a bonding and an antibonding linear combination.
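This two-level picture, which the paragraph below spells out, can be made concrete with a short sketch; the orbital energy and coupling values used here are illustrative placeholders, not results from the calculations above.

```python
import numpy as np

def dimer_energy_gain(e0: float, t: float, n_extra: float) -> float:
    """Energy gain (eV) from LUMO-LUMO hybridization in a molecular dimer.

    Two degenerate LUMOs at energy e0, coupled by t, split into a bonding
    level (e0 - |t|) and an antibonding level (e0 + |t|). Extra electrons
    fill the bonding level first (up to 2), then the antibonding one.
    """
    bonding, antibonding = e0 - abs(t), e0 + abs(t)
    n_b = min(n_extra, 2.0)          # electrons in the bonding combination
    n_a = max(n_extra - 2.0, 0.0)    # electrons in the antibonding combination
    # Gain relative to placing all extra electrons at the uncoupled level e0
    return n_extra * e0 - (n_b * bonding + n_a * antibonding)

# Illustrative couplings: a large-overlap (MoM-like) vs a small-overlap (MoG-like) stacking
for charge in (0.0, 1.0, 2.0, 3.0):
    gain_big = dimer_energy_gain(e0=-3.0, t=0.20, n_extra=charge)
    gain_small = dimer_energy_gain(e0=-3.0, t=0.05, n_extra=charge)
    print(f"q={charge:.0f}e  gain(t=0.20)={gain_big:+.2f} eV  gain(t=0.05)={gain_small:+.2f} eV")
```

The toy model reproduces the trend discussed next: the stabilization grows with the coupling while fewer than two extra electrons occupy only the bonding combination, and it shrinks again once the antibonding combination starts to fill.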
The splitting depends on the orbital coupling, 66 i.e., the bonding combination is more strongly bonding the larger the coupling is. If the system is neutral, this has no effect on the total energy. However, when electrons are introduced, they will first occupy the bonding linear combination. As long as there are fewer than two additional electrons per dimer, only the bonding one will be occupied, resulting in a net energy gain that is larger the larger the coupling is. Conversely, when more than two electrons are introduced, the effect reverses. This tendency is confirmed by Figure 5b, where the variation of the MoM−MoG energy differences is plotted as a function of charge. One can observe that MoM is favored when increasing the charge between 0 and 1 electrons but is disfavored when increasing the charge between 2 and 3 electrons. For each value of additional charge, a term describing the pure electrostatic interaction between layers has been calculated. One can see that this electrostatic term disfavors MoM for all values of additional charge, proving that the stabilization of MoM in the 0−1 electron range is caused by the previously discussed orbital hybridization, and not by purely electrostatic effects. In other words, we have demonstrated that the charging of the first layer on Ag(111) is the main factor governing the preferability of MoM compared to MoG, because the additional charge in the first layer directly benefits geometries with a large LUMO−LUMO overlap. This provides a simple and solid explanation of why the two arrangements present different stabilities on the two substrates. In addition, it provides an important hint toward the consequences of this stability switch: it is known that charge carrier mobility, within the model of the hopping regime, is fundamentally influenced by the coupling between the origin and destination orbitals. 59 Generally, our results indicate that substrates that undergo significant charge transfer with the first layer will facilitate the formation of polymorphs that have a large LUMO−LUMO overlap. Because the LUMO−LUMO coupling is a relevant ingredient for the electron mobilities of the compound, 8,70 it stands to reason that these polymorphs generally exhibit superior properties. In our case, we can estimate the rate of interlayer charge transfer for the two systems by calculating the electronic coupling between LUMO orbitals with the projection-operator diabatization method. 62,71 The results are shown in Table 1. One can see that MoM exhibits superior electronic coupling over MoG for all of the systems we consider. For the single molecular dimer from Figure 5, the difference is very large, and although a part of this difference is due to the nature of the dimer model, which, lacking periodic boundary conditions, presents some intrinsic geometric differences from the full monolayer geometries, the trend persists for more complex systems, up to and including the full bilayer geometries found by our structure search. This shows that the choice of the substrate is crucial for the performance of any device and exemplifies that, even in the fortuitous case in which two different substrates seem to induce the same geometry in the first layer, the influence of the substrate beyond the first layer can be enough to drastically alter the geometry and, thus, the properties of the system. ■ CONCLUSIONS We have studied the structure of the first two layers of benzoquinone on two different substrates.
Employing first-principles calculations in combination with machine learning, we have found that for the first layer, similar structures are favorable for both substrates. For the second layer, two structures are very favorable for both systems, but their ranking is swapped for the two substrates. This difference in ranking is a consequence of the difference in the LUMO−LUMO coupling for the two different structures in the second layer; hereby, the MoM structure has a larger coupling than the MoG structure. Without induced charge, MoG is energetically more favorable than MoM. When charge is induced into the first molecular layer (as is the case for Ag), MoM becomes energetically stabilized due to the LUMO−LUMO coupling. This points to the fact that the two different structures induced by the two substrates would exhibit different vertical charge carrier mobilities. Our computational study therefore indicates that substrates which undergo a notable charge transfer with the first layer are more likely to induce polymorphs with large(r) electronic coupling and, hence, charge carrier mobilities. ■ ASSOCIATED CONTENT Supporting Information The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.jpcc.1c09986. Details on the convergence of k-space integration grids; graphical representation of all local adsorption geometries; details about the SAMPLE prediction of the second layer on Ag(111) and of the first and second layer on graphene, including geometry optimizations; discussion of the best first-layer polymorphs to be chosen as a substrate for the second layer; further discussion of the best configurations for the second layer of benzoquinone on graphene; and comparison of the dimer model to more complex models for the comparison of MoM and MoG electronic coupling (PDF)
6,606.6
2021-07-01T00:00:00.000
[ "Physics" ]
A Proof of Concept for Very Fast Finite Element Poisson Solvers on Accelerator Hardware It is demonstrated that modern accelerator hardware specialized in AI, e.g., "next gen GPUs" equipped with Tensor Cores, can be profitably used in finite element simulations by means of a new hardware-oriented method to solve linear systems arising from Poisson's equation in 2D. We consider the NVIDIA Tesla V100 Tensor Core GPU with a peak performance of 125 TFLOP/s, which is, however, only achievable in half precision and when operations with high arithmetic intensity, such as dense matrix multiplications, are executed. Its computing power can be exploited to a great extent by the new method based on "prehandling" without loss of accuracy. We obtain a significant reduction of computing time compared to a standard geometric multigrid solver on standard x64 hardware. Introduction Graphics cards with AI components, particularly Tensor Cores (TC) specialized in accelerating dense matrix multiplications, have recently gained importance in computer systems. As an example, the NVIDIA Tesla V100 SXM2 promises 125 TFLOP/s in half precision (HP) due to its TCs (31 TFLOP/s in HP without TCs, 16 TFLOP/s in single (SP) and 8 TFLOP/s in double precision (DP)) according to the manufacturer specifications 1 . It is of course desirable to exploit this high performance in HP in basic components of matrix-based finite element (FE) simulations. FE discretizations generally result in very sparse stiffness matrices. If matrix-vector or matrix-matrix products with the Poisson matrix stored in standard CSR format are computed on the V100, the observed performance (10-100 GFLOP/s) is far below the peak rates, as the results in [1,2] show. If, however, dense matrices are used, up to 100 TFLOP/s in HP are attained [1,2]. In short, there is a large gap between actual and theoretical performance but also a huge potential if the hardware is appropriately used, e.g., by the hardware-oriented method explained below. We stick with Poisson's equation (in 2D), which is an important component in many flow simulations, and consider the case of one and of many right-hand sides (i.e., a matrix-valued linear system AX = B). The latter is relevant in practice due to currently examined time-simultaneous Navier-Stokes solvers [3], which allow the pressure Poisson problems for all time steps to be solved at once. Prehandling of Finite Element Matrices and a Direct Poisson Solver To construct a solver that is capable of exploiting the hardware mentioned above, there are two conditions to be met. Firstly, the condition numbers of the involved matrices need to be reduced, because the condition number O(h⁻²) (where h denotes the grid width) of the standard FE Poisson matrix causes high computational errors that prohibit the use of the low precision which is necessary to achieve the V100's peak flop rate in HP. To this end, we introduced the concept of "prehandling" [4], an explicit preconditioning that does not excessively increase the density of the matrix. The hierarchical finite element method [5], which is based on successive refinement of an initial coarse grid, proved to be successful in the 2D case because it reduces the condition number to O(log²(1/h)) while preserving sparsity and also accuracy in lower precision, as shown in [1,2,4]. Secondly, the solver needs to consist of multiplications with small, dense matrices (or large, sparse ones containing such matrices as blocks) to exploit the TCs of the V100.
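Before describing how the second condition is met, a toy example illustrates the conditioning problem behind the first condition; the 1D finite difference matrix below is an illustrative stand-in for the 2D FE Poisson matrix, not the code used in the paper.

```python
import numpy as np

def poisson_matrix_1d(n: int) -> np.ndarray:
    """Standard 1D Poisson matrix (tridiagonal -1, 2, -1) on n interior nodes."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

for n in (15, 31, 63, 127):
    h = 1.0 / (n + 1)                               # grid width
    cond = np.linalg.cond(poisson_matrix_1d(n))
    # cond * h^2 stays roughly constant, confirming O(h^-2) growth
    print(f"h = 1/{n + 1:4d}:  cond = {cond:10.1f}  cond*h^2 = {cond * h * h:.3f}")
```

The near-constant product cond·h² confirms the O(h⁻²) scaling; prehandling with hierarchical bases aims to flatten exactly this growth so that half precision remains usable.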
To exploit the Tensor Cores, the nodes of the hierarchical mesh are grouped into three types, the prehandled matrix is renumbered accordingly, and a two-step Schur complement is applied (for details see [2]). Consequently, we obtain a direct solver, which we also refer to as the PSC (Prehandling Schur Complement) method, for Poisson's equation that is based on dense matrix-vector or matrix-matrix products (for many right-hand sides) and is thus tailored for next gen GPUs. Numerical Results Compared to Standard Solvers on Standard Hardware As a proof of concept, we consider Poisson's equation on the unit square discretized by bilinear finite elements (Q1) on an equidistant mesh of grid width h ∈ {1/256, 1/512, 1/1024} (the latter leads to N ≈ 10⁶ unknowns). The performance of the direct PSC method on the V100 GPU, measured in GFLOP/s, is shown in Fig. 1a. The case of many right-hand sides allows for more dense matrix-matrix multiplications and thus yields the highest performance, due to the TCs especially in HP, namely 60 TFLOP/s for h = 1/1024. This clearly shows the outstanding performance achieved by exploiting the TCs, which would not be available with common ALUs in HP. However, this high performance alone permits only limited conclusions about the efficiency of the method. Hence, we use the measure "million degrees of freedom solved per second" (MDof/s), which enables a comparison with a standard approach. As such, we consider an FE geometric multigrid (MG) method (implemented in the FEAT3 2 software), based on sparse matrix-vector operations and run in DP (due to the high condition number) on the x64 (multicore) AMD EPYC 7542 CPU 3 . The results of both approaches for many right-hand sides are depicted in Fig. 1b. The MG method requires O(N) operations (as a rule of thumb, approx. 1,000N for the meshes at hand), whereas, according to an estimate in [2], approx. 12N^{3/2} operations are needed for the PSC method if the variable coarse grid width is chosen optimally; N = 1 million unknowns thus yields 12,000N operations. The results confirm that the 12 times higher complexity is compensated by the remarkable hardware exploitation of the new method on the V100 GPU, leading to lower computing times. For one right-hand side, the solution is 8 times faster than the standard MG approach, and for a matrix of right-hand sides it is 600 times faster, while accuracy is preserved. In preliminary numerical tests on the successor model of the V100, the NVIDIA Ampere A100 TC GPU promising 300 TFLOP/s in HP 4 , we were able to further accelerate the solution by approx. 50% using the direct PSC method [2]. Conclusion and Outlook We provided a proof of concept of an efficient algorithm, resulting from knowledge of modern hardware, that benefits from the high performance of GPUs powered by TCs. The presented studies only treat Poisson problems on a simple mesh so far, but this equation is crucial within flow simulations and, according to recent tests, the PSC method is also applicable in the case of partially unstructured, refined triangular coarse grids. Further research will be focused on extensions to the 3D case, other differential operators and finite element spaces, and less storage-intensive semi-direct variants of the PSC method.
1,556.8
2021-12-01T00:00:00.000
[ "Computer Science", "Engineering", "Physics" ]
Effect of Printing Parameters on Dimensional Error, Surface Roughness and Porosity of FFF Printed Parts with Grid Structure Extrusion printing processes allow for manufacturing complex shapes in a relatively cheap way with low-cost machines. The present study analyzes the effect of printing parameters on dimensional error, roughness, and porosity of printed PLA parts obtained with a grid structure. Parts are obtained by means of the fused filament fabrication (FFF) process. Four variables are chosen: Layer height, temperature, speed, and flow rate. A two-level full factorial design with a central point is used to define the experimental tests. Dimensional error and porosity are measured with a profile projector, while roughness is measured with a contact roughness meter. Mathematical regression models are found for each response, and multi-objective optimization is carried out by means of the desirability function. Dimensional error and roughness depend mainly on layer height and flow rate, while porosity depends on layer height and printing speed. Multi-objective optimization shows that the recommended values for the variables are layer height 0.05 mm, temperature 195 °C, speed 50 mm/s, and flow rate 0.93, when dimensional error and roughness are to be minimized and porosity requires a target value of 60%. The present study will help to select appropriate printing parameters for printing porous structures, such as those found in prostheses, by means of extrusion processes. Introduction In recent years, the fused filament fabrication (FFF) technique, also known as fused deposition modeling (FDM), has been employed to generate not only prototypes, but also small series of parts [1][2][3]. Furthermore, the number of studies related to this topic is growing continuously [4,5]. The FFF technology has important advantages, such as the possibility to obtain complex shapes, even with porous structures, or the possibility to use different plastic materials and metal- or ceramic-filled materials [6]. In the FFF technology, crucial aspects, for instance dimensional and geometric precision or surface quality, still need to be improved in comparison with conventional manufacturing processes such as machining [7]. Extrusion printing processes can be used in different biomedical applications, for example in tissue engineering or in drug delivery [8]. Specifically, 3D bioprinting offers the possibility to build complex structures that can be used for tissue regeneration, with bioinks that are used as the biomaterials laden with cells and other biological materials [9]. As for FFF processes, different filaments of pharmaceutical-grade polymers can be used nowadays to manufacture drugs, for example from cellulose derivatives, poly(ethylene oxide), poly(vinyl alcohol), etc. The filaments can also contain active pharmaceutical ingredients (APIs), although most of them are still not commercially available at pharmaceutical grade [10,11]. Other medical applications of the extrusion processes are the fabrication of surgical guides, dental fixtures, and customized patient-specific implants [12,13]. For example, FFF patterns are currently being used to manufacture implants by means of investment casting [14]. Some implants have also been directly printed by means of FFF, including craniofacial reconstruction and orthopedic spacers in polymethylmethacrylate (PMMA) [15]. One of the main advantages of manufacturing prostheses with 3D printing processes is the possibility to obtain patient-specific parts.
For this reason, it is advisable to achieve good dimensional accuracy. If low-cost machines are to be used in order to make the technology affordable, accuracy will depend greatly on the printing parameters selected. Some authors have studied the dimensional accuracy of FDM printed parts. For example, Caminero et al. [16] found that the best dimensional accuracy for polylactic acid (PLA) and PLA-graphene parts was in the Z-axis, when printed in flat and on-edge orientations. Beniak et al. [17] assessed the dimensional accuracy of FDM printed parts as a function of layer height and temperature. They found that the worst accuracy corresponded to high layer height and high printing temperature. Nancharaiah et al. [18] studied the surface roughness and dimensional accuracy of FDM printed parts in ABS. According to their work, dimensional accuracy worsened for a high layer height of 0.33 mm and improved for a high raster angle of 45°. Pennington et al. [19] observed that dimensional accuracy depended on the part size, its location in the work envelope, and the envelope temperature in FDM printing. Garg et al. [20] did not find a significant influence of a cold vapor treatment on the dimensional accuracy of ABS parts. In prostheses, roughness is to be minimized on those surfaces that will be in contact with other parts. For example, in hip prostheses, the internal surface requires a good surface finish in order to reduce friction with the femoral head [21]. In extrusion printing processes, roughness depends on the measuring direction. A certain roughness level on the lateral walls of specimens is inherent to the process, because of the superposition of layers [22,23]. Specifically, a study by Alsoufi et al. [24] indicated that a measuring direction of 90° gives a more representative value of the Ra distribution than other angles (0° and 45°). Different parameters influence the surface roughness to be obtained. For example, Luzanin et al. [25] used a 2² design and showed that the extrusion speed had a dominant, statistically significant effect on Ra, while the extrusion temperature and their interaction were not seen to be significant. Galantucci et al. [26] tested the influence of different parameters on Ra in FDM-built ABS (acrylonitrile butadiene styrene) parts employing factorial analysis. They observed that high tip diameter, low raster width, and low slice height led to lower roughness. Rahman et al. [27] studied the influence of bed temperature, nozzle temperature, printing speed, infill, layer thickness, and the number of shells on surface roughness and dimensional deviations of ABS parts. The best results were achieved with high bed temperature, medium nozzle temperature, low print speed, medium infill, low layer thickness, and a high number of shells. Sajan et al. [28] used the Taguchi method to evaluate surface roughness. Their analysis showed that the main parameters influencing surface texture and circularity error in ABS parts were bed temperature, number of loops, nozzle temperature, print speed, layer thickness, and infill. Hartcher-O'Brien et al. [29] studied the effect of print speed, angle, and layer height on the roughness parameters Ra and Rq. They observed that surface roughness increased with layer height and decreased with print speed and angle. Porosity and pore size of printed scaffolds, for example those used in prostheses, are important because they are related to cell growth as well as to nutrient transport [30].
Moreover, although a higher scaffold surface area is known to increase the tissue volume, some studies conclude that the surfaces should also be concave to favor cell growth [31]. Besides, rough surfaces have an important effect on the transport of fluids through porous media [32]. The usual requirements for prostheses are: Pore size between 100 µm and 500 µm, and total porosity between 40% and 80% [33,34]. Montazerian et al. studied the influence of pore size and porosity of printed scaffolds on their permeability [35]. Regarding the measurement of porosity, it is possible to determine pore size from a three-dimensional image obtained by means of computed tomography [36]. This technique has been employed, for instance, to determine the porosity of the grid structure in FFF processes [37]. However, few studies have addressed the effect of FFF printing parameters on the porosity of printed parts. The main aim of the present paper is to study and analyze the influence of the FFF printing parameters on surface roughness, dimensional error, and porosity of porous structures. In addition, the selection of optimal printing parameters in order to simultaneously minimize dimensional error and roughness of customized parts such as prostheses, as well as to achieve a target porosity value to favor cell growth, is investigated. On the other hand, an alternative way to measure porosity is presented here for regular structures such as the grid one, in which the volume of pores is obtained by measurement of the pore dimensions with a profile projector. This methodology can be applied to porous structures of different materials, provided that they have a regular structure with through holes. Printing Process PLA filament of 2.85 mm in diameter was used as the feedstock material. Experiments were performed using a Sigma R17 from BCN 3D Technologies. The maximum print volume is 210 × 297 × 210 mm. It has a resolution of 12.5 µm on the X and Y axes and a resolution of 1 µm on the Z-axis. The maximum heating temperature is 280 °C for the printing head and 100 °C for the printing bed. The nozzle diameter was 0.4 mm, and three shells were defined around the parts, corresponding to a total thickness of 1.2 mm. Four variables were chosen in this work: Layer height, temperature, speed, and flow rate. Layer height is the difference between one layer and the next deposited layer. As a general trend, roughness and dimensional error increase with layer height in FFF processes [18]. The extrusion temperature is the temperature that is required to melt and extrude the material and later deposit it. A temperature value should be chosen at which the melted material is not damaged. According to some authors, an increase in printing temperature worsens the dimensional accuracy of the parts [17], while other authors did not find a significant effect of temperature on roughness [24]. The print speed refers to the velocity at which the printer head moves when printing the parts. Increasing the speed can cause vibrations and errors in the printing process, which decrease the quality of the printed specimens. However, print speed usually has a lower effect on surface roughness than other parameters such as layer height [38]. The flow rate parameter is also known as the extrusion multiplier. It represents the amount of material that comes out of the extruder compared to the theoretical one. Its effect on surface finish and dimensional error has been scarcely studied in the literature about FFF processes.
In a different extrusion printing process, direct ink writing (DIW) of ceramic inks, the flow rate or extrusion multiplier parameter had a low effect on surface roughness and dimensional error [39]. However, the flow rate required in FFF or in DIW processes could be quite different because of the different rheological behavior of the two materials. In the present study, the samples were designed with a computer-aided design program, SolidWorks, as cubes with a size of 10 × 10 × 10 mm, with the grid structure and an infill ratio of 40%. The structure has 16 void prismatic channels with square bases (Figure 1a,b). A 2-level full factorial design with 4 variables (2⁴) was defined with the help of Minitab 19. Three central points were added to the design, for a total of 19 experiments. The levels employed for the variables are shown in Table 1. In this study, the layer height range was chosen between 0.05 mm and 0.25 mm. The value of 0.05 mm is the minimum value that could be printed in PLA in this case, and the upper value of 0.25 mm is the highest recommended value, corresponding to 80% of the nozzle diameter. Temperature was selected between 190 and 210 °C, as is usually recommended for PLA, making it possible to melt the material correctly and for the layers to adhere to the contiguous ones. Printing speed varied between 30 mm/s and 50 mm/s, as is usual for PLA. Finally, for the flow rate, some preliminary tests were carried out, and the initial values were adjusted between 0.93 and 0.97. The considered responses were: dimensional error, roughness parameter Ra (arithmetical mean deviation of the profile), and porosity.

Dimensional Error Measurement

In the present work, dimensional error is defined as the relative difference between the theoretical and the measured dimensions of the printed parts. The theoretical piece corresponds to a cube of 10 × 10 × 10 mm. In this case, the dimensional error in width is considered. To obtain it, the widths of the four lateral walls of the cubes were measured with a Mitutoyo PJ300 profile projector. Then, the relative error was calculated and the average value of the four errors was computed for each part.

Roughness Measurement

Roughness was measured with a Taylor Hobson Talysurf 2 contact roughness meter. One measurement was performed on the lateral wall of each sample. Figure 2 shows an example of a measured roughness profile (experiment 1), with round peaks and sharp valleys, which is typical of FFF printed parts. The profile is quite regular, suggesting that the different material layers were correctly deposited. In general, the peak width corresponds to the layer height of 0.05 mm that was employed in this experiment.

Porosity Measurement

The porosity or void fraction is the ratio of the volume of voids within a structure to the total volume. It can take values between 0 and 1, or as a percentage between 0% and 100%. In the present work, a regular structure was considered with the grid pattern and a 40% infill ratio. This leads to 16 void channels with square cross-section (Figure 1a). The volume of each pore is calculated by multiplying the side dimension of the square cross-section (1.5 mm) by itself and by the length of the channels (10 mm). The theoretical porosity of the structure is 60%, corresponding to an infill ratio of 40%. The profile projector was used to measure the porosity of the specimens.
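As a quick sanity check of this pore-volume arithmetic, the following minimal Python sketch computes the porosity of the idealized 16-channel grid structure from the dimensions given above, referring the pore volume to the core of the cube (the 1.2 mm shell on each lateral face excluded, as in the measurement described next); the function and variable names are illustrative, not taken from the original work:

    # Minimal sketch (illustrative names): porosity of the idealized grid
    # structure with 16 square channels of 1.5 mm side and 10 mm length.
    def grid_porosity(side_mm=1.5, length_mm=10.0, n_channels=16,
                      cube_mm=10.0, shell_mm=1.2):
        pore_volume = n_channels * side_mm ** 2 * length_mm  # 360 mm^3
        core_side = cube_mm - 2.0 * shell_mm                 # 7.6 mm
        core_volume = core_side ** 2 * length_mm             # 577.6 mm^3
        return pore_volume / core_volume

    print(round(grid_porosity(), 3))  # ~0.623, close to the 60% target

In the actual measurement, the nominal 1.5 mm side is replaced by the hole dimensions read on the profile projector for the four central channels.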
To do this, the dimensions of the four central holes of the structure, which are not influenced by the shell of the parts, as well as the length of the parts, were measured, and the volume of the holes was calculated. Then, the volume of the holes was compared to the total volume of the part (without considering the shells).

Regression Models and Multi-Objective Optimization

Linear regression models were searched for each response with Minitab 19. Once the models for each of the responses were obtained, multi-objective optimization was performed. It simultaneously seeks to minimize dimensional error and roughness, while maintaining a target porosity value of 60%, which is the theoretical porosity value for an infill ratio of 40%. Multi-objective optimization is a useful approach in FFF processes for selecting the optimal combination of printing parameters [40]. In the present work, the desirability function method is employed for this purpose. A desirability function is a piecewise objective function that ranges from 0 outside the limits to 1 when the goal is reached. By means of numerical optimization, a point is found that maximizes the desirability function. The desirability function can be defined either for a single response or for several responses simultaneously. The overall desirability is the geometric mean of the desirability functions of the different responses considered [41]. Table 2 shows the results for dimensional error, roughness, and porosity, as well as the printing conditions of the different experiments. The highest dimensional error was found for experiments 1 and 15, suggesting that the low layer height value of 0.05 mm produces some incorrectly printed parts in this case. The lowest dimensional error was observed for experiments 2, 5, and 6, which have in common low temperature and low flow rate, which avoid an excessive fluidity of the material. These results are in accordance with the work of Beniak et al. [17], who recommend low temperature in order to reduce dimensional error. To sum up, within a certain layer height range that ensures correct printing, low temperature and low flow rate are recommended in order to avoid an excess of material that would increase the dimensional error.

Measured Values

The highest roughness values, above 18 µm, correspond to experiments 4 and 8, obtained with high layer height, high temperature, and low speed. The lowest roughness values were reported in experiments 3, 5, and 13, printed with low layer height. It is well known that the roughness of the lateral walls of FDM printed parts increases with layer height [22,23]. Although a low layer height of 0.05 mm would be recommended in order to minimize roughness, the rest of the variables should be selected carefully so as to avoid excessive dimensional error, as was explained in the previous paragraph. In the present work, the target porosity value of 60% ± 1% is achieved under different combinations of parameters, for example in experiments 3, 9, 11, and 13, all of them printed with low layer height.

Regression Model for Dimensional Error

The regression equation for dimensional error in uncoded units is presented in Equation (1). The R-sq (adj) parameter is 87.17%. The main terms influencing dimensional error are layer height, followed by the interaction of layer height, print speed, and flow rate, and by flow rate. Figure 4 shows the main effects plot for dimensional error. It reveals that curvature is significant in this model.
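Although the study fits these models in Minitab, the same kind of factorial regression can be reproduced with standard tools. The sketch below is a hypothetical illustration rather than the authors' workflow: it fits a linear model with interaction terms up to third order to data laid out like Table 2, with assumed file and column names.

    # Minimal sketch, assuming a CSV of the 19 runs with these columns:
    # layer_height, temp, speed, flow, dim_error
    import pandas as pd
    import statsmodels.formula.api as smf

    runs = pd.read_csv("experiments.csv")
    # Main effects plus two- and three-factor interactions, as discussed
    # for the dimensional error model in the text.
    model = smf.ols(
        "dim_error ~ (layer_height + temp + speed + flow) ** 3",
        data=runs,
    ).fit()
    print(model.summary())        # coefficients and significance
    print(model.rsquared_adj)     # compare with the reported 87.17%

The center points added to the two-level design are what allow the curvature mentioned above to be detected.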
The lowest dimensional error is obtained with the medium value of all the variables. The interaction between speed and flow rate shows that a low dimensional error is obtained when the combination of high speed and low flow rate is selected. The interaction between layer height and speed indicates that a low dimensional error is obtained with high layer height, regardless of speed. As observed by Beniak et al. [17], dimensional error is directly related to temperature, since a higher temperature means easier material flow. Nancharaiah et al. [18] found stable dimensional error values between layer heights of 0.17 mm and 0.25 mm, while the error increased when a layer height of 0.33 mm was employed. In the present research, as a general trend, dimensional error is higher for a layer height of 0.05 mm than for a layer height of 0.25 mm. This suggests that extreme layer height values of either 0.05 mm or 0.33 mm could worsen the dimensional accuracy. In the present work, dimensional error values of up to 2.6% are obtained. Messimer et al. [42] reported slightly higher values of up to 3% for high-temperature polylactic acid (HTPLA).

Regression Model for Roughness

The regression model for roughness in uncoded units is presented in Equation (2). The main term influencing roughness is layer height, followed by flow rate and by the interaction between layer height and flow rate. Figure 7 shows the main effects plot for the roughness parameter Ra. The lowest roughness value is achieved with low layer height, when the rest of the variables are kept at their medium values. Curvature is significant in this model. Figure 8 depicts the interaction plot for Ra. The interaction between layer height and flow rate shows that the lowest roughness corresponds to low layer height, regardless of the flow rate employed. The great influence of layer height on the surface roughness of FDM printed parts has been reported by different authors [22,26,27]. In the present work, the lowest Ra value achieved on the lateral walls of the parts was around 3.9 µm, obtained when low layer height, low temperature, and high speed are selected. When a layer height of 0.25 mm is used, Ra values up to almost 19 µm are found, which are similar to those obtained on the lateral walls of cylindrical shapes for the same layer height [22].

Regression Model for Porosity

The regression equation for porosity in uncoded units is presented in Equation (3). Figure 9 shows the Pareto chart of the standardized effects for porosity. The main terms influencing porosity are layer height, followed by speed and temperature. Figure 10 depicts the main effects plot for the response porosity. The target value of 60% for porosity is obtained, as a general trend, when low layer height is selected. Figure 11 corresponds to the interaction plot for the response porosity; it shows that the target porosity value is obtained with low layer height, regardless of speed. If a high layer height were selected, then a low print speed would be recommended. The porosity values obtained range between 53.76% and 72.33%, with 60% being the target. According to a previous work, the measured porosity of the grid structures can be lower than the theoretical porosity because of an excess of material [37]. In the present work, a higher porosity than expected was also observed in some cases, with thin-walled structures. The structures corresponding to the target porosity of 60% showed regular structures.
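As an illustration of the desirability approach described earlier, the following sketch scores a candidate parameter set with linear desirability functions (minimize dimensional error and Ra, hit the 60% porosity target) and combines them through the geometric mean. The limits used here simply echo the measured ranges reported in this work; the exact limits used in the optimization are those of Table 4, so treat these as assumptions for the example.

    # Minimal sketch of the desirability function method (assumed limits).
    def d_minimize(y, target, upper):
        # 1 at or below the target, 0 at or above the upper limit (weight 1).
        if y <= target:
            return 1.0
        if y >= upper:
            return 0.0
        return (upper - y) / (upper - target)

    def d_target(y, low, target, high):
        # 1 at the target, falling linearly to 0 at the limits.
        if y <= low or y >= high:
            return 0.0
        if y <= target:
            return (y - low) / (target - low)
        return (high - y) / (high - target)

    def overall_desirability(error_pct, ra_um, porosity_pct):
        ds = [
            d_minimize(error_pct, 0.82, 2.60),          # dimensional error, %
            d_minimize(ra_um, 3.9, 18.8),               # Ra, micrometres
            d_target(porosity_pct, 54.0, 60.0, 72.0),   # porosity, %
        ]
        prod = 1.0
        for d in ds:
            prod *= d
        return prod ** (1.0 / len(ds))                  # geometric mean

    print(overall_desirability(1.0, 5.0, 60.5))  # ~0.92 for this sample point

An optimizer then searches the factor space, through the fitted regression models, for the parameter combination that maximizes this overall desirability; this is how the compound desirability reported below is obtained.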
Table 4 contains the parameters used for the multi-objective optimization with the desirability function method, and Table 5 shows the solution of the optimization. The weight was 1 in all cases, corresponding to the use of a linear function to define the desirability of the responses. The three responses were given the same importance value of 1. Table 5 shows that, when all the responses are given the same importance, it is recommended to select low layer height, low temperature, high speed, and low flow rate. The compound desirability is 0.852. Figure 12 shows a specimen that was printed using the optimized values of the variables presented in Table 5. The deposition of the material, layer upon layer, is correct, and the 16 holes show a regular structure with homogeneous wall thickness among them.

Conclusions

In the present work, cubic parts are printed in PLA by means of FDM, using a grid structure with 40% infill. Different experiments are performed, in which layer height, printing temperature, print speed, and flow rate are varied. The main conclusions of the paper are as follows:
- Dimensional error values between 0.82% and 2.60% were obtained. As a general trend, selecting a layer height of 0.25 mm provides a lower dimensional error than a layer height of 0.05 mm, suggesting that the latter could be too low to assure a low dimensional error under certain printing conditions. A low flow rate is also recommended in order to minimize dimensional error.
- Ra roughness values between 3.9 µm and 18.8 µm were obtained. The lower the layer height, the lower the roughness.
- Porosity values range between 53.76% and 72.33%, with a target porosity value of 60%. As a general trend, the target porosity value is achieved with low layer height.
- Dimensional error and surface roughness Ra depend mainly on layer height and flow rate. Porosity is mainly influenced by layer height and print speed.
- According to the multi-objective optimization, it is recommended to select low layer height, low temperature, high print speed, and low flow rate in order to simultaneously minimize dimensional error and roughness and to obtain the target porosity value.
The present work will help to select the most appropriate printing parameters to print porous structures with the grid pattern in extrusion processes. For example, the results will be used in future works to manufacture plastic prototypes for hip prostheses, with the shape of a hemispherical cup and a porous external layer that will favor their fixation by means of osseointegration.

Funding: This research was co-financed by the European Union Regional Development Fund within the framework of the ERDF Operational Program of Catalonia 2014-2020, with a grant of 50% of the total eligible cost, project BASE3D, grant number 001-P-001646.

Data Availability Statement: Not applicable.
5,338.2
2021-04-01T00:00:00.000
[ "Materials Science" ]
Multiplicity results for (p, q)-Laplacian equations with critical exponent in $\mathbb{R}^N$ and negative energy

We prove existence results in all of $\mathbb{R}^N$ for an elliptic problem of (p, q)-Laplacian type involving a critical term, nonnegative weights and a positive parameter $\lambda$. In particular, under suitable conditions on the exponents of the nonlinearity, we prove existence of infinitely many weak solutions with negative energy when $\lambda$ belongs to a certain interval. Our proofs use variational methods and the concentration compactness principle. Towards this aim we give a detailed proof of tight convergence of a suitable sequence.

Introduction

In this paper we are interested in nontrivial weak solutions in $D^{1,p}(\mathbb{R}^N) \cap D^{1,q}(\mathbb{R}^N)$ of the following nonlinear elliptic problem of (p, q)-Laplacian type involving a critical term

$-\Delta_p u - \Delta_q u = \lambda V(x)|u|^{k-2}u + K(x)|u|^{p^*-2}u$ in $\mathbb{R}^N$, (P)

where $\Delta_m u = \mathrm{div}(|Du|^{m-2}Du)$ is the m-Laplacian of u, $1 < q < p < N$, $p^* = Np/(N-p)$ is the critical Sobolev exponent, the parameter $\lambda$ is positive, the exponent k is such that $1 < k < p^*$, and the weights are nontrivial and satisfy $0 \le V \in L^r(\mathbb{R}^N)$, with $r = p^*/(p^*-k)$, (1) and $0 \le K \in C(\mathbb{R}^N) \cap L^\infty(\mathbb{R}^N)$ (2). In particular, by using variational methods and concentration compactness principles, we prove multiplicity results for solutions of (P) with negative energy when condition (3) below holds. Towards this aim, we have to deal with a particular property of a sequence of measures called "tightness", following a probabilistic terminology, required in the second concentration compactness principle by Lions [27-30]. We face this delicate point in Lemma 7 below for all k with $1 < k < p^*$. We recall that a first serious problem on unbounded domains is the loss of compactness of the Sobolev embeddings, which renders variational techniques more delicate. In addition, critical problems in $\mathbb{R}^N$ represent one of the most dramatic cases of loss of compactness and have been studied intensively in the last 25 years, starting with the pioneering paper by Brezis and Nirenberg [6] for the Laplacian. Later, the p-Laplacian case in the entire $\mathbb{R}^N$ was investigated by many authors; we refer to [2,12,13,23,42], to [16] in exterior domains, to [17] with double critical nonlinearities, and to [15,20] and the references therein. Among these papers, we mention that by Swanson and Yu [42], in which they consider morally the subcase of (P) with $p < k < p^*$ and $\lambda = 1$, that is, with no parameter.
The single p-Laplacian case of (P) in a bounded domain without weights is completely described for all parameters $\lambda > 0$ by García Azorero and Peral in [3], where they obtain, among other results, two positive values $\lambda_0$, $\lambda_1$ such that existence of a nontrivial solution holds for $\lambda \ge \lambda_0$ if $1 < p < k < p^*$, while existence of infinitely many solutions holds for $\lambda \in (0, \lambda_1)$ if $1 < k < p$. Critical Dirichlet problems for the (p, q)-Laplacian on bounded domains are studied in [26], where the authors partially extend the multiplicity result of [3], again without weights, for $1 < k < p$; then in [21], where it is proved that the analogous result given in [26] holds when weights are included, provided k satisfies further restrictions beyond $p < k < p^*$. Furthermore, in [43] the case $1 < q < p < k < p^*$ is treated, obtaining the existence of a nontrivial solution for $\lambda \ge \lambda_0 > 0$ (see also [14] for $p > q \ge 2$). Among papers on bounded domains, we mention that by Cherfils and Il'yasov [10], in which, in the subcritical case, nonexistence of solutions for $\lambda$ small and existence for $\lambda$ large can be deduced by using a suitable nonlinear spectral analysis. In this direction, we quote the papers by Papageorgiou et al. [36,38]; in particular, in the first they study existence of ground state solutions for a differential operator given by the sum of a p-Laplacian and of a weighted q-Laplace operator with a positive $L^\infty$ weight not bounded away from zero, while in [36] they prove existence of a continuous spectrum for the Dirichlet problem with a differential operator given by a linear combination of the p- and q-Laplacians, so that existence of solutions occurs. For a detailed theory on the subject we refer to the book [37]. Moving to the unbounded case, the situation is fairly delicate. Furthermore, condition (3) is new, since only the cases $1 < k < q < p$ and $1 < q < p < k < p^*$ are partially investigated in the literature, respectively in [9,22] and in [31], as far as we know. The (p, q)-Laplacian problem (P) comes from a general reaction-diffusion system (4). The system has a wide range of applications in physics and related sciences, such as biophysics, chemical reactions and plasma physics. In such applications, the function u describes a concentration, the p- and q-Laplacian terms in (4) correspond to the diffusion, where the diffusion coefficient is $|Du|^{p-2} + |Du|^{q-2}$, whereas the term c(x, u) is the reaction and relates to source and loss processes. Typically, in chemical and biological applications, the reaction term c(x, u) has a polynomial form with respect to the concentration u. The case q = 2, that is, the (p, 2)-Laplacian, was recently studied by Papageorgiou et al. in [33,34] and [35], where they prove existence and multiplicity theorems by using a variational approach and Morse theory with p > 2. In particular, in [33] they consider parametric equations when the parameter $\lambda$ is near the principal eigenvalue $\lambda_1(p) > 0$ of $(-\Delta_p, W^{1,p}_0(\Omega))$, while in [34] and [35] they consider equations where the reaction term satisfies particular conditions which imply the resonance of the problem at $\pm\infty$ and at $0^\pm$. Another important example, widely studied, in which a subcase of problem (P) appears, is the study of solitary waves or solitons, special solutions whose profile remains unchanged under the evolution in time, of the nonlinear Schrödinger equation (5), see [7,10,41], where i is the imaginary unit and the function U is the potential.
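One standard form of such a quasilinear Schrödinger equation, given here as an illustrative reconstruction consistent with the standing-wave discussion that follows (the exponent convention is chosen so that k = 3 gives the cubic case), is

\[
i\,\partial_t \psi \;=\; -\Delta_p \psi \;-\; \Delta_q \psi \;+\; U(x)\,\psi \;-\; |\psi|^{k-1}\psi,
\qquad (x, t) \in \mathbb{R}^N \times \mathbb{R} .
\]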
In particular, a function $\psi(x, t) = e^{-i\omega t} u(x)$ is a standing-wave solution of (5), where $\omega \in \mathbb{R}$ is the energy, if and only if the function u satisfies a stationary equation of type (P). The subcase of (5) with q = 2 and a cubic nonlinearity, k = 3, is called the Gross-Pitaevskii equation. A strategy to prove multiplicity of solutions of (P) is to apply a multiple critical points result to the energy functional $E_\lambda$ associated with (P), given in (6). In particular, we make use of the classical multiplicity result by Rabinowitz [39] for even functionals, so that 0 is a critical point and critical points occur in antipodal pairs. Under further conditions, the functional possesses additional critical points. Precisely, we apply Theorem 1.9 of [39], in which the Krasnoselskii genus is involved with its properties, and furthermore the standard and crucial compactness condition $(PS)_c$ is required to be satisfied by $E_\lambda$ for c < 0. This is a delicate point: indeed, for critical problems in all of $\mathbb{R}^N$ this compactness condition is often lost; for this reason some of the papers treating problems on unbounded domains use special function spaces where compactness is preserved, such as spaces of radially symmetric functions or weighted Sobolev spaces. In our setting, we have to face the well-known loss of compactness by concentration, which occurs in every problem with critical growth, even on bounded domains. Indeed, one of the hard parts of the proof of the main result of the paper is devoted to a careful analysis of Palais-Smale sequences, to understand the consequences of spreading or concentration of mass. To this aim, as discussed before, in order to recover compactness, in the spirit of the celebrated first concentration compactness principle by Lions [27-30], we have to deal with tight convergence (see also [4]). Roughly speaking, "tightness" tells us that the values of the functions should belong, in a suitable integral sense, to some compact set; see Lemma I.1 in [27]. As a consequence, in the second concentration compactness principle the notion of tight convergence of a sequence of measures is required, which is the weak star convergence of measures in the dual space of bounded functions. We point out that the unbounded case is sensibly more complicated than the bounded case since, only in the latter case, tight convergence of a sequence of measures reduces morally to standard convergence of measures, which is the weak star convergence of measures in the dual space of functions vanishing at infinity. Generally, tight convergence is stronger than standard convergence of measures. For details in this direction, we refer to Sect. 4, based on the book by Fonseca and Leoni [18]. In this context, Lemma 7 is completely new, since it contains the proof of the tight convergence in $\mathbb{R}^N$ of a sequence of measures connected to $(PS)_c$ sequences for every c < 0, in the spirit of the nice paper by Swanson and Yu [42], devoted to the p-Laplacian and essentially with no parameter. In particular, in Lemma 7 we prove tight convergence for all $\lambda > 0$ when $p < k < p^*$, and for $\lambda$ small when $1 < k < p$, provided that the weight K is nonnegative. We are now ready to state our main result, which completes and extends Theorem 1.1 in [22] to the new case (3). We observe that the condition that $\|K\|_\infty$ be sufficiently small guarantees that $\|K\|_\infty$ satisfies a certain inequality given in (70), so that $\hat\lambda^* < \lambda^*$.
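For the reader's convenience, the standard variational objects associated with a problem of type (P), written here as a sketch consistent with the norms and estimates used in the rest of the paper (a reconstruction, not a verbatim quotation of the source's displays), are

\[
E_\lambda(u) \;=\; \frac{1}{p}\,\|Du\|_p^p \;+\; \frac{1}{q}\,\|Du\|_q^q
\;-\; \frac{\lambda}{k}\int_{\mathbb{R}^N} V(x)\,|u|^{k}\,dx
\;-\; \frac{1}{p^*}\int_{\mathbb{R}^N} K(x)\,|u|^{p^*}\,dx ,
\qquad u \in X := D^{1,p}(\mathbb{R}^N)\cap D^{1,q}(\mathbb{R}^N),
\]
\[
S \;=\; \inf_{u \in D^{1,p}(\mathbb{R}^N)\setminus\{0\}}
\frac{\|Du\|_p^p}{\|u\|_{p^*}^p} ,
\]

so that critical points of $E_\lambda$ in X are exactly the weak solutions of (P), and S denotes the best constant in the Sobolev embedding.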
In particular, $\lambda^*$ is of the form $C/(\|V\|_r \cdot \|K\|_\infty \cdots)$. The proof of Theorem 1 is based on the concentration compactness principle, on the use of the truncated energy functional, and on the theory of the Krasnoselskii genus, introduced in [24]. As a standard procedure, we first have to prove the boundedness of $(PS)_c$ sequences, $c \in \mathbb{R}$, for $E_\lambda(u)$, which we obtain in Lemma 4 for all k such that $1 < k < p^*$. Then, we have to face the main difficulty of the paper, which consists in verifying the Palais-Smale compactness condition at level c for $E_\lambda(u)$ when the critical values c are negative, the point where the lack of compactness becomes manifest. To solve this problem, as described before, we have to deal with the tight convergence of $(|u_n|^{p^*})_n$. We emphasize that, due to the new condition (3), the qualitative behavior of $E_\lambda(u)$ is completely different with respect to the cases treated in [9] and in [22]. The paper is organized as follows. In Sect. 2, we recall some classical definitions as well as some regularity results on $E_\lambda(u)$, while in Sect. 3 we prove some properties of Palais-Smale sequences. In Sect. 4 we state the two concentration compactness principles due to Lions in [27] and [29] and, following the book by Fonseca and Leoni [18], we discuss in full detail the relation between tight convergence and standard convergence of sequences of measures, including also the statement of the Prohorov Theorem; in addition, Sect. 4 contains Lemmas 7 and 8, which are the two crucial lemmas for the proof of the main theorem of the paper. The truncated functional is introduced in Sect. 5 and its properties are listed. Finally, the proof of Theorem 1 is developed in Sect. 6, together with the statement of classical theorems useful in the proof, such as the Deformation Lemma and some well-known properties of the Krasnoselskii genus.

Preliminaries

In this section we state some preliminary results, as well as some notation, useful in the proofs of the main theorem of the paper, given in Sect. 6. In what follows, we denote by X the reflexive Banach space $D^{1,p}(\mathbb{R}^N) \cap D^{1,q}(\mathbb{R}^N)$, endowed with the norm given in (7), where $\|\cdot\|_p$ is the $L^p$ norm in $\mathbb{R}^N$. Furthermore, we denote by S the Sobolev constant. We recall that the value S is achieved in $D^{1,p}(\mathbb{R}^N)$; for details we refer to Appendix A in [17]. Of course, the functional $E_\lambda$ is well defined in X: indeed, if $u \in X$, by Hölder's inequality with the exponents $r = p^*/(p^*-k)$, $r' = p^*/k$, the potential terms are finite thanks to (1) and (2). The proof of the regularity of $E_\lambda$ is almost standard, but for completeness we include it. Obviously, it is enough to study the regularity of the two potential-term functionals; we first analyze the regularity of J.

Proof. Let $(u_n)_n \subset D^{1,p}(\mathbb{R}^N)$ be such that $u_n \rightharpoonup u$ in $D^{1,p}(\mathbb{R}^N)$; thus $u_n \rightharpoonup u$ in $L^{p^*}(\mathbb{R}^N)$, and $(u_n)_n$ is bounded in $D^{1,p}(\mathbb{R}^N)$ and in $L^{p^*}(\mathbb{R}^N)$, and also $(|u_n|^k)_n$ is bounded in $L^{p^*/k}(\mathbb{R}^N)$, since $\| |u_n|^k \|_{p^*/k} = \|u_n\|_{p^*}^k$. Furthermore, by the compactness of the embedding on bounded sets, and consequently by using an increasing sequence of compact sets whose union is $\mathbb{R}^N$ and a diagonal argument, we obtain pointwise convergence. In turn, by Hölder's inequality, $\|V|u_n|^k\|_1 \le C\|V\|_r < \infty$ by (1), so that, using the Lebesgue dominated convergence theorem, weak continuity holds. In order to prove $J \in C^1$, it is enough to show that J has a continuous Gâteaux derivative.
Let $u, \psi \in D^{1,p}(\mathbb{R}^N)$ and $0 < |t| < 1$. By the mean value theorem there exists $\lambda \in (0, 1)$ for which the difference quotient can be controlled. We now use Hölder's inequality twice, with exponents $r$, $p^*/(k-1)$, $p^*$ and $r$, $p^*/k$ respectively, so that, by (11) and thanks to the Lebesgue dominated convergence theorem, J is Gâteaux differentiable and (9) holds, with the derivative understood in the Gâteaux sense. In order to check the differentiability of J, it remains to prove the continuity of the Gâteaux derivative. Let $u_n \to u$ in $D^{1,p}(\mathbb{R}^N)$; then, up to subsequences, by (10), and applying Young's inequality in the last estimate, the Lebesgue dominated convergence theorem gives (12). Finally, by Hölder's inequality, for all $\psi \in D^{1,p}(\mathbb{R}^N)$ the remainder vanishes as $n \to \infty$, thanks to (12). Actually, we have proved that for every sequence $u_n \to u$ in $D^{1,p}(\mathbb{R}^N)$ there is a subsequence along which $J'$ is sequentially continuous; from this it is an elementary exercise to conclude that $J'$ is sequentially continuous in all of $D^{1,p}(\mathbb{R}^N)$. In turn, $J \in C^1$. Analogously, the corresponding statement holds for the critical term. Finally, using the continuity of the embedding, since the first two terms of $E_\lambda$ are norms with exponents $p, q > 1$, and thanks to Lemmas 1 and 2, it follows immediately that $E_\lambda \in C^1(X)$, with $E'_\lambda : X \to X'$, and (13) holds for all $\psi \in X$. A weak solution of problem (P) is a function $u \in X$ such that $E'_\lambda(u) = 0$, that is, u is a critical point of the functional $E_\lambda$ or, equivalently, by (13), u satisfies the weak formulation of problem (P) for all $\psi \in X$. Now, we present a result about convergence that is also needed in our discussion.

On Palais-Smale sequences

First, we briefly recall the basic definitions. Moreover, we say that E satisfies the $(PS)_c$ condition if every $(PS)_c$ sequence for E has a converging subsequence in Y. In the next result, we prove the first main property of $(PS)_c$ sequences for the functional $E_\lambda(u)$ defined in (6). We point out that here the value k does not need to satisfy (3), but simply $1 < k < p^*$.

Lemma 4. Let (1) and (2) be verified and let $(u_n)_n \subset X$ be a $(PS)_c$ sequence for $E_\lambda(u)$ defined in (6), for any $c \in \mathbb{R}$. Then $(u_n)_n$ is bounded in X. In particular, if $1 < k < p$ and $c < 0$, then (14) holds, where S is the Sobolev constant.

Proof. Since $(u_n)_n$ is a $(PS)_c$ sequence, $|E'_\lambda(u_n)(u_n)| \le \|u_n\|$ for n large. Now we divide the proof into two cases. Case $1 < k < p$: by (13), thanks to (8) and Hölder's inequality with exponents $r$ and $r'$, we obtain (15), where we have used that $V \in L^r(\mathbb{R}^N)$. Consequently, writing explicitly the norm $\|\cdot\|$ given in (7), we get (16), where $c_1, c_2, c_3$ are positive constants independent of n. From (16) it immediately follows that $(u_n)_n$ must be bounded: indeed, if $\|u_n\|_{D^{1,q}} \to \infty$ and $(\|u_n\|_{D^{1,p}})_n$ is bounded, then by letting $n \to \infty$ in (16) we obtain a contradiction, since the left-hand side goes to $-\infty$, being $q > 1$, while the right-hand side is bounded. If $\|u_n\|_{D^{1,p}} \to \infty$ as $n \to \infty$ and $(\|u_n\|_{D^{1,q}})_n$ is bounded, then by letting $n \to \infty$ in (16) we obtain a contradiction, since the right-hand side goes to $\infty$, being $p > 1$ and $p > k$, while the left-hand side is bounded. Finally, if $\|u_n\|_{D^{1,p}}, \|u_n\|_{D^{1,q}} \to \infty$, then the left-hand side of (16) goes to $-\infty$ while the right-hand side goes to $\infty$. This last contradiction concludes the proof of the first case. Case $p \le k < p^*$: arguing as in (15), we arrive at (17), where $c_1, c_2$ are positive constants independent of n. From (17), using an argument similar to that of the first case, it follows that $(u_n)_n$ must be bounded in X.
To obtain (14), it is enough to observe that, using the boundedness of $(u_n)_n$, from (15), being $c < 0$, it follows for n large that the required estimate holds, which yields (14) by virtue of Sobolev's inequality, for $1 < k < p$. Thus, the proof is complete.

Concentration compactness

Before stating the first concentration compactness lemma, due to Lions [27], we recall for completeness some well-known notions, following [18]. Let Y be a locally compact Hausdorff space and let $M(Y, \mathbb{R})$ be the space of all finite signed Radon measures (cf. Definitions 1.5, 1.166 and 1.55 in [18]). In this setting, $M(Y, \mathbb{R})$ can be identified with the dual of $C_0(Y)$, where $C_0(Y)$ is the space of all continuous functions that vanish at infinity or, equivalently, the completion of $C_c(Y)$, the space of all continuous functions with compact support, with respect to the supremum norm $\|\cdot\|_\infty$. First, we recall the definition of the (standard) convergence of measures, also called in some works, [29,30] and [32], weak convergence of measures. Equivalently, the (standard) convergence of measures is the weak star convergence of measures with respect to $(C_0(Y))'$. Now, let $C_b(Y)$ be the space of real bounded continuous functions defined on Y; we report the definition of tight convergence of measures in the same setting as above.

Definition 3. A sequence of measures $(\mu_n)_n$ converges tightly to $\mu$ if $\int_Y \varphi \, d\mu_n \to \int_Y \varphi \, d\mu$ for all $\varphi \in C_b(Y)$. Equivalently, the tight convergence of measures is the weak star convergence with respect to $(C_b(Y))'$.

Remark 1. It is known that the dual spaces of $C_0(Y)$ and $C_b(Y)$ do not coincide in general, the latter being strictly larger.

Lemma 5 (Lemma I.1 in [27]). Let $(\rho_n)_n$ be a sequence of nonnegative functions in $L^1(\mathbb{R}^N)$ with $\int_{\mathbb{R}^N} \rho_n \, dx \to \ell$, where $\ell > 0$ is fixed. Then, up to a subsequence, one of the following three situations holds:
(a) (Compactness) There exists a sequence $(y_n)_n$ in $\mathbb{R}^N$ such that $\rho_n(\cdot + y_n)$ is tight, that is, for any $\varepsilon > 0$ there exists $0 < R_\varepsilon < \infty$ for which $\int_{B_{R_\varepsilon}(y_n)} \rho_n \, dx \ge \ell - \varepsilon$ for all n.
(b) (Vanishing) $\lim_{n \to \infty} \sup_{y \in \mathbb{R}^N} \int_{B_R(y)} \rho_n \, dx = 0$ for every $R > 0$.
(c) (Dichotomy) There exists $\bar\ell \in (0, \ell)$ such that for any $\varepsilon > 0$ there exist $n_0 \ge 1$ and a splitting of the mass into two parts, of sizes close to $\bar\ell$ and $\ell - \bar\ell$, whose supports move infinitely far apart.

The following theorem gives a sufficient condition for the tight convergence of a sequence of bounded Borel measures.

Theorem 2 ((Prohorov) Theorem 1.208, [18]). Let Y be a metric space and let $(\mu_n)_n$ be a sequence of bounded Borel measures. Assume that for all $\varepsilon > 0$ there exists a compact set $K_\varepsilon$ such that (18) holds. Then there exist a subsequence $(\mu_{n_k})_k \subset (\mu_n)_n$ and a Borel measure $\mu$ such that $\mu_{n_k}$ converges tightly to $\mu$.

Remark 2. From Proposition 1.202 in [18], any sequence of bounded measures admits a subsequence which converges in the sense of Definition 2. Thus, from Theorem 2, to obtain tight convergence in the sense of Definition 3 we need, in addition to the boundedness of the sequence, that for all $\varepsilon > 0$ there exists a compact set $K_\varepsilon$ such that (18) holds. In Lemma 5, the compactness alternative (a) asserts that, for the translated measures $\mu_n := \rho_n(y_n + \cdot)\,dx$, condition (18) is satisfied; therefore, the sequence of translated measures $(\mu_n)_n$ admits a subsequence which converges tightly. Now we are ready to state the second concentration compactness lemma (Lemma 6), in which $\delta_x$ denotes the Dirac mass of weight 1 concentrated at $x \in \mathbb{R}^N$. Thus, to apply Lemma 6 we need that $|u_n|^{p^*} dx$ converges tightly to $\nu$, with $\nu$ a bounded nonnegative measure. This property, up to subsequences, follows immediately if we consider a bounded sequence $(u_n)_n$ in $D^{1,p}(\Omega)$ with $\Omega$ bounded: indeed, by standard extension theorems we may assume, without loss of generality, that $(u_n)_n \subset D^{1,p}(\mathbb{R}^N)$ and that $|u_n|^{p^*} dx$ converges tightly to $\nu$, by Remarks 1 and 2. On the contrary, in the case of a bounded sequence $(u_n)_n$ in $D^{1,p}(\mathbb{R}^N)$, to obtain the tight convergence we need to exclude Vanishing and Dichotomy in Lemma 5. The following two lemmas are crucial in the proof of Theorem 1, given in Sect. 6.

Lemma 7. Let $1 < q < p$. Assume that V and K satisfy (1) and (2).
Define, for $1 < k < p$, the threshold $\lambda^*$ as in (19), where S is the Sobolev constant. If $c < 0$ and either $p < k < p^*$ and $\lambda \in (0, \infty)$, or $1 < k < p$ and $\lambda$ satisfies (19), then every $(PS)_c$ sequence $(u_n)_n$ for $E_\lambda$ is such that, up to subsequences, $|u_n|^{p^*} dx$ converges tightly to $\nu$, where $\nu$ is a bounded nonnegative measure.

Proof. Let $(u_n)_n$ be a $(PS)_c$ sequence. Thus, as $n \to \infty$, (20) and (21) hold, where $o_n(1) \to 0$ as $n \to \infty$ and $\|\cdot\|$ is the norm given in (7). Using Lemma 4, the sequence $(u_n)_n$ is bounded in X. By the Banach-Alaoglu theorem, since X is a reflexive space, there exists $u \in X$ such that, up to subsequences, $u_n \rightharpoonup u$ in X. Moreover, (10) is in force. Consider the auxiliary sequence of functions $(z_n)_n$, $z_n(x) \ge 0$ in $\mathbb{R}^N$ for all $n \in \mathbb{N}$, and define the measures $\eta_n = z_n \, dx$. We claim that $(\eta_n)_n$ converges tightly to a bounded nonnegative measure $\eta$ on $\mathbb{R}^N$. First, we prove that there is $\ell > 0$ such that (22) holds. Actually, $L, M > 0$. Indeed, using (1), (2) and Hölder's inequality with exponents $r$ and $p^*/k$, we obtain the required estimate. Hence, if $M = 0$, then, by letting $n \to \infty$, thanks to (20), we arrive at $0 \le L/p \le c < 0$, which is a contradiction. Thus $M > 0$, and Sobolev's inequality gives $L > 0$. The continuity of the functional J in $L^{p^*}(\mathbb{R}^N)$, J given in Lemma 1, implies the existence of the limit H. Clearly $H \ge 0$. We claim that $H > 0$. Multiplying (20) by $p^*$ and then subtracting (21), we obtain, as $n \to \infty$, an identity in which, since $(u_n)_n$ is bounded in X and $p, q, k < p^*$, $\lambda > 0$, $L > 0$ and $Q \ge 0$, necessarily $H > 0$, being $c < 0$. Consequently, condition (22) holds with $\ell = L + Q + M + \lambda H > 0$. We can apply Lemma 5 to the sequence $(z_n)_n$. Hence, up to a subsequence, three situations can occur: Compactness, Vanishing or Dichotomy. In particular, thanks to Theorem 2 (cf. Remark 2), Compactness is equivalent to tightness, so that we have to exclude Vanishing and Dichotomy for the sequence $(z_n)_n$. We immediately see that Vanishing cannot occur: indeed, from (22), we can assume that there exists $R_1 \in (0, \infty)$ such that $\int_{B_{R_1}(0)} z_n(x) \, dx \ge \ell/2 > 0$, and in turn (b) in Lemma 5 fails. To prove that Dichotomy cannot hold, we argue by contradiction and assume that there exists $\bar\ell \in (0, \ell)$ such that, for all $\varepsilon > 0$, there exist $R > 0$, a sequence $(R_n)_n$ with $2R < R_n \to \infty$, and $(y_n)_n$ in $\mathbb{R}^N$ such that (24) holds for all n large. Let $u^1_n = \varphi^1_n u_n$ and $u^2_n = \varphi^2_n u_n$ for all $x \in \mathbb{R}^N$ and all $n \in \mathbb{N}$, with suitable cut-off functions $\varphi^1_n, \varphi^2_n$. Then $\mathrm{Supp}(u^1_n) = \{x \in \mathbb{R}^N : |x - y_n| \le 2R\}$ and $\mathrm{Supp}(u^2_n) = \{x \in \mathbb{R}^N : |x - y_n| \ge R_n\}$ are disjoint sets for every $n \in \mathbb{N}$. In addition, $\mathrm{dist}\big(\mathrm{Supp}(u^1_n), \mathrm{Supp}(u^2_n)\big) \to \infty$. In particular, by (24) and the facts that $\|D\varphi^1_n\|_\infty \le c/R$ and $\|D\varphi^2_n\|_\infty \le c/R_n$, this yields estimates with error $o_\varepsilon(1) \to 0$ as $\varepsilon \to 0$; similar formulas hold for $\int_{\mathbb{R}^N} |Du^i_n|^q \, dx$, $i = 1, 2$. Furthermore, by Hölder's inequality and (24), similar formulas hold for $\int_{\mathbb{R}^N} K |u^i_n|^{p^*} dx$, $i = 1, 2$. Consequently, (20), (21) and (24) give, respectively, (25) and (26), where we first let $n \to \infty$ and then $\varepsilon \to 0$. As above, eventually passing to subsequences, there exist nonnegative limits $\alpha_i, \beta_i$, $i = 1, 2$. Multiplying (25) by q and p, respectively, and then subtracting (26), both evaluated at $u^i_n$, we obtain relations from which we deduce, for $n \to \infty$ and, since $(u^i_n)_n$ is bounded, as $\varepsilon \to 0$, the limits (27) and (28). In particular, since $q < p$ and the left-hand side is nonnegative, (27) and (28) give, respectively, (29) and (30). If $q < p < k < p^*$, then (29) is trivial, while (30) cannot occur since $c < 0$ but the right-hand side is positive, being $p < k$. This contradiction proves that, in this case, Compactness holds.
We claim that inequality (30) cannot occur also when $1 < k < p$, so that we cover both cases $q < k < p$ and $1 < k \le q < p$. To this aim, note that, from (24), it follows that either $\alpha_1 = 0$ or $\alpha_2 = 0$, depending on whether $(y_n)_n$ is unbounded or not. Indeed, if $(y_n)_n$ is unbounded, then $\mathrm{Supp}(u^1_n)$ escapes to infinity as $n \to \infty$; consequently, from (31), where C is the constant obtained from the boundedness of the $(PS)_c$ sequence and from the continuity of the embedding of $D^{1,p}(\mathbb{R}^N)$ in $L^{p^*}(\mathbb{R}^N)$, thanks to (1) we can apply the Lebesgue dominated convergence theorem to the functions $\chi_{B_R(y_n)} V^r$, obtaining $\alpha_1 = 0$. On the other hand, if $(y_n)_n$ is bounded, then, arguing as above and noting that in this case $\mathrm{Supp}(u^2_n)$ becomes empty as $n \to \infty$, we get $\alpha_2 = 0$. First, consider the case $\alpha_2 = 0$; of course $\alpha_1 > 0$. From (26) with $i = 2$, by the definition of $\beta_2$ and Sobolev's inequality, we get (32), as in [42]. Inserting (32) into (30) and using that $\beta_1 \ge 0$, we reach an inequality which is a contradiction, since $c < 0$ while the right-hand side is nonnegative if $\lambda$ satisfies (19)₂, where we have used Sobolev's inequality and (14). In the case $\alpha_1 = 0$, we can repeat the argument above to reach the required contradiction. The proof of the claim is thus concluded; in other words, Compactness holds also in case (19). Consequently, the first concentration compactness principle guarantees that there exists a sequence $(y_n)_n$ in $\mathbb{R}^N$ such that $z_n(\cdot + y_n)$ is tight in the sense of Lemma 5, that is, for arbitrary $\varepsilon > 0$ there exists $R = R(\varepsilon) \in (0, \infty)$ such that (33) holds, and so (34) follows from the definition of $(z_n)_n$. It must be that $(y_n)_n$ is a bounded sequence: otherwise, if $|y_n| \to \infty$, then $\lim_{n \to \infty} \int_{|x - y_n| < R} V |u_n|^k \, dx = 0$; thus, combining the above limit with (34), we arrive at a contradiction with $H > 0$ in (23). Hence, we can replace $y_n$ by 0 in (33) to obtain the tightness of $(z_n)_n$. Moreover, since the relevant integrals over $B_R$, the ball centered at the origin with radius R, control the critical term, we obtain the tightness of $(|u_n|^{p^*})_n$. Finally, we define for all $n \in \mathbb{N}$ the measure $\nu_n = |u_n|^{p^*} dx$ on $\mathbb{R}^N$, which is nonnegative, bounded since $M > 0$, and which verifies all the assumptions of Theorem 2; thus it admits a subsequence which converges tightly (cf. Remark 2) to $\nu$, a bounded nonnegative measure on $\mathbb{R}^N$, as claimed. The proof is complete.

Lemma 8. Let $1 < k < p$. If $c < 0$, then there exists $\hat\lambda^* > 0$ such that $E_\lambda$ satisfies the $(PS)_c$ condition for all $\lambda \in (0, \hat\lambda^*]$, where $\hat\lambda^*$ is defined as follows.

Proof. Let $(u_n)_n$ be a $(PS)_c$ sequence; clearly $(u_n)_n$ is bounded in X by Lemma 4. Furthermore, since $\hat\lambda^* < \lambda^*$, Lemma 7 implies that there exists $u \in X$ such that, up to subsequences:
(I) $u_n \rightharpoonup u$ in X;
(II) since $Du_n \rightharpoonup Du$ in $L^p(\mathbb{R}^N)$ and $Du_n \rightharpoonup Du$ in $L^q(\mathbb{R}^N)$, the sequence of measures $(|Du_n|^p dx + |Du_n|^q dx)_n$ is bounded, thus $|Du_n|^p dx + |Du_n|^q dx$ converges weakly star to $\mu$;
(III) $(|u_n|^{p^*} dx)_n$ converges tightly to $\nu$;
where $\mu, \nu$ are bounded nonnegative measures on $\mathbb{R}^N$. Applying (i) of Lemma 6, there exist an at most countable set J, a family $(x_j)_{j \in J}$ of distinct points in $\mathbb{R}^N$, and two families $(\nu_j)_{j \in J}, (\mu_j)_{j \in J} \subset (0, \infty)$ such that the decompositions hold, where $\delta_x$ is the Dirac mass of weight 1 concentrated at $x \in \mathbb{R}^N$, with $\nu_j$ and $\mu_j$ satisfying (37). Take a standard cut-off function $\psi \in C^\infty_c(\mathbb{R}^N)$ such that $0 \le \psi \le 1$ in $\mathbb{R}^N$, $\psi = 0$ for $|x| > 1$, $\psi = 1$ for $|x| \le 1/2$.
For each index $j \in J$ and each $0 < \varepsilon < 1$, define $\psi_\varepsilon$ accordingly. Since $E'_\lambda(u_n)\psi \to 0$ for all $\psi \in X$, being $(u_n)_n$ a $(PS)_c$ sequence, choosing $\psi = \psi_\varepsilon u_n$ in (13) we obtain (38) as $n \to \infty$. For $\omega \Subset \mathbb{R}^N$, being $p^*/k < p^*$, taking for instance $\omega = B_\varepsilon(x_j)$ and using (10), we have, up to subsequences, $|u_n|^k \to |u|^k$ in $L^{p^*/k}(\omega)$, $|u_n(x)|^k \to |u(x)|^k$ a.e. in $\omega$, and there exists $w \in L^{p^*/k}(\omega)$ dominating $|u_n|^k$ a.e. in $\omega$; in turn, by the Lebesgue dominated convergence theorem, we get (39). Consequently, using (II), (III) and (39) in (38), we obtain (40). From Hölder's inequality, we have (41). Furthermore, arguing as above, and since $D^{1,p}(\mathbb{R}^N)$ is compactly embedded in $L^p(\omega)$ for $\omega \Subset \mathbb{R}^N$, being $p < p^*$, then taking for instance $\omega = B_\varepsilon(x_j)$, we have $u_n \to u$ in $L^p(\omega)$, $u_n(x) \to u(x)$ a.e. in $\omega$, up to subsequences, and there exists $w_2 \in L^p(\omega)$ such that $|u_n(x)| \le w_2(x)$ a.e. in $\omega$. Thus $|u_n(x) D\psi_\varepsilon(x)| \le C w_2(x)$ a.e. in $\omega$, as well as in $\mathbb{R}^N$, and in turn the Lebesgue dominated convergence theorem gives (42). Consequently, passing to the limit for $n \to \infty$ in (41), using the boundedness of $(u_n)_n$ and Hölder's inequality with exponents $N/(N-p)$ and $N/p$, we obtain, thanks to (42) and the properties of $\psi_\varepsilon$, the estimate (43). Similarly, by replacing p with q, we gain the analogous estimate. In turn, by letting $\varepsilon \to 0$ and then $n \to \infty$, and arguing as in (31), we deduce from (40), as $\varepsilon \to 0$, the equality (44). This equality establishes that the concentration of the measure $\mu$ cannot occur at points where $K(x_j) = 0$; moreover, combining (44) and (37), it follows that the measure $\nu$ cannot concentrate at those points either. Hence the set $X_J := \{x_j : j \in J\}$ does not contain the points $x_j$ which are zeros of K. If $\lambda \in (0, \hat\lambda^*]$, then (46) produces the required contradiction, so that $J_2 = \emptyset$, concluding the proof of the claim. On the other hand, a possible concentration at infinity is ruled out by tightness; but, for completeness, we give the proof of this fact, following the idea of Chabrowski in [8] and Ben-Naoum et al. in [4]. Let $R > 0$ and define $\nu_\infty$ and $\mu_\infty$. It is clear that $\nu_\infty$ and $\mu_\infty$ both exist and are finite. Now we claim that the analogue of (37) holds, precisely (47). Take, as before, another cut-off function $\psi_R \in C^\infty(\mathbb{R}^N)$ such that $0 \le \psi_R \le 1$ in $\mathbb{R}^N$, $\psi_R(x) = 0$ for $|x| < R$ and $\psi_R(x) = 1$ for $|x| > 2R$. By (8) we can write (48), where we used $(a + b)^\alpha \le c(a^\alpha + b^\alpha)$, first with $\alpha = p > 1$ and then with $\alpha = 1/p < 1$. On the other hand, by the properties of $\psi_R$, we obtain (49), and similar inequalities hold for $|Du_n|^q$ and $|u_n|^{p^*}$ for all $n \in \mathbb{N}$. Consequently, by using the definitions of $\nu_\infty$ and $\mu_\infty$, we immediately deduce (50) and (51). In turn, replacing inequalities (49), (50) and (51) in (48), we arrive at (47). Then, since $E'_\lambda(u_n)\psi \to 0$ for all $\psi \in X$ as $n \to \infty$, being $(u_n)_n$ a $(PS)_c$ sequence, choosing $\psi = \psi_R u_n$ in (13) we get (52) as $n \to \infty$. Similarly to the proof of (43), and using that $u \in L^{p^*}(\mathbb{R}^N) \cap L^{q^*}(\mathbb{R}^N)$, from (52) we obtain (53). Furthermore, being $(u_n)_n$ bounded in $L^{p^*}(\mathbb{R}^N)$ and by the definition of $\nu_\infty$, we gain (54) and (55). Thanks to (47), (53), (54) and (55), and reasoning as above, we deduce that concentration at infinity cannot occur if $\lambda \in (0, \hat\lambda^*]$. Consequently, since $u_n(x) \to u(x)$ a.e. in $\mathbb{R}^N$ by (10), the Brezis-Lieb lemma in [5] implies (56), since $(u_n)_n$ is bounded in X; a similar argument shows (57). Now we define the operators $A_p$ and $A_q$ for all $u, \varphi \in X$.
Using (13) with $\psi = u_n - u$, and then (56) and (57), we obtain (58). Using the monotonicity of $A_q$, see [19], and applying the limsup to both terms together with (58), we get (59). Since $u_n \rightharpoonup u$ in $D^{1,q}(\mathbb{R}^N)$, we have $\langle A_q(u), u_n - u \rangle \to 0$ as $n \to \infty$; in turn, (59) gives (60). On the other hand, using the monotonicity of $A_p$ and the definition of weak convergence, we obtain, thanks to (60), the relation (61). The same argument holds for $A_q$. Thus, by virtue of Lemma 3 applied with $a(x, \xi) = |\xi|^{p-2}\xi$, condition (61) is equivalent to strong convergence. Next, take a cut-off function $\tau \in C^\infty(\mathbb{R}^+_0)$, nonincreasing and such that $\tau(t) = 1$ if $0 \le t \le T_0$ and $\tau(t) = 0$ if $t \ge T_1$. We consider the truncated functional $E_\infty$ and the auxiliary function defined in (63), which is strictly increasing and positive in $(T_1, \infty)$; cf. Fig. 2. Furthermore, by the regularity of both $\tau$ and $E_\lambda$, it follows that $E_\infty \in C^1(X, \mathbb{R})$.

Lemma 9. Let $E_\infty$ be the truncated functional of $E_\lambda$.

Proof. We prove (a) by contradiction. If $\|u\|_{D^{1,p}} \in [T_0, \infty)$, by the above analysis we see that $E_\infty(u) \ge 0$. This contradicts $E_\infty(u) < 0$; thus $\|u\|_{D^{1,p}} < T_0$, and the last part of (a) is a consequence of the continuity of $E_\infty$ and of (63)₁. Concerning claim (b), if $c < 0$ and $(u_n)_n \subset X$ is a $(PS)_c$ sequence of $E_\infty$, then we may assume that $E_\infty(u_n) < 0$ and $E'_\infty(u_n) \to 0$ as $n \to \infty$. By (a), we have $\|u_n\|_{D^{1,p}} < T_0$, so that $E_\infty(u_n) = E_\lambda(u_n)$ and $E'_\infty(u_n) = E'_\lambda(u_n)$. By Lemma 8, since $\lambda \in (0, \hat\lambda^*]$, $E_\lambda$ satisfies the $(PS)_c$ condition for $c < 0$; thus there is a convergent subsequence of $(u_n)_n$ in X. In other words, $E_\infty$ satisfies the $(PS)_c$ condition for every $c < 0$. The proof is complete.

Proof of Theorem 1

In this section we prove the main existence result, Theorem 1, whose statement is given in the Introduction; but, first, we briefly recall the definition of the genus, inspired by [1]. Let Y be a real Banach space and let $\Sigma$ be the class of closed symmetric subsets of $Y \setminus \{0\}$. Let $A \in \Sigma$; the genus $\gamma(A)$ of A is defined as the smallest integer N such that there exists an odd map $\Phi \in C(A, \mathbb{R}^N \setminus \{0\})$. We set $\gamma(\emptyset) = 0$ and $\gamma(A) = \infty$ if there are no integers with the above property. The main properties of the genus are listed in the next proposition. For completeness, we recall the classical Deformation Lemma (see [39]).

Lemma 10. Let Y be a Banach space and consider $f \in C^1(Y, \mathbb{R})$ satisfying the (PS) condition. If $c \in \mathbb{R}$ and N is any neighborhood of the critical set $K_c$, then there exists a deformation $\eta_t$ such that, among other properties: (4) $f(\eta_t(u)) \le f(u)$ for all $t \in [0, 1]$ and all $u \in Y$; (7) if f is even, $\eta_t$ is odd in u. Note that, following Remark 3.5 in [40], it is enough to assume that the (PS) condition holds at level c. Now we are ready to prove our main result, that is, the existence Theorem 1, whose statement is given in the Introduction.

Proof of Theorem 1. To reach the claim it is enough to prove that, for all $j \in \mathbb{N}$, there is an $\varepsilon_j = \varepsilon(j) > 0$ such that (66) holds. Since $W_j$ is a finite-dimensional space, all the norms on $W_j$ are equivalent. Thus, we can define the constants $a_j, d_j$ appearing in (67) and (68). On the other hand, for $t \in (0, T_0)$, by (64) and since $K(x) \ge 0$ in $\mathbb{R}^N$, we arrive at an estimate valid for every $v \in W_j$ with $\|v\|_{D^{1,p}} = 1$. Now we obtain, thanks to (67) and (68),

\[
E_\infty(tv) \;\le\; t^q\Big(\frac{a_j}{q} \;-\; \frac{\lambda d_j}{k}\, t^{k-q} \;+\; \frac{1}{p}\, t^{p-q}\Big),
\qquad t \in (0, T_0).
\]

Choosing t suitably, this proves (66). Consequently, $E_\infty^{-\varepsilon_j} \in \Sigma_j$, and in turn the minimax values $c_j$ are well defined. Furthermore, $E_\infty$ is bounded from below, hence $c_j > -\infty$ (which is why we consider $E_\infty$ instead of $E_\lambda$); thus the proof of claim (65) is concluded. By [11] and [39], it follows from (65) that $c_j$, $j \in \mathbb{N}$, is a critical value for $E_\infty$.
Then, from Lemma 9, we see that $E_\infty$ satisfies the $(PS)_{c_j}$ condition for all $c_j < 0$, and this implies that $K_{c_j}$ is a compact set; hence $\gamma(K_{c_j}) < \infty$ by virtue of Proposition 1. We claim that if, for some $j \in \mathbb{N}$, there is an $i \ge 1$ such that $c = c_j = c_{j+1} = \cdots = c_{j+i}$, then $\gamma(K_c) \ge i + 1$ (claim (72)). Indeed, arguing by contradiction and applying the deformation $\eta_1$ of Lemma 10, we get $\eta_1(A \setminus U) \in \Sigma_j$, so that, by the definition of $c_j$ and thanks to (73), a contradiction follows. This contradiction proves claim (72). To complete the proof, we observe that, for all $j \in \mathbb{N}^+$, we have $\Sigma_{j+1} \subset \Sigma_j$ and $c_j \le c_{j+1} < 0$. If all the $c_j$ are distinct, then $\gamma(K_{c_j}) \ge 1$, so that $K_{c_j} \neq \emptyset$, and thus $(c_j)_j$ is a sequence of distinct negative critical values of $E_\infty$; a sequence of solutions with negative energy is so obtained, as required. If, for some $j_0$, there exists an $i \ge 1$ such that $c = c_{j_0} = c_{j_0+1} = \cdots = c_{j_0+i}$, from (72) we have $\gamma(K_{c_{j_0}}) \ge i + 1 > 1$, which shows that $K_{c_{j_0}}$ has infinitely many distinct elements. Also in this case we arrive at a sequence of solutions with negative energy. By Lemma 9, $E_\lambda(u) = E_\infty(u)$ when $E_\infty(u) < 0$, so that the functional $E_\lambda(u)$, being even, possesses at least m pairs of nonzero critical points with negative critical values. Therefore, problem (P) has at least 2m nontrivial weak solutions with negative energy.
10,432.8
2020-11-30T00:00:00.000
[ "Mathematics" ]
Natural Flavonoids Quercetin and Kaempferol Targeting G2/M Cell Cycle-Related Genes and Synergize with Smac Mimetic LCL-161 to Induce Necroptosis in Cholangiocarcinoma Cells

Cholangiocarcinoma (CCA) is an aggressive cancer associated with a very poor prognosis and low survival rates, primarily due to late-stage diagnosis and low response rates to conventional chemotherapy. Therefore, there is an urgent need to identify effective therapeutic strategies that can improve patient outcomes. Flavonoids, such as quercetin and kaempferol, are naturally occurring compounds that have attracted significant attention for their potential in cancer therapy by targeting multiple genes. In this study, we employed network pharmacology and bioinformatic analysis to identify potential targets of quercetin and kaempferol. The results revealed that the target genes of these flavonoids were enriched in G2/M-related genes, and higher expression of G2/M signature genes was significantly associated with shorter survival in CCA patients. Furthermore, in vitro experiments using CCA cells demonstrated that quercetin or kaempferol induced cell-cycle arrest in the G2/M phase. Additionally, when combined with the Smac mimetic LCL-161, an IAP antagonist, quercetin or kaempferol synergistically induced RIPK1/RIPK3/MLKL-mediated necroptosis in CCA cells while sparing non-tumor cholangiocyte cells. These findings shed light on an innovative therapeutic combination of flavonoids, particularly quercetin and kaempferol, with Smac mimetics, suggesting great promise as a necroptosis-based approach for treating CCA and potentially other types of cancer.

Introduction

Cholangiocarcinoma (CCA) originates from the cholangiocytes lining the biliary tree, in both intrahepatic and extrahepatic bile ducts. Although the incidence of CCA in Thailand is almost 100 times higher than in other regions (approximately 85 per 100,000), the incidence rate has increased globally in recent decades [1]. CCA is a highly heterogeneous and aggressive malignancy with a complex tumor microenvironment [2,3]. Even though surgical resection represents the curative procedure for patients presenting in the early stages, the majority of patients are diagnosed with unresectable disease at an advanced stage. Although there have been advancements in therapeutic strategies, including targeted therapy and immunotherapy, chemotherapy remains the primary treatment for most CCA patients with initially unresectable disease [2,4]. Currently, gemcitabine, or gemcitabine combined with cisplatin, is a standard regimen for first-line chemotherapy.

The crystallographic structure of DNA topoisomerase II (PDB ID: 5GWK) was obtained from the RCSB Protein Data Bank. To prepare the protein for docking, a previously established procedure was followed. In summary, all non-protein molecules were removed from the structure, and missing hydrogen atoms and Kollman charges were added using AutoDockTools-1.5.7 software (The Scripps Research Institute, La Jolla, CA, USA). The prepared protein structures were saved in PDBQT format to be used in subsequent molecular docking studies.

Molecular Docking

Molecular docking studies were conducted using AutoDock Vina with default parameters to analyze the interactions between etoposide, quercetin, kaempferol, and topoisomerase II. The grid boxes were set with dimensions of 40 × 40 × 40 points, a spacing of 1 Å, and a center grid box positioned at coordinates (x, y, z) = (23.307, −38.584, −59.568).
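For readers who want to reproduce a docking run of this kind, the following minimal Python sketch writes an AutoDock Vina configuration with the grid settings quoted above and invokes the command-line tool. The file names are placeholders rather than the study's actual files, and the 40-point, 1 Å-spacing grid is expressed as a 40 Å box, since Vina expects box sizes in ångströms.

    # Minimal sketch (file names are placeholders, not from the study).
    import subprocess

    config = """\
    receptor = topoisomerase_II_5GWK.pdbqt
    ligand = quercetin.pdbqt
    center_x = 23.307
    center_y = -38.584
    center_z = -59.568
    size_x = 40
    size_y = 40
    size_z = 40
    """
    with open("vina.conf", "w") as f:
        f.write(config)

    # Vina reports poses ranked by predicted binding energy (kcal/mol);
    # the top-ranked pose corresponds to the lowest binding energy.
    subprocess.run(["vina", "--config", "vina.conf", "--out", "poses.pdbqt"],
                   check=True)

The same configuration, with only the ligand line changed, would be reused for etoposide and kaempferol to obtain comparable binding energies.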
The conformation with the lowest binding energy was selected, and the protein-ligand interaction was evaluated using BIOVIA Discovery Studio 2021 (Dassault Systèmes, San Diego, CA, USA).

Protein-Protein Interaction (PPI) Network and Functional and Pathway Enrichment Analysis

A protein-protein interaction (PPI) network analysis was performed on the target genes using the STRING database (https://www.string-db.org/, accessed on 2 May 2023) [42]. In addition, to investigate the functional pathways that are more enriched in the set of target genes in the PPI network than in the background, enrichment analyses for Gene Ontology (GO), Reactome pathways, and local network clusters (STRING) were obtained from the analysis section of the STRING database. The results are shown as a bubble chart, in which the false discovery rate (FDR) is represented by color, the gene ratio by bubble size, and the strength (enrichment effect) on the X-axis.

Overall Survival of Target Gene Expression in CCA Patients

The GSE76297 and GSE89749 datasets were downloaded from the Gene Expression Omnibus (GEO) database (https://www.ncbi.nlm.nih.gov/geo/, accessed on 8 May 2023). A comparative analysis of the expression of the 18 overlapping genes was performed on 91 pairs of tumor and non-tumor tissues from the GSE76297 dataset using a paired-sample t-test, presented as box plots. Kaplan-Meier curves and the log-rank test were used to compare patient survival between G2/M groups using R statistical software (version 4.1.0) (The R Project for Statistical Computing, Indianapolis, IN, USA) with R's "survminer" package. Survival data were obtained from the GSE89749 dataset. The G2/M signature group, which includes CCNA2, CCNB1, CCNB2, CDC25B, CDC25C, CDK1, ABL1, AURKB, CHEK1, and PLK1, was used to stratify patients into two groups. Each gene was initially divided into low and high groups using the median as the cutoff, resulting in a score of 0 or 1, respectively. Subsequently, the scores of all genes in the G2/M signature were summed, and patients were stratified into low (scores ranging from 0 to 4) and high (scores ranging from 5 to 10) groups.

Cell Lines and Culture

The HuCCT-1 cell line, derived from human cholangiocarcinoma (CCA), was obtained from the Japanese Collection of Research Bioresources (JCRB) Cell Bank in Osaka, Japan. The RMCCA-1 cell line, also a human CCA cell line, was developed from Thai CCA patients [43]. Both HuCCT-1 and RMCCA-1 cells were cultured in HAM's F-12 medium (HyClone Laboratories, Logan, UT, USA) supplemented with 10% fetal bovine serum (Sigma, St. Louis, MO, USA) and 1% penicillin-streptomycin (HyClone Laboratories, Logan, UT, USA). All cell lines were maintained in a humidified incubator at 37 °C with 5% CO2. Furthermore, rigorous testing confirmed that there was no mycoplasma contamination in any of the cell lines.

Cell Cycle Analysis

CCA cells were treated with quercetin or kaempferol at the specified concentrations for 48 h. Following the treatment, cell-cycle analysis was conducted by employing propidium iodide (PI) staining to assess DNA content. Briefly, the cells were fixed with 70% ethanol and subsequently washed with phosphate-buffered saline (PBS; HyClone Laboratories, Logan, UT, USA). The cells were then resuspended in PBS containing 0.25% Triton X-100, along with RNase A (100 µg/mL) and PI (50 µg/mL), and incubated for 30 min. Finally, the stained cells were analyzed using flow cytometry for further examination.
Data analysis was then performed using FlowJo™ version 10 software (Becton Dickinson and Company (BD), Franklin Lakes, NJ, USA).

Cell Death Induction and Detection by Annexin V/PI Staining

For the treatment conditions involving the Smac mimetic LCL-161 (5 µM for RMCCA-1 and 25 µM for HuCCT-1) and zVAD-fmk (20 µM), the inhibitors were pre-incubated for a minimum of 2 h, followed by treatment with quercetin or kaempferol (concentrations as indicated). Cell death was assessed through Annexin V-FITC and PI staining, followed by flow cytometry analysis. In brief, cells were washed and then suspended in Annexin V binding buffer containing recombinant Annexin V-FITC (ImmunoTools, Friesoythe, Germany) and PI (Invitrogen, Carlsbad, CA, USA). The stained cells were examined using a flow cytometer (Navios, Beckman Coulter, Indianapolis, IN, USA). A total of ten thousand events were collected for each sample, and the data were analyzed using Navios software. The combination index (CI) was calculated based on the Chou-Talalay method, where CI values of 1, <1, and >1 indicate an additive effect, synergism, and antagonism, respectively [44].

Western Blot Analysis

The cells were washed twice with ice-cold PBS and lysed in RIPA buffer (Merck Millipore, Darmstadt, Germany) supplemented with a proteinase inhibitor cocktail (Roche, Mannheim, Germany) on ice for 30 min. Total protein concentrations were determined using the Bradford assay (Bio-Rad, Hercules, CA, USA). Total proteins (20-50 µg) were separated by 10-20% SDS-PAGE and transferred onto PVDF membranes. The membranes were blocked with a 5% blotting-grade blocker (Bio-Rad, Hercules, CA, USA) at room temperature for 1 h, followed by overnight incubation with primary antibodies at 4 °C. The primary antibodies used in this study were anti-RIPK1 (610459) from BD Biosciences (San Jose, CA, USA), anti-MLKL (ab184718) and anti-phosphorylated MLKL (ab187091) from Abcam (Cambridge, UK), and anti-RIPK3 (8457) and anti-β-Actin (4970) from Cell Signaling (Danvers, MA, USA). After incubation with primary antibodies, the blots were washed three times with TBS-T (Tris-buffered saline, 0.5% Tween 20) buffer and incubated with horseradish peroxidase-conjugated secondary antibodies (Cell Signaling Technology, Danvers, MA, USA) at room temperature for 1 h. The proteins were visualized by enhanced chemiluminescence according to the manufacturer's instructions (Bio-Rad, Hercules, CA, USA) using an Amersham ImageQuant 800 Western blot imaging system. All the Western blots shown are representative of at least three independent experiments.

Statistical Analysis

All data were analyzed using either R statistical software (version 4.1.0) or SPSS (version 22.0, IBM Corp., Armonk, NY, USA). Bioinformatics analyses involved the use of Wilcoxon or Student's t-tests to assess group differences. The log-rank test was utilized to examine variations in survival between G2/M signature groups, with Kaplan-Meier curves used to visualize patient survival. Pearson's correlation coefficient (r) was employed for all correlation analyses. For in vitro experiments, the results are presented as the mean ± standard deviation (S.D.) of at least three independent experiments. Student's t-test was conducted to compare two groups. Statistical significance was defined as a p-value < 0.05, indicated by asterisks (* p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001).
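The median-based G2/M scoring described in the methods above lends itself to a compact implementation. The sketch below is a hypothetical illustration with an assumed data layout (one row per patient, one column per signature gene), not the authors' script, but it reproduces the same stratification rule in Python with pandas:

    # Minimal sketch of the G2/M signature stratification (assumed layout).
    import pandas as pd

    G2M_GENES = ["CCNA2", "CCNB1", "CCNB2", "CDC25B", "CDC25C",
                 "CDK1", "ABL1", "AURKB", "CHEK1", "PLK1"]

    def g2m_groups(expr: pd.DataFrame) -> pd.Series:
        # Score each gene 1 if expression is above the cohort median, else 0.
        scores = (expr[G2M_GENES] > expr[G2M_GENES].median()).astype(int)
        total = scores.sum(axis=1)  # per-patient score in 0..10
        # Low group: scores 0-4; high group: scores 5-10, as in the paper.
        return pd.cut(total, bins=[-1, 4, 10], labels=["low", "high"])

The resulting labels can then be passed to a Kaplan-Meier fit and log-rank test (for instance with survminer in R, as in the paper, or the lifelines package in Python).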
Molecular Docking of Quercetin or Kaempferol with DNA Topoisomerase II Previous research has demonstrated that flavonoids with multiple phenolic rings, including quercetin and kaempferol, can interact with and inhibit DNA topoisomerase II, similar to the chemotherapeutic drug etoposide, which can induce DNA double-strand breaks [35]. The induction of such DNA damage is crucial for its anti-cancer effects, as it disrupts the process of DNA replication and leads to cell-cycle arrest and eventual cell death. We initially investigated the potential binding interactions between quercetin, kaempferol, and DNA topoisomerase II using molecular docking analysis. Molecular docking analysis is a valuable tool in drug discovery and development, enabling researchers to predict and evaluate the binding interactions between potential therapeutic agents and target proteins. The results demonstrated that etoposide, a well-known molecule that targets DNA topoisomerase II and the native ligand of the complex used in the analysis, had a binding energy of −8.4 kcal/mol, while quercetin and kaempferol had binding energies of −7.0 kcal/mol and −6.9 kcal/mol, respectively (Figure 1A-C). These results suggest that quercetin and kaempferol have the potential to bind to DNA topoisomerase II. Identifying Cell Cycle-Related Gene Targets of Quercetin or Kaempferol Previous studies have demonstrated that flavonoids, including quercetin and kaempferol, cause cell-cycle arrest and inhibit cell proliferation in various types of cancer [30][31][32][33][34]. Therefore, to identify the target genes of quercetin and kaempferol related to cell-cycle regulation, the targets of quercetin and kaempferol were predicted from databases including SwissTargetPrediction, BindingDB, PharmMapper, and SuperPred. After merging the targets for both compounds, a total of 473 targets were obtained. From these targets, 18 genes were selected based on their intersection with cell cycle-related genes (ABL1, AURKB, CCNA2, CCNB1, CCNB2, CCNB3, CDC25B, CDC25C, CDK1, CDK2, CDK6, CDK7, CHEK1, GSK3B, HDAC2, HDAC8, PLK1, and TGFB2) (Figure 2A,B). To study the relationships between the selected target genes, a PPI network and functional enrichment analyses were performed using the STRING database. The PPI network demonstrated associations between the 18 target genes (Figure 2C). In the Gene Ontology enrichment analysis, positive regulation of the G2/M transition of the meiotic cell cycle was significantly enriched in the Biological Process category (Figure 3A). Phosphorylation of proteins involved in the G2/M transition by Cyclin A:Cdc2 complexes and the G2/M DNA replication checkpoint were significantly enriched in the Reactome pathway analysis (Figure 3B). Moreover, in the local network cluster (STRING) analysis, the G2/M DNA replication checkpoint and the DNA topoisomerase type II (double-strand cut, ATP-hydrolyzing) complex were significantly enriched (Figure 3C). These results suggest that both quercetin and kaempferol might target cell cycle-related genes, particularly the G2/M cell-cycle phase.
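As a minimal illustration of the target-overlap step just described, the merging and intersection reduce to set operations; the gene lists below are small placeholders standing in for the database exports (473 merged targets and the cell-cycle gene set), not the actual data.

```python
# Placeholder lists; the real inputs are the database exports described above.
quercetin_targets = {"CDK1", "CCNB1", "GSK3B", "AKT1"}
kaempferol_targets = {"CDK1", "CCNB2", "AURKB", "EGFR"}
cell_cycle_genes = {"CDK1", "CCNB1", "CCNB2", "AURKB", "PLK1"}

all_targets = quercetin_targets | kaempferol_targets  # merge (473 in the paper)
overlap = sorted(all_targets & cell_cycle_genes)      # intersect (18 in the paper)
print(overlap)  # ['AURKB', 'CCNB1', 'CCNB2', 'CDK1']
```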
G2/M Cell Cycle Target Genes of Quercetin or Kaempferol Are Associated with Shorter Survival in CCA Patients To examine the relationship between the expression levels of quercetin and kaempferol target genes associated with cell-cycle regulation and overall survival in CCA samples, we utilized data from the GSE76297 and GSE89749 cohorts. Initially, we assessed the expression levels of these target genes in the CCA samples. Notably, our findings indicated that the majority of the quercetin and kaempferol target genes associated with cell-cycle regulation, out of the 18 overlapping genes, were significantly upregulated in tumor tissues compared to non-tumor adjacent tissues (Figure 4A). As these 18 overlapping genes were predominantly enriched in G2/M cell-cycle associated pathways, we further constructed a G2/M gene signature by including six genes identified in our analysis (CCNA2, CCNB1, CCNB2, CDC25B, CDC25C, and CDK1), in addition to four other G2/M hallmark genes from the MSigDB database (ABL1, AURKB, CHEK1, and PLK1). Our results revealed a significant association between high expression levels of the G2/M signature gene set and poorer survival in CCA patients (p = 0.017, HR = 1.856) (Figure 4B). These results suggest that the elevated expression of the G2/M gene signature, associated with unfavorable survival outcomes in CCA patients, might be potential targets for CCA therapy. Quercetin or Kaempferol Cause G2/M Cell Cycle Arrest in CCA Cells Our bioinformatics analysis identified genes in G2/M cell-cycle associated pathways as targets of quercetin and kaempferol. To further investigate whether quercetin and kaempferol can induce G2/M cell-cycle arrest, we conducted in vitro experiments using CCA cell models, RMCCA-1 and HuCCT-1. The effects of quercetin and kaempferol on the cell cycle in CCA cells were examined. We analyzed the cell-cycle perturbations induced by quercetin and kaempferol, as depicted in the representative histograms (Figure 5). Quercetin or Kaempferol Synergize with Smac Mimetics to Induce Cell Death in CCA Cells In our previous studies, we demonstrated the potential of Smac mimetics to synergize with different inducers and induce necroptosis in CCA cells [46,47], particularly in combination with the chemotherapeutic agent gemcitabine [8]. In this study, we aimed to investigate whether the Smac mimetic LCL-161 (currently under clinical trial evaluation) could synergize with quercetin or kaempferol to induce cell death in CCA cells.
To evaluate this, we exposed CCA cells to various concentrations of quercetin (Figure 6A) and kaempferol (Figure 6B) with and without LCL-161, in the presence of the pan-caspase inhibitor zVAD-fmk. The results demonstrated a significant synergistic induction of cell death in both RMCCA-1 and HuCCT-1 CCA cells by quercetin and kaempferol when combined with the Smac mimetic LCL-161, as indicated by combination index (CI) values below 1. To determine the highest synergistic combination, we selected the treatment with the lowest CI and applied it to treat CCA cells for 48 and 72 h. Subsequently, we observed a time-dependent increase in CCA cell death upon treatment with the combination of quercetin or kaempferol with the Smac mimetic LCL-161 (Figure 6C,D,G,H). To provide a normal comparative control, we also investigated the combination treatment in MMNK-1, a non-tumor cholangiocyte cell line. Interestingly, the combination of either quercetin or kaempferol with the Smac mimetic LCL-161, in the presence of pan-caspase inhibitors, did not induce cell death in the MMNK-1 cell line (Figure 7A,B). Overall, our findings indicate a synergistic effect between quercetin or kaempferol and the Smac mimetic LCL-161 in inducing cell death specifically in CCA cells. This highlights the potential therapeutic value of combining these agents for targeted treatment of CCA patients. Combination of Quercetin or Kaempferol and Smac Mimetic Induces Cell Death through Necroptosis in CCA Cells To validate necroptosis as the form of cell death induced by the combination of quercetin or kaempferol with the Smac mimetic LCL-161, we employed multiple approaches to confirm necroptosis. Initially, we investigated the activation of phosphorylated MLKL (pMLKL), a specific marker of necroptosis, using Western blot analysis. The results demonstrated that the combination treatment of quercetin (Figure 8A) or kaempferol (Figure 8B) with the Smac mimetic LCL-161 led to an increase in pMLKL expression in both RMCCA-1 and HuCCT-1 cells. Furthermore, we employed pharmacological inhibitors targeting key necroptotic proteins, including necrostatin-1 (Nec-1), GSK'872 (GSK), and necrosulfonamide (NSA), which are specific inhibitors of RIPK1, RIPK3, and MLKL, respectively. The results revealed that all inhibitors significantly inhibited cell death induced by the combination treatment of quercetin or kaempferol with the Smac mimetic LCL-161 in both RMCCA-1 and HuCCT-1 cells (Figure 8C,D). Moreover, we generated CCA cells lacking key necroptotic proteins using genetic inhibition. CRISPR/Cas9 technology was employed to generate RIPK1 and RIPK3 knockout CCA cells, while shRNA was used to achieve MLKL knockdown in CCA cells. The successful depletion of these proteins was confirmed (Figure 8I). Our findings demonstrated that the loss of RIPK1 or RIPK3 in CCA cells significantly suppressed cell death induced by the combination treatment (Figure 8E,F). Similarly, the absence of MLKL protected CCA cells against cell death induction by the combination treatment of quercetin or kaempferol with the Smac mimetic LCL-161 (Figure 8G,H). Collectively, our comprehensive investigations utilizing various approaches consistently confirm that the combination of quercetin or kaempferol with the Smac mimetic LCL-161, when caspases are inhibited, induces cell death in CCA cells through RIPK1/RIPK3/MLKL-mediated necroptosis. Figure 6 (partial caption): quercetin (E) and kaempferol (F); combination treatment of quercetin (G) or kaempferol (H) with the Smac mimetic LCL-161 in HuCCT-1 at 48 h and 72 h. Cell death was determined by Annexin V and PI staining and flow cytometry, and the combination index (CI) was calculated. Data are presented as mean ± S.D. of three independent experiments; * p < 0.05, ** p < 0.01; LZ: LCL-161 + zVAD-fmk.
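The CI values cited here follow the Chou-Talalay definition given in the Methods; for two drugs it reduces to the simple form below. This is a hedged sketch: in practice `dx1` and `dx2` come from median-effect dose-response fits, which are not shown.

```python
# Two-drug Chou-Talalay combination index. d1, d2: doses used together;
# dx1, dx2: doses of each drug alone producing the same effect level.
def combination_index(d1: float, d2: float, dx1: float, dx2: float) -> float:
    """CI < 1 indicates synergism, CI = 1 additivity, CI > 1 antagonism."""
    return d1 / dx1 + d2 / dx2

# Example: the combination achieves with doses (10, 2) what each drug alone
# needs 40 and 4 of, respectively -> CI = 0.75 (synergistic).
print(combination_index(10, 2, 40, 4))
```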
Discussion Several studies have sought to improve on Smac mimetics as a monotherapy by exploring rational combinations of Smac mimetics with cytotoxic drugs and DNA-damaging chemotherapeutic agents. This study is the first to explore the novel potential combination of flavonoids, specifically quercetin and kaempferol, with Smac mimetics. Several key findings were identified in the present study. First, we identified the molecular targets of quercetin and kaempferol related to G2/M cell-cycle regulators. These target genes are associated with shorter overall survival of CCA patients, suggesting that G2/M cell-cycle regulators could be potential therapeutic targets for CCA therapy. Second, using an in silico molecular docking approach, we observed potential interactions between quercetin and kaempferol and DNA topoisomerase II. Third, we have shown that both quercetin and kaempferol can effectively arrest CCA cells in the G2/M cell-cycle phase. This effect is likely due to the ability of quercetin and kaempferol to target multiple G2/M cell-cycle regulators and DNA topoisomerase II. Fourth, we have demonstrated that the Smac mimetic LCL-161 sensitizes CCA cells to necroptosis when caspases are inhibited, in an RIPK1/RIPK3/MLKL-dependent manner. This effect was demonstrated through experiments utilizing both pharmacological inhibitors and genetic approaches. Notably, the combination effect was observed at relatively low cytotoxic doses of quercetin and kaempferol. Lastly, the synergistic combination of flavonoids (quercetin and kaempferol) and Smac mimetics specifically induced necroptosis in CCA cell lines, while the non-tumor cholangiocytes remained resistant. Our findings provide a novel therapeutic combination of flavonoids, specifically quercetin and kaempferol, with Smac mimetics. This innovative combination holds potential for the development of a necroptosis-based therapeutic approach, not only for CCA but also for other cancer types. Necroptosis can be considered a form of immunogenic cell death (ICD) due to its dual beneficial roles in killing tumor cells and inducing antitumor immunity [48]. Studies conducted in both in vitro and in vivo models across various types of cancer have shown that necroptosis represents a promising target for effective cancer treatments and opens up opportunities for overcoming apoptosis resistance [49][50][51][52][53][54][55]. Moreover, the combination of necroptosis activation and immune checkpoint inhibitors (ICIs) targeting the PD-1/PD-L1 pathway has been shown to improve treatment efficacy and survival in mouse models of melanoma and colorectal cancer compared to monotherapy [54]. Since the majority of CCA patients are initially diagnosed at advanced stages, and the tumor microenvironment of CCA is characterized by desmoplasia and immunosuppression [2,3], current therapeutic approaches have proven ineffective. Therefore, targeting necroptosis has the potential to be a promising therapeutic strategy for CCA patients. We have previously demonstrated that the activation of necroptosis, as indicated by the analysis of the specific marker phosphorylated MLKL (pMLKL) in human CCA tissues, is associated with a high infiltration of CD8+ T cells.
This suggests that targeting necroptosis could be a promising therapeutic approach for CCA patients [47]. To further validate whether necroptosis can be triggered in vitro in CCA cell models, we utilized two promising necroptosis inducers, gemcitabine and poly(I:C), a toll-like receptor 3 (TLR3) agonist, in combination with the Smac mimetic SM-164. Interestingly, the combination of either gemcitabine or poly(I:C) with the Smac mimetic SM-164 synergistically enhanced the induction of necroptosis in CCA cells expressing RIPK1/RIPK3/MLKL when caspases were inhibited [8,46]. Since poly(I:C) exhibited lower sensitization to necroptotic cell death than gemcitabine in CCA cells, gemcitabine became the more promising combination agent. However, due to the adverse side effects of gemcitabine and the development of drug resistance, in the current study we aimed to rationally identify potential combination agents with Smac mimetics that have minimal side effects and target multiple genes. Natural compounds with functional properties similar to chemotherapeutic drugs, such as DNA-damaging effects, targeting of cell-cycle or DNA repair pathways, or inhibition of cell proliferation, are promising candidates. Initially, we identified these natural compounds through a literature search. Among them, natural flavonoids, particularly quercetin and kaempferol, have been extensively studied and gained importance as cytotoxic and cytostatic agents, exhibiting the aforementioned functional properties. However, the combination of flavonoids (quercetin and kaempferol) with Smac mimetics had not yet been evaluated in either CCA or other cancers. To identify potential targets of quercetin and kaempferol related to cell-cycle regulators, we collected a list of potential target genes of quercetin and kaempferol and overlapped them with a set of 157 cell cycle-related genes. A total of 18 overlapping genes were identified. Furthermore, the protein-protein interaction (PPI) network and functional pathway enrichment analysis of these 18 overlapping target genes revealed that most of them were enriched in the G2/M cell cycle-associated pathways. Specifically, the G2/M cell cycle-related genes CDC25C and CDC25B were enriched in the Gene Ontology (GO) pathway enrichment analysis; CDK1, CCNA2, CCNB1, and CCNB2 were enriched in the Reactome pathway enrichment analysis; and, finally, CDK1, CCNB1, and CCNB2 were enriched in the local network cluster (STRING) enrichment analysis. This study is the first to utilize network pharmacology to identify potential target genes of flavonoids (quercetin and kaempferol) in the G2/M cell cycle-related pathways. Although we did not validate the expression changes of these target genes following quercetin and kaempferol treatment, in vitro functional experiments confirmed that both quercetin and kaempferol dramatically arrested CCA cells in the G2/M cell-cycle phase. This effect may be attributed to their targeting of multiple genes related to the G2/M cell-cycle phase. Our findings are consistent with previous studies that have demonstrated the ability of both quercetin and kaempferol to induce G2/M cell-cycle arrest, leading to apoptosis in cancer cells [30][31][32][34]. In contrast to previous studies, we selected concentrations of quercetin and kaempferol that inhibit cell proliferation while inducing minimal cell death for their combination with the Smac mimetic.
Additionally, in this study, we utilized the Smac mimetic LCL-161, which has entered clinical trials for both monotherapy and combination therapy in the treatment of certain cancers [17]. Interestingly, both quercetin and kaempferol at a concentration of 25 µM, which induced minimal cell death, exhibited the highest synergistic combination with the Smac mimetic LCL-161 when caspases were inhibited in two different CCA cell lines. Further investigation is required to explore the precise underlying molecular mechanisms of how flavonoids (quercetin and kaempferol) and Smac mimetics synergistically enhance sensitivity to necroptotic cell death. Previous studies in various types of cancer have shown that treatment with Smac mimetic monotherapy can induce the expression and secretion of TNF-α via non-canonical NF-κB signaling [15,16]. Although the concentration of the Smac mimetic LCL-161 used in this study effectively induced degradation of both cIAP1 and cIAP2 (Supplementary Figure S1), it triggered only minimal cell death. Since both quercetin and kaempferol can arrest CCA cells in the G2/M phase, inhibiting cell proliferation, this may allow the accumulation of pro-death signals received from Smac mimetic treatment, possibly TNF-α, to induce CCA cell death (Figure 9). Therefore, the measurement of TNF-α secretion warrants further investigation. Additionally, the therapeutic potential of the combination treatment needs further validation in an in vivo CCA tumor model. To examine whether the combination treatment is specific to CCA cells, we performed the same experimental setting in a non-tumor cholangiocyte cell line, MMNK-1, as a representative of non-tumor cells. The combination treatment did not trigger cell death when caspases were inhibited; however, treatment with quercetin or kaempferol alone significantly induced cell death, most likely apoptosis, in a dose-dependent manner. The non-tumor cholangiocyte was more sensitive to a single treatment with quercetin than kaempferol. In contrast to the CCA cell lines (RMCCA-1 and HuCCT-1), which express the key necroptotic proteins RIPK1, RIPK3, and MLKL [8], MMNK-1 does not express RIPK3 and expresses a low level of MLKL. Therefore, the combination treatment in the presence of caspase inhibitors does not switch quercetin- or kaempferol-induced apoptosis to necroptosis. In fact, the presence of caspase inhibitors almost completely inhibited quercetin- or kaempferol-induced cell death, suggesting that single treatment with quercetin or kaempferol induces caspase-dependent cell death. Consistent with our previous studies, which demonstrated that the mRNA expression of RIPK3 and MLKL is lower in the normal bile duct compared to CCA tissues [8], we further validated these findings in tissue samples from CCA patients using immunohistochemistry (IHC). The IHC results showed a similar expression pattern for both RIPK3 and MLKL [47].
Figure 9. A proposed mechanism of necroptosis activation through the synergistic combination of quercetin or kaempferol with the Smac mimetic LCL-161. Quercetin and kaempferol are extensively studied flavonoids with potential anticancer properties. Through network pharmacology, both quercetin and kaempferol have shown the potential to interact with multiple targets, including DNA topoisomerase II and G2/M cell-cycle hallmark genes. This interaction can result in G2/M cell-cycle arrest and inhibition of cell proliferation. In parallel, the Smac mimetic LCL-161 degrades cIAP1 and cIAP2, activating a non-canonical NF-κB signaling pathway that generates pro-death signals, potentially including autocrine TNF-α. In the context of the slow proliferation of CCA cells induced by quercetin or kaempferol, the accumulation of pro-death signals generated by the Smac mimetic LCL-161 triggers necroptosis in the absence of cIAP1/2 and caspase activity. Necroptosis is initiated in CCA cells expressing the key components of necroptosis, including RIPK1, RIPK3, and MLKL. This process ultimately culminates in the release of damage-associated molecular patterns (DAMPs). This figure was created with BioRender.com (accessed on 15 June 2023). Moreover, to evaluate the therapeutic significance of the target genes of quercetin and kaempferol in the G2/M gene signature, which includes CCNA2, CCNB1, CCNB2, CDC25B, CDC25C, CDK1, ABL1, AURKB, CHEK1, and PLK1, we used this G2/M gene signature to stratify patients into two groups. Interestingly, CCA patients with high expression of this G2/M gene signature were associated with unfavorable survival outcomes (p = 0.017), indicating that these genes might be druggable targets for CCA therapy.
However, when analyzing the expression of each gene individually against patients' survival, only PLK1 turned out to be significantly associated with survival (p = 0.026) (Supplementary Figure S2). This finding indicates that the stronger association obtained from the G2/M gene signature might result from the interaction of these genes in the regulation of the G2/M cell cycle. In concordance with our study, previous studies have demonstrated a correlation between high PLK1 expression and unfavorable prognosis in CCA patients, as well as the effectiveness of inhibiting PLK1 in CCA cells [56][57][58]. While PLK1 inhibitors such as BI2536 and GW843682X have been predicted to be effective drugs for CCA patients with a poor prognosis [59], PLK1 inhibitors such as NMS-1286937 (onvansertib), BI2536, and BI6727 (volasertib) have been evaluated in clinical trials and are generally well tolerated, although their clinical efficacy as monotherapy is only partial, particularly in cancers at advanced stages [60]. These results further support the potential therapeutic use of quercetin or kaempferol in targeting multiple genes in the G2/M cell cycle. Conclusions This is the first study to explore the novel potential combination of flavonoids, specifically quercetin and kaempferol, with Smac mimetics. This novel combination shows promise for the development of a necroptosis-based therapeutic approach, not only for CCA but also for other cancer types. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nu15143090/s1, Figure S1: Smac mimetic LCL-161 triggers degradation of cIAP1 and cIAP2 in RMCCA-1 and HuCCT-1. RMCCA-1 and HuCCT-1 were treated with 5 µM or 25 µM Smac mimetic LCL-161, respectively, for the indicated time points. The expression of cIAP1 and cIAP2 was determined by Western blot analysis. β-actin served as a loading control; Figure S2: The association between PLK1 expression and survival of CCA patients. A Kaplan-Meier curve was used to compare the survival time between the high and low PLK1 expression groups. The median was used as the cut-off to stratify patients into the high and low expression groups.
7,518
2023-07-01T00:00:00.000
[ "Biology" ]
Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI Artificial intelligence (AI) systems hold great promise as decision-support tools, but we must be able to identify and understand their inevitable mistakes if they are to fulfill this potential. This is particularly true in domains where the decisions are high-stakes, such as law, medicine, and the military. In this Perspective, we describe the particular challenges for AI decision support posed in military coalition operations. These include having to deal with limited, low-quality data, which inevitably compromises AI performance. We suggest that these problems can be mitigated by taking steps that allow rapid trust calibration so that decision makers understand the AI system's limitations and likely failures and can calibrate their trust in its outputs appropriately. We propose that AI services can achieve this by being both interpretable and uncertainty-aware. Creating such AI systems poses various technical and human factors challenges. We review these challenges and recommend directions for future research. THE BIGGER PICTURE This article is about artificial intelligence (AI) used to inform high-stakes decisions, such as those arising in legal, healthcare, or military contexts. Users must have an understanding of the capabilities and limitations of an AI system when making high-stakes decisions. Usually this requires the user to interact with the system and learn over time how it behaves in different circumstances. We propose that long-term interaction would not be necessary for an AI system with the properties of interpretability and uncertainty awareness. Interpretability makes clear what the system ''knows'' while uncertainty awareness reveals what the system does not ''know.'' This allows the user to rapidly calibrate their trust in the system's outputs, spotting flaws in its reasoning or seeing when it is unsure. We illustrate these concepts in the context of a military coalition operation, where decision makers may be using AI systems with which they are unfamiliar and which are operating in rapidly changing environments. We review current research in these areas, considering both technical and human factors challenges, and propose a framework for future work based on Lasswell's communication model. Introduction The promise of artificial intelligence (AI) systems to analyze and rapidly extract insights from large amounts of data has stimulated interest in applying AI to problems in complex domains involving high-stakes decision making. [1][2][3] In such domains, human experts are relied upon to form a final decision supported by the outputs of the AI, forming a human-AI team. Several studies have shown that the performance of such teams can be greater than the performance of the human or the AI alone, 4,5 suggesting that each member of the team is able to compensate for the other's weaknesses. For this to happen, the human must build an adequate mental model of the AI and its capabilities. Failing to build a suitable mental model will result in the human miscalibrating their level of trust in the AI, and the human-AI team will perform poorly. In this Perspective, we argue that AI systems can help human team-mates build suitable mental models by giving explanations of how their outputs were arrived at (providing interpretability) and estimates of the uncertainty in their outputs.
These two factors help the human to understand both what the AI ''knows'' and what the AI does not ''know.'' These requirements are motivated by the scenario of AI-supported decision making in future military coalition operations. 6 Here, we describe the coalition setting and how AI systems may be deployed in this setting to support human decision making. We use this to motivate our proposed requirements of interpretability and uncertainty awareness for robust AI-supported decision making. We discuss the technical challenges and human factors challenges posed by these requirements, and highlight promising recent work toward solving these problems. AI in Coalition Operations The context of our AI research is the Distributed Analytics and Information Science International Technology Alliance (DAIS-ITA) (https://dais-ita.org/), which takes future military coalition operations as the motivating setting. Coalitions may be formed quickly to respond to rapidly changing threats, and operations will be conducted jointly across five domains (land, sea, air, space, and cyber), 7 presenting a complex and highly dynamic environment for military decision makers to understand. To help make sense of the ongoing situation in a coalition operation, militaries will increasingly rely on AI technologies to obtain insights that can assist human decision makers. [8][9][10] The envisaged scenario poses several challenges for current AI techniques. 11 1. Although large amounts of data may be collected during rapidly evolving operations, there will not be enough time or resources to clean and label all of these data for (re)training models. 2. During the course of an operation the situation may change dramatically, meaning that data will not be generated from a static distribution but will drift over time. 3. Adversaries may attempt to manipulate data to confuse the coalition's AI systems and, thereby, the decision makers. 4. Due to the operational environment, the network supporting the coalition may be slow and unreliable, meaning that access to large, central computing power is not guaranteed. AI services will therefore be distributed over low-power devices at the edge of the network, communicating peer-to-peer. The set of services available to an analyst at any given time will change based on their physical location, the network state, and dynamic prioritization of tasks across the network. The first three points are about the nature of the data: only small amounts of data will be available for retraining during the course of the operation, and these data may be unreliable. The AI services will therefore be operating on out-of-distribution data, where guarantees cannot be made about their performance. The final point means that human analysts will be interacting with a variety of AI services with which they may be unfamiliar. The rapid formation and dynamic nature of the coalition operation may not allow humans to build up experience of the specific AI services through training prior to, or repeated use during, the operation. These four factors will adversely affect the overall performance of the human-AI team without mitigations to improve trust calibration. In the next section we describe the concept of trust calibration, how this affects human-AI team performance, and how it could be improved by developing interpretable and uncertainty-aware AI systems. We provide definitions of these and related terms (including our usage of ''AI'') in Table 1.
Table 1. Glossary of Terms, Defined in Relation to Human-AI Teams
AI (artificial intelligence): the property of a computer or machine to display ''intelligent'' behavior more usually associated with humans or non-human animals, and the methods and technologies used to achieve this. In this article we focus largely on AI using machine learning to support human decision making.
AI service: a stand-alone piece of software implementing a single AI functionality, e.g., IBM Watson Visual Recognition (https://cloud.ibm.com/catalog/services/visual-recognition, accessed April 28, 2020).
AI system: a system composed of one or more AI services. Each service in the system may be owned or operated by a different organization or coalition partner. Where unambiguous, we refer simply to ''an AI'' to mean an AI system.
Trust level: the extent to which the human believes the AI's outputs are correct/useful for achieving their current goals in the current situation. While trust is a very broad and nuanced topic, [12][13][14] we restrict ourselves to this narrower definition to help focus our discussion.
Trustworthiness: the degree to which the AI warrants trust from the human.
Trust calibration: the process through which the human sets their trust level appropriately to the AI's trustworthiness.
Interpretable: a property of the AI system that allows a human to understand the reasons for the system's output.
Explanation: information provided by the AI system to the human that provides reasoning around why the system produced a specific output.
Aleatoric uncertainty: uncertainty caused by inherent unpredictability in the system (e.g., the outcome of a coin toss or dice roll).
Epistemic uncertainty: uncertainty caused by a lack of knowledge, reducible by observing more data.
Adapted from Hüllermeier and Waegeman, 15 Lee and See, 16 Nilsson, 17 and Tomsett et al. 18
Rapid Trust Calibration for Robust Human-AI Team Decision Making To obtain the greatest benefit from using decision-support AI, the human must have an appropriately calibrated level of trust in the system. 16,19 Trust is well calibrated when the human sets their trust level appropriately to the AI's capabilities, accepting the output of a competent system but employing other resources or their own expertise to compensate for AI errors; conversely, poorly calibrated trust reduces team performance because the human trusts erroneous AI outputs or does not accept correct ones. 16,20 Bansal et al. 21 formalize this by measuring how well humans learn and respond to the AI's error boundary (the boundary separating inputs that are correctly classified versus those that cause the AI to make mistakes). However, AI systems dealing with high-dimensional data and/or many classes will have error boundaries that are hardly self-explanatory. In the coalition setting, the human may not have the opportunity to learn the error boundary: the AI services they use may differ from those they have been trained to use (e.g., if they belong to other coalition partners), and operate on data that differs from the training data, resulting in unpredictable error boundaries. When every decision is high-stakes, the human must be able to calibrate their trust in the AI quickly and adjust their trust level on a case-by-case basis. We refer to this process as rapid trust calibration. Rapid trust calibration can be posed as a problem of communication: the AI system must quickly communicate its abilities and limitations to the user. We therefore follow van der Bles et al.
22 in framing rapid trust calibration as a communication problem, structured by Lasswell's model of communication 23: who communicates what, in what form, to whom, and with what effect. Later work 24 proposed also considering the circumstances and the purpose of the communication. We include circumstances, as these will vary greatly even within the coalition context, and purpose, as it helps make explicit the goals of the communication. In the context of AI-supported decision making, the ''who'' in question is the AI system, the ''to whom'' is the human decision maker, and the ''purpose'' of the communication is to improve the human's decisions. The ''effect'' of the communication will depend on what is communicated, in what form, and under what circumstances, as well as the characteristics of the decision maker to whom it was transmitted. Structuring future research using this model will help both in narrowing down research questions and in identifying the research's applicability to different settings. We propose that for rapid trust calibration, what is communicated should include explanations for the AI's outputs (providing interpretability) and the AI's level of uncertainty. This suggestion is informed by the decision-making literature, which suggests that trust calibration requires understanding a system's capabilities (provided through interpretability), and the reliability of the system's outputs (provided through uncertainty estimates). 19 In the next sections we further justify this view, and provide a concrete example of how these two facets could enable rapid trust calibration in a coalition operation. We turn to the associated technical challenges in the Discussion section, as well as considering the effects of the form and circumstances of the communication and the characteristics of whom is being communicated with. Why Interpretability? Doshi-Velez and Kim 25 argue that interpretability is necessary when the AI and human agents have mismatched objectives. This is likely in practice, especially in complex decision scenarios: AI systems are trained to optimize a narrow set of objectives that can be conveyed mathematically, but their outputs are then used by the human to inform a decision that was never expressed in these objectives. Consider a vision model that has been trained to recognize different kinds of vehicles in images. This model may be used by an analyst to assess the threat level of an enemy force. The downstream decision informed by the model really needs to consider the capabilities of, and threats posed by, these vehicles; the specific category of the vehicles themselves is not directly relevant. However, the AI has no concept of vehicle capabilities: it has been trained to recognize them based only on image data. Vehicles with different capabilities may have similar visual features in the training data and thus be more frequently confused by the model. In this situation, appropriate explanations could help reveal this problem to the human by highlighting the relevant visual features, revealing the mismatch between the AI's interpretation of the image and the human's and allowing them to update their mental model of the AI's capabilities. 26 The training data itself, in addition to the mechanics of training, also contribute to the objective mismatch problem. We generally assume the training data to be adequately representative of the distribution we are trying to learn. For many problems and many kinds of data, this assumption does not hold. In the coalition setting, models may be trained on data gathered during previous operations, which are not adequately representative of the new scenario to which they are being applied.
The data may be flawed in any number of unknown ways, 27 leading to unquantified biases in the models that are difficult to identify prior to deployment. Suitable explanations that identified these biases during operation would improve the human's mental model of the AI's abilities. Why Uncertainty? Interpretability gives the human access to what the AI system has learned, and how it uses that knowledge in producing outputs. Understanding what the AI does not know is also extremely important for creating a suitable mental model of the AI's capabilities. 21,28,29 To do this, the AI system must be able to estimate the uncertainty in its outputs. Uncertainty is often described as a single concept, although several authors have made attempts to categorize different kinds of uncertainty. 30,31 Weisberg 32 divides uncertainty into components of doubt and ambiguity; doubt may be quantified as a probability while ambiguity results from a lack of knowledge. Doubt and ambiguity roughly correspond to a distinction commonly made in the machine learning and statistics literature between aleatoric and epistemic uncertainty. Aleatoric uncertainty (doubt) represents uncertainty inherent in the system being modeled (e.g., through stochastic behavior) while epistemic uncertainty (ambiguity) is the uncertainty due to limited data or knowledge. 15,33,34 For example, an uncertainty-aware image classifier should exhibit high aleatoric uncertainty for images that are similar to those it was trained on, but that do not contain adequate distinguishing features for choosing between classes; it should estimate high epistemic uncertainty for images that look different from those in the training set (e.g., a noisy image, or an image of an unknown class of object). Aleatoric uncertainty is irreducible while epistemic uncertainty can be reduced by observing more data. Humans seem to think and talk about these kinds of uncertainty differently, using words like ''sure'' and ''confident'' to refer to epistemic, and ''chance'' or ''probability'' to refer to aleatoric uncertainty, 35 even if only subconsciously and despite their frequent conflation in mathematical modeling. 22,36 It is particularly important to understand epistemic uncertainty in the coalition scenario. 37 At the start of an operation, coalition partners will deploy AI systems trained on historical data. This is unlikely to adequately capture the data distributions present in a new setting because of differences in the environment and changes in adversaries' behaviors. Much of the actual input data to the AI during the coalition operation will therefore be out of distribution (not part of the distribution the AI was trained on), which will cause errors no matter how much data the system was trained on previously. 38 As an operation continues, models may be retrained on more relevant data, but the amount of data available will be limited (and possibly conflicting and of low quality). As the AI's knowledge will always be constrained by these factors, communicating its epistemic uncertainty is crucial for ensuring that the human is able to build a mental model of what the AI does not know. Example Scenario The following scenario, illustrated in Figure 1, demonstrates how both interpretability and uncertainty communication could improve human-AI team performance.
Consider an analyst assessing the level of enemy activity over the area of operations who has access to various autonomous sensors and AI services deployed by the coalition in forward positions, including a camera feeding a neural network model that can identify different kinds of enemy vehicle. During their surveillance task, a vehicle is spotted and classified by the model. On examining the explanation for the classification, the analyst sees that the model has focused on the vehicle's camouflage pattern. As the analyst knows that the enemy uses several camouflage patterns and that these are not vehicle dependent (this might not have been known when the model was originally trained), they infer that the model may be mistaken in this case (see Figure 1C). They have therefore been able to calibrate their trust appropriately and have updated their mental model of the AI's capabilities. During the same surveillance operation, another vehicle is classified by the model with high epistemic uncertainty (Figure 1D). Unknown to the analyst, the enemy has developed a new camouflage pattern and has started deploying these vehicles in the area of operations. As this pattern has not appeared in the model's training data, it reports high epistemic uncertainty, thus alerting the analyst that they should not trust its classification output. In this case, providing only an explanation could be misleading: the input image is out of distribution, so the region of latent space it is mapped to is not meaningful, potentially resulting in confusing or meaningless explanations. Although this example is somewhat contrived and overly simplified, it helps illustrate how interpretability and uncertainty awareness contribute toward rapid trust calibration. We can also transfer this simplified scenario more easily to other domains. In medical imaging diagnostics, for example, appropriate interpretability would allow a radiologist to assess how well the AI system has aligned with their own expert knowledge, enabling them to identify the model's biases for each new case. Epistemic uncertainty would allow them to quickly identify gaps in the AI's training, which is inevitable when models are deployed at different locations with diverse patient populations.
Figure 1. The plot on the left shows a 2D projection of the latent feature space of the classifier, with inputs from two different classes of vehicle depicted as magenta triangles (class 1) and black circles (class 2). Example inputs for these two classes are shown on the right of the figure (A and B). The human (ground truth) decision boundary is the dotted black line, and the classifier's learned decision boundary is the solid black line: regions where the classifier will make errors are shaded (gray for class 1 inputs mistaken for class 2, magenta for class 2 inputs mistaken for class 1). A and B are far away from the decision boundary but well within the learned data distribution, so should be classified with low epistemic uncertainty. (C) An input that confuses the classifier, because it has learned to rely on camouflage as a feature to distinguish between vehicle types. (D) An input that is far from the learned distribution, because vehicles with this camouflage pattern were not in the training data: it should be classified with high epistemic uncertainty.
Discussion Technical Challenges: Who Communicates What Before interpretability and uncertainty estimates can be used to improve human-AI decision making, we need reliable methods for creating both. This poses difficult technical challenges that have yet to be fully solved. Interpretability. One solution is to use models that are intrinsically interpretable so that accurate explanations can be produced naturally from the model structure. Some authors have suggested that this approach is the only acceptable solution for high-stakes decision making due to both technical and conceptual limitations in trying to create explanations for uninterpretable models. 39 Indeed, much current research into producing ''post hoc'' explanations 40 of (uninterpretable) neural network outputs has resulted in techniques that are difficult to validate, 41 with some failing basic sanity checks. 42 This would preclude the use of neural network models for high-stakes decision support. However, their ability to automatically learn features from low-level data means that neural networks perform well on domains for which features are difficult to engineer by hand, e.g., learning from images, audio, video, sensor streams, and natural language. These are exactly the kinds of data sources we are interested in using during coalition operations, as well as other high-stakes domains such as medicine and autonomous driving. Combining neural networks' powerful representational capacity with techniques that improve their inherent interpretability is an active research area, with a variety of approaches showing promise. [43][44][45] Uncertainty Quantification. Quantifying epistemic uncertainty requires the model to have a means of accurately estimating how far away new inputs are from the data distribution it was trained on. A common approach is to use Bayesian methods, whereby epistemic uncertainty is captured as uncertainty in the model parameters 33 or as uncertainty in function space using, for example, Gaussian processes. 46 Another promising approach is that of evidential learning, 47,48 whereby inputs are mapped to the parameters of a Dirichlet distribution over classes. Smaller parameter values represent less evidence for a class, producing a broader distribution representing greater epistemic uncertainty. This approach also benefits from a direct mapping to the framework of subjective logic. 49 Subjective logic has many appealing properties for AI applications in the coalition setting, allowing aleatoric and epistemic uncertainty to be considered during logical reasoning operations as well as providing a framework for incorporating subjective evidence from sources with different levels of trust. 50 These methods all have associated problems that require further research to overcome. Bayesian methods rely on sampling approaches that increase their computational cost at inference time while Gaussian processes present issues when scaling to high-dimensional problems. 51 The uncertainty estimates are dependent both on the specifics of the approximations and on the prior probability distributions used. The evidential learning approach learns a generative model to create out-of-distribution samples so that the classifier can be explicitly taught the input regions it should be uncertain about, 48 but this introduces complications in the training process.
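As an illustration of the evidential mapping just described, here is a hedged sketch (not code from the cited papers); the subjective-logic vacuity u = K/S is one standard way to read epistemic uncertainty off the Dirichlet parameters.

```python
import numpy as np

def dirichlet_uncertainty(evidence: np.ndarray):
    """Evidential-learning view: non-negative per-class evidence maps to a
    Dirichlet; low total evidence means a broad distribution and high
    epistemic uncertainty (subjective-logic vacuity u = K / S)."""
    alpha = evidence + 1.0      # Dirichlet parameters from evidence
    strength = alpha.sum()      # S = sum of the parameters
    probs = alpha / strength    # expected class probabilities
    epistemic = alpha.size / strength
    return probs, epistemic

# A familiar-looking input yields concentrated evidence (low vacuity);
# an out-of-distribution input yields almost none (high vacuity).
print(dirichlet_uncertainty(np.array([90.0, 5.0, 5.0]))[1])  # ~0.03
print(dirichlet_uncertainty(np.array([0.2, 0.1, 0.1]))[1])   # ~0.88
```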
The evaluation of epistemic uncertainty estimates is also challenging: they are fundamentally subjective, 22 with cases of high epistemic uncertainty being largely driven by the prior, so that defining metrics to assess the validity of these estimates is conceptually difficult. Explanations of Uncertainty, and Uncertainty in Explanations. Creating explanations for the causes of model uncertainty, and estimating the uncertainty in explanations of outputs, are relatively underexplored areas. Epistemic uncertainty could arise because an input is unlike the training data in any feature or because it contains a set of known features in a previously unseen combination. Distinguishing between these cases may be helpful for the decision maker, potentially pointing toward different lines of further inquiry. These kinds of explanations have only recently begun to be explored. [52][53][54] Explanations may also have some uncertainty attached to them, especially if they summarize the model's reasoning trace. As far as we are aware, only one study has investigated uncertainty in explanations: Merrick and Taly 55 calculated the variance of Shapley values, which are a commonly used method to estimate feature importance. 56 This is also an underexplored research area, yet one that could have important implications for assessing explanation reliability. Human Factors Challenges: What Form, What Circumstances, to Whom However good the technical solutions for interpretability and uncertainty awareness become, they will be useless unless they can be made accessible and useful to humans. AI and data science researchers must engage and collaborate with human-computer interaction (HCI), psychology, and social science researchers to find the best approaches for facilitating rapid trust calibration. Automation Bias and Algorithm Aversion. Automation bias is a well-studied phenomenon that hinders trust calibration. 57,58 It occurs when humans accept computer outputs in place of their own thinking and judgment, leading them to place too much trust in algorithmic outputs. Various studies have looked at different factors affecting automation bias, including the cognitive load of the user, 58 the accountability of the user in the decision process, 59,60 and their level of expertise and training. 61 Conversely, algorithm aversion occurs when humans disregard algorithms that actually perform better than humans, thus affecting trust calibration in the opposite direction to automation bias. 62 This effect has been studied most in the context of forecasting tasks, whereby humans tend to lose trust in an algorithm's advice very rapidly in response to errors; 63 by contrast, trust in other humans who make the same errors reduces more slowly. 64 Other experiments have produced conflicting results, suggesting that only expert forecasters are susceptible to algorithm aversion while lay users are more likely to trust algorithmic advice. 65 The possible influences of automation bias or algorithm aversion on AI used for decision support are unclear. Some results regarding the tendency of explanations to cause humans to be overly trusting of conventional decision aids seem to transfer to AI-based aids, 66,67 although the effects will be dependent on the particular characteristics of the explanations provided. 68
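In the spirit of the variance analysis attributed to Merrick and Taly above, a Monte-Carlo Shapley estimator naturally yields an uncertainty estimate alongside the attribution itself. This is a hedged sketch of that general idea, not their method or code; `model` is any function from a feature vector to a scalar score.

```python
import numpy as np

def shapley_with_variance(model, x, background, feature, n_perm=200, seed=0):
    """Permutation-sampling Shapley value for one feature, returning the
    estimate and the sampling variance of that estimate."""
    rng = np.random.default_rng(seed)
    contribs = []
    for _ in range(n_perm):
        order = rng.permutation(x.size)
        pos = np.where(order == feature)[0][0]
        z = background.copy()
        z[order[:pos]] = x[order[:pos]]  # coalition preceding `feature`
        without = model(z)
        z[feature] = x[feature]          # now add the feature of interest
        contribs.append(model(z) - without)
    contribs = np.asarray(contribs)
    return contribs.mean(), contribs.var(ddof=1) / n_perm
```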
There are many different kinds of explanation that an AI system could supply, [69][70][71] so future research on the impact of different kinds of explanation on trust calibration should be guided by knowledge gained in the social sciences on how humans understand explanations. 72,73 Providing uncertainty estimates along with explanations may also improve trust calibration, but research remains to be done in this area. In particular, humans are not naturally competent at reasoning with probabilities, as described in the next section. Communication of Uncertainty. Van der Bles et al. 22 surveyed epistemic uncertainty communication about facts, numbers, and science, but found no systematic studies of how epistemic uncertainty affects decision making (noting that many studies do not distinguish epistemic from aleatoric uncertainty). However, many papers have looked at how humans understand probabilistic information, including most famously those by Kahneman and Tversky. [74][75][76] This work demonstrated that humans are not good at reasoning with probabilities, regularly committing errors such as the base-rate fallacy. 77 Research since has suggested that some such errors can be mitigated by presenting probabilities in a form closer to humans' natural mental representations of them as frequencies of events. 78 Combined with the observation that people naturally describe aleatoric and epistemic uncertainties differently, 35 this suggests that finding suitable forms to present probabilistic uncertainty information to users could allow them to use this information to improve their trust calibration in an AI system. Some studies have found that particular non-probabilistic representations of uncertainty or confidence can lead to improved trust calibration in specific settings, 79,80 but further work is needed to understand the best way to represent different kinds of uncertainty under different circumstances and how best to combine the characteristics of interpretability and uncertainty awareness. Suggestions for Researchers and Practitioners The discussion above leads us to the following suggestions for future research into these topics, as well as recommendations for data science practitioners working with decision-support AI today. Researchers. Interpretability and uncertainty awareness are currently very active topics in AI research, particularly in the deep-learning community where standard methods provide neither of these properties. [81][82][83][84][85] This research still lacks a deeper appreciation of how humans, with various levels of background knowledge and differing roles and goals, interpret different explanations and uncertainty information. Although important studies from the HCI community have probed these questions, 67,86,87 more collaborative work between AI and HCI researchers, as well as statisticians and others experienced in communicating about uncertainty, will be crucial for focusing technical research toward developing methods that are actually useful for different human stakeholders. 88 We suggest that researchers from these fields use Lasswell's communication model 23,24 outlined above as a common reference to help frame their discussions. Data Science Practitioners. Although further research is necessary to establish best practices for building interpretable, uncertainty-aware AI systems, data scientists and developers can start incorporating these ideas into the AI decision-support systems they build.
Explanation is important, but the provision of explanatory mechanisms in AI systems needs to be driven by clear requirements (in software engineering terms) specific to the various classes of user/stakeholder. 18 We suggest that developers focus their efforts on enabling rapid trust calibration by framing user requirements in terms of (1) explanations for the AI's outputs (for interpretability) and (2) communication of the AI's level of aleatoric and epistemic uncertainty, and by ensuring close collaboration with all relevant stakeholders so that these factors are communicated appropriately. Again, Lasswell's communication model 23,24 may prove helpful for framing these collaborations.

Conclusion. AI holds great promise for use in decision support. To fulfill its potential, we must create AI systems that help humans to understand their strengths and weaknesses, allowing rapid trust calibration. This is particularly important in military operations, where AI services are likely to encounter out-of-distribution data, and operators will not have time to build up adequate mental models of the AI's capabilities through training or interaction. In this Perspective, we have proposed building AI services that are both interpretable and uncertainty-aware, illustrating how these two features together could facilitate rapid trust calibration. We suggest using the framework provided by Lasswell's communication model to structure future research efforts. Although we have focused on one-way communication from AI to human, our long-term goal is to enable bidirectional communication so that the human-AI team can form a shared conceptualization of the problem space they are tackling (see Figure 2). This approach has been studied in classical ("good old-fashioned") AI, leading to the creation of ontology technologies culminating in the Semantic Web; 89 our prior work in this area focused on controlled natural language as a medium for human-machine collaboration, allowing natural and artificial agents to operate on the same linguistically expressed information. 90 The recent breakthroughs in AI, founded on subsymbolic models, are compatible with these approaches only if the AI's internal representations can be externalized in communicable terms, and those same terms can be used by the human to inform the AI's internal representations. This creates a system that is both explainable and tellable: we can provide it with new knowledge directly in human-understandable terms. This not only has the potential to benefit the human team-member's trust calibration 91 but also allows the AI to assess its teammate's knowledge and biases, and thus calibrate its trust in the human, potentially allowing it to alter its communication strategy to account for the human's flaws. To create tellable systems, we see promise in approaches that combine elements of symbolic AI with successful subsymbolic approaches to allow humans and machines to operate on shared conceptualizations of the world. 92,93 How this can best be achieved is currently a key open problem in AI. 94
Some identities for the generalized Fibonacci polynomials by the Q(x) matrix. In this note, we obtain some identities for the generalized Fibonacci polynomial by using the Q(x) matrix. These identities, including the Cassini identity and the Honsberger formula, can be applied to some polynomial sequences, such as Fibonacci polynomials, Lucas polynomials, Pell polynomials, Pell-Lucas polynomials, Fermat polynomials, Fermat-Lucas polynomials, and so on.

Introduction. A second-order polynomial sequence F_n(x) is said to be the Fibonacci polynomial if for n >= 2 and x in R, F_n(x) = x F_{n-1}(x) + F_{n-2}(x) with F_0(x) = 0 and F_1(x) = 1. The Fibonacci polynomial and other polynomials have attracted a lot of attention over the last several decades (see, for instance, [3,4,7,8,9,12]). Recently, the generalized Fibonacci polynomial was introduced and has been studied intensely by many authors [1,2,5,6]; it is a generalization of the Fibonacci polynomial. Indeed, a polynomial sequence G_n(x) in [5,6] is called the generalized Fibonacci polynomial if for n >= 2, G_n(x) = c(x) G_{n-1}(x) + d(x) G_{n-2}(x) with initial conditions G_0(x) and G_1(x), where c(x) and d(x) are fixed non-zero polynomials in Q[x]. It should be noted that there is no unique generalization of Fibonacci polynomials. Following the similar definitions in [6], in this note, F_n(x) is said to be the Fibonacci type polynomial if for n >= 2, F_0(x) = 0, F_1(x) = a and F_n(x) = c(x) F_{n-1}(x) + d(x) F_{n-2}(x), where a in R \ {0}. If for n >= 2, L_0(x) = q, L_1(x) = b(x) and L_n(x) = c(x) L_{n-1}(x) + d(x) L_{n-2}(x), then the polynomial sequence L_n(x) is called the Lucas type polynomial, where q in R \ {0} and b(x) is a fixed non-zero polynomial in Q[x]. Naturally, both F_n(x) and L_n(x) are generalized Fibonacci polynomials. We note that if we assume F_1(x) = a = 1, then F_n(x) is the Fibonacci type polynomial given in [6]. In addition, the definition of L_n(x) is the same as that of Flórez et al. [6] if |q| = 1 or 2, and c(x) = (2/q) b(x). In other words, our definitions of F_n(x) and L_n(x) are generalizations of those in [6]. Since the investigation of identities for the polynomial sequences F_n(x) and L_n(x) has received less attention than that for their numerical counterparts, Flórez et al. [6] collected and proved many identities for both F_n(x) and L_n(x), mostly by applying their Binet formulas, when certain special initial conditions were satisfied for F_n(x) and L_n(x). These identities can be applied to Fibonacci polynomials, Lucas polynomials, Pell polynomials, Pell-Lucas polynomials, Fermat polynomials, Fermat-Lucas polynomials, Chebyshev polynomials of the first kind, Chebyshev polynomials of the second kind, Jacobsthal polynomials, Jacobsthal-Lucas polynomials, and Morgan-Voyce polynomials. Indeed, all polynomial sequences in the upper part of Table 1 below are Fibonacci type polynomials, while those in the lower part are Lucas type polynomials. Table 1 is a rearrangement of [6, Table 1].

Table 1. (Columns: Polynomial; initial values; recursive formula.)

In this note, by using the so-called Q(x) matrix of Fibonacci type polynomials rather than the Binet formulas, we will obtain some new identities or recover some well-known ones, including the Cassini identity and the Honsberger formula for F_n(x) and L_n(x). In Section 2, we will present the results for the Fibonacci type polynomial F_n(x). Relying on Section 2, the identities of the Lucas type polynomial L_n(x) will be demonstrated in Section 3.
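To make the definitions concrete, the following sympy sketch generates the first few terms of several of the families in Table 1 from the single recurrence G_n(x) = c(x) G_{n-1}(x) + d(x) G_{n-2}(x). The parameter choices are the standard ones for these families (our assumption here, consistent with the values for the Pell-Lucas and Morgan-Voyce polynomials quoted later in the text):

```python
import sympy as sp

x = sp.symbols('x')

def gfib(c, d, g0, g1, K):
    """First K + 1 terms of G_n(x) = c(x) G_{n-1}(x) + d(x) G_{n-2}(x)."""
    G = [sp.sympify(g0), sp.sympify(g1)]
    for _ in range(2, K + 1):
        G.append(sp.expand(c * G[-1] + d * G[-2]))
    return G

# Parameter choices are standard conventions, assumed rather than quoted:
print(gfib(x, 1, 0, 1, 5))        # Fibonacci F_n:  [0, 1, x, x**2 + 1, ...]
print(gfib(2*x, 1, 0, 1, 5))      # Pell P_n
print(gfib(1, 2*x, 0, 1, 5))      # Jacobsthal J_n
print(gfib(2*x, 1, 2, 2*x, 5))    # Pell-Lucas D_n (a Lucas type sequence)
```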
Fibonacci type polynomials. In this section, we will provide and prove some identities for the Fibonacci type polynomial F_n(x) by applying the Fibonacci type Q(x) matrix. The original Fibonacci Q matrix was introduced by Charles H. King in his master's thesis (cf. [10]), and is given by

Q = [[1, 1], [1, 0]].

The Fibonacci Q matrix is connected to the Fibonacci sequence F_n, which is defined by F_n = F_{n-1} + F_{n-2} with F_0 = 0 and F_1 = 1. Indeed, it is noted in [7] that

Q^n = [[F_{n+1}, F_n], [F_n, F_{n-1}]].

Using this relation, some familiar identities can be obtained. For instance,

det [[F_{n+1}, F_n], [F_n, F_{n-1}]] = (det [[1, 1], [1, 0]])^n = (-1)^n

implies the Cassini identity F_{n+1} F_{n-1} - F_n^2 = (-1)^n. Also, using the equality Q^{n+m} = Q^n Q^m, one can deduce the Honsberger formula. In the following, we will apply a similar Q-matrix idea from the numerical cases [11] to the Fibonacci type polynomials. For n >= 2 and x in R, the Fibonacci type polynomial F_n(x) is defined by

F_n(x) = c(x) F_{n-1}(x) + d(x) F_{n-2}(x), with F_0(x) = 0 and F_1(x) = a. (1)

Here we define the Fibonacci type Q(x) matrix by

Q(x) = [[c(x), d(x)], [1, 0]].

We note that if F_n(x) = P_n(x) is the Pell polynomial as defined in Table 1, then Q(x) = [[2x, 1], [1, 0]], which appeared in [9]. In addition, we observe that Q(x)^2 = [[c(x)^2 + d(x), c(x) d(x)], [c(x), d(x)]]. On the other hand, (1/a) [[F_3(x), d(x) F_2(x)], [F_2(x), d(x) F_1(x)]] is the same matrix. Hence we have the following result.

Theorem 2.1. Let F_n(x) be the Fibonacci type polynomial. Then for each n in N,

a Q(x)^n = [[F_{n+1}(x), d(x) F_n(x)], [F_n(x), d(x) F_{n-1}(x)]].

Proof. Let n = 1. Then a Q(x) = [[a c(x), a d(x)], [a, 0]] = [[F_2(x), d(x) F_1(x)], [F_1(x), d(x) F_0(x)]]. Assume the equality holds for n = k. Then we have a Q(x)^{k+1} = (a Q(x)^k) Q(x) = [[c(x) F_{k+1}(x) + d(x) F_k(x), d(x) F_{k+1}(x)], [c(x) F_k(x) + d(x) F_{k-1}(x), d(x) F_k(x)]] = [[F_{k+2}(x), d(x) F_{k+1}(x)], [F_{k+1}(x), d(x) F_k(x)]]. By induction, the result follows.

The Cassini identity of the Fibonacci type polynomial F_n(x) can be obtained below by Theorem 2.1.

Corollary 2.2. Let F_n(x) be the Fibonacci type polynomial. Then for each n in N, F_{n+1}(x) F_{n-1}(x) - F_n(x)^2 = a^2 (-1)^n d(x)^{n-1}.

Proof. By Theorem 2.1, we have det(a Q(x)^n) = a^2 (det Q(x))^n, that is, d(x) (F_{n+1}(x) F_{n-1}(x) - F_n(x)^2) = a^2 (-d(x))^n, and the result follows.

Example 2.3. Let F_n(x) be the Fibonacci polynomial. By Corollary 2.2, we recover the Cassini identity in [4], F_{n+1}(x) F_{n-1}(x) - F_n(x)^2 = (-1)^n. Example 2.4. Let F_n(x) be the Pell polynomial P_n(x) as defined in Table 1. By Corollary 2.2, P_{n+1}(x) P_{n-1}(x) - P_n(x)^2 = (-1)^n. Example 2.5. Let F_n(x) = J_n(x) be the Jacobsthal polynomial as defined in Table 1. By Corollary 2.2, one can obtain the Cassini identity for the Jacobsthal polynomial below: J_{n+1}(x) J_{n-1}(x) - J_n(x)^2 = (-1)^n (2x)^{n-1}.

By Corollary 2.2, we have the result below. Corollary 2.6. Let F_n(x) be the Fibonacci type polynomial. Then for each n in N, (...). Proof. By (...).

By applying Q^{n+m}(x) = Q^n(x) Q^m(x), we give the Honsberger formula for the Fibonacci type polynomials below.

Corollary 2.7. Let F_n(x) be the Fibonacci type polynomial. Then for each n, m in N, a F_{n+m}(x) = F_n(x) F_{m+1}(x) + d(x) F_{n-1}(x) F_m(x).

Using Q^{n-m}(x) = Q^n(x) Q^{-m}(x) for n >= m, we next will prove the d'Ocagne identity for F_n(x). Here we need to assume d(x) != 0 for each x in R, so that Q(x) is invertible. Moreover, note that Q(x)^{-1} = [[0, 1], [1/d(x), -c(x)/d(x)]].

Corollary 2.11. Let F_n(x) be the Fibonacci type polynomial, and let d(x) != 0 for each x in R. Then for each n, m in N with n >= m, a (-d(x))^m F_{n-m}(x) = F_n(x) F_{m+1}(x) - F_{n+1}(x) F_m(x). Proof. By Theorem 2.1 and Q^{n-m}(x) = Q^n(x) Q^{-m}(x), we have (...). Hence, considering the (1, 2) entry of the first matrix in the equality above, the result follows.

Example 2.12. Let F_n(x) be the Fibonacci polynomial F_n(x) as defined in Table 1. By Corollary 2.11, (-1)^m F_{n-m}(x) = F_n(x) F_{m+1}(x) - F_{n+1}(x) F_m(x), which is the d'Ocagne identity in [4, Corollary 8], and identity (47) of [6, Proposition 3].

We note that Q(x)^2 = c(x) Q(x) + d(x) I. Using this equality, one can obtain the following expression of F_n(x). Theorem 2.13. Let F_n(x) be the Fibonacci type polynomial. Then for each n, p in N, (...).

Lucas type polynomials. Based on the results for Fibonacci type polynomials, some identities of Lucas type polynomials will be demonstrated in this section. Throughout this section, we assume L_n(x) and F_n(x) have the same recursive formula with L_0(x) = F_1(x), that is, for n >= 2,

F_n(x) = c(x) F_{n-1}(x) + d(x) F_{n-2}(x), F_0(x) = 0, F_1(x) = a; L_n(x) = c(x) L_{n-1}(x) + d(x) L_{n-2}(x), L_0(x) = a, L_1(x) = b(x), (2)

where a in R \ {0}. By applying Theorem 2.1, one can connect L_n(x) with F_n(x) below.

Theorem 3.1. Let F_n(x) and L_n(x) be the Fibonacci type polynomial and Lucas type polynomial respectively with L_0(x) = F_1(x) = a. Then for each n in N, L_n(x) = (b(x)/a) F_n(x) + d(x) F_{n-1}(x).

Proof. First, we will prove L_n(x) = (b(x)/a) F_n(x) + d(x) F_{n-1}(x) holds for each n in N. Let n = 1. Then L_1(x) = b(x) = (b(x)/a) F_1(x) + d(x) F_0(x). Let n = 2.
Then L_2(x) = c(x) b(x) + a d(x) = (b(x)/a) F_2(x) + d(x) F_1(x). Assume this equality holds for n = k - 1 and n = k. Let n = k + 1. Then L_{k+1}(x) = c(x) L_k(x) + d(x) L_{k-1}(x) = (b(x)/a)(c(x) F_k(x) + d(x) F_{k-1}(x)) + d(x)(c(x) F_{k-1}(x) + d(x) F_{k-2}(x)) = (b(x)/a) F_{k+1}(x) + d(x) F_k(x). On the other hand, we have (...). One has the result by these two equalities.

Next, we will demonstrate the relation between Lucas type polynomials and the Fibonacci type Q(x) matrix. Theorem 3.2. Let L_n(x) be the Lucas type polynomial. Then for each n in N, (...). Proof. By Theorem 2.1 and Theorem 3.1, we have (...).

Using Theorem 3.2, one has the Cassini identity for the Lucas type polynomial L_n(x).

Corollary 3.3. Let L_n(x) be the Lucas type polynomial. Then for each n in N, L_{n+1}(x) L_{n-1}(x) - L_n(x)^2 = (-d(x))^{n-1} (a b(x) c(x) + a^2 d(x) - b(x)^2).

Proof. By Theorem 3.2, we have (...).

Example 3.4. Let a = 2, b(x) = 2x, c(x) = 2x, d(x) = 1 in Eq. (2). Then L_n(x) = D_n(x) is the Pell-Lucas polynomial as defined in Table 1. By Corollary 3.3, the Cassini identity for the Pell-Lucas polynomial D_n(x) is given by D_{n+1}(x) D_{n-1}(x) - D_n(x)^2 = (-1)^{n-1} (4x^2 + 4).

By Corollary 3.3, we have the result below. Corollary 3.5. Let L_n(x) be the Lucas type polynomial. Then for each n in N, (...). Proof. By (...), we have the result.

Using Q^2(x) = c(x) Q(x) + d(x) I again, we have the expression of L_n(x). Theorem 3.6. Let L_n(x) be the Lucas type polynomial. Then for each n, p in N, (...). Proof. By Theorem 3.2, we have (...). By considering the (2, 2) entry of the first matrix in the above equality, we have the result. Example 3.7. Let L_n(x) be the Morgan-Voyce polynomial C_n(x), in which a = 2, b(x) = x + 2, c(x) = x + 2, d(x) = -1 in Eq. (2). By Theorem 3.6, we have (...).

Finally, we end this note by providing an identity in which both F_n(x) and L_n(x) are involved.

Proposition 3.8. Let F_n(x) and L_n(x) be the Fibonacci type polynomial and Lucas type polynomial respectively with L_0(x) = F_1(x) = a. Then for each n, m in N with n >= m, a (-d(x))^m L_{n-m}(x) = L_n(x) F_{m+1}(x) - L_{n+1}(x) F_m(x).

Proof. By Theorem 3.2 and Q^{n-m}(x) = Q^n(x) Q^{-m}(x), we have (...). Then, considering the (2, 2) entry of the first matrix in the above equality, we have a (-d(x))^m L_{n-m}(x) = L_n(x) F_{m+1}(x) - L_{n+1}(x) F_m(x).

Example 3.9. Let F_n(x) and L_n(x) be the Jacobsthal polynomial J_n(x) and the Jacobsthal-Lucas polynomial Λ_n(x) respectively, as defined in Table 1. Then Λ_0(x) = J_1(x) = 1, which satisfies the condition in Proposition 3.8. Hence we have the following equality for J_n(x) and Λ_n(x): (-2x)^m Λ_{n-m}(x) = Λ_n(x) J_{m+1}(x) - Λ_{n+1}(x) J_m(x).
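The Fibonacci-type identities above can be checked mechanically. The sympy sketch below verifies, for symbolic a, c, d and small indices, the Cassini, Honsberger, and d'Ocagne identities in the forms stated above; since those statements involve some reconstruction, the script doubles as a consistency check:

```python
import sympy as sp

a, c, d = sp.symbols('a c d')

K = 10
F = [sp.Integer(0), a]                     # F_0 = 0, F_1 = a
for n in range(2, K + 1):
    F.append(sp.expand(c * F[-1] + d * F[-2]))

# Cassini (Corollary 2.2 as stated above):
for n in range(1, K):
    assert sp.simplify(F[n+1]*F[n-1] - F[n]**2 - a**2 * (-1)**n * d**(n-1)) == 0

# Honsberger (Corollary 2.7 as stated above):
for n in range(1, 5):
    for m in range(1, 5):
        assert sp.simplify(a*F[n+m] - F[n]*F[m+1] - d*F[n-1]*F[m]) == 0

# d'Ocagne (Corollary 2.11 as stated above), for n >= m:
for n in range(2, 6):
    for m in range(1, n + 1):
        assert sp.simplify(a*(-d)**m * F[n-m] - F[n]*F[m+1] + F[n+1]*F[m]) == 0

print("Cassini, Honsberger and d'Ocagne hold for the tested indices")
```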
Returns to scale in food preparation and the Deaton-Paxson puzzle. We consider returns to scale in food preparation as a potential resolution of a puzzle raised by Deaton and Paxson (Journal of Political Economy, 106(5), 897-930, 1998). We clarify the conditions under which returns to scale in food preparation can resolve the puzzle. The key requirement is that foods are heterogeneous in time costs. We then show that detailed food expenditure and time use data are consistent with larger households shifting to more time intensive foods.

Introduction. In an influential paper, Deaton and Paxson (1998) raise an important puzzle in understanding returns to scale in household consumption. They note that, holding per capita resources constant, returns to scale (in at least some goods) imply that larger households are better off, and so should consume more of private goods such as food. However, they document in a range of data sets that larger households have lower per capita food expenditures (holding per capita resources constant). Gan and Vernon (2003) suggest that returns to scale in food consumption, particularly in food preparation, may resolve this puzzle. However, Deaton and Paxson (1998, 2003) emphasize that the presence of returns to scale in food preparation strengthens rather than resolves their puzzle. 1 The latter assertion is true of models in which "food" is a single commodity, or at least homogeneous with respect to preparation time. In contrast, if food is a composite of goods that differ in their preparation times, returns to scale in food preparation are a potential explanation of the Deaton-Paxson puzzle. The first contribution of this article is to develop a slightly richer model that retains the Barten-type demographic effects of Deaton and Paxson's analysis but adds explicit home production (of food) and two types of food (which differ in their preparation times). Thus it serves to illustrate the case in which returns to scale in food preparation might explain the Deaton-Paxson puzzle, and contrasts with the case in which food is homogeneous with respect to time costs. Deaton and Paxson recognize this possibility in their original paper, but do not consider it a likely explanation (Deaton and Paxson 1998, p. 922). The second contribution of this article is to compare some of the predictions of a model with foods that are heterogeneous in preparation times to Canadian data from detailed food expenditure diaries and detailed time use diaries. We find the data are consistent with key predictions of our model. Compared to singles, couples spend a smaller share of their food budget on prepared food, and, strikingly, couples spend more time per capita on food preparation than singles. With respect to the Deaton-Paxson puzzle, we find that a version of the puzzle remains. However, we argue that the type of test for returns to scale that Deaton and Paxson propose may be difficult to implement (at least for developed countries) because of substitutions within broad food categories. In the next section we set out the background to our analysis by reviewing the puzzle posed by Deaton and Paxson (1998), and the implications of adding home production to their model while maintaining the assumption that food is a homogeneous good. In Section 3, we contrast this analysis with a model with two types of food that differ in their preparation times. Section 4 presents our empirical evidence. Section 5 discusses alternative explanations for the patterns we see in the data, and implications of our findings.
1 A number of explanations for this puzzle have been put forward, but none has proved satisfactory. Bulk discounts could mean that quantities consumed are higher in larger households, even though expenditures are lower (Abdulai 2003). However, Gibson and Kim (2016) show that careful estimates of bulk-discount schedules based on transaction-level data from Papua New Guinea are much too small to explain the Deaton-Paxson puzzle. In a developed country (the U.K.), Griffith et al. (2009) find evidence of larger bulk discounts, but report that demographic variables (including household size) explain less than 1% of the variation across households in savings from bulk purchasing. Gibson and Kim (2007) suggest that recall error in food expenditure data that is correlated with household size could explain the puzzle. That explanation has not been supported by subsequent analysis in Gibson et al. (2015) and Brzozowski et al. (2016). See Deaton (2010) for a more recent statement of the puzzle. For an excellent recent survey of the broader literature on home production of food, see Davis (2014).

The Deaton-Paxson puzzle. Consider a household with n members (adults only) who enjoy two goods: a private good f (food) and a composite of other goods x which is subject to some scale economies. To focus on returns to scale, we follow Deaton and Paxson (1998) in assuming a unitary model of household preferences. The household's problem is:

max n u(f/n, x/n^θ) s.t. p_f (f/n) + p_x (x/n) = y/n.

The expression n^θ captures returns to scale in the (composite) good x. When n = 1, n^θ = 1^θ = 1; when n = 2, n^θ = 2^θ < 2, and so on. 2,3 The p's are market prices and y is household income (or total expenditure). Note that the budget constraint can be rewritten in terms of individual consumption, per capita income, and shadow prices:

p_f f* + p*_x x* = y/n, where f* ≡ f/n, x* ≡ x/n^θ, and p*_x ≡ p_x n^{θ-1}.

The key insight of Barten-type models (Barten 1964) is that demographics have price-like effects. Here, as household size increases, the price of the private good (food) is unaffected. 4 In contrast, the resource cost (shadow price) of the good subject to returns to scale (x*) falls: ∂p*_x/∂n < 0. This will have both income and substitution effects. For the public good, these operate in the same direction (the household is richer, and the relative price of public goods is lower). Thus as n increases, x* (individual consumption of x) should unambiguously rise. However, note that x* is not observed by the econometrician, as it depends on the returns to scale parameter θ. In contrast, individual food consumption, f*, is observed (as it depends only on total food and household size). The income and substitution effects for food have opposite signs (the household is better off, but the relative price of food has risen), but Deaton and Paxson (1998) posit that since there are few substitutes for food, the income effect should dominate. 5 Thus, holding resources (income or total outlay) per capita constant, larger households should have higher per capita consumption of food and, given common market prices, higher per capita food expenditures. Deaton and Paxson examine expenditure data from a range of countries and find the opposite result: larger households have lower per capita food expenditures holding per capita income constant. They consider and reject a number of possible explanations for this puzzle.

Food preparation with homogeneous time costs. Beside food, Deaton and Paxson (1998) also examine household expenditures on some other private goods.
They find that "the coefficients on household size are generally positive for clothing and entertainment", 6 which implies that food differs from other private goods in some crucial way. Gan and Vernon (2003) suggest that returns to scale in food consumption would help resolve the puzzle and speculate that returns to scale in the time cost of food preparation might be the source of returns to scale in food consumption. Deaton and Paxson (2003) respond that (direct) returns to scale in food consumption could certainly help to explain the puzzle but note that returns to scale in the time required for food preparation actually deepen, rather than resolve, the puzzle. To see why, consider the following simple extension to the model, adding home production (of food): max nu f n ; x n θ ; l n s:t: where i is quantity of ingredients purchased, w is wage rate, T is the total time endowment, t is time spent on food preparation, l is leisure time, and T is total time endowment. Optimization implies: t ¼ i n 1Àγ and: f ¼ i so that the problem can be rewritten: max nu i n ; x n θ ; l n s:t: : As before the budget constraint rewritten in terms of individual consumption, per capita income (now full income), and shadow prices: : Now the shadow prices (full costs) of both food and other goods (x*) fall with household size, leading to larger income effects. Moreover, the direction of substitution effects, if any, depend on the relative size of the returns to scale parameters θ and γ, and could favour food. As household size increases (holding resources per capita constant), per capita food consumption-and hence per capita quantities of ingredients purchased-should increase. As Deaton and Paxson note, budget data record market expenditures on ingredients (food). That is, they record p i i, not the full cost, p * i i. However, this is actually useful because (assuming common market prices) market expenditures are proportional to quantities, so per capita market expenditures should rise with household size (holding per capita resources constant). Thus the Deaton-Paxson puzzle remains-and is deepened because income effects should be greater here than in the simpler model. Food preparation with heterogeneous time costs Models in which foods differ in their time cost have quite different implications. The simplest model that illustrates this point assumes that there are just two kinds of food, with the most extreme heterogeneity in time costs of preparation. Prepared or "cooked" food, c, is purchased "ready-to-eat" and requires no preparation time. Alternatively, ingredients i can be purchased and combined with time to produce regular food, r. We assume the same home production technology is as in the model of the previous section. We do not assume that prepared food and home cooking are perfect substitutes. Thus in this model the household's problem is: l n s:t: : The production function implies t ¼ i n 1Àγ and r ¼ i. Thus the problem can be written: max nuðf ðc*; i*Þ; x*; l*Þ s:t: : When household size increases, the shadow prices of ingredients (regular food), i*ð¼ r*Þ, and other goods, x*, fall. We follow Deaton and Paxson in assuming that substitution effects between food and other goods are negligible (so the change in p * x affects food purchases only through income effects). The income and substitution effects on food purchased can be summarized as follows. For ingredients, the income and substitution effect operate in the same direction. 
A larger household is better off, and faces a lower relative (shadow) price of home-cooked food. For prepared foods, the income and substitution effects operate in opposite directions, so that the total effect is ambiguous. There are three key predictions. First, as household size increases there should be a substitution from ready-to-eat or prepared foods towards ingredients. This is important because it means that, across household sizes, (per capita) market expenditures on all foods are not proportional to (per capita) food quantities. Market expenditures (per capita) are:

p_c (c/n) + p_i (i/n).

If p_i < p_c (as seems reasonable), then substitution from c to i could lead market expenditures to fall, even if per capita quantities of food were constant or rising. Thus this kind of compositional effect could explain the Deaton-Paxson puzzle. Another way to think about this point is that in this model the "market price" of food (which in this model is a weighted average) is not constant across household sizes, because households of different size purchase different food baskets. Thus broad expenditure patterns across household sizes are not necessarily informative about quantity patterns. The second key prediction is that in this model, per capita quantities of the most time intensive food should rise with household size (holding per capita resources constant). This is because of both income and substitution effects. This prediction is in some sense the analogue of the prediction that Deaton and Paxson examine in their original (1998) paper. Finally, we can use the production condition t = i/n^{1-γ} to eliminate ingredients rather than preparation time. This leads to an unambiguous prediction that t* = t/n^γ should rise with household size. Of course, t* is not observed in the data because (except for singles) it depends on the returns to scale parameter γ. Only t (or t/n) is observed. However, note that if returns to scale are operating (0 < γ < 1) and per capita time (t/n) rises with household size, then effective time per capita (t* = t/n^γ) must rise with household size, because t/n^γ > t/n when n >= 2. Thus the observation that per capita time spent on food preparation rises with household size would support the model, and suggest significant substitution of home cooking for prepared foods in response to differences in shadow prices. 7 We now turn to an empirical examination of the predictions of our model.

Some empirical evidence. We next compare predictions of the model described in the previous section to two cross-sectional data sources. The first is the 1992 and 1996 Canadian Food Expenditure Survey (FOODEX), a detailed two-week diary of household food expenditures, which distinguishes several hundred types of food purchased from stores. We have divided those types of foods into "ingredients" (foods requiring substantial preparation) and prepared or "ready-to-eat" foods. 8 The second data set is the detailed time use diaries that are part of the 1998 Canadian General Social Survey (GSS), and in particular the information on time spent on food preparation in that data. In our empirical analysis we focus on adult-only households: singles and couples (without children). We further restrict the sample to individuals aged 25-55 and working full time (i.e., both members of a couple must satisfy these criteria for the household to be included). Our FOODEX sample contains 1201 singles and 956 couple households. The GSS sample includes 861 singles and 550 couple households.
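Before turning to the estimates, the substitution mechanism in the model above can be illustrated numerically. The sketch below is ours, not the paper's: it assumes log utility over a CES composite of prepared food and home-cooked food, Cobb-Douglas weights across food, other goods, and leisure, and purely illustrative prices and parameters, then solves the household problem for n = 1 and n = 2 at equal per capita full income:

```python
import numpy as np
from scipy.optimize import minimize

def solve_household(n, theta=0.6, gamma=0.5, p_c=1.25, p_i=1.0, p_x=1.0,
                    w=1.0, T=24.0, alpha=0.3, beta=0.4, delta=0.3, rho=0.5):
    """Choose totals (c, i, x) and per-person leisure l to maximize n * u.

    All functional forms and parameter values are illustrative assumptions,
    not taken from the paper."""
    def neg_u(z):
        c, i, x, l = z
        f = ((c / n)**rho + (i / n)**rho)**(1 / rho)   # per-capita CES food composite
        return -(alpha * np.log(f) + beta * np.log(x / n**theta) + delta * np.log(l))

    def budget(z):  # full-income constraint with prep time t = i / n**(1 - gamma)
        c, i, x, l = z
        return w * n * T - (p_c*c + p_i*i + p_x*x + w * (n*l + i / n**(1 - gamma)))

    res = minimize(neg_u, x0=np.array([n, n, n, T / 2]),
                   constraints=[{'type': 'eq', 'fun': budget}],
                   bounds=[(1e-6, None)] * 4, method='SLSQP')
    c, i, x, l = res.x
    return {'ingredient share of food spend': p_i*i / (p_i*i + p_c*c),
            'effective prep time t* (= i/n)': i / n,
            'observed prep time per capita': (i / n**(1 - gamma)) / n}

for n in (1, 2):
    print(n, solve_household(n))
```

Under these assumptions the ingredient share of food spending and the effective per capita preparation time t* = i/n rise with household size, while the direction of observed per capita time t/n depends on the strength of the substitution response, mirroring the discussion in the text.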
We focus on fully employed adult households for two reasons. First, as Deaton and Paxson note, children likely have lower food needs than adults, so that food demand may fall as the fraction of children in the household rises (holding per capita income and household size constant.) Thus to look for evidence of returns to scale we should vary household size while holding the fraction of children in the household constant. Our sample of adult only households does this. In our Canadian data, there is limited scope to independently vary household size and the fraction of children at larger household sizes. Second, as explained below, the limitations of our data make it convenient to focus on households with small (market) labour supply elasticities. A downside of our sample choice is that when they compare households of size 1 and 2 only, Deaton and Paxson do not find their puzzle in all the countries they consider. 9 However, we will demonstrate below that the Deaton-Paxson puzzle is evident in our sample. In both data sets there are slightly more men than women among singles. In all our calculations, we use weights to undo this discrepancy (so that there is no difference in "average" gender between the couples and singles.) 10 We examine expenditure patterns in the FOODEX data with parametric (OLS) regressions. These relate expenditure shares and ratios of expenditures to the logarithm of income per capita and a couple dummy, as well as additional controls for age, gender and education of the household head. We also include year, season and region fixed effects. 11 Similar results are obtained by running nonparametric regressions of shares on log per capita income separately for singles and couples. 12 Note that we condition on total market income, whereas in a home production model like that developed in the previous section the correct conditioning variable is full income (the sum of time endowments, each multiplied by the relevant wage). Unfortunately neither data set contains information on wages. 13 The assumption that maps our theory into our empirical work is that the individuals we study (young, childless, singles and couples, working full time) have inelastic supply of market time (so that we can treat market income as essentially exogenous.) We believe that for this sample, this is a reasonable assumption, and this is the second reason we focus on younger, adult-only households. We begin in Table 1 by examining food shares. Holding per capita income constant, the estimated shares of couples are lower and the difference from singles is statistically significant (the coefficient on the couple dummy in Column 1 of Table 1). Thus the Deaton and Paxson puzzle is apparent in our data. We now turn to an analysis of prepared foods and ingredients. Holding per capita resources constant, food expenditures of couple households are significantly shifted towards ingredients (and away from prepared foods). The difference between couples and singles is statistically significant (Column 2 of Table 1). These are the substitution patterns within food predicted by our extended model. In addition to prepared food purchased for consumption at home, we also examine expenditures on take-out fast-food. Take-out fast-food can be considered food at home with little preparation time (perhaps even less than the prepared foods purchased in stores). 
14 Column 3 of Table 1 illustrates that, holding per capita resources constant, food expenditures of couple households are significantly shifted away from fast-food and towards ingredients, which is again consistent with our first prediction. These types of substitutions suggest the possibility of substantial returns to scale in food preparation, and substantial responses to the resulting differences in shadow prices across household types. They also mean that food at home is not a homogeneous commodity, and that across household sizes, expenditures are not proportional to quantities. If larger households are substituting towards foods that are cheaper on the market but require greater time inputs, then market expenditures may fall while quantities do not fall, or even rise. Some evidence on this is provided in Tables 2 and 3, which focus particularly on meat purchases. Meat is a useful category of food for this analysis because it is frequently bought in both prepared and ingredient form. The FOODEX data collect both expenditures and quantities, so that we can examine quantities directly. We can also calculate unit values, which are expenditure divided by quantity (similar to a price). 15 Table 2 indicates that, for both singles and couples, the average unit value ($ per kg) of prepared meat is higher than for unprepared meat (an ingredient), by about 25%. These differences are both economically and statistically significant, and thus confirm, at least for meat, our assumption (above) that p_i < p_c. Table 3 reports a further set of regressions on the logarithm of per capita income and a couple dummy (with additional controls as in Table 1). The first two columns consider the share of prepared meat in total meat purchases, either by expenditure (Column 1) or quantity (Column 2). In both columns we show that couples allocate a significantly lower share of their meat spending to prepared meat. Given that prepared meat is more expensive than unprepared meat (as suggested by Table 2), this implies couples pay a lower average price for all meat. This is confirmed in Column 3, though the effect is not statistically significant at conventional levels. Again, to explain the Deaton-Paxson puzzle, returns to scale in the home production of food must generate a lower average market price for larger households, so that quantities can be higher (as predicted by theory) even though their expenditures are lower (as observed in the data). 16 An alternative way to look at this is to examine total meat expenditures and quantities directly. This is reported in Columns 4 and 5 of Table 3. Couples spend significantly less on meat per capita than singles (Column 4), even though with returns to scale they are better off. This echoes the original Deaton-Paxson puzzle. However, Column 5 reveals that the difference in quantities is much smaller (and not statistically significant). This again suggests that part of the explanation for the Deaton-Paxson puzzle could be that quantities are not proportional to expenditures across household sizes. Returning to ingredients and prepared foods more generally, the second key prediction of our model is that per capita quantities of the most time intensive good (in our case, ingredients) should rise with household size (holding per capita income constant). This is because of both the income and substitution effects of the change in shadow prices brought about by increasing household size.
Assuming that market prices of ingredients are constant across household size, this means that, at a given level of per capita resources, couple households should spend a larger share of their budget on ingredients. This is not what we observe in our data. Table 1 indicates that couples spend a lower share of their budget on ingredients. Thus a version of the Deaton and Paxson puzzle remains. Of course, this observation can be explained by the same argument that we have applied to total food expenditures. 'Ingredients' in turn are a composite good comprising many types of food with different preparation times, and substitutions between them may mean that market expenditures on ingredients are not proportional to quantities. Couples may pay a lower 'average' price because of such compositional effects. However, it is difficult to provide affirmative evidence of this hypothesis (largely because it is not clear what further dis-aggregation of food expenditures would be most appropriate). (Notes to Tables 2 and 3: based on a pooled sample of 959 singles and 884 childless couples with positive meat purchases from the 1992 and 1996 Canadian Food Expenditure Surveys; all members are aged 25-55 and working full time; in all calculations the data are weighted to equalize the proportion of each gender amongst singles. Table 3 reports meat unit values, expenditures and quantities.) Our extended model can be solved for the time input rather than ingredients, and then gives the unambiguous prediction that effective time per person on food preparation should rise with household size. A potential empirical problem is that effective time is not observed (as it depends on the returns to scale parameter). Nevertheless, Table 4 summarizes food preparation times from the 1998 GSS Time Use Survey. 17 The key point is that couples spend more time per person on meal preparation than singles (the mean is almost 6 min higher; this rises to more than 8 min per day when we add regression controls), so that the total household food preparation time of couples is more than double the food preparation time of singles. We find similar results for food preparation (which includes both meal preparation and shopping) and for food preparation plus clean-up. As noted above, if per capita time rises moving from singles to couples, then effective time must also rise. Thus the time use data are also broadly consistent with the predictions of our model. The fact that couples spend more than twice as much (total) time on food preparation as singles suggests very significant substitution responses to the variation in shadow prices across household types. (Notes to Table 4: the dependent variable in Column (2) is the sum of the dependent variables in Columns (3) and (4); the dependent variable in Column (1) is the sum of the dependent variable in Column (2) and post-meal clean-up. Full results are presented in the appendix.)

Discussion. In this article we have considered returns to scale in home production of food as a possible explanation of the Deaton-Paxson puzzle. As Deaton and Paxson have suggested, this requires that foods are heterogeneous with respect to time costs. To illustrate this idea we developed a slightly richer model that retains the Barten-type demographic effects of Deaton and Paxson's analysis but adds explicit home production (of food) and two types of food (which differ in their preparation times).
Per capita food quantities can be higher in larger households even though their per capita expenditures are lower (as observed in the data) if larger households pay a lower effective market price per unit of food. This would be the case if returns to scale in the time input to home production induced larger households to substitute towards foods that have lower market prices but are more time intensive. We then compared predictions from this model to detailed Canadian data on food expenditures and on time use. The data align with the predictions of the model in a number of respects. First, using detailed food expenditure data we show that larger households' food baskets are significantly shifted away from prepared and ready-to-eat foods and towards foods requiring preparation time ('ingredients'). Second, we provide evidence from time use data that per capita food preparation time is significantly greater for working couples than for working singles. On the other hand, in our extended model with two kinds of foods, market expenditures on the more time intensive type of food should increase with household size, holding per capita resources constant. However, in our data, market expenditures on 'ingredients' do not increase with household size (holding per capita resources constant). Deaton and Paxson (1998) noted that if food expenditure rose with household size as theory suggests, this would provide a Rothbarth-type measure of the returns to scale in the household. But as food expenditure does not rise with household size (holding per capita income constant), they could not draw an inference about the size of returns to scale in the household. We are in a similar position. Of course, the failure of ingredient expenditures to rise with household size could be explained by the same argument we applied to total food expenditures: 'ingredients' in turn are a composite good comprising many types of food with different preparation times, and substitutions between them mean that market expenditures on ingredients are not proportional to quantities. Larger families may pay a lower 'average' price because of such compositional effects. If such substitution patterns are important at finer levels of dis-aggregation, Deaton and Paxson's strategy for testing for returns to scale may be very difficult to implement. Further caveats to our analysis are as follows. First, there are other mechanisms that could generate lower effective market prices for larger households (and so explain the Deaton-Paxson puzzle). A greater scope for larger households to take advantage of bulk discounts would do so, but, as noted in the Introduction, the best empirical evidence does not support this explanation for the Deaton-Paxson puzzle. Less food waste in larger households would also mean that they face a lower effective market price for food, but we have no direct evidence for or against this possibility in our Canadian data. As Deaton and Paxson (1998) note, this is unlikely to be an explanation for the puzzle in poorer countries. Second, the kinds of substitution patterns we have considered (between prepared foods and ingredients, or more generally, between foods with different preparation times) may well be much less important in developing countries, or among those living at subsistence levels. Evidence from developing countries was an important part of Deaton and Paxson's original empirical analysis, and so the mechanism we study is unlikely to be a complete explanation for the puzzle they raise.
Finally, there are other mechanisms that could generate the patterns we see in our data. In particular, it could be that preferences differ between singles and coupled individuals, either because of selection (those who enjoy home cooking are more likely to partner), or because preferences are contingent on circumstances (it is more fun to cook when you have someone to cook for). Our model and interpretation of the data have assumed that preferences are independent of couple status and that it is only choice sets that vary with household size. This is a common assumption in the literature (see, for example, Browning et al. 2013), and it is often difficult to make progress without it. Nevertheless, it is important to acknowledge that our data cannot rule out selection on preferences or the contingency of preferences. Moreover, recent empirical work by Brugler (2016) shows that consumption preferences systematically differ between never-married and divorced singles, which strongly suggests selection into marriage. Despite these caveats, our conclusion is that, among potential explanations for the Deaton-Paxson puzzle, one should not rule out returns to scale in home production of food with heterogeneity in the time intensity of different foods.
Monitoring-induced entanglement entropy and sampling complexity. The dynamics of open quantum systems is generally described by a master equation, which describes the loss of information into the environment. By using a simple model of uncoupled emitters, we illustrate how the recovery of this information depends on the monitoring scheme applied to register the decay clicks. The dissipative dynamics, in this case, is described by pure-state stochastic trajectories, and we examine different unravelings of the same master equation. More precisely, we demonstrate how registering the sequence of clicks from spontaneously emitted photons through a linear optical interferometer induces entanglement in the trajectory states. Since this model consists of an array of single-photon emitters, we show a direct equivalence with Fock-state boson sampling and link the hardness of sampling the outcomes of the quantum jumps with the scaling of trajectory entanglement.

The coupling of a quantum system to an environment generally leads to decoherence and, under certain conditions, can be modeled by a Markovian master equation that generically results in a mixed (non-pure) density matrix [1]. An alternative but equivalent approach describes the "unraveling" of the same density matrix in terms of pure-state stochastic wave-function trajectories [2-5]. Interestingly, for a given master equation, the unraveling in terms of stochastic trajectories is not unique. For example, note that a Lindblad master equation,

∂_t ρ = γ Σ_i ( c_i ρ c_i† − (1/2) {c_i† c_i, ρ} ),   (1)

is invariant under any transformation c_i → Σ_j U_ij c_j, where U is a unitary matrix and γ is the decoherence rate. Here, the c_j are the jump operators that describe the dissipative coupling to the environment (see [6]). In particular, this implies that any observable ⟨O⟩ = Tr(ρO) preserves its expectation value, independent of the choice of U. In the unraveling picture, on the other hand, the unitary U is of direct importance for the stochastic quantum states, as can be understood by evaluating the effect of a quantum jump c_i|ψ⟩. Nevertheless, averaging expectation values over different trajectory states will converge back to the U-independent result from the master equation, E_ψ[⟨ψ|O|ψ⟩] = Tr(ρO), where E_ψ is the expectation over all individual trajectories |ψ⟩. This is in contrast with the case of nonlinear quantities, such as bipartite entanglement entropy, which may show an unraveling dependence.
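This invariance is easy to check numerically. The small self-contained sketch below (our own construction, for two emitters) builds the dissipator for the bare jump operators and for a unitarily mixed set, and confirms that they act identically on a random density matrix:

```python
import numpy as np

sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma^- for one qubit
I2 = np.eye(2, dtype=complex)
c1, c2 = np.kron(sm, I2), np.kron(I2, sm)        # decay jumps of two emitters

def dissipator(ops, rho):
    """Lindblad dissipator sum_i (c rho c^+ - 1/2 {c^+ c, rho})."""
    out = np.zeros_like(rho)
    for c in ops:
        cd = c.conj().T
        out += c @ rho @ cd - 0.5 * (cd @ c @ rho + rho @ cd @ c)
    return out

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(A)                           # a random 2 x 2 unitary
mixed = [U[0, 0] * c1 + U[0, 1] * c2,
         U[1, 0] * c1 + U[1, 1] * c2]            # c_i -> sum_j U_ij c_j

B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = B @ B.conj().T
rho /= np.trace(rho)                             # a random density matrix

# Maximum deviation should be at machine precision (~1e-16):
print(np.max(np.abs(dissipator([c1, c2], rho) - dissipator(mixed, rho))))
```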
Physically, the specific choice of unraveling of a master equation is determined by the physical observable that is monitored in a dissipative process [7-12], e.g. detecting the decay of a two-level system by observing the emitted single photon. Remarkably, such stochastic quantum trajectories were observed in several pioneering experiments in trapped-ion systems [13-16] and circuit quantum electrodynamics (circuit QED) [17]. Moreover, it has been shown that monitoring such trajectories can be used to manipulate stochastic quantum systems [18-22], with potential applications in quantum error correction [23,24]. Furthermore, from a theoretical perspective, monitoring may have a profound impact on the stochastic trajectory states when it competes with coherent processes. Specifically, it was shown that a scaling transition for the averaged trajectory entanglement entropy can occur [25-28]. In these works, dissipation was studied in the context of a measurement-induced phase transition [29,30], and the master equation associated with the dissipative dynamics changed across the phase transition. This implies that the effect of the monitoring protocol itself, and of the corresponding choice of unraveling, remains largely unexplored for the scaling of entanglement entropy in the stochastic trajectory states.

In this Letter, we consider different monitoring schemes that correspond to different unravelings of the same master equation and analyze the associated impact on the stochastic quantum dynamics. We consider an array of uncoupled single-photon emitters whose decay can be monitored by detecting the emitted photons. A linear optical network (LON) is positioned between the emitters and the detectors, as shown in Fig. 1(a), so that the new jump operators correspond to a LON-determined linear combination of the decay jump operators. As the sequence of jump clicks is recorded, a buildup and decay of entanglement entropy is generated in the state of the emitters; see Fig. 1(b). When the LON unitary is Haar random (see, e.g., [31]), the averaged entanglement entropy reaches a maximum over time that has volume-law scaling, as shown in the inset of Fig. 1(b). Moreover, since a series of single-photon emissions is recorded, we analytically verify a direct equivalence between sampling the outcomes of the decay jumps and the Fock-state boson sampling problem [32], as we also numerically demonstrate in Fig. 1(c). Finally, we illustrate in Fig. 2 that the depth of the LON determines the scaling of the maximal trajectory entanglement entropy over time, ranging from area law for constant depth to volume law when the depth is proportional to the number of emitters. Given the connection of our system to Fock-state boson sampling, we relate the scaling of maximal trajectory entanglement entropy to the hardness of classically sampling the jump-outcome probabilities: polynomial vs. superpolynomial time, respectively [33,34]. Utilizing the setup described above, we therefore establish clear connections between the invariance properties of the master equation, the scaling of the associated trajectory entanglement entropy, and the sampling complexity of jump outcomes.

The model.— Our setup consists of a chain of N two-level systems that emit photons via de-excitation and are monitored through the output arms of a LON, represented by an N × N unitary U. We start from a state with M two-level systems in the excited state |↑⟩ and N − M in the ground state |↓⟩, i.e. |ψ_0(M, N)⟩ ≡ |↑_1 … ↑_M ↓_{M+1} … ↓_N⟩,
and assume a uniform rate γ for the excited emitters to spontaneously emit a photon and relax to the ground state, as depicted in Fig. 1(a). It is assumed that τ_d ≪ 1/(Mγ), with τ_d comprising the time for a photon to traverse the LON and the detector dead time. A jump click recorded in output arm i of the LON U now corresponds to applying the jump operator

c_i = Σ_j U_ij σ⁻_j,   (2)

with σ⁻_j = (σ^x_j − i σ^y_j)/2 the decay operator of emitter j and σ^{x,y,z}_j the Pauli (x, y, z)-operators acting on site j. As was emphasized earlier and shown in more detail in Ref. [6], the Lindblad master equation (1) is invariant under unitary mixing of the jump operators (2). On the level of the master equation, the dynamics of the (uncoupled) emitters is that of a simple classical mixture, for which the single-emitter density matrix entries evolve independently for each emitter, with the excited-state population decaying as ρ_↑↑(t) = e^{−γt} ρ_↑↑(0) and the coherence as ρ_↑↓(t) = e^{−γt/2} ρ_↑↓(0).

Stochastic quantum trajectories.— A crucial element in this work is the explicit monitoring and recording of the jumps c_i (2). The stochastic dynamics resulting from registering the photon clicks in the output arms of U can be simulated with pure-state trajectories [2-4]. Given a state |ψ(t)⟩, we evaluate the probability for jump c_i to occur in a short time interval Δt as p_i(t) = γ Δt ⟨ψ(t)|c_i† c_i|ψ(t)⟩. The probability p_jump(t) = Σ_i p_i(t) determines whether a jump happens at time t or not. If a jump happens, then c_i is selected with probability ∝ p_i(t), and we evaluate |ψ(t + Δt)⟩ = c_i |ψ(t)⟩. If there is no jump, the system evolves for time Δt under the effective non-Hermitian Hamiltonian H_eff = −(iγ/2) Σ_j c_j† c_j. In both scenarios, the state is renormalized after each time step. In the limit Δt → 0, averaging ⟨O⟩ over sampled trajectory states is equivalent to computing ⟨O⟩ via the master equation (1). Note that H_eff only depends on the number of excited emitters and that |ψ(t)⟩ is an eigenstate of N_exc between jumps if we start from |ψ_0(M, N)⟩. This means that, after renormalization, the evolution between jumps does not change the stochastic state |ψ(t)⟩. For the rest of the work, we will therefore discard the explicit time dimension and express the evolution in terms of the jump sequence (m_1, …, m_M), with m_k representing the kth click in output arm 1 ≤ m_k ≤ N and 1 ≤ k ≤ M. This sequence can be obtained reliably when τ_d ≪ 1/(Mγ), since the photon clicks are then registered with an accuracy significantly higher than the duration of emission (the temporal extent of the photonic wavepacket).
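A minimal quantum-jump sampler along these lines (dense state vectors, so small N only; conventions and names are ours) makes the procedure concrete. Because the no-jump evolution leaves the renormalized state unchanged in this model, only the sequence of jumps needs to be sampled:

```python
import numpy as np
from functools import reduce

def sigma_minus(j, N):
    """sigma^-_j = |down><up| on site j of an N-qubit chain (dense; small N)."""
    sm = np.array([[0, 0], [1, 0]], dtype=complex)   # |up> = (1,0), |down> = (0,1)
    ops = [np.eye(2, dtype=complex)] * N
    ops[j] = sm
    return reduce(np.kron, ops)

def haar_unitary(N, rng):
    """Haar-random N x N unitary via QR with phase fixing."""
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    Q, R = np.linalg.qr(A)
    d = np.diag(R)
    return Q * (d / np.abs(d))

def sample_trajectory(N, U, rng):
    """All N emitters start excited; register N clicks through the LON U.

    Returns the jump sequence and the half-chain entanglement entropy
    after each click."""
    jumps = [sum(U[i, j] * sigma_minus(j, N) for j in range(N)) for i in range(N)]
    psi = np.zeros(2**N, dtype=complex)
    psi[0] = 1.0                               # |up up ... up>
    seq, entropies = [], []
    for _ in range(N):
        amps = [c @ psi for c in jumps]
        p = np.array([np.vdot(a, a).real for a in amps])
        p /= p.sum()
        i = rng.choice(N, p=p)                 # which detector clicks
        psi = amps[i] / np.linalg.norm(amps[i])
        seq.append(int(i))
        s = np.linalg.svd(psi.reshape(2**(N // 2), -1), compute_uv=False)
        lam = s[s > 1e-12]**2                  # Schmidt spectrum
        entropies.append(float(-(lam * np.log(lam)).sum()))
    return seq, entropies

rng = np.random.default_rng(7)
N = 6
seq, S = sample_trajectory(N, haar_unitary(N, rng), rng)
print(seq, np.round(S, 3))   # entropy builds up, then returns to 0 at k = N
```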
Connection to remote entanglement of two emitters.— To intuitively explain the idea and illustrate the underlying correspondence with bosonic statistics, we start with the simple case of two excited emitters and a 2 × 2 LON,

U = [[a, b], [−e^{iφ} b*, e^{iφ} a*]],   (3)

with |a|² + |b|² = 1 quantifying the mixing between the modes and φ the relative phase shift. Setting a = b = 1/√2 and φ = π, corresponding to a 50:50 beam splitter, gives the two new jumps c_s = (σ⁻_1 + σ⁻_2)/√2 and c_a = (σ⁻_1 − σ⁻_2)/√2, the symmetric and antisymmetric jump, respectively. In case a symmetric click is observed, the symmetric jump c_s is applied to the initial state |↑↑⟩, giving the symmetric Bell state

|ψ_s⟩ = (|↑↓⟩ + |↓↑⟩)/√2.

This state can only decay another time with the same symmetric jump c_s, as seen immediately by evaluating the probabilities P_i ∝ ⟨ψ_s|c_i† c_i|ψ_s⟩, with i = a, s. The same story holds for the antisymmetric jump c_a, and, therefore, upon monitoring the output arms of the beam splitter, either the jump sequence (m_s, m_s) or (m_a, m_a) is detected, each with probability 1/2, and never the sequence (m_s, m_a) or (m_a, m_s). This is equivalent to the celebrated Hong-Ou-Mandel effect for two indistinguishable photons incident on the two input arms of a 50:50 beam splitter [35]. In our case, however, the indistinguishable photonic wavepackets are detected after a time much shorter than the duration of emission. As a result, an intermediate maximally entangled (anti)symmetric Bell state between the two emitters is established to convey the interference between the emitted photons. A similar procedure was considered to generate entanglement between cold atoms in a lattice configuration [36] and experimentally implemented to entangle two distant trapped ions [37]. The effect can also be viewed as superradiant emission [38].

Correspondence with boson sampling.— We now generalize the system to N emitters, of which M are excited, and an N × N unitary U, representing the LON with monitored output arms; see Fig. 1(a). After having registered all M clicks, an observer knows that all emitters have reached the ground state, |ψ⟩ = |↓↓…⟩. The probability of detecting the M clicks in the Markovian sequence m ≡ (m_1, m_2, …, m_M) can be evaluated as (see [6])

P(m) = |Per U_T|² / M!.   (4)

Here, Per A denotes the permanent of an M × M matrix A, Per A = Σ_{σ ∈ S_M} Π_i A_{i,σ(i)}, with S_M the symmetric group, i.e. the summation is performed over the M! possible permutations of the numbers 1, …, M. U_T is the M × M matrix constructed from U by taking the first M columns and repeating the ith row n_i times, where n_i is the number of times detector i appears in the sequence m. |Per U_T|² arises from gathering all terms that give a nonzero expectation value in the derivation of Eq. (4). Expression (4) can also be obtained with multi-boson correlation sampling, i.e. by evaluating the Mth-order temporal correlation function of the photonic quantum state at the output ports of the LON [39]. We see that P(m) is the same for all m that give rise to a given n = (n_1, …, n_N). Therefore, the probability of registering clicks n with Σ_i n_i = M is obtained simply by multiplying expression (4) by the number of sequences m that give rise to this n, so that

P(n) = (M!/Π_i n_i!) P(m) = |Per U_T|² / Π_i n_i!.   (5)

The jump outcome probabilities P(n) in Eq. (5) are exactly the ones found for Fock-state (conventional) boson sampling when M indistinguishable photons are sampled after passing through an N × N interferometer [32,40], as verified in Fig. 1(c).
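Eq. (5) can be cross-checked against trajectory sampling. The sketch below reuses haar_unitary and sample_trajectory from the previous sketch, computes the permanent with Ryser's formula (fine for small M), and compares P(n) = |Per U_T|²/Π_i n_i! with empirical click frequencies:

```python
import numpy as np
from itertools import combinations
from math import factorial
from collections import Counter

def permanent(A):
    """Permanent via Ryser's formula, O(2^M M^2); adequate for small M."""
    M = A.shape[0]
    total = 0j
    for k in range(1, M + 1):
        for cols in combinations(range(M), k):
            total += (-1)**k * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1)**M * total

def click_probability(U, n):
    """P(n) = |Per U_T|^2 / prod(n_i!) for click pattern n, sum(n) = M excited."""
    M = sum(n)
    rows = [i for i, ni in enumerate(n) for _ in range(ni)]
    UT = U[np.ix_(rows, list(range(M)))]   # repeat row i n_i times, first M columns
    return abs(permanent(UT))**2 / np.prod([factorial(ni) for ni in n])

rng = np.random.default_rng(11)
N = 4                                       # here N = M: all emitters excited
U = haar_unitary(N, rng)
counts, runs = Counter(), 20000
for _ in range(runs):
    seq, _ = sample_trajectory(N, U, rng)
    counts[tuple(sorted(seq))] += 1         # sorted sequence <-> pattern n
for pattern, cnt in sorted(counts.items())[:6]:
    n = [pattern.count(i) for i in range(N)]
    print(pattern, round(cnt / runs, 4), round(click_probability(U, n), 4))
```

The empirical frequencies converge to the permanent-based probabilities, which is the boson sampling correspondence stated in the text.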
When U is drawn from the Haar measure and N = O(M²), it has been proven that sampling from the output distribution is classically hard (takes superpolynomial time) unless the Polynomial Hierarchy collapses to the third level. This follows from the #P-hardness of classically computing the output probabilities in Eq. (5). Experimentally, Fock-state boson sampling has been implemented for small numbers of photons, well within the classically simulable regime [41-43]. Gaussian boson sampling [44], using squeezed states instead of single photons as input, can be scaled up further, leading to one of the first claims of experimental quantum advantage [45]. Interestingly, by engineering long-range interactions, Fock-state boson sampling was also proven to be equivalent to sampling spin measurement outcomes after a short Hamiltonian time evolution [46,47].

Trajectory entanglement entropy.— Our primary interest lies in evaluating nonlinear properties of the stochastic trajectory states of the emitters. For this, we focus on the averaged trajectory entanglement entropy of a subsystem of size l < N, after having registered 0 ≤ k ≤ M clicks in the output arms of a network U, evaluated as

S^(U)_M(l, k) = (1/N_s) Σ_i S(l)[ |ψ^(U)_{M,i}(k)⟩ ],   (6)

with N_s the number of samples taken and |ψ^(U)_M(k)⟩ ∝ c_{m_k} … c_{m_1} |ψ_0(M, N)⟩, i.e. the state after some sequence m of k detected jumps c_{m_j} (2). Furthermore, S(l)[|ψ⟩] = −Tr ρ_A log ρ_A is the von Neumann entanglement entropy of the state |ψ⟩, with ρ_A = Tr_B |ψ⟩⟨ψ| the reduced density matrix of subsystem A, containing l adjacent sites starting from the boundary, and B containing the remaining N − l sites.

By sampling stochastic trajectories using matrix-product states (MPS) [48], we show in Fig. 1(b) that when U is drawn from the Haar measure, a volume-law scaling for entanglement entropy is observed, as seen in the inset. In this case, each new jump c_i (2) generally has a nonzero overlap with any σ⁻_j and will induce long-range entanglement between all emitters in the chain. Yet, the initial growth of entanglement is upper bounded by S^(U)_M(N/2, k = 1) ≤ log 2, independent of N, which is obtained from the concavity of entanglement entropy [49] (see [6]). From a photonic perspective, an equivalent state |ψ^(U)_M(k)⟩ can be obtained by subtracting k single photons from the M-photon wavefunction at the output ports (m_1, …, m_k) of U and sending the remaining M − k photons back through U (see [6] for details).

LON and the sampling procedure.— In what follows, we restrict to the case N = M, i.e. all M emitters are initialized in the excited state, |ψ_N(k = 0)⟩ = |↑↑…⟩. The N × N unitary U(N, D) that encodes the quantum jumps is implemented through a LON that consists of D staggered layers of Haar-random 2 × 2 unitaries, each of which can be written as in Eq. (3) [see Fig. 2(a)]. For a sufficiently deep LON, one can show that unitaries sampled from such a LON converge to N × N unitaries drawn from the Haar measure [50]. Each instance in the sample set is obtained by (i) sampling a U(N, D) and (ii) sampling a quantum trajectory, thus yielding a jump sequence m_k and the corresponding stochastic series of (pure) states |ψ_N(k)⟩, with 0 ≤ k ≤ N the number of registered jump clicks. After repeating this procedure N_s times, we obtain a set of sampled trajectories, and the averaged entanglement entropies S^(D)_N(l, k) for subsystem size l can be evaluated, yielding the entanglement of the trajectories averaged over unitaries U(N, D).

Previously, a number of works have investigated the entanglement entropy of the M-photon wavefunction for Fock-state boson sampling in an N-mode LON. In the Haar regime, the photonic wavefunction shows volume-law scaling of entanglement entropy when exiting the LON [51,52]. In this chain of two-level emitters, on the other hand, the spontaneously emitted photons themselves are short-lived (stemming from the Born-Markov approximation of the quantum trajectory approach), and we study the buildup and decay of entanglement entropy between the emitters induced by registering and applying the jumps c_j (2). Additionally, this also marks a significant difference with the measurement-induced phase transition studied in circuit models [29,30], since no projective measurements are performed on the emitters.

In Figs. 2(b)-(c), we first study the scaling of entanglement entropy generated by monitoring the outputs of a LON with fixed depth D. The largest achieved averaged entanglement entropy, S^(D)_{N,max} ≡ max_{k,l} S^(D)_N(k, l), shows area-law behavior: in Fig. 2(b), it is seen that S^(D)_{N,max} does not scale with system size for fixed D.
This is further confirmed in Fig. 2(c), which shows the subsystem scaling for the case N = 100: S^(D)_N(k, l) converges to a finite value in the bulk. Note that, for any k, the maximal S^(D)_N(k, l) is always reached for l = N/2. Intuitively, after detecting a click from a jump c_j when D = const, an observer can pinpoint a subset of adjacent emitters of size 2D where the decay could have originated, independent of N. Therefore, registering a click can only generate local entanglement in the chain. LONs of fixed depth D ≪ N are represented by a unitary U that takes the form of a banded matrix of width 2D. Interestingly, there exist polynomial-time algorithms to efficiently evaluate Per U_T for banded matrices, which encode the output probabilities of outcomes with few or no collisions via Eq. (5) [33,34,55]. The efficient evaluation of the output probabilities is in line with our result: the area law of entanglement entropy ensures that the output configurations P(n) can be efficiently sampled using MPS of fixed maximal bond dimension to represent the quantum state of the emitters after k clicks [48].

As shown in Figs. 2(d)-(e), the situation drastically changes when the network depth D scales linearly with system size: D = pN. In Fig. 2(d), we show the maximal averaged entanglement entropy S^(pN)_{N,max}, which now has a clear linear dependence on system size N, thus establishing a volume law. The simulation quickly gets out of reach for efficient simulation with MPS of a given maximal bond dimension χ_max (set to χ_max = 700). Also, the entanglement profiles as a function of subsystem size l, shown in Fig. 2(e), acquire a strong dependence on l when p is increased, which we identify as volume-law scaling of the subsystem entanglement entropy. As p increases, the entanglement entropy approaches the value obtained by sampling U from the N × N Haar measure [black dashed line in Figs. 2(d)-(e)].

In order to secure classical sampling hardness, the original proof for Fock-state boson sampling requires that N = O(M²) to ensure collision-free samples [32]. While we are not in that regime, to our knowledge no efficient classical algorithm is known to sample the jump outcomes if N = M and D ∝ N. In our unraveling picture, we face a correlation in complexity: the entanglement entropy between the emitters in the trajectory states has volume-law scaling and quickly surpasses the limit of efficient simulation with MPS.

On the contrary, when the trajectory-averaged entanglement entropy scales as an area law, the sample complexity (the number of trajectory states required in order to accurately sample the density matrix) may be expected to increase exponentially. This is captured by the scaling of the (classical) Shannon entropy of the distribution over quantum trajectory states. Hence, there is a trade-off between the sample complexity of trajectories and the complexity of simulating each trajectory. It might be possible to practically exploit this trade-off in a classical algorithm; see Ref. [6] for a more detailed explanation.
Conclusions and outlook.- It was illustrated that changing the unraveling of a straightforward, uncoupled master equation of emitters may cause drastic changes in both the entanglement of the stochastic trajectory states and the sampling hardness of the jump outcomes. Moreover, changing the unraveling is immediately related to an observer monitoring the decay clicks in the output arms of a LON, resulting in the unitary mixing of the decay jumps. Sampling the jump outcomes in the established monitoring scheme is equivalent to the problem of Fock-state boson sampling. Finally, a connection was established between the scaling of entanglement entropy between emitters and the classical hardness of sampling the jump outcomes.

While we have reported different scaling behavior for the trajectory entanglement entropy, we have not yet seen a conclusive signature of a scaling transition for the trajectory entanglement entropy across a critical point, such as presented in, e.g., [26,28]. For example, one can investigate fermionic or Gaussian models to access larger systems for the scaling analysis.

Note added.- While finalizing these results, we became aware of a recent work where an entanglement scaling transition was reported in a homodyne monitoring scheme [56].

Supplemental material.- The entropy of the averaged state is bounded by

S(ρ) ≤ Σ_j λ_j S(|ψ_j⟩) + H(λ), (S11)

where H(λ) = −Σ_j λ_j ln λ_j is the (classical) Shannon entropy of the distribution {λ_j} that characterizes the mixture [2,60]. In the main text, we numerically studied S̄. The entropy of the trajectory-averaged state is easy to obtain from the fact that, at the level of the master equation, each emitter remains in the excited state with a probability p(t) = e^{−γt}. The entropy of l emitters is then

S(ρ_l; t) = −l [ p(t) log p(t) + (1 − p(t)) log(1 − p(t)) ]. (S12)

This entropy satisfies a volume law, S(ρ_l) ∝ l. Using Eq. (S11), we find for the classical Shannon entropy of the mixture

H(λ) ≥ S(ρ_l) − S̄(l). (S13)

This implies that whenever we find that S̄ follows an area law, or scales slower than S(ρ_l) with l, the classical entropy H(λ) characterizing the unraveling must compensate for this and scale with the volume of the system, H(λ) ∼ O(l).

For numerical purposes, the number of distinct area-law trajectories in an unraveling needed to sample a master equation leading to a volume-law density matrix should scale exponentially to satisfy the bound (S13). On the other hand, using volume-law trajectories, one might reach sufficient statistical accuracy after obtaining a set of samples of polynomial (or even constant) size. We plan to investigate this issue further in a follow-up work, with the goal of discovering optimal unravelings that balance quantum entanglement (the hardness of classically computing a given trajectory) against the number of samples needed (the hardness of classical sampling) to acquire sufficient statistical accuracy.

Figure 1. (a) A schematic illustration of the setup, consisting of a chain of N two-level emitters, M of which are initially in the |↑⟩ state, with the remaining N − M in the |↓⟩ state. The quantum jumps from the spontaneous emissions in the chain are monitored through the output ports of a linear optical network represented by an N × N unitary U, giving new jump operators c_i. (b) The case N = M = 22 and U sampled from the N × N Haar measure: half-chain entropy for some stochastic trajectories (red) and the averaged value (blue). The inset shows the volume-law scaling of the maximal averaged entanglement entropy S_max. (c) After registering M clicks, the jump outcome probabilities are given by Fock-state boson sampling from Eq. (5).
A comparison for N = 7, M = 4, giving 210 possible outcomes, and a Haar-random U, sampled with 10 000 quantum trajectories from the associated unraveling.

Here |ψ^(U)_M(k)⟩_m ∝ c_{m_k} · · · c_{m_1} |ψ_0(M, N)⟩, i.e. the state after some sequence m of k detected jumps c_{m_j} (2). Furthermore, S(l)[|ψ⟩] = −Tr ρ_A log ρ_A is the von Neumann entanglement entropy of the state |ψ⟩, with ρ_A = Tr_B |ψ⟩⟨ψ| the reduced density matrix of subsystem A, containing l adjacent sites starting from the boundary, and B containing the remaining N − l sites.

From a photonic perspective, an equivalent state |ψ^(U)_M(k)⟩ can be obtained by subtracting k single photons from the M-photon wavefunction at the output ports (m_1, . . . , m_k) of U and sending the remaining M − k photons back through U (see Ref. [6] for details).

LON and the sampling procedure.- In what follows, we restrict to the case N = M, i.e. all M emitters are initialized in the excited state |ψ_N(k = 0)⟩ = |↑↑ · · · ↑⟩. The N × N unitary U(N, D) that encodes the quantum jumps is implemented through a LON that consists of D staggered layers of Haar-random 2 × 2 unitaries, each of which can be written as in Eq. (3) [see Fig. 2(a)].

Figure 2. The entanglement generated in the chain of emitters is studied by monitoring the decays through a LON. (a) Schematic of the setup, where N emitters are excited and monitored through a D-layered LON consisting of staggered layers of 2 × 2 Haar-random unitaries from Eq. (3). (b)-(c) A network of constant depth D shows area-law scaling: (b) increasing N, S̄^(D)_max remains stable, and (c) the entanglement profiles S̄^(D)_N(l, k_max(D)) for N = 100, selected after k_max when the maximal entropy is reached, saturate in the bulk (we find that k_max is independent of l). (d)-(e) Taking D to scale with system size as D = pN gives a volume law for the entropy, converging to the result of an N × N Haar-random unitary for large p: (d) the scaling of S̄^(D)_max with system size shows linear growth, and (e) the profiles S̄^(D)_N(l, k_max(p)) for N = 22 show a strong dependence on subsystem size l.

Figure S1. A comparison between the entanglement generated when U is sampled from the N × N Haar measure; the analytical upper bound (S10) is almost saturated. (a) Evolution of the half-chain (maximal) entanglement entropy after the detection of k clicks, for various N. (b) Entanglement subsystem profile after registering one jump click; same color codes as panel (a).
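For completeness, the outcome probabilities entering Eq. (5) can be evaluated directly for small instances such as the N = 7, M = 4 comparison above: for a collision-free outcome they are proportional to |Per(U_S)|², with U_S the submatrix of U selecting the clicked output rows and the initially excited columns. The sketch below uses Ryser's inclusion-exclusion formula, which is exponential in the number of clicks but fine at this size; the helper names are illustrative, and for banded U the polynomial-time algorithms of Refs. [33,34,55] would apply instead.

```python
import itertools
import numpy as np

def permanent_ryser(a):
    """Permanent via Ryser: Per(A) = (-1)^n sum_S (-1)^|S| prod_i sum_{j in S} a_ij."""
    n = a.shape[0]
    total = 0.0 + 0.0j
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = a[:, list(cols)].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

rng = np.random.default_rng(0)
z = rng.normal(size=(7, 7)) + 1j * rng.normal(size=(7, 7))
q, _ = np.linalg.qr(z)                      # a Haar-like 7 x 7 toy unitary
rows, cols = [0, 2, 5, 6], [0, 1, 2, 3]     # clicked arms / excited emitters
u_s = q[np.ix_(rows, cols)]
print("outcome weight |Per(U_S)|^2 =", abs(permanent_ryser(u_s)) ** 2)
```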
5,784.8
2022-01-29T00:00:00.000
[ "Physics" ]
Prospects of gravitational waves in the minimal left-right symmetric model The left-right symmetric model (LRSM) is a well-motivated framework to restore parity and implement seesaw mechanisms for the tiny neutrino masses at or above the TeV-scale, and it has a very rich phenomenology at both the high-energy and high-precision frontiers. In this paper we examine the phase transition and the resultant gravitational waves (GWs) in the minimal version of the LRSM. Taking into account all the theoretical and experimental constraints on the LRSM, we identify the parameter regions with a strong first-order phase transition and detectable GWs in future experiments. It turns out that in a sizeable region of the parameter space, GWs can be generated in the phase transition with a strength of $10^{-17}$ to $10^{-12}$ at frequencies of 0.1 to 10 Hz, which can be detected by BBO and DECIGO. Furthermore, GWs in the LRSM favor a relatively light $SU(2)_R$-breaking scalar $H_3^0$, which is largely complementary to the direct searches of a long-lived neutral scalar at the high-energy colliders. It is found that the other heavy scalars and the right-handed neutrinos in the LRSM also play an important part in GW signal production in the phase transition.

Introduction The discovery of a Higgs boson at the Large Hadron Collider (LHC) heralds the completion of the standard model (SM) [1,2] and a great hope for the discovery of new physics. Obviously, the completion of the SM naturally leads to the quest for the microscopic structure of its next chapter, which will be further searched for by the LHC [3]. In the long list of questions which might be the key to the next chapter, a few are interesting and crucial. For example, what is the dynamics of electroweak (EW) symmetry breaking, what is the origin of the neutrino masses [4], how are the parity and CP symmetries broken, and what is the nature of dark matter and dark energy [5]? Answering these questions has been motivating various new physics models beyond the SM (BSM) at the TeV scale.

In the history of the early universe, from the Planck time to today, phase transitions might have occurred when the symmetries at different energy scales were broken. For example, the symmetry breaking of a grand unified theory (GUT) and supersymmetry (SUSY) breaking can induce the corresponding phase transitions at the GUT scale and the SUSY breaking scale. For new physics beyond the SM, new dynamics and a larger symmetry are usually introduced at the TeV region or a higher-energy scale, and such models can likewise feature phase transitions at those scales. In this paper we study the phase transitions in the LRSM and the resultant features of the corresponding GWs. Compared with the earlier study [102], the new aspects of this paper are the following: (i) we have implemented the correct EW vacuum conditions [103] and set α_2 = 0 (α_2 is a quartic coupling in the scalar potential Eq. (2.2)), (ii) we have taken into account more recent LHC experimental bounds, which are collected in Table 1 and Fig. 1, (iii) we have found a more general parameter space where the strong FOPT can occur and detectable GWs can be produced, and (iv) we have also explored the complementarity of GW probes of the LRSM and the direct searches of the heavy (or light long-lived) particles in the LRSM at the high-energy colliders, and examined how the self-couplings of the SM Higgs can be affected in the LRSM.
With all the theoretical and experimental limits taken into consideration, it is found that the strong FOPT at the right-handed scale v_R in the LRSM favors relatively small quartic and neutrino Yukawa couplings, which correspond to relatively light BSM scalars and right-handed neutrinos (RHNs), as seen in Figs. 2, 3 and 9. The scatter plot in Fig. 5 reveals that the phase transition in the LRSM can generate GW signals with a strength of 10^{−17} to 10^{−12}, with a frequency ranging from 0.1 to 10 Hz, which can be probed by the experiments BBO and DECIGO, or even by ALIA and MAGIS. The GW spectra for five benchmark points (BPs) are demonstrated in Fig. 8, which reveals that the GW signal strength and frequency are very sensitive to the value of ρ_1. Although some other quartic and neutrino Yukawa couplings are very important for the GW production, the quartic coupling ρ_1 plays the most crucial role, and it also determines the mass of the SU(2)_R-breaking scalar H^0_3. In the parameter space where it does not mix with other scalars, the scalar H^0_3 couples only to the heavy scalars, gauge bosons and RHNs in the LRSM [104], which makes it effectively a singlet-like particle, and thus the experimental limits on it are very weak [105,106]. As presented in Fig. 10, the GW probe of H^0_3 is largely complementary to the direct searches of H^0_3 at the high-energy colliders [104] as well as the searches of H^0_3 as a long-lived particle (LLP) at the high-energy frontier [105,106]. In addition, in a sizeable region of the parameter space, the strong FOPT and GWs are sensitive to a large quartic coupling λ_hhhh of the SM-like Higgs, which is potentially accessible at a future high-energy muon collider [107].

The rest of the paper is organized as follows. In Section 2 we briefly review the minimal LRSM and summarize the main existing experimental and theoretical constraints on the BSM particles in this model. Phase transitions are explored in Section 3, and the GW production is presented in Section 4. Section 5 focuses on the complementarity of the GW probes of the LRSM and the collider signals of the LRSM. After some discussions, we conclude in Section 6. For the sake of completeness, the masses and thermal self-energies are collected in Appendix A, and the conditions for vacuum stability and the correct vacuum are given in Appendix B.

Left-right symmetric model The basic idea of LRSMs is to extend the EW sector SU(2)_L × U(1)_Y of the SM gauge group to be left-right symmetric, i.e. SU(2)_L × SU(2)_R × U(1)_{B−L}. Various LRSMs have been proposed to understand the parity and CP breaking of the SM, the origin of the masses of matter, or even DM candidates and the matter-antimatter asymmetry of the universe. The main differences between these LRSMs lie in the gauge structure, the scalar fields, the matter contents, and/or the seesaw mechanisms. The most popular, or conventional, LRSM is the version with a Higgs bidoublet Φ, a left-handed triplet ∆_L and a right-handed triplet ∆_R [99-101]. When the right-handed triplet ∆_R acquires a vacuum expectation value (VEV) v_R, the gauge symmetry SU(2)_L × SU(2)_R × U(1)_{B−L} in the LRSM is broken to the SM gauge group SU(2)_L × U(1)_Y. The two triplets ∆_L and ∆_R are introduced to give Majorana masses to the active neutrinos and the RHNs, respectively, which enables the type-I [108-112] and type-II [113-117] seesaw mechanisms for the tiny neutrino masses.
The SU(2)_R × U(1)_{B−L} symmetry can also be broken only by a right-handed doublet H_R [118,119]. In this case, heavy vector-like fermions have to be introduced to generate the SM quark and lepton masses via a seesaw mechanism (see also [120]). There are also LRSM scenarios with inverse seesaw [121,122], linear seesaw [123,124], or extended seesaw [125-128] in the literature. Cold DM is not included in the conventional LRSM (a light RHN can only be a warm DM candidate [129]), but it is easy to add a fermion or boson multiplet, where the lightest neutral component is naturally stabilized by the residual Z_2 symmetry from U(1)_{B−L} breaking [130,131]. Alternatively, based on an extended gauge group (with Y the hypercharge in the SM and Y_R its "right-handed" counterpart), heavy RHNs can be the cold DM candidate [132-134].

In this work, we focus on the minimal LRSM with one bidoublet Φ and two triplets ∆_L and ∆_R in the scalar sector. The most general scalar potential in the LRSM can be written as in Eq. (2.2) of Ref. [135], where Φ̃ = σ_2 Φ* σ_2 (with σ_2 the second Pauli matrix). Required by left-right symmetry, all the quartic couplings in the potential are real parameters; the CP-violating phase δ associated with α_2 is shown explicitly. At zero temperature, the neutral components of the scalar fields can develop nonzero VEVs, where θ_κ and θ_L are CP-violating phases. The two bidoublet VEVs κ_1 and κ_2 are related to the EW scale. In light of the hierarchy of the top and bottom quark masses m_b ≪ m_t in the SM, it is a reasonable assumption that κ_2 ≪ κ_1 [135].

There are three key energy scales in the LRSM, i.e. the right-handed scale v_R, the EW scale v_EW, and the scale v_L which is relevant for the tiny active neutrino masses via the type-II seesaw. Furthermore, from the first-order derivative of the scalar potential (2.2), v_L is related to the EW and right-handed VEVs [113,135,136], with ξ = κ_2/κ_1. Due to the tiny masses of the active neutrinos, it is a good approximation to set v_L = 0; therefore we will set β_i = 0 so as to simplify the discussion below.

With v_L = 0, there are only two energy scales in the LRSM, i.e. the EW scale v_EW and the right-handed scale v_R. In light of the hierarchical structure v_EW ≪ v_R, a two-step phase transition is supposed to occur in the LRSM. In the early universe, the temperature is so high, T ≫ v_R, that the symmetry SU(2)_L × SU(2)_R × U(1)_{B−L} is restored. As the universe keeps expanding, the temperature decreases. When the temperature is lower than a critical temperature but much higher than the EW scale, i.e. v_EW ≪ T ∼ v_R, ∆^0_R develops a non-vanishing VEV and the gauge symmetry is broken down to the SM group SU(2)_L × U(1)_Y. When the temperature becomes lower than the EW scale, T ∼ v_EW, Φ^0_{1,2} obtain their VEVs and the symmetry is further broken into the electromagnetic (EM) group U(1)_EM.

After symmetry breaking at the v_R scale, we can rewrite the bidoublet Φ in terms of two SU(2)_L doublets, i.e. Φ = (iσ_2 H*_1 | H_2). Then the bidoublet-relevant terms in the potential (2.2) can be recast in terms of H_{1,2} as in Eq. (2.5), with the corresponding mass terms given in Eqs. (2.6) and (2.7), respectively. Although the potential in Eq. (2.5) seems to be very similar to that of a general two-Higgs-doublet model (2HDM) [137], there are still some obvious differences: in the presence of the scale v_R, all the states predominantly from the heavy doublet H_2 are at the v_R scale, and their masses are degenerate at leading order. This is clearly distinct from the 2HDMs, where all the scalars are at the EW scale and the BSM scalar masses depend on different quartic couplings [137].
In the LRSM, the BSM particles include the heavy W_R and Z_R bosons; three RHNs N_i (with i = 1, 2, 3); the neutral CP-even scalar H^0_1, CP-odd scalar A^0_1 and singly-charged scalar H^±_1 predominantly from the bidoublet Φ; the neutral CP-even scalar H^0_2, CP-odd scalar A^0_2, singly-charged scalar H^±_2 and doubly-charged scalar H^±±_1 mostly from the left-handed triplet ∆_L; and the neutral CP-even scalar H^0_3 and doubly-charged scalar H^±±_2 mostly from the right-handed triplet ∆_R. Thorough studies of the scalar sector of the LRSM at future high-energy colliders can be found e.g. in Refs. [103-106, 135, 138-157]. In this paper, we assume that the gauge coupling g_R for SU(2)_R can be different from the gauge coupling g_L for SU(2)_L, which might originate from renormalization-group running effects such as in the D-parity-breaking LRSM versions [158].

Theoretical Constraints For completeness, we collect all the theoretical constraints on the gauge and scalar sectors of the LRSM in the literature, which will be taken into consideration in the calculations of the phase transition and GW production below.

• Perturbativity limits: In some versions of the LRSM, the right-handed gauge coupling g_R can be different from g_L [158]. As the gauge couplings obey the tree-level relation (with g_BL the gauge coupling for U(1)_{B−L} and g_Y the hypercharge coupling)

1/g_Y² = 1/g_R² + 1/g_BL², (2.9)

the gauge couplings g_R and g_BL can be neither too large nor too small if we want them to remain perturbative. Renormalization-group running of these gauge couplings up to a higher energy scale puts more stringent limits on them. Perturbativity up to the GUT scale requires the ratio r_g ≡ g_R/g_L to satisfy [159]

0.65 < r_g < 1.60. (2.10)

Furthermore, as the masses √(α_3/2) v_R of H^0_1, A^0_1 and H^±_1 (cf. Table 5 in Appendix A) are severely constrained by the neutral meson mixings (see Section 2.2 and Table 1), perturbativity also implies a lower bound on the v_R scale [159]:

v_R ≳ 10 TeV. (2.11)

For v_R below this value, α_3 is so large that all the quartic and gauge couplings hit the Landau pole very quickly, before reaching the GUT or Planck scale [151,154,160,161].

• Unitarity conditions: The parameters in the potential (2.2) should satisfy the unitarity conditions [154] when we consider the scattering amplitudes of the scalar fields at high energies s ≫ µ_i² (for simplicity we neglect here the effects of all the scalar masses). In other words, the partial-wave amplitudes should not violate the unitarity bound, so as to guarantee that probability is conserved. The tree-level unitarity conditions are worked out in Ref. [154].

• Vacuum stability conditions: The vacuum stability conditions are given in Refs. [154,161,162] (see also [163]).

• Correct vacuum criteria: After the spontaneous symmetry breaking, all the scalar fields have to form a specific structure in field space such that we reside in the correct vacuum, i.e. the vacuum with the lowest value of the potential [103,156]. For completeness, the correct vacuum criteria are collected in Appendix B; they are obtained under the assumption α_2 = 0, and therefore we set α_2 = 0 throughout this paper. In the limit κ_2 ≪ κ_1 ≪ v_R, the quadratic coefficient of the H_2 term in Eq. (2.7) is proportional to α_3 v_R²/2, and thus the heavy doublet scalars H^0_1, A^0_1 and H^±_1 obtain a mass of √(α_3/2) v_R at leading order. To get the correct EW vacuum, a necessary condition is imposed on the quadratic term of the light doublet, which approximately yields an upper bound on ξ.
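As a quick numerical illustration of the perturbativity argument (a sketch assuming the tree-level matching of Eq. (2.9) as reconstructed above, with approximate SM values for g_L and the hypercharge coupling g_Y), one can solve Eq. (2.9) for g_BL and watch it blow up as g_R approaches g_Y from above, which is why r_g = g_R/g_L cannot be arbitrarily small:

```python
import numpy as np

g_L, g_Y = 0.65, 0.36        # approximate SM gauge couplings at the EW scale

def g_BL(r_g):
    """Solve 1/g_Y^2 = 1/g_R^2 + 1/g_BL^2; diverges as g_R -> g_Y."""
    g_R = r_g * g_L
    return np.sqrt(g_Y**2 * g_R**2 / (g_R**2 - g_Y**2))

for r in (0.58, 0.65, 1.0, 1.6):
    print(f"r_g = {r}: g_R = {r * g_L:.3f}, g_BL = {g_BL(r):.3f}")
```

Running the couplings up to higher scales only tightens this, leading to the window quoted in Eq. (2.10).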
Experimental constraints All the current LHC limits on the BSM particles in the LRSM are collected in Table 1 and also depicted in Fig. 1. Here are more details:

• At the LHC, the W_R boson in the LRSM can be produced via the right-handed charged quark currents. After its production, it decays predominantly into two quark jets (including the tb channel) or into an RHN plus a charged lepton, i.e. W_R → jj, tb, N_i^{(*)} ℓ_α (with α = e, µ, τ). If the RHNs are lighter than the W_R boson, as a result of the Majorana nature of the RHNs, the same-sign dilepton plus jets channel W_R → ℓN → ℓ_α ℓ_β jj constitutes a smoking-gun signal of the W_R boson [164]. Assuming g_R = g_L, the current most stringent LHC data require the W_R mass to satisfy m_{W_R} > (3.8 − 5) TeV for an RHN mass 100 GeV < m_N < 1.8 TeV [165,166]. The dijet [167,168] and tb [169,170] limits are relatively weaker, respectively 4 TeV and 3.4 TeV. The strongest W_R limit of (3.8 − 5) TeV is presented in Fig. 1.

• The most stringent limits on the Z_R boson are from the dilepton data pp → Z_R → ℓ⁺ℓ⁻. The current dilepton limit on a sequential Z′ boson is 5.1 TeV [171]. Following e.g. Ref. [159], one can rescale the production cross section times branching fraction σ(pp → Z′ → ℓ⁺ℓ⁻) of the sequential Z′ model, which leads to an LHC dilepton limit of 4.82 TeV on the Z_R boson in the LRSM. This is shown in Fig. 1 as the Z_R limit. There are also dijet searches for the Z′ boson, but the corresponding limits are relatively weaker [167,168].

• At leading order, the scalars H^0_2, A^0_2, H^±_2 and H^±±_1 from the left-handed triplet ∆_L have the same mass [142] (see Table 5). The doubly-charged scalar H^±±_1 can decay into either same-sign dileptons or same-sign W bosons, i.e. H^±±_1 → ℓ^±_α ℓ^±_β, W^± W^±, which constitute the most promising channels to probe ∆_L at the LHC; the branching fractions BR(H^±±_1 → ℓ^±_α ℓ^±_β) and BR(H^±±_1 → W^± W^±) depend on the Yukawa coupling f_L and the left-handed triplet VEV v_L. Assuming H^±±_1 decays predominantly into electrons and muons, the current LHC limits are around 770 to 870 GeV, depending on the flavor structure [172]. In the di-tau channel H^±±_1 → τ^± τ^±, the LHC limit is relatively weaker, i.e. 535 GeV [173].² If the doubly-charged scalar H^±±_1 decays predominantly into same-sign W bosons, the LHC limits are much weaker, around 200 to 220 GeV [174]. There are also some searches for a singly-charged scalar H^±_2 → τ^± ν at the LHC [175-177]. However, these searches assume H^±_2 is produced via its interaction with top and bottom quarks; therefore these limits are not applicable to H^±_2 in the LRSM, which does not couple directly to the SM quarks. The strongest same-sign dilepton limits of (530 − 870) GeV on H^±±_1 (and also on the other scalars from ∆_L) are shown in Fig. 1.

² As the singly-charged scalar H^±_2 and the doubly-charged scalar H^±±_1 are mass degenerate at leading order in the LRSM, here we have adopted the combined LHC limit from the pair production pp → H^{++}_1 H^{−−}_1 and the associated production pp → H^{±±}_1 H^∓_2. In these two channels, the separate limits are respectively 396 GeV and 479 GeV [173].

• As the W_R boson is very heavy, the TeV-scale right-handed doubly-charged scalar H^±±_2 decays only into same-sign dileptons. The couplings of H^±±_2 to the photon and the Z boson have opposite signs; therefore the production cross section of H^±±_2 at the LHC is smaller than that of the left-handed doubly-charged scalar H^±±_1.
Rescaling the LHC 13 TeV cross section of H^±±_1 by a factor of 1/2.4, the same-sign dilepton limits on H^±±_2 turn out to be 271 to 760 GeV for all six combinations ee, eµ, µµ, eτ, µτ, ττ of lepton flavors, as presented in Fig. 1.

• The scalars H^0_1, A^0_1 and H^±_1 from the bidoublet Φ are degenerate in mass at leading order. H^0_1 and A^0_1 have tree-level flavor-changing neutral-current (FCNC) couplings to the SM quarks, and contribute significantly to the K − K̄, B_d − B̄_d and B_s − B̄_s mixings. As a result, their masses are required to be at least (10 − 25) TeV, depending on the nature of the left-right symmetry (either generalized parity or generalized charge conjugation), the hadronic uncertainties [142,178-180] and the potentially large QCD corrections [181]. The stringent FCNC limits on the heavy bidoublet scalars are shown in Fig. 1.

• The neutral scalar H^0_3 from the right-handed triplet ∆_R is hadrophobic, i.e. it does not couple directly to the SM quarks in the Lagrangian. It can be produced at the LHC and future higher-energy colliders either in the scalar portal, through its coupling to the SM Higgs (and the heavy scalars H^0_1 and A^0_1), or in the gauge portal, via its coupling to the W_R and Z_R bosons. Therefore the direct LHC limits are very weak [105,106]. However, when it is sufficiently light, say at the GeV scale, H^0_3 can be produced from (invisible) decays of the SM Higgs or even from meson decays [105,106]. More details can be found in Section 5.2.

• The RHNs in the LRSM can be either very light, e.g. at the keV scale as a warm DM candidate [129], or very heavy at the v_R scale, and there are almost no laboratory limits on their masses, although their mixings with the active neutrinos are tightly constrained in some regions of the parameter space [182]. For simplicity, in the following sections we will set the masses of the RHNs to be free parameters and neglect their mixings with the active neutrinos.

For completeness, the masses of the 100 GeV-scale SM particles, i.e. the SM Higgs h, the top quark t and the W and Z bosons, are depicted in Fig. 1 as horizontal black lines. See Fig. 7 for the complementarity of the GW prospects of the BSM particle masses and the current experimental limits.

One-loop effective potential To study phase transitions in the LRSM, we consider the effective potential at finite temperature, which includes the contributions of the one-loop corrections and daisy resummations.

Figure 1. The current experimental limits of Table 1, indicated by the blue and pink arrows, with the heights of the horizontal lines denoting the ranges of the experimental limits. The horizontal black lines are the masses of the SM Higgs h, the top quark t, and the W and Z bosons.

Renormalized in the MS-bar scheme, the effective potential can be cast into the form [183]

V_eff = V_0 + V_1^{T=0} + V_1^{T≠0} + V_D, (3.1)

where V_1^{T=0} is the Coleman-Weinberg one-loop effective potential [184], and V_1^{T≠0} and V_D are the thermal contributions at finite temperature. The V_1^{T≠0} term includes only the one-loop contributions, and V_D denotes the higher-order contributions from daisy diagrams. In Eq. (3.1) the sum runs over all the particles in the model. The scalar mass matrices m_i²(κ_i, v_R) in the LRSM can be found in Ref. [135], and the corresponding thermal self-energies Π_i(T) are provided in Appendix A. As for the fermions, we consider only the third-generation quarks and the three RHNs; in the LRSM their masses are set respectively by y_{t,b}, the Yukawa couplings for the top and bottom quarks in the SM, and by the RHN masses M_N with y_N the corresponding Yukawa coupling.
In the following study, for the sake of simplicity, we will assume the three RHNs are mass degenerate and do not have any mixing among them. The degrees of freedom g_i and the constants C_i in Eq. (3.1) are (g_i, 3/2) for scalars and fermions and (3, 5/6) for gauge bosons, with λ = 1 (2) counting the degrees of freedom of Weyl (Dirac) fermions. The functions J_− (J_+) for bosons (fermions) are defined as

J_∓(x²) = ∫_0^∞ dy y² ln(1 ∓ e^{−√(y²+x²)}).

In the limit of small x² = m²/T², we can use the high-temperature approximations [183]:

J_−(x²) ≈ −π⁴/45 + (π²/12) x² − (π/6) x³ − (x⁴/32) ln(x²/a_b),
J_+(x²) ≈ 7π⁴/360 − (π²/24) x² − (x⁴/32) ln(x²/a_f),

with ln a_b = 3/2 − 2γ_E + 2 ln 4π and ln a_f = 3/2 − 2γ_E + 2 ln π.

In this paper we focus on the phase transition at the v_R scale; thus, as an approximation, all the effects of the SM components on the symmetry breaking SU(2)_R × U(1)_{B−L} → U(1)_Y can be neglected. Neglecting the daisy contributions, the effective potential V_eff can be written explicitly in the form [6]

V_eff(v, T) ≈ D (T² − T_0²) v² − E T v³ + (ρ_T/4) v⁴, (3.8)

where D, T_0, E and ρ_T can be expressed in terms of the model parameters via Eqs. (3.9)-(3.12), with M_X the mass of particle X and µ the renormalization scale. Since there are many scalars in the LRSM, we deliberately separate their contributions from those of the vector bosons and RHNs; the scalar contributions to each of the terms in Eqs. (3.9)-(3.12) can be written in terms of the scalar masses via Eqs. (3.13)-(3.16).

It should be pointed out that all the masses in Eqs. (3.13)-(3.16) depend upon the right-handed VEV v_R instead of v. It is observed that the RHNs can also contribute to the symmetry breaking SU(2)_R × U(1)_{B−L} → U(1)_Y by affecting the parameters D, T_0 and ρ_T, while the parameter E receives contributions only from the scalars and gauge bosons. As seen in Eqs. (3.12) and (3.16), the parameter ρ_T receives not only a tree-level contribution from the quartic coupling ρ_1, which corresponds to the H^0_3 mass (cf. Table 5), but also loop-level contributions from the heavy scalars, gauge bosons and RHNs in the LRSM. In particular, when the quartic coupling ρ_1 is small, or equivalently when the H^0_3 mass is much smaller than the v_R scale, which is the parameter space of interest for the phase transition and GW production in the LRSM (cf. Figs. 2, 3 and 8), the loop-level contributions in Eq. (3.12) might dominate ρ_T. Furthermore, ρ_T depends also on the gauge coupling g_R via the heavy gauge boson masses M_{W_R} and M_{Z_R}.

To have a strong FOPT, the cubic term proportional to −E T v³ is crucial. In the limit E → 0, the phase transition is of second order. In the SM, the effective coefficient E of the φ³ term is dominated by the gauge boson contributions, while in the LRSM it receives contributions from both the scalars and the gauge bosons. As a result of the large number of degrees of freedom in the scalar sector of the LRSM, it is remarkable that the scalar contributions to E can be much larger.

The order parameter describing the FOPT is given by v_c/T_c, where v_c is the non-vanishing location of the minimum at the critical temperature T_c, at which the effective potential V_eff has two degenerate minima. In EW baryogenesis [185-187], to avoid the washout effects in the broken phase within the bubble wall, a strong FOPT is typically required to satisfy the condition

v_c/T_c ≳ 1. (3.18)

Therefore, it is justified to neglect the contributions of the SM particles to the phase transition at the right-handed scale v_R, since their masses m_SM are at most close to v_EW and their contributions are suppressed by their tiny couplings to the right-handed triplet. For given v_R and heavy particle masses in the LRSM, the two key parameters T_c and v_c can be obtained from the effective potential (3.1) by requiring the two conditions V_eff(T_c; v_c) = V_eff(T_c; 0) and v_c ≠ 0.
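As a sanity check of the conventions used above (a sketch assuming the standard definitions of J_∓ as reconstructed here, not expressions quoted from the paper), the following compares the exact bosonic integral with its high-temperature expansion for small x² = m²/T²:

```python
import numpy as np
from scipy.integrate import quad

EULER_GAMMA = 0.5772156649015329

def j_boson_exact(x2):
    """J_-(x^2) = int_0^inf dy y^2 ln(1 - exp(-sqrt(y^2 + x^2)))."""
    f = lambda y: y**2 * np.log(1.0 - np.exp(-np.sqrt(y**2 + x2)))
    val, _ = quad(f, 0.0, 60.0, limit=200)
    return val

def j_boson_highT(x2):
    """Small-x expansion, with ln(a_b) = 3/2 - 2 gamma_E + 2 ln(4 pi)."""
    log_ab = 1.5 - 2 * EULER_GAMMA + 2 * np.log(4 * np.pi)
    return (-np.pi**4 / 45 + np.pi**2 / 12 * x2
            - np.pi / 6 * x2**1.5 - x2**2 / 32 * (np.log(x2) - log_ab))

for x2 in (0.01, 0.1, 0.5, 1.0):
    print(x2, j_boson_exact(x2), j_boson_highT(x2))
```

The two agree closely for x² ≲ 1, which is the regime relevant for the light states driving the transition at the v_R scale.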
In the numerical evaluations, we lower the temperature from a sufficiently high energy scale, say v_R, toward values around the EW scale. A reasonable critical temperature T_c for the phase transition SU(2)_R × U(1)_{B−L} → U(1)_Y is assumed to lie within this range. The dependence of v_c/T_c on the parameters of the LRSM is exemplified in Fig. 2, where in the numerical calculations we have included all the contributions in Eq. (3.1). Taking into account all the theoretical and experimental constraints in Section 2, we first consider scenarios with the simplifications λ_2 = λ_3 = λ_4 = α_1 = α_2 = 0. In order to identify the parameter space where the phase transition is of first order, we calculate v_c/T_c at the critical temperature T_c for different values of the quartic couplings ρ_1, ρ_2, ρ_3 − 2ρ_1 and α_3. When we calculate the dependence of v_c/T_c on two of the quartic couplings, all the others are fixed in such a way that their corresponding scalar masses equal the W_R mass, and the gauge coupling is set to g_R = g_L. To be concrete, we have set the renormalization scale µ to the v_R scale in Eq. (3.1). The corresponding results are shown in the first three panels of Fig. 2, displaying the dependence of v_c/T_c on the couplings ρ_1 and α_3, ρ_3 − 2ρ_1 and ρ_1, and ρ_2 and ρ_1, respectively. As the quartic couplings determine the heavy scalar masses (cf. Table 5), the dependence of v_c/T_c on the quartic couplings in Fig. 2 can also be understood effectively as a dependence on the mass-to-v_R ratios. Through the gauge boson masses M_{W_R} and M_{Z_R}, the parameter v_c/T_c also depends on the gauge coupling ratio r_g, or equivalently on the right-handed gauge coupling g_R. This is shown in the lower right panel of Fig. 2; as seen in this figure, the v_c/T_c limit on ρ_1 has a moderate or weak dependence on r_g, depending on the value of ρ_1. Given the information on v_c/T_c in Fig. 2, a few more comments are in order:

• As seen in Fig. 2, a strong FOPT in the LRSM requires a relatively small quartic coupling ρ_1 ≲ 0.07 for the parameter space we are considering, which is qualitatively similar to the SM case, where a light Higgs boson (say M_h < 80 GeV) is needed in order to have a first-order EW phase transition [189]. It turns out that a small ρ_1 (and the resultantly light H^0_3) is not only crucial for the prospects of GWs in future experiments (cf. Fig. 8), but also triggers rich phenomenology for the searches of LLPs at the high-energy colliders and dedicated detectors [105,106].

• The phase transition at the v_R scale occurs when the neutral component ∆^0_R of the right-handed triplet ∆_R develops a non-vanishing VEV v_R. As a result, the strong FOPT is more sensitive to the mass of H^0_3, or equivalently to the value of ρ_1, than to the other heavy scalar masses. This is also clearly demonstrated in the plots of Fig. 2: as seen in the upper left, upper right and lower left panels, the quartic couplings α_3, ρ_3 − 2ρ_1 and ρ_2 can reach up to order one, while ρ_1 ≲ 0.1.

• Although the quartic couplings α_3, ρ_3 and ρ_3 − 2ρ_1 are less constrained by the FOPT than the critical coupling ρ_1, as seen in the first three panels of Fig. 2, if any of these couplings is sufficiently large, it will invalidate the strong FOPT at the v_R scale, no matter how small ρ_1 is. Meanwhile, the white areas in the plots of Fig. 2 indicate regions where the perturbative expansion starts to break down and theoretical predictions become more difficult.

In Fig. 2 we have fixed some of the parameters in the LRSM and varied two of them.
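For the quartic form of Eq. (3.8), the degeneracy conditions can even be solved in closed form, giving v_c/T_c = 2E/ρ_T; the short sketch below mimics the numerical temperature scan described above and checks it against the analytic value. The parameter values are purely illustrative, not benchmark values from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

D, E, rho_T, T0 = 0.35, 0.06, 0.10, 4000.0   # illustrative values (T0 in GeV)

def veff(v, T):
    """Quartic approximation of Eq. (3.8), with V_eff(0, T) = 0."""
    return D * (T**2 - T0**2) * v**2 - E * T * v**3 + 0.25 * rho_T * v**4

def broken_minimum(T, vmax=1e5):
    """Location and depth of the broken-phase minimum at temperature T."""
    res = minimize_scalar(lambda v: veff(v, T), bounds=(1.0, vmax), method="bounded")
    return res.x, res.fun

# Bisect in T on the depth of the broken minimum relative to V(0) = 0.
lo, hi = T0, 10 * T0     # broken phase is deeper at lo; symmetric phase wins at hi
for _ in range(60):
    T = 0.5 * (lo + hi)
    _, depth = broken_minimum(T)
    lo, hi = (T, hi) if depth < 0 else (lo, T)
Tc = 0.5 * (lo + hi)
vc, _ = broken_minimum(Tc)
print(f"T_c = {Tc:.1f}, v_c = {vc:.1f}, v_c/T_c = {vc/Tc:.3f} (analytic {2*E/rho_T:.3f})")
```

This makes the qualitative trend of Fig. 2 transparent: a strong FOPT needs a sizeable effective cubic coefficient E relative to the quartic ρ_T, and a small ρ_1 keeps ρ_T small.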
To see more details of the correlation between v_c/T_c and the parameters of the LRSM, we perform a more thorough scan of the parameter space. To be specific, we adopt the parameter ranges of Eq. (3.19) and apply all the theoretical and experimental constraints in Section 2. A few comments follow:

• We have chosen ξ = κ_2/κ_1 = 0.001 in order to satisfy the theoretical constraint in Eq. (2.15).

• We have chosen α_2 = 0 in order to meet the requirement of the correct vacuum conditions given in Eq. (B.1).

• It is known from Fig. 2 that a strong FOPT needs a small ρ_1; therefore we have chosen ρ_1 < 0.5.

• ρ_3 − 2ρ_1 is set to be larger than zero, as it corresponds to the masses of the left-handed triplet scalars (see Table 5).

• The quartic coupling α_1 is not a free parameter here, as it is related to λ_1 and the SM coupling λ via Eq. (5.1). As α_1²/4ρ_1 is always positive, it turns out that the quartic coupling λ_1 ≥ λ ≈ 0.13.

The resultant scatter plots of v_c/T_c are presented in Fig. 3 as functions of the parameters ρ_1 and α_3. The data points of strong FOPT with v_c/T_c > 1 are shown in red, while those with v_c/T_c < 1 are in blue. When we set v_R = 10 TeV and take the FCNC limit M_{H^0_1} > 15 TeV [142], the quartic coupling α_3 should satisfy α_3 > 2M²_{H^0_1}/v_R² = 4.5. The region shaded in light pink in the left panel of Fig. 3 is excluded by this condition; it is found that only a small number of the data points survive and have a strong FOPT. When the v_R scale is higher, say v_R = 20 TeV, the bound on the quartic coupling α_3 is significantly weaker, i.e. α_3 > 1.13; the corresponding excluded region is denoted by the light pink shading in the right panel of Fig. 3. There are then more points that can have a strong FOPT with v_c/T_c > 1, as clearly shown in the right panel of Fig. 3.

Gravitational waves Thermal stochastic GWs can be generated by three physical processes during a phase transition [190]: collisions of bubbles, sound waves (SWs) in the plasma after the bubble collisions, and the magnetohydrodynamic (MHD) turbulence forming after the bubble collisions. For non-runaway scenarios, the GWs are dominated by the latter two sources [190], and the corresponding GW spectrum can be approximated as the sum of the two contributions. The SW contribution has the form [25]

h²Ω_SW(f) = 2.65 × 10⁻⁶ (H_*/β) (κ_v α/(1 + α))² (100/g_*)^{1/3} v_w (f/f_SW)³ [7/(4 + 3(f/f_SW)²)]^{7/2},

where f is the frequency, g_* and H_* are respectively the number of relativistic degrees of freedom in the plasma and the Hubble parameter at the temperature T_*, v_w is the bubble wall velocity, α describes the strength of the phase transition, β/H_* measures the rate of the phase transition, and κ_v is the fraction of vacuum energy that is converted into bulk motion. The peak frequency f_SW is approximated by

f_SW ≃ 1.9 × 10⁻⁵ Hz (1/v_w) (β/H_*) (T_*/100 GeV) (g_*/100)^{1/6}.

The MHD turbulence contribution is [30,191]

h²Ω_MHD(f) = 3.35 × 10⁻⁴ (H_*/β) (κ_MHD α/(1 + α))^{3/2} (100/g_*)^{1/3} v_w (f/f_MHD)³ / [(1 + f/f_MHD)^{11/3} (1 + 8πf/h_*)],

where κ_MHD ≃ 0.05 κ_v is the fraction of vacuum energy that is transformed into the MHD turbulence, and h_* is the inverse Hubble time at GW production (red-shifted to today), given by

h_* = 1.65 × 10⁻⁵ Hz (T_*/100 GeV) (g_*/100)^{1/6}, (4.6)

and the peak frequency is

f_MHD ≃ 2.7 × 10⁻⁵ Hz (1/v_w) (β/H_*) (T_*/100 GeV) (g_*/100)^{1/6}. (4.7)

As shown in the formulae above, the GW spectrum from FOPTs is generally characterized by two parameters related to the phase transition, namely α and β [31]. The parameter α is defined as the ratio of the vacuum energy density ε_* released at the phase transition temperature T_* to the energy density of the universe in the radiation era, i.e.

α = ε_*/ρ_rad, with ρ_rad = π² g_* T_*⁴/30,

where ε_* is the latent heat and can be expressed as

ε_* = [∆V_eff − T ∂(∆V_eff)/∂T]|_{T=T_*}.

Here ∆V_eff denotes the difference of the potential energy between the false vacuum and the true vacuum, i.e. ∆V_eff = −V_eff(0, T) + V_eff(v, T), which can be simply determined by T_* and the parameters of the LRSM.
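Putting the pieces together, the sketch below evaluates the sound-wave plus turbulence spectrum in the parameterization reconstructed above (following e.g. [25,30,190,191]); the efficiency factor κ_v uses a standard fit valid for v_w → 1, which is an assumption of this sketch rather than a formula quoted from the paper, and the parameter values are illustrative:

```python
import numpy as np

def kappa_v(alpha):
    """Efficiency of bulk-motion conversion for v_w -> 1 (standard fit; assumed)."""
    return alpha / (0.73 + 0.083 * np.sqrt(alpha) + alpha)

def gw_spectrum(f, alpha=0.01, beta_over_H=1e3, Tstar=1e4, gstar=100.0, vw=1.0):
    """h^2 Omega_GW(f) = SW + MHD turbulence; f in Hz, Tstar in GeV."""
    kv = kappa_v(alpha)
    k_mhd = 0.05 * kv
    red = (Tstar / 100.0) * (gstar / 100.0) ** (1.0 / 6.0)  # redshift factor
    f_sw = 1.9e-5 / vw * beta_over_H * red                  # Eq. (4.3)-like peak
    f_tu = 2.7e-5 / vw * beta_over_H * red                  # Eq. (4.7)
    h_star = 1.65e-5 * red                                  # Eq. (4.6)
    s_sw = (f / f_sw) ** 3 * (7.0 / (4.0 + 3.0 * (f / f_sw) ** 2)) ** 3.5
    omega_sw = (2.65e-6 / beta_over_H * (kv * alpha / (1 + alpha)) ** 2
                * (100.0 / gstar) ** (1.0 / 3.0) * vw * s_sw)
    s_tu = (f / f_tu) ** 3 / (1 + f / f_tu) ** (11.0 / 3.0) / (1 + 8 * np.pi * f / h_star)
    omega_tu = (3.35e-4 / beta_over_H * (k_mhd * alpha / (1 + alpha)) ** 1.5
                * (100.0 / gstar) ** (1.0 / 3.0) * vw * s_tu)
    return omega_sw + omega_tu

for f in np.logspace(-2, 2, 5):
    print(f"f = {f:8.2f} Hz: h^2 Omega = {gw_spectrum(f):.3e}")
```

With α ∼ 0.01, β/H_* ∼ 10³ and T_* ∼ 10 TeV, the peak lands around a few Hz with h²Ω ∼ 10⁻¹⁷, reproducing the orders of magnitude quoted for the LRSM points above.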
The parameter β describes the rate of variation of the bubble nucleation rate during the phase transition, and its inverse describes the duration of the phase transition. To quantify the rate of the phase transition, the dimensionless parameter β/H_* is defined via

β/H_* = T_* d(S_3/T)/dT |_{T=T_*},

where S_3 denotes the three-dimensional Euclidean action of a critical bubble. T_* denotes the temperature at which the phase transition ends, and it can be determined by requiring that the probability of nucleating one bubble per horizon volume equals one.

The parameters α and β set respectively the strength and the time variation of the GWs during the phase transition, and their typical values in the LRSM are shown in the left and right panels of Fig. 4, respectively. As demonstrated by the data points, the value of α varies roughly from 0.001 to 0.1, and β/H_* can range from 10² to 10⁴. In the numerical calculations, all the data points in Fig. 4 have a strong FOPT. Assuming the bubble wall velocity v_w ∼ 1, the corresponding GW signals of the data points in Fig. 4 are shown in Fig. 5. The correlation between the ratio v_c/T_c and the GW signal peaks is presented in the left panel. We can read from Fig. 4 and the left panel of Fig. 5 that with larger v_c/T_c the value of α is typically larger, thus yielding stronger GW signals. The GW strength and frequency peaks are shown in the right panel of Fig. 5. The potential sensitivities of LISA [35,36], TianQin [33], Taiji [34], ALIA [37], MAGIS [38], BBO [40], DECIGO [39], ET [42], and CE [41] are also depicted in the right panel of Fig. 5. As seen in this figure, the frequency peak in the LRSM can range from 10⁻¹ to 10² Hz. Furthermore, there are some data points of the LRSM with frequencies roughly in the range from 0.1 to 10 Hz and a GW strength larger than 10⁻¹⁷, which can be detected in the future by BBO and DECIGO, or even by ALIA and MAGIS.

Figure 5. GW peaks for the data points in Fig. 4, as functions of v_c/T_c (left) and frequency f (right). Also shown in the right panel are the prospects of LISA [35,36], TianQin [33], Taiji [34], ALIA [37], MAGIS [38], BBO [40], DECIGO [39], ET [42], and CE [41].

The heavy BSM masses probed can reach up to a few times 10 TeV, with their lower mass limits roughly around the experimental constraints in Section 2.3 (see also Table 1 and Table 2). The resultant v_c, T_c, T_* and the parameters α and β/H_* are also shown in Table 2. The GW spectra h²Ω as functions of the frequency f for the five BPs are presented in Fig. 8. A few comments on the five BPs are in order.

• It is clear in Fig. 8 that BPs with the same heavy masses and M_N but different M_{H^0_3} can be probed in the future by BBO and DECIGO, and even by ALIA and MAGIS. The H^0_3 mass M_{H^0_3}, or equivalently the quartic coupling ρ_1, is crucial for the GWs in the LRSM. The BPs (like BP5) with a heavier H^0_3, or equivalently a larger ρ_1, tend to generate a small α and a large β, and thus produce weaker GW signals with a larger frequency. This is consistent with the findings in Ref. [102]. The BPs BP1 and BP2, with an H^0_3 mass below the TeV scale, can produce GWs of order 10⁻¹³ with frequency at around 0.1 Hz, well above the sensitivity reach of BBO and DECIGO. BP4, with a 2 TeV H^0_3, can only produce GWs of order 10⁻¹⁶ with frequency peaked at 1 Hz, which can be marginally detected by BBO and DECIGO.

Figure 8. The same as in the right panel of Fig. 5, but for the five BPs in Table 2.

Table 2. The five BPs studied in this paper. Parameters not shown in the table are set to v_R = 10 TeV, ξ = 10⁻³, λ_1 = 0.13, α_1 = α_2 = λ_2 = λ_3 = λ_4 = 0.
Their GW spectra are shown in Fig. 8. It is also noticed that all these BPs are non-runaway scenarios in terms of the criteria defined in Eq. (25) of [190]. The suppression factor Υ in the last row is defined in Eq. (6.1) [196,197].

• The masses M_{H^0_2}, M_{H^±±_2} and M_N are heavier in BP5 than in BP4, while all other parameters are the same. As seen in Fig. 8, the GW signal in BP5 is so weak that it can escape the detection of all the planned GW experiments in the figure. This reveals that the masses M_{H^0_2}, M_{H^±±_2} and M_N, or equivalently the couplings ρ_3 − 2ρ_1, ρ_2 and y_N, are also important for the GW production in the LRSM. More data points in the numerical calculations reveal that the coupling α_3 is also very important for the GW signals in the LRSM.

Complementarity of GW signal and collider searches of the LRSM In spite of the large number of BSM scalars, fermions and gauge bosons in the LRSM and the large number of quartic couplings in the potential (2.2), it is phenomenologically meaningful to examine the role of some of the couplings, or equivalently the BSM particle masses, in the strong FOPT and the subsequent GW production in the early universe, as well as the potential correlations of the GWs with the direct laboratory searches of these particles and the SM precision data at the high-energy colliders. In this section, we elaborate on (i) the effects of the quartic coupling λ_1 in the scalar potential (2.2), which corresponds to the self-coupling λ in the SM, and (ii) the complementarity of the GW signal, the collider searches of a (light) H^0_3, and the searches of the heavy (or light) RHNs in the LRSM.

Self-couplings of the SM-like Higgs boson in the LRSM It is interesting to examine how the self-coupling λ of the SM-like Higgs boson h can be affected by the BSM scalars in the LRSM. The SM-like Higgs mass squared, the trilinear coupling λ_hhh and the quartic coupling λ_hhhh in the SM and the LRSM are collected in Table 3. Comparing the mass squared of h in the SM and the LRSM, we can approximately identify the relation

λ ≈ λ_1 − α_1²/4ρ_1 (5.1)

among the SM and LRSM quartic couplings [104,151]. As seen in the third column of Table 3, the trilinear coupling λ_hhh of the SM-like Higgs in the LRSM differs from the SM value only by a small amount of order ξ ∼ 10⁻³ [104,151]. On the contrary, the quartic coupling λ_hhhh in the LRSM might be significantly different from the SM prediction, as shown in the last column of Table 3. In other words, at leading order in the approximations v_R ≫ v_EW and κ_1 ≫ κ_2, the difference of the quartic coupling of the SM-like Higgs boson between the SM and the LRSM is dominated by the α_1²/16ρ_1 term. As the FOPT and GWs in the LRSM favor a small ρ_1 coupling, the difference in Eq. (5.2) tends to be significant for sufficiently large α_1.

Adopting the parameter ranges in Eq. (3.19) and taking into account the theoretical and experimental limits in Section 2, the scatter plots of the quartic coupling λ_hhhh against the couplings ρ_1, α_1 and y_N are shown respectively in the left, middle and right panels of Fig. 9, where the data points with a strong FOPT (v_c/T_c > 1) are shown in red, while those with v_c/T_c < 1 are in blue.

Figure 9. Scatter plots of λ_1/λ against ρ_1 (left), α_1 (middle) and y_N (right); the blue points have v_c/T_c < 1 and the red ones v_c/T_c > 1.

It is very clear in Fig. 9 that the deviation of the quartic scalar coupling λ_1 from the SM value λ is always positive and can be very large, even up to order 10, as expected from Table 3 and Eq. (5.2). We can also read from the left and middle panels of Fig. 9
that a large deviation of the quartic coupling of the SM-like Higgs needs a relatively small ρ_1 and/or a large α_1. As given in Eq. (3.12), a large y_N tends to decrease ρ_T, thus increasing the value of v_c/T_c. However, if y_N is too large, say y_N ≳ 1.5, a negative ρ_T is obtained, which leads to a non-stable vacuum. Thus, the phase transition and GWs in the LRSM favor a Yukawa coupling y_N ∼ O(0.1) to O(1).

On the experimental side, the combined results of di-Higgs searches can be found e.g. in Refs. [198,199]. Data from the LHC at 13 TeV with a luminosity of 36 fb⁻¹ only set the weak constraint λ_hhh/λ^SM_hhh ∈ (−5, 12). The LHC at 14 TeV with an integrated luminosity of 3 ab⁻¹ can probe the trilinear coupling of the SM Higgs within the range λ_hhh/λ^SM_hhh ∈ (0.7, 1.3) [200], while a future 100 TeV collider with a luminosity of 30 ab⁻¹ can improve the sensitivity to λ_hhh/λ^SM_hhh ∈ (0.9, 1.1) [201]. However, this is not precise enough to see the deviation of the trilinear coupling in the LRSM, which is of order 10⁻³ or smaller. Although the quartic coupling measurements cannot be greatly improved at hadron colliders [201,202], a future muon collider with a center-of-mass energy of 14 TeV and a luminosity of 33 ab⁻¹ can probe a deviation of the quartic Higgs self-coupling at the level of 50% [107]. This can probe a sizable region of the parameter space in Fig. 9.

The direct searches of H^0_3 at the high-energy colliders can probe a mass range of roughly 100 GeV up to 3 TeV, while the searches of a long-lived H^0_3 at the high-energy colliders can cover the mass range from 10 GeV down to 100 MeV. As a new avenue to probe the phase transition in the LRSM, GWs are sensitive to a wide mass range of H^0_3, from the 10 GeV scale up to 10 TeV, which is largely complementary to the searches of a (light) H^0_3 at the high-energy colliders. Note that one of the important decay modes of H^0_3 is the RHN channel, i.e. H^0_3 → NN, which induces the strikingly clean signal of same-sign dileptons plus jets [104,148,153]. The heavy RHNs can also be produced through their gauge couplings to the W_R and Z_R bosons, e.g. via the smoking-gun Keung-Senjanović signal pp → W_R → Nℓ^± → ℓ^±ℓ^± jj at high-energy pp colliders [164]. If the RHNs are very light, say below the 100 GeV scale, the decay widths of the RHNs are highly suppressed by the W_R mass, which makes the RHNs long-lived [205,206]. The light long-lived RHNs can be searched for directly at the high-energy colliders via displaced vertices, or even from meson decays [207-210]. The prospects of the RHNs at the high-energy colliders and in meson decays depend largely on the heavy scalar or gauge boson masses (see also [211,212]). However, it is worth pointing out that, as seen in Fig. 6, the GWs are sensitive to RHN masses in the range from 200 GeV up to 40 TeV, which is largely complementary to the direct searches of (light) RHNs at the high-energy frontier.

Discussions and Conclusion Before concluding, we would like to comment on some open questions concerning the phase transition and GW production in the LRSM:

• In the calculations we have assumed that, at the epoch of the phase transition, the bubbles expanding in the plasma can reach a relativistic terminal velocity, i.e. the non-runaway scenario, where the velocity of the bubble wall is taken to be v_w ≃ 1 in our analysis, corresponding to the detonation case [213].
A recent numerical analysis [214] has revealed that the SW contribution might be suppressed by a factor of 10⁻³ in the deflagration case when α > 0.1, where the reheated droplets can suppress the formation of the GW signals. As there is no such huge suppression for the detonation case with α < 0.1, our results should still be valid, although the GW signals might be suppressed by a factor of two or three. The bubble wall velocity can, in principle, be computed from the parameters of a given model, as demonstrated in [215-217]. Furthermore, according to the recent calculations in Ref. [196], the finite lifetime of the SWs leads to a suppression factor Υ, which can be parameterized in the form [197]

Υ = 1 − (1 + 2 τ_sw H_*)^{−1/2}, (6.1)

with τ_sw the lifetime of the sound waves. We have calculated the Υ factors for the five BPs in Table 2 and listed them in the last row of that table. It is observed that the GW signals in these BPs might be suppressed by up to a factor of 6 to 10. It might be interesting to explore how the trilinear couplings, expressed in terms of the model parameters of the LRSM, affect the bubble wall velocity and the effects of the suppression factor Υ; this will be a topic for future study.

• It is remarkable that for the scalar H^0_3, which is mainly the CP-even neutral component of the right-handed triplet ∆_R, both the theoretical and experimental constraints are very weak. As a result, its mass could span a wide range, from below the GeV scale up to tens of TeV. In the case that all the other new particles in the LRSM are heavier than 5 TeV but H^0_3 is relatively light, below the TeV scale (for instance the BPs BP1 and BP2 in Table 2), the scalar potential of the LRSM given in Eq. (2.2) reduces, at scales below 1 TeV, to an effective model in which the SM is extended by a real singlet S, with the scalar potential taking the form of Eq. (6.2). The trilinear and quartic couplings in Eq. (6.2) can be written as functions of the right-handed VEV v_R and the quartic couplings of the LRSM, which are collected in Table 4. Obviously, when α_1 is switched off, H^0_3 does not affect the EW phase transition directly, and the EW phase transition should be of second order as in the SM. When α_1 is switched on, it might be interesting to examine whether a light H^0_3 can affect the phase transitions at both the v_R scale and the EW scale. Where that is possible, a multi-step strong FOPT could be expected [218].
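Returning to the suppression factor Υ of Eq. (6.1): a small sketch, assuming the commonly used estimate τ_sw H_* ≈ (8π)^{1/3} v_w/[(β/H_*) Ū_f] with Ū_f² = (3/4) κ_v α/(1 + α) for the root-mean-square fluid velocity (an assumption of this sketch, not an expression quoted from the paper), shows how Υ of order 0.1 to 0.2 translates into the factor of 6 to 10 suppression quoted above:

```python
import numpy as np

def upsilon(alpha, beta_over_H, vw=1.0):
    """Finite-lifetime suppression of the SW signal, Eq. (6.1)."""
    kv = alpha / (0.73 + 0.083 * np.sqrt(alpha) + alpha)  # efficiency, v_w -> 1
    Uf = np.sqrt(0.75 * kv * alpha / (1.0 + alpha))       # rms fluid velocity
    tau_H = (8.0 * np.pi) ** (1.0 / 3.0) * vw / (beta_over_H * Uf)
    return 1.0 - 1.0 / np.sqrt(1.0 + 2.0 * tau_H)

for a, b in [(0.01, 1e3), (0.05, 5e2), (0.1, 1e2)]:
    print(f"alpha = {a}, beta/H* = {b}: Upsilon = {upsilon(a, b):.3f}")
```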
To summarize, in this paper we have studied the prospects of GW signals from the phase transition in the minimal LRSM with a bidoublet Φ, a left-handed triplet ∆_L and a right-handed triplet ∆_R, which is a well-motivated framework to restore parity and accommodate the seesaw mechanisms for the tiny neutrino masses at the TeV scale. We have considered the theoretical limits on the LRSM from perturbativity, unitarity, vacuum stability and the correct vacuum criteria, as well as the experimental constraints on the heavy gauge bosons and the BSM scalars. The experimental limits are collected in Table 1 and Fig. 1. With these theoretical and experimental constraints taken into account, we have analyzed the parameter space of strong FOPT and the resultant GWs in the LRSM. As demonstrated in Figs. 2, 3 and 9, the strong FOPT at the v_R scale favors relatively small quartic and Yukawa couplings, which correspond to relatively light BSM scalars and RHNs. The GWs for the BPs of the LRSM in Fig. 5 reveal that the phase transition in the LRSM can generate GW signals of 10⁻¹⁷ to 10⁻¹², with a frequency ranging from 0.1 to 10 Hz, which can be probed by the experiments BBO and DECIGO, or even by ALIA and MAGIS. Setting v_R = 10 TeV, as seen in Fig. 6, the GWs are sensitive to the following mass ranges:

• The heavy bidoublet scalars H^0_1, A^0_1, H^±_1; the scalars H^0_2, A^0_2, H^±_2 and H^±±_1 from the left-handed triplet ∆_L; and the doubly-charged scalar H^±±_2 from the right-handed triplet ∆_R, with masses up to tens of TeV, with the lower bounds of their masses roughly set by the experimental limits in Fig. 1.

• The scalar H^0_3 with mass in the range of roughly 20 GeV up to 10 TeV. As presented in Fig. 10, the GW prospects of H^0_3 are largely complementary to the direct searches of a heavy H^0_3 at the LHC and future 100 TeV colliders, and to the searches of a light H^0_3 through displaced-vertex signals at the LHC, future higher-energy colliders, and LLP experiments such as MATHUSLA.

• The RHNs with masses from roughly 300 GeV up to 40 TeV. The GW sensitivity to M_N is also largely complementary to the direct searches of prompt signals and displaced vertices from RHNs at the high-energy colliders, as well as to the production of RHNs in meson decays.

The GW spectra in Fig. 8 for the BPs in Table 2 show that the quartic coupling ρ_1 is crucially important for both the frequency and the strength of the GW signals in the LRSM, while other couplings such as ρ_2, ρ_3 − 2ρ_1, α_3 and y_N are also important. In addition, the precision measurement of the quartic coupling of the SM Higgs at a future muon collider can probe a sizable region of the parameter space of the LRSM that has a strong FOPT and observable GW signals, as exemplified in Fig. 9.

Table 5. Physical Higgs states and their masses when v_L ≪ κ_2 ≪ κ_1 ≪ v_R [142]. Here ξ = κ_2/κ_1, and v_EW/v_R ≈ κ_1/v_R; h is the SM Higgs field. The table lists the physical states and their mass squared.

The corresponding mass matrix elements can be found e.g. in Ref. [135]. The thermal self-energies take a matrix form in the basis of the real neutral components. For the singly-charged fields, in the basis {φ^±_1, φ^±_2, δ^±_L, δ^±_R}, the thermal self-energy is the same as that for the real neutral components, i.e. Π_{H^±} = Π_{H^0}. For the doubly-charged scalars, the corresponding self-energy is given in the basis {∆^±±_L, ∆^±±_R}; for the neutral gauge bosons, the self-energy matrix is given in the basis {W^3_L, W^3_R, B}, while for the singly-charged gauge bosons it is given in the basis {W^±_L, W^±_R}.

B Conditions for vacuum stability and correct vacuum The sufficient but not necessary conditions for vacuum stability and the correct vacuum in the LRSM are worked out in [103] (simple analytic formulae can only be obtained under the condition α_2 = 0); in these conditions, the structure "p ⇐ q" means that p needs to be checked if and only if the condition q is true. In this paper, we have chosen λ_{2,3,4} = 0, which corresponds to the case η = σ = 0.
13,091.4
2020-12-26T00:00:00.000
[ "Physics" ]
Student Interest and Engagement in Mathematics After the First Year of Secondary Education INTRODUCTION Insights into student mathematical knowledge and engagement at important stages in their mathematical trajectories can inform enriched, enduring outcomes for students as they continue to navigate through the education system (Cox & Kennedy, 2008; Deieso & Fraser, 2019; Galton et al., 1999; Smyth et al., 2004). This paper investigates the mathematical motivation and engagement levels of a large sample of students as they proceed into their second year of secondary education, a mathematics juncture often neglected in the literature yet one that has a substantial bearing on future student progress in mathematics (Deieso & Fraser, 2019; Vorderman et al., 2011). The research presented in this paper formed part of a larger study on student transition in mathematics from primary to secondary education in Ireland, which demonstrated that students scored significantly lower in mathematics on the same test instrument at the end of the first year of secondary education compared to their performance one year earlier at the end of primary education, despite an additional year of mathematics instruction (Ryan, 2018). On average, students' raw scores decreased by 7% from sixth class (the final year of primary school) to the first year of secondary education, despite the additional year of instruction and the extensive overlap of both syllabi. In addition to academic performance, affective constructs were also examined in the study on transition in mathematics from primary to secondary school.

The affective domain relates to three main constituents: beliefs, attitudes, and emotions, as well as related concepts that include confidence in learning mathematics, self-concept, self-efficacy, mathematics anxiety, causal attribution, effort and ability attributions, learned helplessness, and motivation (McLeod, 1992). Affective issues are central to teaching and learning (Casey & Fernandez-Rio, 2019). Attitudes, beliefs and emotions are important considerations when investigating students' interest in and response to mathematics (OECD, 2013a). Positive emotions towards mathematics provide a better platform for the learning of mathematics than negative emotions, and positive attitudes, emotions and beliefs towards mathematics predispose students to using mathematics in everyday contexts (OECD, 2013a). An important function of mathematics education thus is to cultivate attitudes, beliefs, and emotions in a way that not only encourages students to utilize and apply the mathematics they currently know, but also inspires them to continue studying mathematics for personal, academic, and social gain (Al-Mutawah & Fateel, 2018; OECD, 2013a).
RELEVANT LITERATURE

Student Engagement and Motivation in Mathematics

Student engagement in mathematics is multifaceted, having affective, cognitive, and behavioral constructs, and the level of engagement affects the quality of the learning outcome (Al-Mutawah & Fateel, 2018; Fredricks et al., 2004; Kong et al., 2003). When students believe that they will experience success in mathematics, they are more likely to engage in and enjoy the subject (Middleton & Spanias, 1999; Putwain et al., 2018). Cultivating student engagement is as important as the design of the curriculum itself (Kong et al., 2003). Evidence of motivation to engage in mathematics can be observed early in a child's education. Students in kindergarten and first grade are motivated to engage, and they relate success to a mix of effort and ability; by the middle grades, however, these students attribute success in mathematics to ability rather than effort (Kloosterman & Gorman, 1990). Student engagement in mathematics and attitude towards the subject are directly related to the supportiveness of the teacher and the classroom environment (Ivowi, 2001; Lazarides et al., 2018; Middleton & Spanias, 1999; Valås & Søvik, 1994). According to Meece et al. (1990), students' performance expectancies predicted future performance in assessment, while students' value perceptions predicted their intention to engage in a course involving mathematics in the future (OECD, 2013a). Motivation is what determines a person's drive and persistence towards the participation in or completion of a task; it is categorized as extrinsic if it is determined by an external factor such as material gain, or intrinsic if a task is pursued or completed for one's own personal satisfaction (Fischer et al., 2019). The teacher plays a key role in the development of intrinsic motivation, which can be fostered by highlighting the usefulness and importance of the mathematical concepts being taught, aided by the use of real-life problem solving (Cronbach & Meehl, 1955; Ivowi, 2001; Middleton & Spanias, 1999). Lazarides et al. (2018) and Middleton and Spanias (1999) stress that motivation, while generally stable, is not simply the result of mathematical ability but can be changed through appropriate intervention strategies such as instructional practices, leading to students enjoying and valuing the subject.
Mathematics Behaviors, Intentions, and Subjective Norms

Students who engage in mathematics related behaviors, including taking part in mathematics competitions, mathematics clubs, chess, and computer programming, are more likely to enjoy and value the subject (OECD, 2013b). Mathematics related intentions indicate the likelihood of students pursuing mathematics or mathematics related disciplines in higher (tertiary, post-secondary) education (OECD, 2013b). Subjective norms are the beliefs a student holds about themselves that are formed based on perceived societal/peer pressure (Kul & Celik, 2018). Students' subjective norms have a direct impact on behavior: if the people who are important to the student see mathematics as important, it is likely that the student will also see the value and importance of mathematics. The Program for International Student Assessment (PISA) seeks to compare the value, equity, and effectiveness of schools across 70 participating countries. Information about the highest performing school systems may then allow governments to adapt their education policies and practices to improve national performance (OECD, 2013b). PISA is also concerned with equipping each student with the necessary skills to reach their potential, enter the workforce, and participate fully in society (OECD, 2013b). PISA examines the skills of 15 year olds in reading, mathematics, science, and problem solving, and the questions require students to apply their knowledge and understanding to both familiar and unfamiliar contexts (OECD, 2013b). PISA 2012 results show that Irish students are less likely to intend to have a career in mathematics or to intend to study mathematics compared to students across all OECD countries (Perkins et al., 2013). In addition, Irish students are among the third lowest group, across all OECD countries, in participation in mathematics related activities such as mathematics clubs and competitions (Perkins et al., 2013). Research has shown that participation in a mathematics club leads to improved performance in standardized tests in concepts, application, and computation (Sherman & Catapano, 2011). Results from PISA 2012 on attitude have revealed gender differences in attitude towards mathematics and engagement with mathematics (Perkins et al., 2013). The theory of reasoned action posits that if an individual has a positive attitude towards a suggested behavior, and if they perceive that people important to them would want them to perform the behavior, then this will increase their motivation to perform the behavior and, consequently, the likelihood of the behavior being performed (Fishbein & Ajzen, 1977). Irish students have shown higher levels of subjective norms relating to mathematics than students from other OECD countries (Perkins et al., 2013). Therefore, the opinions on mathematics of the significant others in the lives of the students have a particularly strong influence on the importance Irish students attribute to the subject. Fishbein and Ajzen (1977) have shown that high levels of subjective norms increase motivation towards a desired behavior and the prospect of that behavior being exhibited. Parents' interest in the school and parents' level of satisfaction in mathematics are linked to students' belief in their own ability (Gladstone et al., 2018; Surgenor et al., 2006).
Self-Beliefs

Student beliefs are a component of the affective domain (McLeod, 1992). They shape behavior and have significant consequences (Pitsia et al., 2017; Schoenfeld, 1992). Students' beliefs are evident in the classroom in the way students ask questions, answer questions, and approach and work on problems (Spangler, 1992). According to Schoenfeld (1992), students' beliefs about mathematics are generated through their experience in the mathematics classroom. These beliefs determine how students cope with uncertainty and manage contradiction and conflict (Fleener, 1996). According to Brassell et al. (1980), when the specific areas of self-concept and anxiety are taken in isolation, it can be seen that these areas impact performance. A moderate to strong correlation exists between mathematics self-beliefs and performance in mathematics (Perkins et al., 2013). This highlights the importance of promoting self-efficacy and self-concept, and of minimizing mathematical anxiety.

The self-concept and self-efficacy components have been shown to have a considerable effect on student performance, perseverance, motivation, and career choices (Bandura, 1977, 1982; Hackett & Betz, 1989; Kenney-Benson et al., 2006; Pajares & Miller, 1994; Stankov et al., 2014; Zimmerman, 2000). Mathematics self-efficacy, mathematics self-concept, and mathematics anxiety affect how students gauge their own performance, set learning objectives, and choose the learning strategies they employ (Pitsia et al., 2017; Thomson et al., 2013). Mathematics self-efficacy affects students' motivation and determination on a mathematics task and refers to students' self-belief in their ability to learn mathematics (Thomson et al., 2013). If students believe in their own ability, they will devote time to learning strategies that will help them achieve success in mathematics (Karakolidis et al., 2016; Pitsia et al., 2017; Thomson et al., 2013). Mathematical anxiety occurs when students feel intimidated by the mathematical task and feelings of helplessness emerge.

Self-efficacy relates to how students believe they will succeed in a particular task, and such beliefs are strong predictors of performance (Bandura, 1986; Pitsia et al., 2017). Mathematical self-efficacy is centered on student confidence in achieving success in mathematics given a specific task, while mathematical self-concept is a judgement of a student's success that is not confined to that particular task (Pajares & Miller, 1994). Therefore, a student with low mathematical self-concept may show high mathematical self-efficacy in a particular topic in mathematics. Blumenfeld et al. (1982) found that a positive self-concept of ability was linked to the frequency of positive academic feedback. In particular, students' efficacy beliefs are lower in classes with frequent teacher criticism (Parsons et al., 1982). Findings from an undergraduate study carried out by Pajares and Miller (1994) show that student mathematical self-efficacy is an important predictor of future performance. Indeed, they argue that self-efficacy assessments should be introduced early in the education of a student to inform interventions that may alter self-efficacy beliefs. Self-belief is a predictor of performance: "It should come as no surprise that what people believe they can do predicts what they can actually do and affects how they feel about themselves as they do that task" (Pajares & Miller, 1994, p. 200).
According to Bandura (1977), perceived self-efficacy affects participation in a situation or activity. Low perceived levels of self-efficacy lead to an individual avoiding the activity. Low levels of perceived self-efficacy also affect the coping efforts and persistence of the individual once the activity or situation has commenced (Bandura, 1977, 1982). According to Bandura (1977), a student who engages in a safe activity that they consider threatening gains corrective experiences that in turn strengthen their levels of self-efficacy. A student who does not engage, or engages only for a short time, holds on to their fears and low expectations. Self-efficacy is a powerful predictor of student motivation, learning, persistence, achievement, and career choices (Bandura, 1977, 1982; Hackett & Betz, 1989; Kenney-Benson et al., 2006; Pajares & Miller, 1994; Pitsia et al., 2017; Zimmerman, 2000), and the importance of self-belief in achievement is evident from its inclusion in international assessments such as PISA (Khine et al., 2020; Parker et al., 2014). Research has shown that self-efficacy plays a key role in successful problem solving and that students' beliefs about themselves are key predictors of performance (Kenney-Benson et al., 2006; Pajares & Miller, 1994).

Mathematics self-efficacy affects both performance and career choice (Hackett & Betz, 1989; Khasawneh et al., 2021; Pajares & Miller, 1994). In a US study of 280 university science students, Larson et al. (2015) found that university graduation status was significantly linked to mathematics/science self-efficacy from the first semester. Measures of mathematics aptitude and prior achievement contributed significantly less to graduation status than mathematics/science self-efficacy. In their longitudinal Australian study, Parker et al. (2014) reported that mathematics self-efficacy was a significant predictor of entry to university and that mathematics self-concept was a significant predictor of the undertaking of STEM disciplines in higher education. Mathematics self-efficacy levels also determine the choice of college majors (Czocher et al., 2020; Evans et al., 2020; Hackett & Betz, 1989). Understanding how self-efficacy beliefs are developed is important because, despite successful experiences and established mathematical skills, some students show an extreme lack of confidence in their ability (Pajares & Miller, 1994). According to Pajares and Miller (1994), many students may avoid math-related courses and careers because of inaccurate perceptions of their ability.

Given the far reaching effects of mathematics self-efficacy, Hackett and Betz (1989) urge teachers of mathematics to recognize the importance of self-efficacy and consequently "pay as much attention to self-evaluations of competence as to actual performance" (p. 271).

Self-efficacy affects persistence. Students who perceive that a task is beyond their capability are less likely to spend time and exert effort to complete the task (Bandura, 1982; Czocher et al., 2020; Pajares & Miller, 1994). Deficits in self-efficacy are a likely contributor to the low number of women pursuing STEM related disciplines and careers (O'Brien et al., 1999). Gender gaps in mathematics self-efficacy increase with age from early adolescence (Huang, 2013). Research shows ambiguous findings in relation to gender and self-efficacy (Kenney-Benson et al., 2006; Lapan et al., 1996; Pajares, 2005). In a longitudinal study in the US consisting of 101 university students, Lapan et al.
(1996) found that mathematics self-efficacy plays an important role "in the developmental process through which women either embrace or reject math/science college majors" (p. 289). They also found that females displayed lower levels of self-efficacy than males. However, the US study by Kenney-Benson et al. (2006) contradicts these findings: their longitudinal study of 518 students in 5th grade and again in 7th grade found no evidence of higher self-efficacy levels in males. On the other hand, in their meta-analysis featuring 187 studies, Huang (2013) found males exhibiting higher levels of self-efficacy than females.

According to Pajares (2005), there is no evidence of gender differences in self-efficacy in primary school, but female confidence in their ability is undermined as they enter secondary school, and the perception of mathematics as a male domain becomes established. Similarly, Fennema and Hart (1994) found that, by middle school, girls tend to exhibit less confidence in their mathematical ability. Huang (2013) found no significant gender differences in mathematics self-efficacy levels in groups of students aged 6 to 10 years of age and 11 to 14 years of age. However, in all groups over 14 years of age, there is a statistically significant difference between female and male levels of self-efficacy, with males exhibiting higher levels of mathematics self-efficacy. There may be an unwelcome price to pay for accurate self-evaluation, because realistic or underconfident evaluation may affect participation and persistence in mathematics. According to Pajares (1996), "accurate self-perceptions may enable students to more accurately assess their problem-solving strategies, but the danger of 'realistic' self-appraisals is that they may be purchased at the cost of lower optimism and lower levels of self-efficacy's primary functions: effort, persistence, and perseverance" (p. 340).

Anxiety affects affective engagement in mathematics (Kong et al., 2003). The mathematics anxiety construct is defined by Middleton and Spanias (1999) as a perception that mathematics is difficult and a tendency to steer clear of the subject. Students' willingness to engage in mathematics is weakened by experiencing failure in mathematics, by believing that failure is caused by lack of ability, and also by learned helplessness (Kong et al., 2003; Middleton & Spanias, 1999). According to Fennema and Sherman (1976), high levels of anxiety in mathematics are associated with low levels of student confidence in mathematics. In addition, high levels of anxiety are related to lower levels of performance in mathematics (Perkins et al., 2013). Students who displayed medium and low levels of anxiety about mathematics attained a mean score in PISA 2003 testing that was significantly higher than that of students with high levels of anxiety (Perkins et al., 2013). Much of the interest in studying anxiety in mathematics has established gender differences in mathematics anxiety, with female students displaying higher levels of mathematics anxiety than male students (Ashcraft & Faust, 1994; Baloglu & Kocak, 2006; Ho et al., 2000). Irish students, in particular females, have increased levels of anxiety relating to mathematics compared to OECD averages (Perkins et al., 2013).
Work Ethic, Self-Responsibility for Failure, & Openness to Problem Solving in Mathematics

Self-responsibility for failure in mathematics requires students to imagine that they had performed badly on a series of short mathematics tests and to consider the possible explanations. PISA's self-responsibility index is based on these responses: students with a high value blame themselves for poor performance, while students who record a low value on this index attribute failure to other people or factors. Typically, female students tend to attribute failure in mathematics to a lack of ability, but they are not inclined to attribute success to ability (Degol et al., 2018; Middleton & Spanias, 1999). Successful students are often those who attribute their successes to ability, whereas unsuccessful students are those who attribute failure to lack of ability (Middleton & Spanias, 1999). Consequently, students who believe that mathematics ability is not fixed and can be increased through effort experience more success in the subject and spend more time studying mathematics (Middleton & Spanias, 1999). The self-responsibility index shows that Irish students are more likely to blame external factors rather than themselves for failure in mathematics compared to other OECD countries (Perkins et al., 2013). However, there is a negative correlation between self-responsibility for failure and successful performance in mathematics (Perkins et al., 2013). Interestingly, it is the students who blame factors other than themselves for failure in mathematics who have higher results in mathematics than students who blame themselves (Perkins et al., 2013). Students' openness to problem solving and work ethic are two further constructs measured by PISA. The objective is to investigate how students approach problem solving and whether it is something they enjoy. The work ethic construct asks how hard they work in class, at their homework, and in preparation for examinations.

Research Questions

1. Using the questionnaire data, what can we learn about students' engagement and interest in mathematics, their motivation to study mathematics, their mathematics self-beliefs, and their perseverance in learning mathematics?
2. In which of the affective areas measured is there a difference between male and female students?

The research questions were pursued through an action plan based on multiple subsidiary questions related to each area, leading to consideration of the implications of the findings and recommendations for improving performance.

RESEARCH DESIGN

Sample

The authors adopted the design used in the PISA assessment (OECD, 2014) to acquire their sample of first year students attending school in the Republic of Ireland in 2015. The sample involved all 723 second level schools in receipt of funding from the Department of Education and Skills. All 723 schools were stratified by school type, namely secondary, vocational, community, and comprehensive. This division provided new subframes for sampling. Approximately 61,196 first year students were included in the sampling frame. The researcher selected participating schools using probability proportional to size systematic sampling.
The projected sample size for the main study was 382 students, assuming a 95% confidence level and a 5% margin of error. This preliminary sample accounted for 20 schools, and 11 of the 20 schools agreed to take part in the study. Nine replacement schools were then selected and, of these, four agreed to take part. One of these four replacement schools subsequently refused to participate, which left the research team with 14 participating schools. As the sampling frame was ordered by school type, this ensured that the replacement schools were of the same school type as the originally selected schools.

Within each of the 14 schools, there were multiple mixed-ability first year classes. A single first year mathematics class from each school was chosen using simple random sampling. All students from the selected class were included in the main sample. Due to absenteeism, 304 out of the 323 first year students who made up the sample completed the attitudinal questionnaire in May 2016. The students were given two hours on a regular school day to complete the survey instruments. The university associated with the authors granted ethical approval for the research (Code: 2015_09_01_S&E).

Student Questionnaire to Assess Attitude

The student questionnaire used in this study was extracted from the PISA 2012 student questionnaire and examined students' engagement with mathematics by assessing their interest in mathematics, motivation to study mathematics, and perseverance in learning mathematics. It assesses the students' self-beliefs and anxiety, both of which influence performance. It also looks at the learning strategies employed by students and how these strategies affect motivation, self-beliefs, and academic success. Students' characteristics, namely self-responsibility for failure in mathematics and openness to problem solving, are examined. Predicted future engagement with mathematics is assessed through questions focusing on mathematics-related behaviors, mathematics-related intentions, and students' subjective norms. The PISA 2012 student questionnaire draws on the theory of planned behavior model proposed by Ajzen (1991) to predict work ethic, intention, study behavior, and mathematics performance. It achieves this by using questions based on students' attitudes, subjective norms, and perceptions of control (OECD, 2013b). Attitude towards mathematics is examined through the PISA 2012 student questionnaire by analyzing students' interest in mathematics and students' willingness to engage in mathematics. Interest in mathematics is investigated by looking at present and future engagement in mathematics. Students' opinion of the usefulness of mathematics, students' interest in the mathematics they are studying in school, and their intention to embark on further study or pursue a career that will require mathematics are all examined through the student questionnaire (OECD, 2013b). Students' willingness to engage in mathematics is gauged through questions on the attitudes, emotions, and beliefs that predispose students to successfully employ the mathematics they have learned. If students are confident in their mathematics ability, they are more likely to engage in mathematical activity outside of school. Mathematics anxiety, enjoyment of mathematics, confidence, self-efficacy, self-concept, student experience in class, student experience in tests, and the opportunity to learn are all examined in the student questionnaire to assess students' willingness to engage in mathematics (OECD, 2013b).
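The projected figure of 382 is consistent with the standard sample-size calculation for a proportion at a 95% confidence level and a 5% margin of error, with a finite-population correction for the 61,196-student frame. The sketch below reproduces that arithmetic; the use of Cochran's formula with p = 0.5 (the most conservative choice) is our assumption, not a detail stated by the authors:

```python
import math

# Cochran's sample-size formula for a proportion, followed by a
# finite-population correction (assumed method; p = 0.5 maximizes n).
z = 1.96        # z-score for a 95% confidence level
e = 0.05        # 5% margin of error
p = 0.5         # assumed population proportion (most conservative)
N = 61_196      # first year students in the sampling frame

n0 = z**2 * p * (1 - p) / e**2     # infinite-population size: ~384.2
n = n0 / (1 + (n0 - 1) / N)        # finite-population correction
print(math.ceil(n))                # -> 382, matching the projected sample
```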
Students' engagement with mathematics is assessed under three domains: intrinsic motivation, extrinsic motivation, and perseverance. Students' beliefs about themselves were examined under mathematics self-efficacy, mathematics self-concept, and mathematics anxiety. The student questionnaire examines memorization, elaboration, and control strategies, as well as how students employ these strategies to process, integrate, and apply their mathematical knowledge. Self-responsibility for failure in mathematics was measured by asking students to imagine that they had performed badly on a series of short mathematics tests. The students were then asked to attribute the cause to a list of possible explanations. A self-responsibility index was constructed based on these responses: students with a high value blame themselves for poor performance, while students who record a low value on this index attribute failure to other people or factors. Students who engage in mathematics related behaviors are more likely to enjoy and value the subject (OECD, 2013b). Mathematics related intentions indicate the likelihood of students pursuing mathematics or mathematics related disciplines in higher education (OECD, 2013b). Students' subjective norms have a direct impact on behavior: if the people who are important to the student see mathematics as important, it is likely that the student will also see the value and importance of mathematics.

Limitations of Study

The researchers relied on the cooperating mathematics teacher in each of the sample schools to administer the test. It was not possible to ensure that all students had the same experience of the test environment, or that the cooperating teacher adhered rigorously to the instructions for the testing and the administration of the questionnaire. Furthermore, the authors were constrained by the main study in relation to the sample used and also by the willingness of schools to participate in the study. Even though the sample was randomly selected, a smaller sample size than originally desired may mean that the findings of the study are not generalizable to the entire student population.

Profile of Student Groups

A total of 304 first year students completed the student questionnaire. The mean age of participants was 13.47 years, which is the expected age for students at the end of their first year of secondary education in Ireland. The proportion of male students in the sample is higher than expected because four of the 14 schools selected for the sample were all-boys' schools (Table 1).
About Learning Mathematics

The mean and standard deviation for the responses on 62 questions in relation to learning mathematics were calculated. Low mean scores indicate a high level of student agreement with positively worded statements and a low level of student agreement with negatively worded statements. The lowest mean score of 1.39 is attributed to the statement "my parents believe it's important for me to study mathematics", with 98% of respondents agreeing or strongly agreeing with the statement. The second lowest mean score of 1.42 is attributed to the statement "if I put in enough effort I can succeed in mathematics", corresponding to 96% of participants agreeing or strongly agreeing with the statement. The highest mean score of 3.9 is attributed to the question "I participate in a mathematics club", with only 2% of respondents answering always or almost always. 64% of students in this sample plan to take additional mathematics courses after school finishes and 73% are willing to study harder in their mathematics class than is required. 69% plan on pursuing a career that involves a lot of mathematics and 69% plan on studying a course at higher education institutions that requires mathematics skills, indicating a positive attitude towards mathematics (Table 2).

The mean and standard deviation for the responses on 22 questions in relation to problem solving experiences were calculated. The lowest mean score of 2.20 is attributed to the statement "When confronted with a problem I give up easily", with 69% of respondents identifying with the response "not much like me" or "not at all like me". The highest mean score of 3.8 is attributed to the statement "I like to solve complex problems", with 13% of respondents responding "very much like me". Within the cluster on approaches to problem solving, the lowest mean score of 1.86 is attributed to the statement "I think about what might have caused the problem and what I can do to solve it", with 83% of respondents likely to take this approach, and the highest mean score of 2.82 is attributed to the statement "I read the manual", with 36% of respondents likely to take that approach (Table 3).

Motivation to Learn Mathematics

Students' motivation to learn mathematics was assessed using three clusters of items: 1. intrinsic motivation to learn mathematics (Table 4), 2. instrumental motivation to learn mathematics (Table 5), and 3. perseverance in learning mathematics (Table 6 and Table 7). Students in this sample show an eagerness to learn and show high levels of intrinsic motivation, instrumental motivation, and perseverance. Similar levels of intrinsic motivation exist for both male and female students in this study. Male students record higher perseverance levels than female students on all measures of perseverance in this study. Over one quarter of male students are likely to identify as giving up easily when confronted with a problem, compared to almost 40% of female students.

In summary, the intrinsic motivation, instrumental motivation, and perseverance results combine to show students who are motivated and determined to succeed in the subject. Almost all students (96.7%) believe that making an effort in mathematics is worth it because it will help in the work that they will do later on. Three in every four students are interested in the things they learn in mathematics, while 58.6% of students remain interested in the tasks that they start. Males showed higher levels of instrumental motivation, intrinsic motivation, and perseverance than females on every question.
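The summary statistics quoted throughout this section are simple means, sample standard deviations, and agreement percentages over Likert-type items. A minimal sketch of that computation is shown below; the response vector is invented, and the 4-point coding (1 = strongly agree through 4 = strongly disagree, so a low mean indicates high agreement, as described above) is our assumption:

```python
import numpy as np

# Hypothetical responses to one positively worded item, coded
# 1 = strongly agree, 2 = agree, 3 = disagree, 4 = strongly disagree.
# (Assumed coding: low mean = high agreement, as in the text.)
responses = np.array([1, 1, 2, 1, 2, 2, 1, 3, 1, 2])

mean = responses.mean()
sd = responses.std(ddof=1)                       # sample standard deviation
pct_agree = np.isin(responses, [1, 2]).mean() * 100

print(f"mean={mean:.2f}, sd={sd:.2f}, agree={pct_agree:.0f}%")
```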
Students' Learning Strategies

The student questionnaire examines memorization, elaboration, and control strategies, as well as how students employ these strategies to process, integrate, and apply their mathematical knowledge. Elaboration strategies may involve the integration of new and existing knowledge and the application of the new knowledge to other situations; they necessitate an understanding of the new material. Control strategies relate to how students manage how they learn: students need to self-assess what they have learned already and what they still need to learn. This study showed students favoring a traditional approach to learning mathematics. 42.1% of students study for a mathematics test by learning as much as they can off by heart. 45.7% try to figure out which ideas they still have not understood properly when studying mathematics. 61.4% study mathematics by working out exactly what they need to learn. 53.1% go through examples again and again in order to solve a mathematics problem.

Mathematics Self-Beliefs

Students' mathematics self-beliefs were assessed using three clusters of items: 1. mathematics self-efficacy, 2. mathematics self-concept, and 3. mathematics anxiety.

Table 4. Intrinsic motivation questionnaire results (participants who agree or strongly agree): I enjoy reading about mathematics, 54.3% (female = 50.0%, male = 57.1%); I am interested in the things I learn in mathematics (remaining values not recovered).

Table 5. Instrumental motivation questionnaire results (participants who agree or strongly agree): Making an effort in mathematics is worth it because it will help me in the work that I will do later on, 96.7% (female = 95.9%, male = 97.3%); Learning mathematics is worthwhile for me because it will improve my career prospects and chances, 95.0% (female = 91.7%, male = 97.3%); Mathematics is an important subject for me because I need it for what I want to study later on, 88.1% (female = 84.3%, male = 90.6%); I will learn many things in mathematics that will help me get a job, 95.0% (female = 93.3%, male = 96.2%).

Table 6. Perseverance questionnaire results-1 (participants who agree or strongly agree): I remain interested in the tasks that I start, 49.3% (female = 44.3%, male = 52.5%); When confronted with a problem I do more than is expected of me (remaining values not recovered).

Self-efficacy, self-concept, and anxiety affect engagement with mathematics in the short and long term. Overall, the self-concept and self-efficacy indicators show students as having positive self-belief. In this study, students are, in general, confident in applying their mathematical skills to real-life situations. 80.5% of students are confident or very confident in understanding graphs presented in newspapers, and 79.5% of students are confident or very confident in their ability to work out from a train timetable how long it would take to get from one place to another. Students were less confident in working out the petrol consumption of a car (49.7%) and finding the actual distance between two places on a map with a 1:10,000 scale (38.0%). The self-efficacy questionnaire results reveal considerable differences in male and female levels of confidence relating to specific tasks in mathematics. They also reveal that one third of students are not confident in applying a 30% discount to an item. Only 61.2% (female = 50.4%, male = 68.3%) identify as confident or very confident with the statement "calculating how many square meters of tile you need to cover a floor".
In terms of self-concept, two thirds of students believe they learn mathematics quickly, and 76.5% identify as getting good grades in mathematics. 57.9% of students in this sample believe that mathematics is one of their best subjects. The self-concept questionnaire results reveal considerable gender differences, with male students showing considerably higher levels of self-concept than their female counterparts. 35.5% of females, compared to only 19.2% of males, agree or strongly agree with the statement "I am just not good at mathematics".

However, the anxiety indicators show less favorable results. One fifth of students get very tense when doing their mathematics homework and one fifth of students feel helpless when doing a mathematics problem. Almost half of all the students in the sample worry that they will get poor grades in mathematics, and 45.5% of students worry that they will experience difficulties in mathematics class. All students, but particularly female students, experience high levels of anxiety in relation to mathematics, worrying about homework, solving problems, class material, and grades. 45.5% (female = 56.2%, male = 38.5%) agree or strongly agree with the statement "I often worry that it will be difficult for me in mathematics classes" and 49.0% (female = 59.5%, male = 42.0%) agree or strongly agree with the statement "I worry that I will get poor grades in mathematics".

Students' Work Ethic, Openness to Problem Solving, & Attributions of Failure in Mathematics

Students, at the end of the first year of secondary education, show a strong work ethic, indicating high levels of preparedness and engagement. 95.9% of respondents answered always or almost always to the statement "I listen in mathematics class". 93.0% (female = 87.2%, male = 96.7%) of students agree or strongly agree with the statement "I have my homework finished in time for mathematics class". 89.3% (female = 89.7%, male = 89.0%) of students agree or strongly agree with the statement "I work hard on my mathematics homework".

Less than one third of students identify as enjoying solving complex problems, which are problems requiring the application of knowledge, often presented in an unfamiliar context. Also in relation to problem solving, approximately half of all the students in this sample identify as being able to handle a lot of information, being quick to understand things, seeking explanations for things, and easily linking facts together. 47.1% of students believe they can easily link facts together, and the lowest score of 32.0% is the percentage of students who like to solve complex problems. Male students are more open to problem solving than female students: 11.1% more males than females like to solve complex problems, 11.7% more males than females believe they can easily link facts together, and 10.7% more males than females identify as being quick to understand things. 49.5% of students are likely or very likely to attribute test failure to the course material being too hard. 44.6% of students are likely or very likely to attribute failure in tests to themselves ("I'm not very good at solving mathematics problems"). We know from the research literature that higher achievement results are linked to students who blame factors other than themselves for their failure in mathematics (Perkins et al., 2013). Similar to the findings of Middleton and Spanias (1999), this study shows that female students are more likely to attribute failure to lack of ability.
Students' Mathematics Related Behaviors, Students' Intentions to Study Mathematics Further, & Students' Subjective Norms

Three clusters of items were considered for this section: 1. mathematics-related behaviors, 2. mathematics-related intentions, and 3. subjective norms.

In this study, 10.1% (female = 11.2%, male = 9.4%) of students responded always, almost always, or often to the statement "I talk about mathematics problems with my friends". However, almost three quarters of the students in the survey identify as willing to study harder in their mathematics classes than is required. Students in this sample are unlikely to exhibit mathematics related behaviors. Just 2.7% participate in a mathematics club. Only 9.6% do mathematics as an extracurricular activity. The mathematics related activity with the highest score, 31.0%, was students helping their friends with mathematics; 11.2% more male students than female students identified with this statement.

Students' desire to study mathematics in the future is high at the end of their first year. The majority of students in this sample intend to pursue mathematics after school, either in college or by taking additional mathematics classes, and throughout their career. 68.9% (female = 66.0%, male = 70.6%) of students agree or strongly agree with the statement "I am planning on pursuing a career that involves a lot of mathematics". 63.7% (female = 65.5%, male = 62.6%) of students agree or strongly agree with the statement "I plan to take additional mathematics courses after school finishes".

Students believe their parents and friends value the importance of mathematics, and 80% believe their friends work hard at mathematics, eager to succeed. 93.4% (female = 89.9%, male = 95.6%) of students agree or strongly agree with the statement "my parents believe that mathematics is important for my future career". Students in this study recorded extremely high levels on each of the questions relating to subjective norms. 98.4% of students agree or strongly agree that their parents believe it's important for them to study mathematics, 93.4% of students agree or strongly agree that their parents believe mathematics to be important for their future career, and three quarters of students surveyed believe that their parents like mathematics. These students also judge that their friends do well in mathematics (87.5%) and work hard in mathematics (77.9%).

Gender Differences in Questionnaire Responses

A difference of more than 10% between male and female responses occurred in the following questions. A difference of -20% means that female students in the sample recorded a value 20% lower than male students in the sample, and a difference of +20% means that female students in the sample recorded a value 20% higher than male students in the sample for that particular question (Table 8).
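The Table 8 differences follow the convention female percentage minus male percentage. A minimal sketch of that computation, using the tile-area self-efficacy item reported earlier (the dictionary structure is our own illustration):

```python
# Gender difference convention from the text: D = female% - male%,
# so a negative value means females recorded a lower value than males.
items = {
    "calculating how many square meters of tile you need to cover a floor":
        {"female": 50.4, "male": 68.3},
}

for statement, pct in items.items():
    d = pct["female"] - pct["male"]
    print(f"{d:+.1f}%  {statement}")   # -> -17.9%, as reported in Table 8
```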
DISCUSSION

This study provides an insight into students' disposition towards mathematics for 304 students at the end of their first year of secondary education. In general, the results point to students who have a positive disposition towards the subject and who are motivated to succeed in mathematics. Research has shown that a positive emotional disposition towards the subject is linked to success in the subject, is a key school outcome, and affects the career intentions of students (Haladyna et al., 1983). Overall, the self-concept and self-efficacy indicators show students as having positive self-belief, a valuable finding as self-efficacy is an important predictor of future performance (Pajares & Miller, 1994).

The majority of students in this study plan on pursuing a career involving mathematics. They are eager to learn mathematics and show high levels of preparedness and engagement. This finding is in contrast to those of PISA 2012, which showed that Irish students are less likely to intend to have a career in mathematics or to intend to study mathematics compared to students across all OECD countries (Perkins et al., 2013). The majority of students believe their friends and parents value mathematics, which is an important finding, as parents' interest in the school and parents' level of satisfaction in mathematics are linked to students' belief in their own ability (Surgenor et al., 2006).

Almost three quarters of the students in the survey identify as willing to study harder in their mathematics classes than is required; however, they are unlikely to exhibit mathematics related behaviors. Just 2.7% participate in a mathematics club. This is disappointing given that research has shown that participation in a mathematics club leads to improved performance in standardized tests in concepts, application, and computation (Sherman & Catapano, 2011). It echoes the findings of PISA in 2012, which showed Irish students are among the third lowest group across all OECD countries in participation in mathematics related activities such as clubs and competitions (Perkins et al., 2013).
Students' approaches to studying mathematics are less than satisfactory. The students in this sample indicate that they study mathematics by working out exactly what they need to learn (61%); learning as much as they can off by heart (42%); trying to figure out which ideas they still have not understood properly (46%); and going through examples again and again in order to remember the method for solving a mathematics problem (53%). Memorization strategies are important in retrieving information but are not generally conducive to an in-depth understanding of the information and an ability to apply the mathematics to other contexts (O'Brien, 1999). According to O'Brien (1999), memorization strategies are essentially 'parrot math' and have been shown to be detrimental to student performance in mathematics. An activity-based approach is favored, which would involve skills such as problem solving, generalizing, and hypothesizing. Elaboration strategies may involve the integration of new and existing knowledge and the application of the new knowledge to other situations; they necessitate an understanding of the new material. Control strategies relate to how students manage how they learn: students need to self-assess what they have learned already and what they still need to learn. The preferences for studying mathematics uncovered here are also not consistent with the ethos of the relatively recently introduced (2010) secondary school curriculum entitled "Project Maths", which endorses teaching and learning for understanding and the use of problem solving strategies.

Gender differences are evident in perseverance, mathematics self-efficacy, and mathematics anxiety. Male students show higher levels of self-efficacy and perseverance and lower levels of mathematics anxiety compared to their female counterparts. The greatest gender differences in responses related to the student self-belief section of the questionnaire, with females recording significantly higher levels of anxiety and lower levels of self-efficacy and self-concept. 17.5% more females than males worry that they will get poor grades in mathematics, 17.7% more females than males worry that they will experience difficulties in mathematics class, and 15.8% more males identify as being able to learn mathematics quickly. These findings are mirrored in the PISA 2012 results, which found anxiety levels of Irish students to be higher than the OECD average (OECD, 2013b).

Table 8. Gender differences in questionnaire responses (D = female % minus male %)
(statement not recovered): -20.2% (SE)
Finding the actual distance between two places on a map with a 1:10,000 scale: -19.2% (SE)
Calculating how many square meters of tile you need to cover a floor: -17.9% (SE)
Working out from a train timetable how long it would take to get from one place to another: (value not recovered) (SE)
I learn mathematics quickly: -15.8% (SC)
Calculating the rate of petrol consumption of a car: -15.2% (SE)
When confronted with a problem I give up easily: -13.1% (P)
I have always believed that mathematics is one of my best subjects: -12.6% (SC)
I can easily link facts together: -11.7% (OPS)
I help my friends with mathematics: -11.2% (MSB)
I like to solve complex problems: -11.1% (OPS)
I am quick to understand things: -10.7% (OPS)
I feel helpless when doing a mathematics problem: +13.9% (A)
I get very nervous doing mathematics problems: +13.9% (A)
I am just not good at mathematics: +16.3% (SC)
I worry that I will get poor grades in mathematics: +17.5% (A)
I often worry that mathematics classes will be difficult for me: +17.7% (A)
Note. D: Difference; QT: Question type; A: Anxiety; MSB: Mathematics students' behaviors; OPS: Openness to problem solving; P: Perseverance; SC: Self-concept; SE: Self-efficacy.

Research has shown that self-efficacy is strongly associated with performance, and students with high levels of self-efficacy record significantly higher levels of achievement than those who report low levels of self-efficacy (Close & Shiel, 2009). There is a significant difference in performance between students reporting high levels of anxiety and students reporting low levels of anxiety (Close & Shiel, 2009). Students who report high levels of anxiety are more likely to record low achievement, and girls who completed PISA in 2012 showed significantly higher levels of anxiety than their male counterparts (Perkins & Shiel, 2016). The PISA 2012 results on anxiety are mirrored by this study. High levels of anxiety in mathematics have been shown to lead to avoidance of mathematics (Ashcraft, 2002) and to compromise the development of higher level mathematical skills (Maloney et al., 2011). Close and Shiel (2009) suggest that stronger performance by male students is related to stronger levels of self-efficacy and lower levels of anxiety. In addition, PISA 2012 found a moderate to strong correlation between mathematics self-beliefs and performance (Perkins et al., 2013). Fishbein and Ajzen (1977) have shown that high levels of subjective norms increase motivation towards a desired behavior and the prospect of that behavior being exhibited. Given the positive emotional disposition students have displayed towards mathematics in this study, their openness to learning suggests that approaches to learning other than memorization may be embraced by students. This study further highlights a gender disparity in mathematics self-beliefs, particularly in relation to self-efficacy and anxiety, which has implications for future female participation in STEM disciplines.

Finally, it is important to consider the findings on student interest and willingness to engage with mathematics within the context of the larger study on academic performance between the end of primary school and the end of the first year of secondary school. The larger study shows student failure to successfully negotiate the transition in mathematics. On average, out of the 119 questions on the test, students scored 8 marks less than they had scored a year previously, despite covering a very similar curriculum in sixth class and first year. The losses incurred in mathematics in first year found in this study were far more pronounced than in both national and international comparable studies (Cox & Kennedy, 2008; Galton et al., 1999; Smyth et al., 2004). This is a critical finding because, despite student performance worsening, this study has shown that students are still interested in mathematics and eager to succeed in the subject. Given the positive emotional disposition students have displayed towards mathematics, this research establishes that first year in post-primary school offers a huge opportunity for educators to develop students' mathematical skills, understanding, interest, and appreciation of the subject. The questionnaire results show that students are engaged and motivated to learn mathematics, so it is vital that students are not allowed to regress and ultimately disengage from the subject. More directed attention to students' academic performance in mathematics in the first year of transition has the potential to secure bigger gains based on better early performance in post-primary education.
CONCLUSIONS AND RECOMMENDATIONS

The role of the teacher in the learning of mathematics is of paramount importance (Attard, 2010; Feldlaufer et al., 1988; Galton et al., 2000; Midgley et al., 1989), and the transition from primary education to post-primary education in particular is a critical time for our young people (Akos et al., 2007; Anderson et al., 2000; Eccles et al., 1991; Smyth et al., 2004). Academic self-image becomes more negative over the transition to post-primary school for both male and female students; moreover, gender differences broaden between the ages of nine and thirteen in relation to academic self-image and anxiety measures. Smyth (2016) found poor self-image arising in students who had difficulty negotiating the transition to post-primary school and reported that student self-image is significantly associated with the relationships students have with their teachers. Students who experience praise and positive feedback tend to have a positive self-image; similarly, students who are chastised by their teachers tend to have a poorer self-image. Thus it is vital that all teachers are made aware of the difficulties students experience in making the transition and of the need for increased praise and positive feedback in their interactions with first year students. This research echoes the recommendations of Smyth (2017), who advised that the promotion of positive teacher-student interactions form part of school development plans and of initial and continuing teacher training. While teacher training should be designed to improve outcomes for all students, there needs to be a special focus on recognizing the problems that female students specifically face at this significant point in their education. The transition is a pivotal point for female students, and it is important that the stereotype of mathematics as a male domain is challenged while students are making the transition. Further research into this area, with a larger sample over an extended period of time, is warranted.

Table 1. Gender of students in sample
Table 2. Student responses about learning mathematics summary statistics
Table 3. Student responses about problem solving experiences summary statistics
Table 8. Gender differences in questionnaire responses
10,708.6
2022-06-25T00:00:00.000
[ "Mathematics", "Education" ]
Analysis of an IPMSM Hybrid Magnetic Equivalent Circuit

The most common type of electric vehicle traction motor is the interior permanent magnet synchronous motor (IPMSM). For IPMSM designs, engineers make use of the magnetic equivalent circuit method, which is a lumped constant circuit method, and the finite element method, which is a distributed constant circuit method. The magnetic equivalent circuit method is useful for simple design through fast and intuitive parameters, but it cannot derive the distribution of the magnetic field. The finite element method can derive an accurate magnetic field distribution, but it takes a long time and is difficult to use for analysis of intuitive design parameters. In this study, the magnetic equivalent circuit method and Carter's coefficient were combined for rotor structure design and for accurate identification and analysis of circuit constants. In this paper, this design method is called the hybrid magnetic equivalent circuit method. Intuitive design parameters are derived through this hybrid magnetic equivalent circuit method. The air gap flux density distribution according to rotor shape, the no-load induced voltage, and the cogging torque were analyzed and compared to results of the finite element method. The proposed method was found to achieve a short solving time and acceptably accurate results.

Author Contributions: Conceptualization, I.-S.S.; Data curation, I.-S.S. and B.-W.J.; Formal analysis, I.-S.S.; Investigation, B.-W.J.; Supervision, K.-C.K.; Writing—original draft, B.-W.J.; Writing—review & editing, K.-C.K.

Introduction

Recently, the demand for electric vehicles (EVs) has increased worldwide in response to eco-friendly policies and stricter emission regulations. The traction motor for most EVs is the interior permanent magnet synchronous motor (IPMSM) [1-4]. The IPMSM field does not require an excitation current because a permanent magnet is used. Thus, the rotor loss is lower than that of other motors and high-efficiency operation is possible. In addition, the output density is higher than that of other motors because magnetic torque, produced by the aligning force between the field and the armature, occurs at the same time as the reluctance torque produced by the salient polarity of the rotor. Furthermore, a wide operating range is made possible by its field-weakening operation characteristics. Because of these characteristics, an IPMSM has multiple ratings and is suitable as a traction motor for EVs, which require maximum distance traveled on one battery charge (or range). At the same speed, high torque in the motor indicates high output; thus, torque is a parameter that determines the EV's range. On the other hand, torque ripple is a source of electrical noise and vibration and therefore is a factor that affects driver and passenger riding comfort [5-8]. IPMSMs have two operation areas: a constant-torque range and a constant-output operation range. The change in these ranges is quite diverse depending on the motor design method [9-11]. To design EV IPMSMs, engineers mainly use the magnetic equivalent circuit method, which is a lumped constant circuit method, and the finite element method (FEM), which is a distributed constant circuit method. In the magnetic equivalent circuit, the maximum speed, demagnetization details, and maximum torque are simply determined by obtaining the flux per pole, the d-axis inductance, and the q-axis inductance.
After the simple design, the final design is made in consideration of torque ripple, cogging torque, and voltage containing harmonic components with respect to the spatial and temporal magnetic field changes, through the FEM, the spatial harmonic method, and other methods. The magnetic equivalent circuit method is useful for simple design through fast and intuitive parameters but cannot derive the magnetic field distribution. The FEM can derive an accurate magnetic field distribution, but it takes a long time, and it is difficult to analyze intuitive design parameters. Many studies have analyzed the magnetic field distribution in the air gap using an analytical circuit model to predict accurate results [12-14]. To provide intuitive solutions in extremely short times, field quantities such as leakage flux paths and saturation, which are important in iron loss calculations, are evaluated using analytical circuit methods [15-20]. In addition, this analytical method can be applied to the analysis of many devices, such as transformers, DC/DC converter-coupled inductors, switched reluctance motors, and interior permanent magnet motors, where saturation and losses are important [21]. The FEM is used to verify the theory and the expertise of the designer and must be used properly. In other words, the advantages and limitations of the existing design techniques are clear [22-25]. In this study, the magnetic equivalent circuit method and Carter's coefficient are combined for rotor structure design. Carter's coefficient helps to obtain accurate results based on defined parameters. The proposed method will be called the hybrid magnetic equivalent circuit (HMEC). It is a design method that combines the spatial harmonic method and the magnetic circuit method. It can provide accurate solution results in a shorter time than the FEM, and intuitive design parameters are derived through the proposed HMEC method. Additionally, the air gap flux density distribution according to rotor shape, the no-load induced voltage, and the cogging torque are presented. In the conclusion, the proposed HMEC method is compared with the FEM, and it is verified that the obtained results are appropriate.

Calculation of the Slotless Air Gap Flux Density Distribution by Permanent Magnet

Figure 1 shows three regions of the slotless flux line distribution produced by the permanent magnet. $\theta_p$, $\theta_a$, and $\theta_b$ represent, respectively, the pole arc angle, the angle between the center of the q-axis and the starting point of the bridge, and the angle between the bridge start point and end point. The slotless flux line distribution by the permanent magnet has a constant value in the region of the pole arc, the magnetic field is close to zero in the region of the q-axis magnetic path, and it falls at a constant rate in the region of a bridge. Therefore, if it is treated as a line function that falls at a constant rate in the barrier region, the function of the air gap flux density by the permanent magnet is derived as Equation (1) and shown in Figure 2 (the slotless air gap flux density distribution). If the Fourier series is applied to the function of Equation (1), Equations (2) and (3) are derived, where $B_g$, $n$, $\alpha$, $B_{gn}$ and $N_p$ are, respectively, the average flux density of the air gap, the harmonic order, the angle of rotor position, the Fourier coefficient of the flux density, and the number of poles. The average air gap flux density is derived through the magnetic equivalent circuit.
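To make the shape of Equations (1)-(3) concrete, the sketch below numerically evaluates the Fourier coefficients of the trapezoidal slotless air-gap flux density described above: flat at $B_g$ over the pole arc, falling linearly across the barrier span, and zero along the q-axis path. All numeric values (flux density, angles) are our own assumptions for illustration, not the paper's parameters:

```python
import numpy as np

# Illustrative trapezoidal slotless air-gap flux density over one pole
# pair (electrical angle 0..2*pi): flat top over the pole arc, linear
# fall across the barrier span, zero on the q-axis path.
Bg = 0.9                  # flat-top flux density, T (assumed)
theta_p = 0.7 * np.pi     # pole arc span, electrical rad (assumed)
theta_b = 0.1 * np.pi     # linear-fall (barrier) span each side (assumed)

def b_gap(alpha):
    """Air-gap flux density at electrical angle alpha (vectorized)."""
    a = np.mod(alpha, 2 * np.pi)
    sign = np.where(a < np.pi, 1.0, -1.0)      # N pole, then S pole
    x = np.where(a < np.pi, a, a - np.pi)      # angle within one pole
    d = np.abs(x - np.pi / 2)                  # distance from pole center
    half = theta_p / 2
    mag = np.where(d <= half, 1.0,
                   np.where(d <= half + theta_b,
                            1.0 - (d - half) / theta_b, 0.0))
    return sign * Bg * mag

# Numerical Fourier sine coefficients B_gn; even harmonics vanish by
# half-wave symmetry, so only odd n are reported.
alpha = np.linspace(0.0, 2 * np.pi, 20000, endpoint=False)
dalpha = alpha[1] - alpha[0]
for n in (1, 3, 5, 7):
    Bgn = (b_gap(alpha) * np.sin(n * alpha)).sum() * dalpha / np.pi
    print(f"n = {n}: B_gn = {Bgn:+.3f} T")
```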
Figure 3 shows the magnetic circuit of a slotless permanent magnet. The reluctance of the rotor core is quite small compared with that of the air gap, barrier, and permanent magnet, so it is neglected. Because the reluctance of the stator core is connected in series with the reluctance of the air gap, as shown in Equation (4), it is treated as K_r. As shown in Equation (5), the barrier leakage of the permanent magnet is treated as K_l. By applying the flux distributive law, the flux equation of the air gap is derived as Equation (6). If Equation (6) is divided by the area of the air gap, the average air gap flux density of Equation (7) is derived, where R_m, R_bl, R_b, R_g, R_sy, φ_g and φ_r are, respectively, the reluctance of the permanent magnet, reluctance of a barrier, reluctance of a bridge, reluctance of the air gap, reluctance of the stator yoke, flux of the air gap, and residual flux.
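Since Equations (4)-(7) are not reproduced in the extracted text, the following is a hedged reconstruction of the standard flux-divider circuit they describe: a magnet source with internal reluctance feeding bridge/barrier leakage paths in parallel with the series air-gap and stator-yoke branch. All numbers are placeholders.

```python
# Slotless magnetic equivalent circuit as a flux divider. The magnet is a
# Norton-style source (residual flux phi_r behind magnet reluctance R_m);
# bridge and barrier leakage sit in parallel with the gap + stator-yoke branch.
def parallel(*r):
    return 1.0 / sum(1.0 / x for x in r)

phi_r = 1.6e-3   # residual flux, Wb (assumed)
R_m   = 8.0e5    # magnet reluctance, A/Wb (assumed)
R_b   = 5.0e6    # bridge reluctance (saturated), A/Wb (assumed)
R_bl  = 7.0e6    # barrier reluctance, A/Wb (assumed)
R_g   = 6.0e5    # air-gap reluctance, A/Wb (assumed)
R_sy  = 6.0e4    # stator-yoke reluctance, A/Wb (assumed)
A_g   = 2.4e-3   # air-gap area per pole, m^2 (assumed)

K_r = (R_g + R_sy) / R_g          # stator core folded into the gap branch (cf. Eq. (4))
R_main = K_r * R_g                # series gap + yoke path
R_leak = parallel(R_b, R_bl)      # bridge/barrier leakage (cf. Eq. (5))

F = phi_r * R_m                                  # equivalent MMF source
phi_tot = F / (R_m + parallel(R_leak, R_main))   # total flux leaving the magnet
K_l = R_leak / (R_leak + R_main)                 # leakage (divider) factor
phi_g = K_l * phi_tot                            # flux distributive law (cf. Eq. (6))
B_g = phi_g / A_g                                # average gap density (cf. Eq. (7))
print(f"K_r = {K_r:.3f}, K_l = {K_l:.3f}, B_g = {B_g:.3f} T")
```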
Calculation of the Air Gap Flux Density Distribution by Permanent Magnet with Slot The structure of a motor with slots, as shown in Figure 4, is common. Figure 4 shows the flux line distribution by permanent magnets with slots. When there is a slot, the flux can be divided into two regions. One is the flux region that goes directly from the pole to the teeth, and the other is the flux region that goes from the pole to the teeth through the slot opening. The magnetic field is constant in the teeth region, but as can be seen from the flux in the slot-opening region, the length of the air gap increases toward the center of the slot opening and reaches its maximum length at the center. The distribution of the slot-opening region is derived by applying the air gap length function of Carter's coefficient. In addition, the total average air gap flux density, as shown in Equation (8), is derived through a magnetic equivalent circuit, as shown in Figure 5, where R_rt, R_ry, P_gk, P_total, A_m, A_gk, A_gtotal, φ_gtotal and B_gtotal are, respectively, the reluctance of the rotor teeth, reluctance of the rotor yoke, permeance of the air gap corresponding to slot k, total permeance, area of the permanent magnet, area of the air gap corresponding to slot k, area of the total air gap, total flux of the air gap, and total average air gap flux density. Figure 6 shows the air gap flux line distribution per slot when there are slots. The average air gap flux density derived in Equation (12) is the total average of both the slot-opening and the teeth regions. A transformation of the equation is necessary to obtain an accurate distribution. First, the air gap flux density in the teeth region, as in Equation (14), can be obtained by multiplying Equations (12) and (13), which is the flux distributive law of the reluctance difference between the teeth and the slot-opening regions. Finally, the total air gap flux density in the teeth region is derived as Equation (15), where C_tk, P_gtk, 2P_gso, A_gtk, A_gttotal, φ_gttotal and B_gttotal are, respectively, a correction factor for the teeth with respect to the air gap flux density, permeance of the air gap corresponding to the stator teeth of slot k, permeance of the air gap corresponding to the slot opening of slot k, area of the stator teeth, total area of the stator teeth, total flux of the stator teeth, and total flux density of the stator teeth.
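The flux-divider step behind Equations (12)-(15) can be sketched numerically: the flux crossing one slot pitch splits between the tooth-face permeance and the two slot-opening halves, and rescaling the tooth share by the tooth area recovers the higher teeth-region density. The geometry and the fringe-path model below are assumptions, not the paper's values.

```python
import math

MU0 = 4e-7 * math.pi

g       = 0.7e-3   # mechanical air-gap length, m (assumed)
L_stk   = 60e-3    # stack length, m (assumed)
w_tooth = 7.0e-3   # tooth-face width at the gap, m (assumed)
w_so    = 2.0e-3   # slot-opening width, m (assumed)

# Tooth-face gap permeance, and one slot-opening half with a quarter-circle
# fringing path lengthening the effective gap (a common approximation).
P_gtk = MU0 * w_tooth * L_stk / g
P_gso = MU0 * (w_so / 2) * L_stk / (g + math.pi * w_so / 8)

B_gtotal = 0.78                       # average density from Eq. (8), T (assumed)
A_pitch  = (w_tooth + w_so) * L_stk   # gap area per slot pitch
A_gtk    = w_tooth * L_stk            # tooth-face area

phi_pitch = B_gtotal * A_pitch        # flux crossing one slot pitch
C_tk = P_gtk / (P_gtk + 2 * P_gso)    # tooth share by the flux-divider law
B_teeth = phi_pitch * C_tk / A_gtk    # teeth-region density (cf. Eq. (15))
print(f"C_tk = {C_tk:.3f}, B_teeth = {B_teeth:.3f} T")
```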
The function of the air gap length of the slot-opening region is expressed as Equation (16). In order to derive the function of the slot factor, the magnetic circuit equation obtained by functionalizing the air gap length is divided by the magnetic circuit equation before functionalization. If these equations are rearranged, the function equation of the slot factor is derived as Equation (17). However, this slot factor function does not reflect the saturation of the flux density of the shoe part, as shown in Figure 7. Therefore, by applying the slot-opening range of Hanselman, the slot opening is changed to 0.7, which is larger than the actual 0.5, and the function is changed to Equation (18) [14]. The final slot factor function is derived as Equations (19) and (20). Applying this expression to the Fourier series yields Equations (21)-(23). Figure 6b shows the slot factor derived through the HMEC and FEM, where G_sl(θ), G_t, θ_sp, θ_so, θ_tr, r_si, G_min and N_s are, respectively, the function of the slot factor, stator teeth pitch, slot pitch, slot-opening pitch, stator teeth pitch considering saturation of the shoe, inner radius of the stator, minimum value of the slot factor, and the number of slots.
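As a hedged illustration of Equations (16)-(20): inside the slot opening the effective gap length grows toward the opening's centre, and the slot factor is the ratio of the nominal gap to that effective length. The circular-arc fringe model and all dimensions below are assumptions, and the 0.5 to 0.7 values are interpreted here as the modelled slot-opening fraction of the slot pitch.

```python
import numpy as np

g        = 0.7e-3            # mechanical air-gap length, m (assumed)
r_si     = 55e-3             # stator inner radius, m (assumed)
N_s      = 36                # number of slots (assumed)
theta_sp = 2 * np.pi / N_s   # slot pitch
theta_so = 0.7 * theta_sp    # widened slot-opening span per Hanselman (was 0.5)

def gap_length(theta):
    """Effective gap length over one slot pitch, theta measured from the
    slot-opening centre; a quarter-circle fringing path is assumed."""
    d = np.mod(theta + theta_sp / 2, theta_sp) - theta_sp / 2
    inside = np.abs(d) < theta_so / 2
    depth = (theta_so / 2 - np.abs(d)) * r_si   # distance from the tooth edge
    return np.where(inside, g + (np.pi / 2) * depth, g)

theta = np.linspace(-theta_sp / 2, theta_sp / 2, 9)
G_sl = g / gap_length(theta)   # slot factor: 1 over the teeth, G_min at the centre
print(np.round(G_sl, 3))
```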
Calculation Process of Flux Linkage, No-Load-Induced Voltage, and Cogging Torque Figure 9 shows the flux linkage derived through the HMEC and the FEM. Equation (25) represents the flux linkage of one coil. In the case of this model, the number of slots per phase per pole is three. Therefore, the flux linkage in one phase can be obtained as in Equation (26). By differentiating Equation (26), the no-load-induced voltage can be derived as Equation (27). Figure 10 shows a comparison of the no-load-induced voltage derived from the HMEC and the FEM. The cogging torque (Equation (29)) is derived by differentiating the air gap energy of Equation (28). Figure 11 shows a comparison of the cogging torque waveform derived from the HMEC and FEM, where N_coil, L_stk, λ_coil and λ_ph are, respectively, the turns of one coil, stack length, flux linkage of one coil, and flux linkage of one phase. Calculation of Air Gap Flux Density Distribution by Permanent Magnet V-Type Rotor Figure 12 shows a magnetic equivalent circuit of a V-type rotor. As shown in Figure 12, unlike the bar shape, the V shape has an inner bridge and barrier part on the central axis of the pole arc. In addition, by adjusting the angle of the magnet, the area of the permanent magnet can be increased compared with the bar shape. Figure 13 shows a comparison of the flux density distribution of the air gap for a bar-type and a V-type rotor with slots. Because the area of the air gap and the permanent magnet is the same, as shown in Figure 13, the difference between the two rotors is the difference in leakage flux according to the presence or absence of the inner bridge and barrier. Therefore, if the area of the permanent magnet is not increased in the V shape, the flux per pole drops, as shown in Equations (30) and (31), because of the inner leakage flux. In addition, as shown in Equation (32), as the leakage through the bridge and barrier changes, the flux per pole decreases and the cogging torque changes, as shown in Figure 14. Figure 14 shows a comparison of cogging torque for a bar-type and a V-type rotor with slots. Table 1 shows a comparison of the magnetic field distribution parameters of the bar-type and V-type rotors when there is a slot, where R_il, R_ol, R_ib and R_ob are, respectively, the reluctance of the inner and outer barrier, and of the inner and outer bridge. Calculation of Flux Density Distribution of the Air Gap by Permanent Magnet Double-Layer Type Rotor Figure 15 shows the air gap flux density distribution of a slotless double-layer bar-type rotor. In the double-layer bar-type rotor, the air gap region corresponding to the permanent magnet should be divided into two parts. Equation (33) shows the magnetic flux density distribution by the permanent magnet. Applying this expression to the Fourier series yields Equation (34). Although the flux per pole drops when the same permanent magnet is used, the harmonic content is reduced because the air gap flux density can be made closer to sinusoidal. The double-layer rotor can be classified into three types. As shown in Figure 15, one is a double-layer bar (DB) structure, the second is a double-layer V (DV) structure, and the last is a delta (mixed bar and V-type) structure. The number of permanent magnets used, the area of the air gap, and the area of the permanent magnet are the same. Therefore, the amount of flux per pole in the two regions varies depending on the presence of the inner barrier and bridge. By applying these conditions, the flux density of the air gap for each two-layer shape in Table 2 was calculated. Figure 16 shows the magnetic equivalent circuit of a permanent magnet when there is no slot for each two-layer rotor.
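The qualitative message of Equations (30)-(32), namely that extra inner bridge/barrier paths divert magnet flux away from the gap, can be checked with the same flux-divider circuit used above. The reluctance values are placeholders; only the bar-versus-V comparison is the point.

```python
def parallel(*r):
    return 1.0 / sum(1.0 / x for x in r)

phi_r, R_m = 1.6e-3, 8.0e5    # magnet source, Wb and A/Wb (assumed)
R_gap      = 9.0e5            # gap + stator branch, A/Wb (assumed)
R_ob, R_ol = 5.0e6, 7.0e6     # outer bridge / outer barrier (both rotors)
R_ib, R_il = 5.0e6, 7.0e6     # inner bridge / inner barrier (V-type only)

def gap_flux(leak_paths):
    """Per-pole gap flux for a given set of parallel leakage reluctances."""
    R_leak = parallel(*leak_paths)
    phi_tot = (phi_r * R_m) / (R_m + parallel(R_leak, R_gap))
    return phi_tot * R_leak / (R_leak + R_gap)

phi_bar = gap_flux([R_ob, R_ol])                  # bar type: outer leakage only
phi_v   = gap_flux([R_ob, R_ol, R_ib, R_il])      # V type: adds inner leakage
print(f"bar: {phi_bar*1e3:.3f} mWb, V: {phi_v*1e3:.3f} mWb, "
      f"change: {100*(phi_v/phi_bar - 1):+.1f}%")
```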
Table 2 shows the equations of the parameters of the air gap flux density distribution according to double-layer rotor type. Because the reluctance of the rotor and stator core was ignored and there was an effective area difference of the air gap between the HMEC and FEM, the value calculated by the HMEC had an error ranging from 2.54% to 3.53% compared with that of the FEM, as shown in Table 3, which shows a comparison of the magnetic field distribution parameters of double-layer rotors when there is a slot. In addition, the flux density of the air gap in the DB without the inner bridge and barrier was the highest, and that of the DV was the lowest. The delta shape had a medium flux per pole, but there was no barrier or bridge in the two-layer part, so the inner magnetic flux was large, and the outer magnetic flux was rather small compared with those of the double-layered V shape. The comparison of the flux density distribution when there is a slot for each two-layer shape derived through the HMEC and FEM is represented in Figure 17 and summarized in Table 4. Conclusions In this paper, a hybrid magnetic equivalent circuit method combined with Carter's coefficient is proposed. It is a design method that combines the spatial harmonic method and the magnetic circuit method. The magnetic field distribution according to the rotor type was derived through the hybrid magnetic equivalent circuit method, and the cogging torque and no-load counter electromotive force were derived through the magnetic field distribution map. Through the hybrid magnetic equivalent circuit method, the existing design method was enhanced with reduced analysis time and intuitive design parameters. However, an error occurred because of the difference in the effective air gap area, as a result of ignoring the iron core resistance of the rotor and stator and the effect of bridge magnetic saturation.
However, because this error was less than 5% compared with the FEM, the proposed hybrid magnetic equivalent circuit method is considered valid. Conflicts of Interest: The authors declare no conflict of interest.
6,097
2021-08-15T00:00:00.000
[ "Engineering", "Physics" ]
Activation of disease resistance against Botryosphaeria dothidea by downregulating the expression of MdSYP121 in apple In plants, the vesicle fusion process plays a vital role in pathogen defence. However, the importance of the vesicle fusion process in apple ring rot has not been studied. Here, we isolated and characterised the apple syntaxin gene MdSYP121. Silencing the MdSYP121 gene in transgenic apple calli increased tolerance to Botryosphaeria dothidea infection; this increased tolerance was correlated with salicylic acid (SA) synthesis-related and signalling-related gene transcription. In contrast, overexpressing MdSYP121 in apple calli resulted in the opposite phenotypes. In addition, the results of RNA sequencing (RNA-Seq) and quantitative real-time PCR (qRT-PCR) assays suggested that MdSYP121 plays an important role in responses to oxidation–reduction reactions. Silencing MdSYP121 in apple calli enhanced the expression levels of reactive oxygen species (ROS)-related genes and the activity of ROS-related enzymes. The enhanced defence response status in MdSYP121-RNAi lines suggests that syntaxins are involved in the defence response to B. dothidea. More importantly, we showed that MdSYP121 forms a soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) complex with MdSNAP33, and the complex may participate in regulating resistance to B. dothidea. In conclusion, by regulating the interaction of the SA pathway and the oxidation–reduction process, MdSYP121 can influence the pathogen infection process in apple. Introduction Over the long course of evolution, plants have developed a secretory pathway, which is critical for biosynthetic and endocytic trafficking to the plasma membrane (PM) and vacuole 1 . Secretory membrane trafficking mechanisms have recently been shown to be involved in a variety of plant-specific processes, including plant development, tropic responses and pathogen defence 2,3 . Most endoplasmic reticulum-localised proteins and ER-resident chaperones of the secretory pathway and many antimicrobial compounds are destined for the apoplast or various organelles upon pathogen attack and penetration 4,5 . In plants, membrane fusion within the endomembrane system is essential but does not occur spontaneously. During the evolution of eukaryotes, a specialised class of proteins, the soluble N-ethylmaleimide-sensitive factor attachment protein receptors (SNAREs), evolved. These receptors function as mediators of fusion between vesicles and target lipid bilayers 3,6 . Based on the occurrence of either a conserved glutamine or arginine residue in the centre of the SNARE domain, SNAREs can be grouped as target site-localised Q-SNAREs (Qa, Qb, Qc and Q(b+c)) and vesicle-residing R-SNAREs 7,8 . SNAREs drive membrane merging by forming SNARE complexes via intermolecular interactions between vesicle-residing and target-membrane SNAREs. A typical SNARE complex involves distinct types of SNARE proteins (Qa+Qb+Qc or Qa+Q(b+c)) and one R-SNARE polypeptide that together contribute to a four-helix bundle of intertwined SNARE domains 9 . SNARE-domain proteins play a fundamental role in biotic defence against fungal pathogens. At least two SNARE protein-mediated exocytosis pathways appear to drive the directional or non-directional secretion of antimicrobial compounds, including defence-related proteins and cell wall building blocks, into the apoplastic space to terminate the infection of fungi or bacteria [10][11][12] .
On one hand, plants exhibit an active resistance mechanism to combat extracellular infection. Silencing a specific PM syntaxin, NbSYP132, in Nicotiana benthamiana impairs the accumulation of a subset of pathogenesis-related (PR) proteins in the cell wall. NbSYP132 also contributes to basal and salicylate-associated defence. These results indicate that SYP132-mediated secretion is an important component of resistance against bacterial pathogens in plants 11 . In addition, a SNAP25 homologue is required for pathogen resistance. Mutants of the barley MLO gene are fully resistant to powdery mildew; silencing the barley HvSNAP34 gene, a SNAP25 homologue, revealed increased fungal entry rates in the mlo genotype 10 . On the other hand, cell-autonomous immunity is widespread in plant-fungal interactions and terminates fungal infection after pathogen entry 12 . Plants exhibit resistance by forming SNARE secretory complexes, and pathogen-induced subcellular dynamics enable the execution of immune responses by focal secretion 12 . The AtSYP122 syntaxin functions in exocytosis during defence responses to strengthen the cell wall and prevent penetration by microbes 13,14 . In addition, syntaxins can regulate heavy vesicle traffic towards the site of attempted infection when plants are subjected to fungal attack 14 . SNARE syntaxin 121 (SYP121) is one of the most important SNARE proteins. SYP121, which is a Qa-SNARE protein, cycles continuously between the PM and endosomes 15 and has a fundamental role in plant defence 10 . Importantly, SYP121-dependent disease resistance acts in vivo mainly by forming SNARE complexes together with the SNAP33 adaptor and a subset of vesicle-associated membrane protein (VAMP) subfamily members, VAMP721/VAMP722 12 . In Arabidopsis, SYP121 is an important element in resistance to non-host powdery mildew fungi. In addition, the barley ROR2 gene, a functional homologue of AtSYP121, is required for basal penetration resistance against Blumeria graminis f. sp. hordei (Bgh) 10 . A mechanistic link has been demonstrated between non-host resistance and basal penetration resistance in monocotyledons and dicotyledons 10 . AtSYP121 is encoded by PEN1 10 . The Arabidopsis mutant pen1-1 showed delayed formation of localised cell wall appositions (so-called papillae), resulting in increased Bgh infection at attack sites 16 . In addition, experiments using brefeldin A (BFA) showed that interfering with endosome recycling to the PM severely inhibited the focal accumulation of SYP121 at fungal entry sites, delayed the accompanying callose deposition and compromised defence against powdery mildew fungi 17 . However, Zhang et al. reported that SYP121 and its homologue SYP122 are negative regulators of salicylic acid (SA) signalling and programmed cell death (PCD) 18 . The enhanced SA and jasmonic acid (JA) signalling levels and PCD in the syntaxin double mutant syp121-1 syp122-1 contributed to improved disease resistance against the virulent powdery mildew fungus Erysiphe cichoracearum and the bacterial pathogen Pseudomonas syringae DC3000 in an SA-independent manner 18 . These results suggest that SNAREs employ different mechanisms to regulate both the initial penetration resistance process and the subsequent immune signalling process; these processes differentially contribute to disease resistance. The PM-localised syntaxin-related 1 (StSYR1) and StSNAP33 genes in potato are homologues of Arabidopsis AtSYP121 and AtSNAP33, respectively.
The resistance of both StSYR1-RNA interference (RNAi) and StSNAP33-RNAi plants increased and was correlated with the constitutive accumulation of SA and PR1 transcripts, and these plants displayed an early senescence-like phenotype showing chlorosis and necrosis. In addition, downregulation of StSYR1 led to enhanced resistance against Phytophthora infestans and a cell death response at the site of infection 19 . Apple is one of the most widely cultivated fruits in the world. China has the largest planting area and is the greatest producer of apples worldwide. Apple ring rot is one of the most devastating diseases in China and greatly affects apple production 20 ; this disease occurs mostly in the Circum-Bohai-Sea region (Shandong, Hebei and Liaoning provinces) 21 . Since the 1980s, in conjunction with the widespread planting of Fuji cultivars, the area affected by apple ring rot has increased in eastern China 20 . This disease, which is caused by the fungal pathogen Botryosphaeria dothidea, infects both fruits and branches 20 . The resistance mechanism of apple ring rot is very complex, and few studies have examined the molecular mechanism of apple resistance to B. dothidea infection. In the present study, based on RNA-Seq results, the MdSYP121 gene was isolated in apple. The results of a series of genomic, genetic and transgenic experiments suggest that MdSYP121 plays an important role in B. dothidea resistance by affecting the oxidation-reduction process in apple. More importantly, we showed that a SNARE complex composed of MdSYP121 and MdSNAP33 may play an important role in pathogen resistance. Together, these results increase the knowledge regarding the biological roles of SNARE proteins in disease resistance and improve our understanding of the pathogenesis of B. dothidea. This study hints at a useful disease management strategy and is helpful for breeding for resistance to apple ring rot. Cultivation and treatment of plants Tissue-cultured calli of the apple cultivar 'Orin' were subcultured under the basic growth conditions of 24 ± 0.5°C and 24 h of darkness (at a relative humidity of 60-75%). The 'Orin' calli were subcultured in culture dishes (9 cm in diameter) containing 35 mL of Murashige and Skoog (MS) medium (0.4 mg L−1 6-BA, 1.5 mg L−1 2,4-D, 30 g L−1 sucrose and 7.5 g L−1 agar; pH 5.8-6.0; autoclaved at 121°C for 20 min). Tissue-cultured plants of the apple cultivar 'Gala' were incubated under greenhouse conditions of 24 ± 0.5°C and a 16-h light/8-h dark cycle (at a relative humidity of 60-75%). The 'Gala' explants were cultured in culture bottles (5.5 cm in diameter) containing 40 mL of MS subculture medium (0.5 mg L−1 6-BA, 0.2 mg L−1 IAA, 30 g L−1 sucrose and 7.5 g L−1 agar; pH 5.8-6.0; autoclaved at 121°C for 20 min). Botryosphaeria dothidea was incubated in culture dishes (9 cm in diameter) containing 15 mL of potato dextrose agar medium at 24 ± 0.5°C in darkness. Nicotiana benthamiana seeds were surface-sterilised and germinated on MS medium under greenhouse conditions of 24 ± 0.5°C and a 16-h light/8-h dark cycle (at a relative humidity of 60-75%). N. benthamiana seedlings were subsequently transplanted at the two-leaf or three-leaf stage into soil and grown under greenhouse conditions.
Vector construction and genetic transformation Fragments and full-length coding sequences were cloned from a cDNA library that was reverse-transcribed, using a First Strand cDNA Synthesis Kit (Thermo Fisher Scientific, USA) in accordance with the manufacturer's instructions, from RNA isolated from Fuji fruits. PCR was performed with Pfu/Taq polymerase (Fermentas, USA). The primers used are shown in Supplementary Table S1. All generated amplicons were subcloned into a pLB vector (Tiangen, China). Homologous MdSYP121 protein sequences were retrieved from the National Center for Biotechnology Information (NCBI) database and aligned using DNAMAN 5.2.2 software (Lynnon Biosoft, USA). A phylogenetic tree was subsequently generated using the Neighbour Joining method with MEGA 5.0 software. An MdSYP121 fragment spanning 121-613 bp was used for RNAi vector construction. The MdSYP121 RNAi fragments were subsequently transferred to pHANNIBAL and pCB302 vectors by T4 recombination (NEB, USA) to generate the RNAi transformation vectors. The full-length products were used to create HA protein fusions by T4 recombination into pCB302-MdSYP121-HA under the control of the cauliflower mosaic virus (CaMV) 35S promoter to generate the overexpression (OE) transformation vector. The leaves of 4-week-old tissue-cultured 'Gala' plants were transformed with Agrobacterium tumefaciens LBA4404 carrying the RNAi transformation vector. The leaves were scratched and pre-differentiated on MS differentiation medium (2.0 mg L−1 TDZ, 0.2 mg L−1 IAA, 30 g L−1 sucrose and 7.5 g L−1 agar; pH 5.8-6.0; autoclaved at 121°C for 20 min) for 2 days; they were then incubated with LBA4404 (optical density (OD) = 0.4-0.6) for 20-30 min and co-cultured on MS differentiation medium at 24°C for 7 days in the dark. The leaves were then transferred to screening differentiation medium containing glufosinate ammonium and carbenicillin under a 16-h light/8-h dark cycle (at a relative humidity of 60-75%). The screened explants were cultured on MS subculture medium. To generate transgenic apple calli, 10-day-old apple calli were transformed with A. tumefaciens LBA4404 carrying the RNAi or OE transformation vectors, respectively. The apple calli were incubated with LBA4404 (OD = 0.4-0.6) for 20-30 min and co-cultured on MS solid media containing no antibiotics at 24°C for 48 h in the dark. The calli cells were then transferred to screening medium containing glufosinate ammonium and carbenicillin 22 . Subcellular localisation of MdSYP121 and MdSNAP33 The open reading frames (ORFs) of MdSYP121 and MdSNAP33 were inserted into a pCB302 green fluorescent protein (GFP) vector, whose N-terminus consists of a GFP under the control of the CaMV 35S promoter. To achieve transient expression, the recombinant plasmids pCB302-MdSYP121-GFP and pCB302-MdSNAP33-GFP were transformed into A. tumefaciens GV3101. After the cell cultures were incubated overnight, A. tumefaciens cells were harvested via centrifugation and resuspended in infiltration medium (100 mL of medium containing 1 mL of 1 M MES-KOH at pH 5.6, 333 μL of 3 M MgCl2 and 100 μL of 150 mM acetosyringone). The leaves of 5-week-old N. benthamiana plants were used for transient expression, and the GFP signals were observed using a LSM 880 META confocal microscope (Carl Zeiss, Germany). A pCB302-GFP construct was used as a control.
RNA extraction and quantitative real-time PCR (qRT-PCR) analysis Total RNA was isolated from apple tissue culture plants and calli using the cetyltrimethylammonium bromide method described by Wang et al. 23 . First-strand cDNA was synthesised using a First Strand cDNA Synthesis Kit (Thermo Fisher Scientific, USA) in accordance with the manufacturer's instructions. qRT-PCR was used to detect the expression levels of the target genes. The 20 μL PCR mixture comprised 10 μL of Fast Start Universal SYBR® Green Master mix (Roche, USA), 0.6 μL of each primer (10 mM), 2 μL of diluted cDNA and 6.8 μL of PCR-grade H2O, and a CFX96™ Real-Time Detection System (Bio-Rad, USA) was used to perform the PCR. The following PCR programme was used: predenaturation at 98°C for 10 min; 40 cycles of 98°C for 15 s and then 60°C for 30 s; and a final melting curve analysis from 60 to 98°C. The Malus × domestica actin gene was used as a standard control to quantify cDNA abundance. The primers used in the qRT-PCR analyses are shown in Supplementary Table S2. Pathogen infection assays For the pathogen infection analysis, transgenic and Vec (serving as an empty vector control) lines were transferred to MS solid medium that lacked glufosinate ammonium and carbenicillin. Ten-day-old Vec, RNAi and OE lines were infected with 0.5-cm-diameter agar discs containing uniform B. dothidea mycelia that had been incubated for 5 days. The calli were co-cultured for 4 days in the dark. For expression pattern analyses, treated calli were collected from culture dishes, frozen in liquid nitrogen and stored at −80°C for RNA extraction. Each experimental treatment was repeated at least three times. Enzyme activity assays The activity levels of peroxidase (POD), catalase (CAT), ascorbate peroxidase (APX) and glutathione S-transferase (GST) were spectrophotometrically measured using hydrogen peroxide test kits (Nanjing Jiancheng Bioengineering Institute, China). Each experimental treatment was repeated at least three times. Library construction and RNA-Seq analysis Nine independent calli from the RNAi and Vec lines were infected by B. dothidea. Composite total RNA from the RNAi lines and from the Vec lines was used for Illumina sequencing at Novogene Technologies (Beijing, China). All procedures during the cDNA library construction were performed in accordance with a standard Illumina sample preparation protocol. The RNA-Seq libraries were sequenced on an Illumina HiSeq platform. After the RNA was sequenced, the adaptors were trimmed and low-quality sequences were removed from the raw data. The unigenes were annotated using the following databases: Clusters of Orthologous Groups (COG) (http://www.ncbi.nlm.nih.gov), Gene Ontology (GO) (http://www.geneontology.org/) and the Kyoto Encyclopaedia of Genes and Genomes (KEGG) (http://www.genome.jp/kegg/) [24][25][26] . To assay the differentially expressed genes (DEGs), trimmed mean of M-values normalisation was used to normalise gene expression levels, and DEGseq was used for the differential expression analyses 27 . A q-value < 0.005 served as the P-value threshold in multiple tests to determine the significance of differences in gene expression.
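As a minimal sketch of the DEG call just described, genes passing the q-value < 0.005 threshold with at least a twofold change between RNAi and Vec libraries, the following uses hypothetical column names and toy fold-change values; the real output of DEGseq differs.

```python
import pandas as pd

# Toy DEG table: gene IDs are from the paper, but the log2 fold changes and
# q-values here are invented for illustration only.
df = pd.DataFrame({
    "gene":    ["MDP0000169497", "MDP0000147628", "MDP0000611163", "MDP0000300208"],
    "log2_fc": [2.1, 1.4, -0.3, 3.0],    # RNAi vs Vec
    "q_value": [0.001, 0.004, 0.2, 0.0004],
})

# Keep genes with q < 0.005 and |log2FC| >= 1 (i.e., a >= twofold change).
deg  = df[(df["q_value"] < 0.005) & (df["log2_fc"].abs() >= 1)]
up   = deg[deg["log2_fc"] > 0]
down = deg[deg["log2_fc"] < 0]
print(f"{len(up)} upregulated, {len(down)} downregulated")
print(up["gene"].tolist())
```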
Verification of RNA-Seq data using qRT-PCR RNA was extracted from each RNAi and Vec line to verify the RNA-Seq data. RNA was prepared and cDNA was synthesised as described above. The primers used for the qRT-PCR analyses are shown in Supplementary Table S2. All of the samples were tested at least three times, and the experiments were performed on three biological replicates. GST pull-down analysis MdSYP121 with an HA tag was expressed in apple calli. The calli were homogenised in an extraction buffer containing 50 mM Tris-HCl (pH 7.5), 150 mM NaCl, 50 mM EDTA, 1% Triton, 1 mM phenylmethanesulfonyl fluoride and a protease inhibitor cocktail (Roche, USA). For prokaryotic expression assays, a plasmid containing MdSNAP33 with a GST tag was transformed into Escherichia coli Rosetta (Tiangen, China), and the proteins were induced with 1 mM isopropyl 1-thio-β-D-galactopyranoside at 28°C. The GST-MdSNAP33 protein was purified using a Pierce® GST Spin Purification Kit in accordance with the manufacturer's protocol (Thermo Fisher Scientific, USA). The mixture of MdSYP121-HA protein extracted from the apple calli and the purified GST-MdSNAP33 protein, together with 2 μL of GST antibody, was incubated with gentle shaking for 2 h at 4°C. The protein mixture was then immunoprecipitated with G-agarose beads (Sigma-Aldrich, USA) by gentle shaking for 2 h at 4°C. The beads were collected and washed three times with washing buffer (100 mM NaCl; 10 mM HEPES, pH 7.5; 1 mM EDTA; a protease inhibitor cocktail; and 10% glycerol) and once with 50 mM Tris-HCl (pH 7.5). The immunoprecipitated proteins were analysed with an HA antibody. Equal amounts of total protein were electrophoresed on 10% SDS-PAGE. Bimolecular fluorescence complementation assays Full-length MdSYP121 and MdSNAP33 were fused to the yellow fluorescent protein (YFP) fragments of pUC-SPYCE-35S and pUC-SPYNE-35S, respectively, by T4 recombination. The primers used are listed in Supplementary Table S1. These two recombinant plasmids were transiently expressed in tobacco leaves by A. tumefaciens (GV3101)-mediated infiltration 28 . The YFP fluorescence of tobacco leaves was detected 4 days after infiltration using a LSM 880 META confocal microscope (Carl Zeiss, Germany). SDS resistance assays For in vitro binding studies, MdSYP121-HA homogenised in extraction buffer (as mentioned above) and purified GST-MdSNAP33 protein (as mentioned above) were incubated together for 12 h at 4°C (head-over-head rotation). The bound material was eluted by incubation (30 min at 37°C) with 4× SDS loading buffer, because the complex is SDS-resistant but heat-sensitive; the controls were incubated for 10 min at 100°C 29 . Protein samples were then separated on 10% SDS-PAGE. Western blotting was used to detect the expression of the proteins. Statistical analysis All experiments were performed at least three times. The error bars in each graph indicate the mean values ± SEs of three repetitions. Statistical significance between different measurements was determined using Tukey's honestly significant difference (HSD) test via IBM Statistical Product and Service Solutions (SPSS) statistics software version 19 (IBM, USA). Cloning and characterisation of MdSYP121 To study the resistance mechanism of apple to apple ring rot, we analysed the RNA-Seq data of Fuji cultivars inoculated with B. dothidea. MDP0000709455 was chosen for detailed characterisation because it was induced in 'Fuji' apple by B.
dothidea inoculation and served as an important node in the interaction network analysed using Cytoscape (version 3.6.0, USA) (data not shown), and because the orthologous genes of MDP0000709455 play critical roles in controlling disease resistance towards powdery mildew in Arabidopsis and barley 10 . A fragment was isolated from the cDNA library of Fuji fruit using the primers listed in Supplementary Table S1. The full-length cDNA sequence consisted of a 1068-bp ORF. The ORF encoded a protein that was 356 amino acids in length and had a calculated molecular mass of 43.83 kDa and an isoelectric point of 7.15. This clone exhibited a high level of sequence similarity to AtSYP121 of Arabidopsis. Based on multiple sequence alignments with other plant SYP121s, MDP0000709455 contains Ha-, Hb- and Hc-conserved subdomains, a Qa-SNARE subdomain and a transmembrane domain (Supplementary Fig. S1a). In addition, MDP0000709455 exhibits high homology to AtSYP121 (AEE75103), PpSYP121 (XP_007202232), CsSYP121 (XP_00648162) and OsSYP121 (ABB22872) (Supplementary Fig. S1a); therefore, this gene was designated MdSYP121. To investigate the evolutionary relationships among SYP121s from different species, a phylogenetic analysis based on their amino acid sequences was performed by the Neighbour Joining method using the software MEGA version 5.0. As shown in Supplementary Fig. S1b, MdSYP121 exhibited high similarity to the members of the Qa-SNARE group, including AtSYP121, PpSYP121, CsSYP121 and OsSYP121. These results suggest that MdSYP121 is a member of the Qa-SNARE group. Subcellular localisation of MdSYP121 To determine the actual localisation of the MdSYP121 protein within cells, MdSYP121 was fused to an HA tag in the OE vector 35S::MdSYP121-HA; the recombinant plasmid was transferred into apple calli, and OE transgenic lines were obtained (Supplementary Fig. S2). The membrane proteins of the MdSYP121-OE lines were separated using a membrane protein extraction kit (BestBio, China). MdSYP121 was detected among both total proteins and membrane proteins by western blot with an anti-HA antibody (Fig. 1a). In addition, MdSYP121 was fused to a GFP tag in the OE vector 35S::MdSYP121-GFP (Fig. 1b). The fusion protein was transiently expressed in N. benthamiana leaves using agroinfiltration. A 35S::GFP construct served as a control. Fluorescence was detected in the PM of N. benthamiana leaves (Fig. 1c). These results suggested that MdSYP121 is a membrane-localised protein. Silencing MdSYP121 increased resistance to Botryosphaeria dothidea To investigate the function of MdSYP121, we silenced MdSYP121 in apple 'Gala' using the RNAi method. However, the transgenic plants showed dwarfism and necrosis phenotypes, and only two transgenic plants were ultimately obtained (Supplementary Fig. S3). The transgenic plants were tiny and difficult to differentiate, so the functional analysis was difficult to perform. To further analyse the function of this gene, we produced calli in which the expression of MdSYP121 was inhibited by the RNAi method. Ten independent transgenic lines were selected using glufosinate ammonium resistance selection and qRT-PCR (Supplementary Fig. S4). Three typical MdSYP121-RNAi lines (RNAi3, RNAi5 and RNAi7) were randomly selected for further functional analysis. As a control, Vec lines were subcultured at the same time. To identify the function of MdSYP121 during B. dothidea infection, MdSYP121-RNAi lines growing on MS agar medium were inoculated with B. dothidea.
Although the untreated Vec and MdSYP121-RNAi lines showed only slight differences, the spot extension areas of the MdSYP121-RNAi lines were clearly smaller than those of the Vec lines after inoculation with B. dothidea for 4 days (Fig. 2a and Supplementary Fig. S5b). The RNAi lines showed a nearly twofold decrease in fungal growth (Fig. 2b). The phytohormone SA serves as an endogenous messenger during biotic stress in plants and is required for the activation of systemic acquired resistance (SAR) 30 . The expression of SA signalling-related and synthesis-related genes was analysed in the MdSYP121-RNAi and Vec lines after inoculation with B. dothidea for 4 days. As shown in Fig. 2c, d, the expression levels of both the SA signalling-related genes PR1, PR5 and NPR1 and the SA synthesis-related genes EDS1, PAD4 and PAL were higher in the MdSYP121-RNAi lines than in the Vec line. These results indicate that silencing MdSYP121 increased tolerance to B. dothidea. Overexpression of MdSYP121 decreased resistance to Botryosphaeria dothidea To investigate the biological role of MdSYP121, we produced calli tissue that overexpressed MdSYP121. Six independent transgenic lines were selected using glufosinate ammonium resistance selection and western blot analysis (Supplementary Fig. S2). Three typical lines (OE3, OE4 and OE5) were randomly selected and confirmed, and these lines were used for further functional analysis. As a control, Vec lines were subcultured at the same time. As shown in Fig. 3a and Supplementary Fig. S5a, 4 days after B. dothidea inoculation, fungal extension was significantly greater in the OE lines than in the Vec lines. Compared with the Vec lines, the OE lines exhibited a nearly twofold increase in fungal growth (Fig. 3b). The SA signalling-related and synthesis-related genes were also analysed in the MdSYP121-OE and Vec lines after B. dothidea infection for 4 days. As shown in Fig. 3c, d, the expression of the SA-related genes was lower in the MdSYP121-OE lines than in the Vec lines. These results show that overexpressing MdSYP121 decreased tolerance to B. dothidea. To further analyse the molecular mechanism by which MdSYP121 is involved in B. dothidea resistance, we performed an RNA-Seq analysis of Vec and MdSYP121-RNAi calli lines under mock conditions or 4 days after treatment with B. dothidea. After trimming the adaptor sequences and removing low-quality reads, we generated 29.65 Gb of clean reads. Among these unigenes, many DEGs were identified between the Vec and RNAi samples: in the absence of B. dothidea inoculation, 637 genes were upregulated in the RNAi samples, and 720 genes were downregulated; after B. dothidea inoculation, 364 genes were upregulated and 553 genes were downregulated (>twofold changes) (Fig. 4a, b). To analyse functional differences between the Vec and MdSYP121-RNAi samples, the identified DEGs were characterised using GO enrichment analysis to explore their relevant biological functions. Based on the GO annotation analysis of these genes, we found that the oxidation-reduction process (GO:0055114) and oxidoreductase activity (GO:0016491) were the major enriched GO terms among the upregulated genes (Fig. 4c and Table 1). Oxidation-reduction enzymes have important functions; they can function as part of protective mechanisms involved in plant disease resistance. To test whether MdSYP121 is related to oxidative regulation, four genes from GO:0055114 or GO:0016491 were randomly selected and verified using qRT-PCR.
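For context, the following is a minimal sketch of the 2^(−ΔΔCt) relative-expression calculation typically used for this kind of qRT-PCR verification, assuming the Malus × domestica actin gene as the internal reference (as in the Methods); the Ct values are invented for illustration.

```python
# Relative expression by the 2^(-ddCt) method. Ct values below are toy data,
# not measurements from the study.
def rel_expr(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    d_ct_sample = ct_target - ct_actin          # normalise to the actin reference
    d_ct_ref    = ct_target_ref - ct_actin_ref  # e.g. the Vec control line
    dd_ct = d_ct_sample - d_ct_ref
    return 2 ** (-dd_ct)

# RNAi line vs Vec control for one ROS-related gene (hypothetical Ct values):
fold = rel_expr(ct_target=22.1, ct_actin=18.0,
                ct_target_ref=24.6, ct_actin_ref=18.2)
print(f"relative expression (RNAi / Vec): {fold:.2f}-fold")
```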
We confirmed that the expression of the selected genes in the MdSYP121-RNAi lines (RNAi3, RNAi5 and RNAi7) showed the same increasing tendency and that the expression was higher in those lines than in the Vec lines (Fig. 5a). We then analysed the related oxidation-reduction enzyme activity in the Vec and MdSYP121-RNAi lines. As shown in Fig. 5b, the activity of oxidation-reduction enzymes was higher in the MdSYP121-RNAi transgenic lines than in the Vec lines both in the presence and absence of B. dothidea infection. In conclusion, these results show that MdSYP121 plays a vital role in plant disease resistance together with a set of reactive oxygen biotic stress genes. MdSYP121 interacted with MdSNAP33 Binary target membrane (t)-SNARE complexes are typically formed by SYP121 and SNAP25 at the PM during exocytosis. The Arabidopsis SNAP25 homologue, SNAP33, has been detected in the target tissue of powdery mildew fungi in leaves 12 . In the present study, the full length of MdSNAP33 (MDP0000242070) was cloned and analysed in apple. In Arabidopsis, AtSNAP33 is located in the PM 3,31,32 . To determine the actual localisation of the protein within cells, MdSNAP33 was fused to a GFP tag in the OE vector 35S::MdSNAP33-GFP. The fusion protein was transiently expressed in N. benthamiana leaves using agroinfiltration. The result showed that fluorescence was detected in the PM of N. benthamiana leaves (Fig. 6a). The interaction between MdSYP121 and MdSNAP33 was verified by bimolecular fluorescence complementation (BiFC) assays using tobacco leaves. Co-expression of MdSYP121-YFP C and MdSNAP33-YFP N produced strong signals in the PM (Fig. 6a). As negative combinations, YFP N /MdSYP121-YFP C , MdSNAP33-YFP N /YFP C and YFP N /YFP C produced no detectable fluorescence signals. The physical interaction between MdSYP121 and MdSNAP33 was also examined by pull-down analysis. We performed MdSYP121 pull-down assays using an antibody against the HA epitope, and the presence of MdSNAP33 was assayed using an antibody against the GST epitope (Fig. 6b). Immunoblot analysis with an anti-HA antibody indicated that MdSYP121 was co-immunoprecipitated with MdSNAP33. These findings indicated that MdSYP121 specifically interacts with MdSNAP33. SYP121 forms SNARE complexes with SNAP33 adaptors and two functionally redundant VAMP72 subfamily members, VAMP721/VAMP722; these complexes are SDS-resistant but heat-sensitive 29 . We detected SDS-resistant SNARE complexes by incubating MdSYP121-HA protein extracted from apple calli together with GST-MdSNAP33 protein bound to glutathione sepharose. Sedimented bead-bound GST-MdSNAP33 and MdSYP121-HA were released and detected by immunoblot analysis using GST and HA antibodies, respectively (Fig. 6c). The GST-MdSNAP33 protein and MdSYP121-HA apple calli proteins were able to form SDS-resistant and heat-sensitive complexes. These results showed that MdSYP121 can form a SNARE complex together with MdSNAP33. Discussion Many studies have reported that SYP121 is an important element in host and/or non-host resistance, and SYP121 may play a role in penetration resistance to host powdery mildew fungi (E. cichoracearum) and non-host powdery mildew fungi (Bgh) 12 . In the present study, we isolated and identified a Qa-SNARE group gene, MdSYP121, in apple (Supplementary Fig. S1). The results of a series of genomic, genetic and transgenic experiments suggested that MdSYP121 plays an important role in B. dothidea resistance in apple.
A SNARE complex was also identified and may play an important role in pathogen resistance by affecting oxidation-reduction processes in apple following B. dothidea infection. SNARE proteins function as mediators of fusion between vesicular and target membranes 3 , and the SYP121 syntaxin participates in vesicle fusion processes 33 . In Arabidopsis, SYP121 is a PM-localised protein 10 . Our analysis of the subcellular location revealed that MdSYP121 is located in the cell membrane (Fig. 1), indicating that the gene may function in membrane fusion and vesicular transport. In plants, SA is an essential signalling molecule that induces SAR and is implicated in resistance to pathogens 34 . Many studies have shown that EDS1 (enhanced disease susceptibility 1), PAD4 (phytoalexin-deficient 4) and PAL (phenylalanine ammonia-lyase) play important roles in SA biosynthesis 35,36 . EDS1 and PAD4 specifically promote the expression of the principal SA biosynthetic enzyme gene ICS1 (Isochorismate synthase 1) 35,37 . PR genes and NPR1 are the marker genes of the SA signalling pathway 38 . The expression level of PR genes indicates the activity of SA signalling 39 . NPR1 is considered a positive regulator of SA-mediated plant immune responses, and AtNPR1 is considered a key regulator of SAR 40 . In the Arabidopsis syntaxin double mutant syp121-1 syp122-1, the SA level is dramatically elevated, resulting in necrosis and dwarfism. Interference with the SA signalling pathway in syp121-1 syp122-1 mutants partially rescues the necrotic and dwarfed phenotype 18 . In the present study, silencing MdSYP121 in 'Gala' apple plants led to a necrotic and dwarfed phenotype (Supplementary Fig. S3), which is similar to that of the Arabidopsis syntaxin double mutant. We speculated that MdSYP121 may play a negative role in the SA signalling pathway. Because of the physiological and biochemical changes in cultured cells and tissues infected by pathogens 41 , plant tissue culture systems can be used as model systems to study plant defence responses to pathogenic bacteria 42,43 . In recent years, apple calli have been used as a model experimental system 22 . In this study, apple calli were used to further study the function of MdSYP121 in response to B. dothidea. Based on our results, we noticed that silencing MdSYP121 could increase disease resistance to B. dothidea, while OE of the gene decreased resistance to B. dothidea. The expression of the SA synthesis-related genes (EDS1, PAD4 and PAL) and the SA signalling pathway genes (PR1, PR5 and NPR1) was highly elevated in the MdSYP121-RNAi lines after B. dothidea inoculation compared with the Vec line, and it was lower in the MdSYP121-OE lines than in the Vec line (Figs. 2 and 3). These results indicated that, by regulating the biosynthesis of SA and/or the activity of SA signalling, MdSYP121 plays important roles in apple resistance to B. dothidea. To better investigate the molecular mechanism of MdSYP121 in regulating resistance to B. dothidea, RNA-Seq was used to study the expression levels of the genes before and after inoculation. Based on the results of the RNA-Seq and qRT-PCR assays, many genes associated with biotic stress were upregulated in the RNAi lines. The oxidation-reduction process (GO:0055114) and oxidoreductase activity (GO:0016491) constituted the major enriched GO terms in the upregulated group of genes (Fig. 4c).
Most of the upregulated genes, including APX (MDP0000169497), CAT (MDP0000147628), POD (MDP0000611163) and GST (MDP0000300208), encoded proteins related to the synthesis of oxidoreductases. Enzymatic antioxidants such as POD and CAT are involved in scavenging H2O2 in living cells 45 . PODs are α-helical heme-containing proteins and play important roles in scavenging the late massive reactive oxygen species (ROS) during plant-pathogen interactions. POD and CAT have prevailing functions in basal resistance and lignification against Alternaria tenuissima and are key resistance markers in potato 46 . Generation of ROS is among the earliest plant defence responses to various biotic stresses. ROS can enhance the hypersensitive response (HR) or act as secondary messengers in resistance mechanisms, leading to the upregulation of defence-related genes and interactions with other signalling molecules 47 . Previous studies have shown that the SA pathway interacts with ROS in stressed plants 48 . EDS1 plays an important role during oxidative stress caused by the release of singlet oxygen 49 , and PAD4 is also involved in the integrated regulation of ROS homoeostasis 50 . The pathogen B. dothidea causes cankers characterised by the collapse of cells and discoloured areas; these cankers alter cellular regulatory processes and increase the production of ROS. In this study, the results of the B. dothidea inoculation assays suggested that the MdSYP121-RNAi lines displayed better tolerance than the Vec lines (Fig. 2). Higher expression levels of SA pathway-related genes and higher activities of APX, CAT, POD and GST were observed in the MdSYP121-RNAi lines than in the Vec lines (Figs. 2 and 5). Based on our results, we speculated that MdSYP121 influences disease resistance to B. dothidea by regulating the interaction of the SA pathway and the oxidation-reduction process. The Arabidopsis thaliana SYP121 syntaxin resides in the PM and was previously shown to act together with its partner SNAREs, the adaptor protein SNAP33 and the endomembrane-anchored VAMP721/722, to form a ternary SYP121-SNAP33-VAMP721/VAMP722 SNARE complex, which is required for the execution of secretory immune responses against powdery mildew fungi 12 . The results of the analysis of the subcellular localisation of MdSYP121 and MdSNAP33 revealed that green fluorescence was detected in the cell membrane, indicating that MdSYP121 and MdSNAP33 are located in the PM and may function together in vesicular transport and membrane fusion (Figs. 1 and 6). In apple, MdSYP121 and MdSNAP33 may also function in disease resistance by forming a complex. The ternary SNARE complexes are SDS-resistant but heat-sensitive 12,29 . The abundance of SYP121-containing SNARE complexes was examined in the Vec and MdSYP121-OE lines by comparing band disappearances in response to boiling in an immunoblot probed with an anti-GST antibody (Fig. 6c). However, we detected the presence of the complex in both the Vec and MdSYP121-OE lines, which is consistent with the higher levels of MdSYP121 monomers in the MdSYP121-OE lines. Previous studies have shown that the formation of SYP121-dependent ternary SNARE complexes is critical for plant pre-invasive resistance to powdery mildew fungi 12 . The MdSYP121 complex identified in our research may participate in resistance to B. dothidea. In conclusion, MdSYP121 is involved in balancing penetration resistance and regulating SA-mediated defences and oxidation-reduction processes; both of these functions are important in apple ring rot resistance.
The results show that MdSYP121 plays a central regulatory role in resistance against B. dothidea penetration in apple. This knowledge concerning the defence mechanisms involved in B. dothidea resistance could be useful in breeding programmes aiming to introduce apple genotypes that exhibit high levels of immunity against this destructive fungus, and it could also promote the breeding of new cultivars that display enhanced resistance to this devastating disease.
8,045.8
2018-05-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Revolutionizing plasmonic platform via magnetic field-assisted confined ultrafast laser deposition of high-density, uniform, and ultrafine nanoparticle arrays The remarkable capabilities of 2D plasmonic surfaces in controlling optical waves have garnered significant attention. However, the challenge of large-scale manufacturing of uniform, well-aligned, and tunable plasmonic surfaces has hindered their industrialization. To address this, we present a groundbreaking tunable plasmonic platform design achieved through magnetic field (MF) assisted ultrafast laser direct deposition in air. Through precise control of metal nanoparticles (NPs), with cobalt (Co) serving as the model material, employing an MF, and fine-tuning ultrafast laser parameters, we have effectively converted coarse and non-uniform NPs into densely packed, uniform, and ultrafine NPs (∼3 nm). This revolutionary advancement results in the creation of customizable plasmonic ‘hot spots,’ which play a pivotal role in surface-enhanced Raman spectroscopy (SERS) sensors. The profound impact of this designable plasmonic platform lies in its close association with plasmonic resonance and energy enhancement. When the plasmonic nanostructures resonate with incident light, they generate intense local electromagnetic fields, thus vastly increasing the Raman scattering signal. This enhancement leads to an outstanding 2–18-fold boost in SERS performance and unparalleled sensing sensitivity down to 10−10 M. Notably, the plasmonic platform also demonstrates robustness, retaining its sensing capability even after undergoing 50 cycles of rinsing and re-loading of chemicals. Moreover, this work adheres to green manufacturing standards, making it an efficient and environmentally friendly method for customizing plasmonic ‘hot spots’ in SERS devices. Our study not only achieves the formation of high-density, uniform, and ultrafine NP arrays on a tunable plasmonic platform but also showcases the profound relation between plasmonic resonance and energy enhancement. The outstanding results observed in SERS sensors further emphasize the immense potential of this technology for energy-related applications, including photocatalysis, photovoltaics, and clean water, propelling us closer to a sustainable and cleaner future. Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Introduction Plasmonic surfaces, with their ability to manipulate light at the nanoscale through the arrangement of distributed metallic nanostructures, hold immense significance in various energy research fields [1]. These metallic nanostructures, functioning as nano-antennas on the 2D surfaces, interact with incident light, resulting in the resonant excitation of plasmons and the formation of nanoscale 'hot spots' that can modify optical wavefronts [2]. The extraordinary wavefront shaping capabilities of plasmonic surfaces encompass a range of applications, from super-lenses and electromagnetic shielding to optical camouflage, holograms, and surface-enhanced Raman scattering (SERS) platforms [3][4][5][6][7][8][9]. Of particular interest is the ability to engineer plasmonic surfaces with precise control over their micro-structural design to achieve desired functionalities. SERS, for example, provides molecular diagnostics through inelastic light scattering by molecular vibrations, offering valuable 'fingerprint' information [10][11][12][13]. However, the inherent limitation of conventional Raman spectroscopy lies in its low scattering intensity, restricting its applications for chemicals in low concentrations [14][15][16]. To address this challenge, the SERS technique has been developed, utilizing specially designed metallic patterns with high-curvature nanostructures and gaps to significantly enhance light excitation of surface plasma resonances [17]. The resulting confined electromagnetic fields (EFs) or 'hot spots' are believed to originate from localized surface plasmons arising from the coupling of light and surface patterns [18]. Key factors in optimizing SERS performance are the size and compactness of nanoparticles (NPs). Smaller and more compact NPs exhibit stronger plasmonic resonances, as their electrons oscillate more readily when subjected to light of specific frequencies. However, achieving tunable nanoscale 'hot spots' on a large scale for SERS patterns remains a challenging and insufficiently explored aspect of research. This calls for intensified efforts toward green manufacturing approaches that allow for precise control and scalability of such plasmonic patterns. Considering the vast potential of plasmonic platforms in various energy-related applications, including photocatalysis, photovoltaics, and clean water, it is imperative to deepen our understanding and advance the fabrication techniques of these materials. Emphasizing the controllability of micro-structural design and scalability in plasmonic surfaces will undoubtedly drive significant advancements in energy research, paving the way for sustainable and cleaner technologies.
Laser processing could be an ideal alternative manufacturing technique to overcome these metamaterial fabrication limits. In pulsed laser manufacturing, laser shock peening effects occur, and metallic nanostructures can be rapidly imprinted at large scale with enhanced surface plasma 'hot spots' [19,20]. Liquid metal could also be massively processed using pulsed-laser-induced shockwaves [21], with finely tuned laser spot sizes. Periodic micro-structural liquid metal patterns were synthesized as mechanically strong electric circuits. Thus, laser techniques are able to provide a wide variety of customized solutions by selecting suitable wavelengths, laser operation mode (pulsed or continuous-wave laser), laser pulse width, laser energy, spot size, and frequency. However, these previous studies were limited to a single material system, like aluminum foil or liquid metal. In plasmonic applications like SERS, composite systems are usually more functional in providing abundant plasmonic 'hot spots'. Considering NP/carbon composites [22,23], the metal NPs were usually assembled into compact arrays to provide enhanced surface plasmon resonance, while the carbon served as a backbone to locate the NP arrays as a green and affordable mechanical support. The effect of particle size on SERS continues to be of interest. The largest SERS enhancement sites are situated in the nanoscale gaps between NP dimers, and analytes adsorbed in these 'hot spots' are thought to be responsible for most of the SERS signal [17,24]. Increasing particle size decreases the final particle coverage, which diminishes the number of hot spots situated in the SERS excitation area and thus decreases the signal [25]. At the nanoscale, the plasmonic resonances of a particle can be strongly influenced by its size and shape, as well as the material it is made of. The plasmonic properties of a particle are also affected by its proximity to other particles. When particles are close together, their plasmons can interact with each other, leading to additional resonances and new behavior. Sub-3 nm NPs with a close and compact arrangement have superior plasmonic properties compared to larger or more dispersed particles. This is because the small size of the particles and the close proximity between them lead to stronger plasmonic resonances, which can result in enhanced optical absorption and scattering of light. Additionally, the close arrangement of the particles can result in novel optical phenomena, such as localized surface plasmon resonances, that are not observed in larger or more dispersed particles. Overall, the superior plasmonic properties of sub-3 nm NPs with a close and compact arrangement make them one of the best choices for metal NP/carbon composite SERS devices. Paramagnetic NPs can be aligned using a magnetic field (MF) by exploiting the magnetic dipole moment of the particles. Paramagnetic NPs exhibit a magnetic moment, due to their internal magnetic structure, when interacting with an external MF. When an MF is applied, the magnetic moments of the particles align with the field, leading to a change in the overall magnetic properties of the particles. Recently, we found that besides the in-situ converted cobalt-embedded carbon (Co@carbon) matrix formed on the bottom glass during laser direct writing [26], a thin layer of ultrafine yet dense 3 nm Co NPs is attached to the confinement glass at the same time. We assume that ultrafast laser-induced well-aligned uniform NPs could form strong photonic 'hot
spots' and serve as plasmonic surfaces in many applications, including SERS sensing. Herein, we further delve into the formation of 2D plasmonic metasurfaces by utilizing the high-density ultrafine metal NP arrays deposited on the top confinement glass via MF-assisted ultrafast laser direct deposition (MAPLD). During ultrafast laser irradiation, the laser plasma plume forms with high heat and pressure. The metal-organic-framework (MOF) materials are thus ablated and converted by photochemical reactions within this plasma plume. In the meantime, the Co NPs and carbon species deposit onto the top confinement glass by the ultrafast pulsed laser deposition mechanism, forming a uniform Co@carbon thin film. Tunable plasmonic patterns could be flexibly written by a programmable scanning algorithm of the laser configurations, and controllable hydrophobicity could be further designed through laser-induced dewetting (LDW) patterns, laser factors, and the involvement of the MF. Ultrafine yet compact graphene-coated Co NPs are uniformly embedded in the carbon and provide superior plasmonic 'hot spots'. SERS sensors are thus obtained without additional modification, while the bottom Co@carbon metamaterials serve as an advanced catalyst for electrochemical water splitting. The 'turn-waste-into-value' concept utilized here not only satisfies green chemistry standards, but also explores a new application of the surface residues. This customized SERS sensor from the surface glass shows enhanced sensitivity, environmental adaptivity, and reusability, providing a promising green manufacturing technology for 2D plasmonic pattern fabrication on a large scale. Results and discussion The fabrication of the tunable plasmonic surface is shown in figure 1, and the detailed experimental description is provided in the supporting information. The precursor preparation was the same as in our previous study [26]. In general, a ZIF-67/N-methyl-2-pyrrolidone slurry was coated on 9 µm copper foil, followed by drying at 100 °C for 12 h. The ZIF-67-coated copper foil was sandwiched between two pieces of glass slides and sealed tightly. An MF was introduced by placing a strong magnet below the bottom glass, and the ultrafast 5 ps laser treatment was applied from the upper confinement glass, as illustrated in figure 1(a). In addition to the full scanning of the whole ZIF-67 samples for graphene-coated cobalt-embedded carbon (G@Co@carbon) metamaterials, the scanning algorithm was also programmed into three different pattern structures (PSs): honeycomb (HC), re-entrant honeycomb (RH), and chiral truss (CT), for different structural hydrophobicity purposes. A thin, uniform, ca. 50 µm thick layer of G@Co@C patterns (figure 1(b)) was then deposited on the top confinement glass and was peeled off after a washing and drying process. Figure 2 presents detailed SEM images of the top surface of the coated glass. In general, the involvement of the MF, together with higher laser energy, generated a more uniform particle distribution. Therefore, ultrafine yet compact Co NP arrays were obtained in these laser patterns under the guidance of MF manipulation, as shown in figure 1(c). The final products were named by their patterns together with the laser energy and MF existence, as PS-X or PS-XM. PS could be HC, RH, or CT, while X stood for the laser condition. Naming with M indicated that the product was processed with MF involvement. For example, CT-hM means a chiral truss plasmonic pattern processed at the highest laser energy under the MF. Full naming details are displayed in table S1.
In this report, the MAPLD could be closely related to the pulsed laser deposition (PLD) process, as demonstrated in figure 1(d) and the schematic comparison of PLD with MAPLD in figure S2. The ZIF-67 precursor and the top confinement glass could be treated as the target and the substrate of the PLD process, respectively. When the ultrafast 5 ps pulsed laser shot in and was absorbed by the 'target' ZIF-67, the laser energy was converted to electronic excitation and then into thermal, chemical, and mechanical energy. The MF further enhanced these phenomena and promoted the resulting evaporation, ablation, and exfoliation. Besides the in-situ conversion of Co/carbon metamaterials, the ejected Co NPs and carbon species in the plasma plume expanded onto the 'PLD substrate', the top confinement glass, forming thin films of Co NPs with carbon patterns. High-resolution transmission electron microscope (HR-TEM) images again supported the importance of the MF. Without the MF, coarse G@Co NPs were observed (figure 1(e)). The MF successfully manipulated the Co NP formation and spatial arrangement, and dense, compact G@Co NPs are clearly seen in the HR-TEM images in figure 1(f). Figures 2(g) and (h) provide more HR-TEM images of PS-12M and PS-11M. The mechanism of the MF-induced finer NPs is that the Co NPs are 'cut' by the MF into smaller sizes. Atomic force microscope (AFM) images in figures 2(i) and (j) give surface structure details of the PS-h and PS-hM samples. The MF again took control over the Co NP formation and distribution in the surface morphology. Compared with a typical PLD process, where sophisticated sample/gas preparation, a high-vacuum or inert atmosphere, long time, and high energy consumption are required, our MAPLD process was one-step, fast (5 cm × 5 cm within 10 s), and carried out in ambient conditions. Since the laser-converted ZIF-67 on the bottom glass was proven to be an excellent Co@C composite catalyst for electrochemical water splitting, we extended the deposited Co@C on the upper glass with a tunable pattern design for plasmonic applications, maximizing the usage of the precursor materials and resulting in almost zero waste. This unique ultrafast laser deposition technique is not only eco-friendly but also makes full use of the precursors, which could maximize the overall profit margin by obtaining two products from one single manufacturing process. The methodology used here might give a deeper insight into fabrication process optimization and green manufacturing standards.
The particle movements with/without the MF during the MAPLD process were simulated by COMSOL Multiphysics to investigate the MF-manipulated NP assembly mechanism, and the results are displayed in figure 3. In the case of ZIF-67 treatment without the application of an MF, the charged particles emitted following laser excitation exhibit unidirectional motion, ultimately resulting in their sparsely distributed deposition on the upper glass substrate. In contrast, the paramagnetic Co clusters, when subjected to the influence of the MF, undergo a controlled circular motion around the center of the laser spot. This manipulation of charged particles within the plasma plume leads to a reduction in the distance traveled by particles ejected with the same initial velocity, resulting in a more compact and finely distributed deposition pattern on the upper glass substrate. For more comprehensive insights, detailed simulation procedures and outcomes are elaborated upon in the supporting information. Additionally, we have provided full-length videos in the supplementary information (videos S1 and S3), along with figure S3, which offer a closer examination of plasma expansion during the ultrafast LDW process, both from frontal and top perspectives. In these videos, the plasma plume originating from the laser spot displays a scattered pattern throughout space, resulting in a disorganized central-scattered plume. In contrast, the MAPLD process, as depicted in videos S2, S4, and figure S4, demonstrates a vertically oriented plasma plume movement, primarily driven by MF manipulation. Within this designated vertical plasma plume, Co and carbon species are more uniformly deposited on the upper glass substrate compared to scenarios without MF involvement. The MF-induced circular movement effectively repositions the Co NPs into a finer and more compact assembly. This refined assembly configuration provides a greater density of plasmonic 'hot spots,' which, in turn, enhances SERS sensitivity compared to situations where MF manipulation is absent. To investigate the optical mechanisms behind the graphene-coated Co NPs, finite-difference time-domain (FDTD) simulations were carried out using FDTD Solutions software (Lumerical Co. Ltd). Detailed simulation setups are provided in the supporting information. Figures 4(a) and (b) show the proposed models for PS-h and PS-hM, and their corresponding electric field (EF) enhancements are calculated in figures 4(c) and (d), respectively. According to the HR-TEM results, PS-h was set as three coarse G@Co NPs with sizes of 5, 10, and 15 nm, and PS-hM was set as an aligned 3 × 4 G@Co NP array with a uniform size of 3 nm. The normalized EF intensity distribution in the x-y plane of PS-h was around two times lower than that of PS-hM, as the maximum EF intensity is 1.6 and 3.3 for PS-h and PS-hM, respectively. High EF intensity, i.e., the 'hot spots' in the red bar, was rarely seen in PS-h because of the coarse Co NPs. The MF-aligned uniform Co NPs in PS-hM, however, provided abundant 'hot spots' at the intersections of Co NPs due to the enhanced plasmonic effects of the ordered metal NP assembly [14,18,25]. This as-synthesized uniform, reliable, and stable metasurface is critical for the practical application of SERS substrates.
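The MF-confinement mechanism suggested by these simulations can be illustrated with a toy charged-particle integration: a cluster ejected sideways travels in a straight line without a field, but gyrates around the field axis when a vertical MF is applied, which bounds its lateral travel. This is only a sketch of the underlying physics; the charge-to-mass ratio, field strength, and ejection velocity below are illustrative assumptions, not values from the study.

```python
import numpy as np

def trajectory(v0, q_over_m, B, dt=1e-9, steps=20000):
    """Integrate dv/dt = (q/m) v x B with forward Euler (toy accuracy)."""
    r = np.zeros(3)
    v = np.array(v0, dtype=float)
    path = [r.copy()]
    for _ in range(steps):
        v = v + q_over_m * np.cross(v, B) * dt   # Lorentz force, no E-field
        r = r + v * dt
        path.append(r.copy())
    return np.array(path)

# Assumed numbers: a charged Co cluster ejected sideways from the laser spot.
v0 = [500.0, 0.0, 200.0]        # m/s, mostly lateral, some vertical motion
q_over_m = 1e6                  # C/kg, assumed charge-to-mass ratio
free = trajectory(v0, q_over_m, B=np.array([0.0, 0.0, 0.0]))
held = trajectory(v0, q_over_m, B=np.array([0.0, 0.0, 0.5]))  # 0.5 T along z

# Lateral travel in the substrate plane: the field confines the particle to a
# circular orbit of radius v_perp / (q_over_m * B) instead of a straight line.
print("max lateral travel, no MF  :", np.linalg.norm(free[:, :2], axis=1).max())
print("max lateral travel, with MF:", np.linalg.norm(held[:, :2], axis=1).max())
```

With these assumed values, the confined case stays within a few millimetres of the spot axis while the field-free case drifts away linearly, mirroring the compact versus scattered deposition seen in the videos.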
Figures 4(e) and (f) provide the SERS spectra of PS samples loaded with 10−10 mol l−1 (M) RhB aqueous solution, without MF and with MF, respectively. The black baseline was the bare sample, 10−8 M RhB loaded on glass. All metasurfaces showed enhanced Raman signals for the detection of RhB at a relatively low concentration of 10−8 M, with the PS-M samples showing much higher enhancements. One of the signal peaks of RhB, 1649 cm−1, was selected to better understand the influence of laser energy and MF involvement. Figure S5 plots the SERS intensities at 1649 cm−1 of all PS samples and their corresponding Raman intensity/RhB ratios. Higher laser energy density and the existence of the MF could contribute to higher SERS signal enhancements. PS-hM had the highest SERS signals, an 18-fold increase compared to bare RhB and almost a two-fold increase compared to PS-h, the sample without MF but under the same laser conditions. A brief schematic of the reason for the SERS enhancement is described in figure S6. The MF-manipulated ultrafine yet compact NPs were crucial to introducing active sites for plasmonic applications, and SERS mapping was carried out to examine their uniformity. Figure S7 presents the SERS signals at 1649 cm−1 of the MF-manipulated metasurfaces and figure S8 those of the metasurfaces without MF, against the bare RhB sample. Weak RhB signals could be detected in the bare RhB@Glass sample, while all RhB@PS-M samples were 'hot' with RhB signals. The regular PS samples still showed decent SERS enhancement, especially when the laser energy was high, as in PS-h. Overall, these findings fit well with our previous assumptions and FDTD simulations: higher laser energy and MF manipulation lead to finer and denser Co NPs, and these well-aligned compact NP arrays significantly enhance the plasmonic 'hot spots' that could benefit many plasmonic devices like SERS sensors. In addition, we have conducted measurements of the absorption spectrum under different processing conditions using a spectrophotometer (figure S9). Our results clearly demonstrate that the absorption of the Co NP-based platform is substantially lower under conditions without the application of an MF, specifically in the cases of PS-11 and PS-12. These findings strongly corroborate the notion that the presence of an MF significantly enhances the optical absorption characteristics of the plasmonic platform. Consequently, this enhancement contributes to an improved surface plasmonic resonance, which is pivotal for optimizing the performance of SERS.
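The figure S5 comparison at 1649 cm−1 amounts to a simple ratio of baseline-corrected peak heights against the bare RhB@Glass reference. A minimal sketch of how such an enhancement table could be computed is given below; the function names and the spectrum container are hypothetical, not part of the study's actual processing pipeline.

```python
import numpy as np

def peak_height(shift_cm1, counts, center=1649.0, window=10.0):
    """Baseline-corrected peak height near `center` (in cm^-1)."""
    near = np.abs(shift_cm1 - center) < window
    baseline = np.median(counts[~near])        # crude background estimate
    return counts[near].max() - baseline

def enhancements(spectra, reference="RhB@Glass"):
    """spectra: dict name -> (shift axis, counts). Returns fold enhancements."""
    ref = peak_height(*spectra[reference])
    return {name: peak_height(*data) / ref for name, data in spectra.items()}
```

An 18-fold entry for PS-hM and roughly half that for PS-h would reproduce the reported trend.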
The low optical absorption of Co NPs at 633 nm, a wavelength commonly used in SERS, is noteworthy. This study explores a novel aspect of SERS by investigating the SERS performance of the graphene/Co structure under an MF, which reveals a significantly higher enhancement compared to non-MF conditions, primarily due to the unique nanostructure after MAPLD. This nanostructure features an ultrahigh density of uniformly distributed metal particles with ultrasmall gaps between them. While materials like Co are not typically considered ideal for traditional SERS due to their low optical absorption, the application of MF-controlled laser processing in MAPLD overcomes this limitation, resulting in a highly sensitive SERS substrate. This underscores the potential utility of Co NPs within a specific context. Moreover, our study seeks to explore alternative materials for SERS beyond traditional Ag and Au NPs, which are much more expensive than earth-abundant materials such as Co. It is important to note that Co NPs represent just one selection and may not be universally ideal for SERS. Our platform is applicable to a variety of metals and alloys that may offer superior SERS performance compared to Co. In addition, the enhancement of Raman scattering can be significantly attributed to the graphene coating on the Co NPs. It is important to note that in both scenarios, whether an MF is applied or not, the Co NPs consistently feature a graphene coating, highlighting the magnetically induced nanostructure as the key factor in the observed SERS enhancement. This exceptional nanostructure leads to a remarkable SERS enhancement, exceeding 18 times, largely influenced by the presence of the MF. Another key feature of laser processing is the ability to program the scanning algorithms and the adaptability to precisely switch laser conditions during laser irradiation. Since different laser conditions and MF involvement have proven to be direct factors that regulate and control the final products' SERS performance, the SERS signals at the intersections of two different metasurfaces were studied to better design hybrid metasurface structures. As shown in figure S9, three vertical lines were plasmonic patterns lased from low to high energy without MF, and three lateral lines were plasmonic patterns lased from low to high energy with MF, marked as lM, mM, and hM, respectively. The six lines were then loaded with 10−10 M RhB as SERS sensors, and the intersections of each pair of lines were studied by Raman SERS mapping. Figures S9(b)-(j) show the SERS intensities at 1649 cm−1 of the intersections hM-h, hM-m, hM-l; mM-h, mM-m, mM-l; and lM-h, lM-m, lM-l, respectively. A clear boundary could be seen for hM-l, mM-l, and lM-l, demonstrating the importance of laser energy and MF manipulation of the Co NPs. Only under the MF influence could the orientation of the NPs in the carbon matrix be controlled to form well-aligned NP arrays. The NPs, which have a net magnetic moment due to their magnetic anisotropy, align themselves with the MF, resulting in an anisotropic distribution of NPs within the carbon matrix. The desirable plasmonic platform was then produced under the highest laser input with the MF. Nevertheless, a major problem is reproducibility: often the SERS signal differs among different nanostructures (or locations on the support) due to variation in the morphologies of the nanostructures [27][28][29]. Besides, the SERS enhancements are not uniform in real samples, and only a few sites exhibit the
highest SERS enhancement. On the other hand, only molecules located within the 'hot spots' (usually <2 nm) contribute to the overall SERS signals. Therefore, both (1) the uniformity of hot spots on the backbone patterns and (2) the even distribution of molecules over these 'hot spots' are necessary to achieve highly sensitive and reliable plasmonic surfaces. With these findings in mind, we started the customization of metasurface structures, and the relationship between the three surface patterns, HC, RH, and CT, and their hydrophobicity was studied. Detailed water contact angle photos are available in the supporting information, figures S10-S12, and the calculated water contact angles are presented in table 1. Generally, CT structures had the lowest water contact angles, which means they are more hydrophilic, and HC structures tended to be more hydrophobic; RH structures sit between the CT and HC structures. Furthermore, higher laser energy also increased their hydrophobicity, mainly due to the higher graphitic crystallinity in the carbon backbone, which is highly hydrophobic. From the regular SERS testing results, higher laser energy and MF manipulation could achieve better SERS enhancements, as in PS-hM. However, combining the results from the SERS and water contact angle measurements, PS-hM might not be an ideal candidate for aqueous solution-based SERS sensing because of the uneven loading of active materials. Special attention should be given, and dedicated solutions were needed to overcome this issue. A multi-scale engineered hybrid SERS pattern design was developed to empower the best SERS sensing platform, PS-hM, with customized hydrophilicity. Owing to the flexibility of laser processing, the laser power could be precisely controlled during each step of the process. Herein, we proposed a two-scale hybrid SERS pattern combining the most SERS-sensitive PS-hM and the most hydrophilic PS-l, together with the aforementioned three metastructures: HC, RH, and CT. As shown in figures 5(a)-(c), the SERS patterns were divided into two parts of the scanning algorithm: lateral scanning and vertical scanning. One side of the scanning was conducted at high laser energy, with the MF, serving as the SERS sensing platform. The other side of the scanning was laser processed at low laser energy without the MF, aiming to adjust the hydrophobicity of the overall metasurface patterns. The pristine water contact angles for HC-hM, RH-hM, and CT-hM were 36.8°, 39.6°, and 30.8°, respectively. After the multi-scale engineering design, the water contact angles were regulated to 33.2°, 24.8°, and 21.9°, respectively. The angles were reduced by 9.8%, 37.4%, and 28.9% for HC, RH, and CT, respectively. This multi-scale engineering design successfully regulated the hydrophobicity by adjusting the water contact angles within the range of 20° to 40°, with the combination of three pattern designs by one-step MAPLD in air. With the help of the improved hydrophilicity, Raman spectra were taken again to check for any potential loss of SERS sensitivity. The reusability of the SERS sensing platform remains one of the key issues that prevent its widespread application.
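The quoted percentage reductions follow directly from the contact angles given in the text, as a quick arithmetic check confirms:

```python
pristine = {"HC-hM": 36.8, "RH-hM": 39.6, "CT-hM": 30.8}  # degrees, before hybrid design
hybrid = [33.2, 24.8, 21.9]                               # degrees, after adding the PS-l side

for (name, before), after in zip(pristine.items(), hybrid):
    print(f"{name}: {100 * (before - after) / before:.1f}% reduction")
# -> HC-hM: 9.8%, RH-hM: 37.4%, CT-hM: 28.9%, matching the reported values
```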
Typically, SERS sensors are delicately fabricated by assembling metal NPs with supporting substrates like graphene or other 2D materials. Such composite systems are usually fragile and can only be loaded with sensing objectives once, because the delicate structures might be deformed after rinsing by breaking the weak van der Waals forces. Our MAPLD provides an ultimate solution for reusing the plasmonic metasurfaces, as the Co NPs were tightly deposited on the glass substrate and further 'glued' by the graphene coating, as a result of the PLD-like mechanism. The as-tested RhB-loaded CT-hM/l was chosen for the cyclability test because of its superior water wettability. As shown in figure 6(a), the CT-hM/l was first loaded with RhB, followed by initial SERS sensing tests. The RhB-loaded CT-hM/l was then rinsed with water to remove the RhB. Raman spectra were recorded to examine the metasurface integrity of the rinsed CT-hM/l at this step, as presented in figure 6(b). The rinsed CT-hM/l was then re-loaded with RhB for the second cycle; the process was repeated 50 times, and the 25th and 50th SERS signals are given in figure 6(c). From figure 6(b), the D/G peak ratio of the carbon materials remained unchanged, indicating that the rinsing process was damage-free and that the CT-hM/l was highly stable and robust [30][31][32]. The 25th and 50th cycles' SERS signals in figure 6(c) also remained similar, which well supported the reusability of our robust plasmonic patterns. Conclusion In conclusion, we demonstrated the scalable manufacturing of tunable plasmonic patterns by MF-manipulated ultrafast laser direct writing in air. Under the MF manipulation, the growth of the extracted Co clusters was terminated once they reached their critical magnetic size of 3 nm. Afterward, the 3 nm ultrafine Co NPs were spatially aligned by the MF, forming a dense, compact assembly in the carbon matrix on the top glass as robust plasmonic 'hot spots'. The laser scanning algorithm, laser conditions, and MF involvement were introduced as three major factors to design the multiscale hybrid plasmonic platform. Firstly, single-scale plasmonic surfaces were loaded with RhB as SERS sensors to explore their plasmonic performances. Secondly, water contact angle tests were carried out to derive the relationship between the hydrophobicity and plasmonic enhancement of these metasurfaces. Thirdly, the most plasmonically sensitive PS-hM and the most hydrophilic PS-l were integrated via a two-scale engineered hybrid design by splitting the whole laser scanning algorithm into two parts. The resultant hybrid CT-hM/l system presented adjustable hydrophobicity, which could serve in various detection environments. Finally, the hybrid SERS platform CT-hM/l was rinsed and re-loaded with RhB for 50 cycles, and its SERS performance remained unchanged, proving it to be a reusable yet robust SERS sensor. Overall, our tunable MAPLD in air meets the green manufacturing criteria and turns 'waste' into 'value'. The as-synthesized multiscale hybrid plasmonic platform shows customized hydrophobicity and plasmonic 'hot spots', owing to the feasibility of laser processing and MF manipulation. The plasmonic resonance effect in SERS sensors is directly linked to the significant energy enhancement observed in the Raman scattering signal. By harnessing the unique capabilities of plasmonic nanostructures to generate intense local electromagnetic fields, SERS has become a powerful analytical tool with widespread applications in diverse fields due to its exceptional sensitivity and selectivity. The
remarkable outcomes witnessed in SERS sensors underscore the vast potential of this technology in driving energy-related applications toward a sustainable and cleaner future. Its transformative capabilities extend to diverse fields, such as photocatalysis, photovoltaics, and clean water, offering promising avenues for efficient and environmentally friendly energy solutions. Figure 1. Fabrication of the tunable plasmonic surface by MAPLD. (a) Experimental setup for tunable MAPLD. (b) The top layer is collected as a plasmonic surface. (c) The extracted Co NPs are well aligned by the MF in the surface pattern. (d) Schematic illustration of the importance of the MF during laser processing. HR-TEM images (e) without MF and (f) with MF under high laser energy. Figure 3. Top view of simulated particle movements with/without MF at different time scales. Figure 4. Finite-difference time-domain (FDTD) simulations of the optical properties of graphene-coated metal NPs. Model setup for (a) PS-h and (b) PS-hM and their simulated corresponding field intensity distributions in the X-Y plane in (c) and (d), respectively. SERS spectra of RhB-loaded plasmonic surfaces: (e) without MF and (f) with MF. Table 1. Water contact angles of the plasmonic surfaces. The HC, RH, and CT patterns are shown in the upper section. Figure 6. Reusability testing of CT-hM/l. (a) Schematic of the RhB rinsing process. (b) Raman spectra of the rinsed CT-hM/l after 25 and 50 cycles. (c) SERS signals of the RhB re-loaded CT-hM/l after 25 and 50 cycles.
6,609.4
2024-03-05T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
Anti-Disturbance Control for Quadrotor UAV Manipulator Attitude System Based on Fuzzy Adaptive Saturation Super-Twisting Sliding Mode Observer: Aerial operation with an unmanned aerial vehicle (UAV) manipulator is a promising field for future applications. However, the quadrotor UAV manipulator usually suffers from several disturbances, such as external wind and model uncertainties, when conducting aerial tasks, which seriously influence the stability of the whole system. In this paper, we address the problem of high-precision attitude control for a quadrotor manipulator equipped with a 2-degree-of-freedom (DOF) robotic arm under disturbances. We propose a new sliding-mode extended state observer (SMESO) to estimate the lumped disturbance and build a backstepping attitude controller to attenuate its influence. First, we use a saturation function to replace the discontinuous sign function of the traditional SMESO to alleviate the estimation chattering problem. Second, by innovatively introducing the super-twisting algorithm and fuzzy logic rules used for adaptively updating the observer switching gains, the fuzzy adaptive saturation super-twisting extended state observer (FASTESO) is constructed. Finally, in order to further reduce the impact of sensor noise, we incorporate a tracking differentiator (TD) into FASTESO. The proposed control approach is validated in several simulations and experiments in which we fly the UAV under varied external disturbances. Introduction Unmanned aerial vehicles (UAVs) have become a popular and active research topic among scholars worldwide [1,2]. They can work at locations where entry is difficult and which humans cannot access [3]. Recently, they have been used not only in traditional scenes, such as monitoring, aerial photography, surveying, and patrol, but also in new application scenarios requiring physical interaction with the external environment. To conduct physical interaction, aerial robots are equipped with either a rigid tool [4] or an n-degree-of-freedom (DoF) robotic arm [5][6][7][8]; such systems are called UAV manipulators. For a UAV equipped with a robotic arm, several disturbances, such as wind gusts and model uncertainties, cannot be avoided during task execution. Therefore, to meet the aerial operation task requirements with high reliability, the disturbance rejection control problem should be investigated. Many controllers have been proposed for disturbance rejection. The Robust Model Predictive Control (MPC) method is used for multirotor UAVs to provide robust performance under an unknown but bounded disturbance [9]. Meanwhile, an adaptive control structure can be built to obtain the coefficients of an uncertain system online. A controller based on Model Reference Adaptive Control is proposed in [10], which performs better on disturbance rejection compared with non-adaptive controllers. However, as is well known, one of the biggest drawbacks of adaptive controllers is their poor robustness under the bursting phenomenon. Adopting a disturbance and uncertainty estimation and attenuation (DUEA) strategy, such as the disturbance observer (DOB) [11], unknown input observer (UIO) [12], or extended state observer (ESO) [13], is a potential solution for disturbance rejection. Meanwhile, there are several types of DOBs applied to UAVs.
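Among these DUEA tools, the ESO is the one this paper ultimately builds on, so a minimal sketch may help fix ideas. The following is the standard third-order linear ESO for one double-integrator channel, with the common bandwidth parameterisation of the gains as an assumption; it is background material, not the FASTESO proposed later.

```python
import numpy as np

def linear_eso_step(z, y, u, b, betas, dt):
    """One Euler step of a linear ESO for the channel  theta_ddot = b*u + d.
    z = [angle_hat, rate_hat, d_hat]; y is the measured angle."""
    b1, b2, b3 = betas
    e = z[0] - y                           # output estimation error
    dz = np.array([z[1] - b1 * e,          # angle estimate
                   z[2] + b * u - b2 * e,  # rate estimate, driven by the input
                   -b3 * e])               # extended state: lumped disturbance
    return z + dt * dz

# Assumed gain choice: betas = (3w, 3w^2, w^3) for an observer bandwidth w.
w = 30.0
betas = (3 * w, 3 * w**2, w**3)
```

The extended state z[2] converges to the lumped disturbance d when d varies slowly, which is exactly the limitation of the traditional ESO discussed next.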
Work [14] proposes an inner-loop control structure to recover the dynamics of a multirotor combined with additional objects to be similar to those of the bare multirotor, using a DOB-based method. A linear dual DOB is built in [15] to reject modeling error and external disturbance when designing the control system. The authors of [16] propose a DOB-based tracking flight control method for a quadrotor to reject a disturbance that is assumed to be composed of harmonic elements, and apply it to flight control of the UAV Quanser Qball 2. A disturbance observer with finite-time convergence (FTDO), which can conduct online estimation of the unknown uncertainties and disturbances, is incorporated into a hierarchical controller to solve the path tracking problem of a small coaxial rotor-type UAV [17]. The authors of [18] introduce a robust DOB for an aircraft to compensate the uncertain rotational dynamics into a nominal plant and propose a nonlinear feedback controller implemented for the desired tracking performance. As for the ESO, it was first proposed by Han [19] and has been introduced into many control methods [20]. Traditionally, ESO methods mainly focus on coping with disturbances that change slowly [21]. Nevertheless, the disturbance caused by wind gusts sometimes changes drastically, so it cannot be estimated thoroughly by a traditional ESO. Thus, an enhanced ESO that can quickly estimate the disturbance is necessary in this field. A high-order sliding mode observer is built to estimate unmodeled dynamics and external disturbances for an aerial vehicle tracking a trajectory [22]. Moreover, a sliding mode observer is proposed for an equivalent-input-disturbance approach to control the under-actuated subsystem of a quadrotor UAV [23]. However, the chattering phenomenon is common in traditional sliding mode observers (SMOs) because of the discontinuous sign function. In order to reduce the chattering problem, the super-twisting algorithm was introduced into an ESO to mitigate estimation chattering [24], but a little chattering still remains. To solve this, the authors of [25] proposed a new SMO control strategy to reduce the estimated speed chattering of a motor, in which the switching function is replaced by a sigmoid function. The sigmoid function has also been used in an SMO for a UAV in [26]; however, that work was conducted without experiments. The main contributions of this study are given as follows. (1) In order to alleviate the estimation chattering problem of the traditional SMESO, a new observer named fuzzy adaptive saturation super-twisting extended state observer (FASTESO) is proposed, in which a saturation function is introduced to replace the discontinuous sign function, a super-twisting algorithm is introduced to prevent an excessively high observer gain, and a TD [19] is incorporated to avoid directly using acceleration information, which is full of noise. (2) To maintain robustness under disturbances with unknown bounds, fuzzy logic rules are introduced as an adaptive algorithm to adaptively adjust the observer switching gains. Furthermore, this also contributes to chattering attenuation under a high switching gain with a low estimation value, and to observer performance improvement under a low switching gain with a high estimation value. (3) The proposed method is verified to be effective on a quadrotor UAV manipulator prototype in several simulations and experiments. The rest of this article is organized as follows. Section 2 introduces some preliminaries of this work.
Section 3 describes the kinematic and dynamic models of the quadrotor UAV manipulator. The construction of the proposed FASTESO and the attitude controller is given in Section 4. Additionally, several simulations are conducted in Section 5. Moreover, Section 6 shows the experiment. Finally, Section 7 presents the conclusion. Preliminaries In this section, some mathematical preliminaries are provided to make the whole paper easier to understand. Notation The 2-norm of a vector or a matrix is denoted by ∥·∥. λmax(Z) and λmin(Z) represent the maximum and minimum eigenvalues of the matrix Z, respectively. Moreover, the operator S(·) maps a vector to a skew-symmetric matrix, and the sign function is defined in the standard way. Quaternion Operations The unit quaternion q = [q0 qv]T ∈ R4, ∥q∥ = 1, is used to represent the rotation of the quadrotor in this paper. Several corresponding operations are given as follows: the quaternion multiplication, and the relationship between the rotation matrix CBA and the unit quaternion q. Taking the time derivative of Equation (4), we obtain the kinematic relation. The derivative of a quaternion and the quaternion error qe are provided, respectively, where qd represents the desired quaternion, whose conjugate is q∗d = [qd0 −qdv]T. Moreover, ω denotes the angular rate of the system. Models of UAV In this section, we present a mathematical description of the quadrotor UAV manipulator system model. The abstract graph is shown in Figure 1, in which the robotic arm is fixed at the geometric center of the UAV. Kinematic Model In Equation (10), Rt ∈ R3×3 represents the transformation matrix converting the Euler angle rates Ψ̇ into ωI, Rb ∈ R3×3 denotes the rotation matrix representing the orientation of Ob relative to OI, and c(·) and s(·) denote cos(·) and sin(·), respectively. Moreover, the position and angular rate of the frame Oi, which is fixed to the robotic arm, with respect to OI are provided in terms of the position of Oi and the angular rate of the i-th robotic arm frame with respect to Ob, respectively. Additionally, more relationships are provided, where Jpi ∈ R2×2 and Jri ∈ R2×2 are Jacobian matrices relating the translational and angular velocities of each robotic arm link to η̇, respectively. According to Equations (12) and (13), we can get the translational and angular velocity of Oi with respect to OI, where S(·) represents the skew-symmetric matrix. Dynamic Model In order to obtain the dynamic model of the quadrotor UAV manipulator system, the Euler-Lagrange equation is introduced, where k = 1, ..., 8; L is the Lagrangian with kinetic energy K and potential energy U of the integrated system; uk is the generalized driving force; and dk is the external disturbance applied to the system. Additionally, K can be given in detail, where mb is the quadrotor base mass, mi is the i-th robotic arm mass (i = 1, 2), I is the inertia matrix, and Rbi is the rotation matrix between the frame fixed to the center of mass of the i-th link and Ob. Then, the total potential energy is provided, where e3 denotes the unit vector [0, 0, 1]T and g represents the gravitational constant. By substituting Equations (16) and (17) into Equation (15), the dynamic model of the quadrotor UAV manipulator system can be obtained; moreover, the total kinetic energy can be expressed accordingly. Details of the inertia matrix M(H) and the Coriolis matrix C(H, Ḣ) can be found in [27].
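Before continuing with the dynamics, the quaternion operations from the Preliminaries can be made concrete. A minimal sketch (Hamilton convention assumed; array layout [q0, qx, qy, qz]):

```python
import numpy as np

def quat_mul(p, q):
    """Quaternion product p (x) q for unit quaternions [q0, qv]."""
    p0, pv = p[0], p[1:]
    q0, qv = q[0], q[1:]
    return np.concatenate(([p0 * q0 - pv @ qv],
                           p0 * qv + q0 * pv + np.cross(pv, qv)))

def quat_conj(q):
    """Conjugate [q0, -qv]."""
    return np.concatenate(([q[0]], -q[1:]))

def quat_error(q, q_d):
    """Attitude error q_e = q_d* (x) q used by the attitude controller."""
    return quat_mul(quat_conj(q_d), q)
```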
The G(H) can be obtained via the partial derivative of Equation (21). Expressing Equation (18) in detail, ut, ur, ul represent the generalized control inputs corresponding to pI, Ψ, η; ul is not used, as we do not consider a dynamic robotic arm in this work. The vector d = [dt, dl, dr]T is the lumped external disturbance. As for the quadrotor attitude loop subsystem, we transform ur into a vector positively correlated with the forces produced by the quadrotor propellers. Moreover, ωi represents the rotor rate of the i-th quadrotor propeller (i = 1, 2, 3, 4). The generalized control inputs are then converted to the propeller rates, where ΛT and ΛC are the thrust and drag coefficients, respectively. Additionally, l denotes the distance from each motor to the quadrotor CoG. According to Equation (22), we can obtain in detail the dynamics of the attitude loop subsystem, which is the basis of the next section. Method Based on the UAV model, the proposed control strategy is described in detail in this section. As shown in Figure 2, the quadrotor attitude control problem can be briefly divided into two components: FASTESO, used for estimating and compensating the disturbance, plays the role of the feedforward loop, and the backstepping controller, built for regulating the orientation to track the desired attitude in a timely manner, plays the role of the feedback loop. Meanwhile, the TD, saturation function, and fuzzy logic methods are incorporated into the whole control method. FASTESO UAVs with robotic arms have more model uncertainty than traditional UAVs. Additionally, UAVs usually face other disturbances, such as external wind, when flying outdoors. Moreover, the control performance of the closed-loop system is largely determined by the observation performance. Therefore, in this part, in order to enhance the performance of the feedback controller, a new SMO named FASTESO is built to estimate the lumped disturbance exerted on the quadrotor UAV manipulator in finite time. Construction of Traditional Super-Twisting Extended State Observer (STESO) In this part, according to our previous work [30], a traditional SMESO named STESO for the quadrotor UAV manipulator is constructed. In this work, we mainly consider the situation in which the robotic arm keeps constant while the UAV faces external disturbances during the flight (η̇ = η̈ = 0). That is a general scenario that the quadrotor UAV manipulator often encounters when conducting aerial tasks. Nevertheless, there is usually unwanted vibration of the equipped robotic arm caused by the high-speed motors and propellers.
To consider all the mentioned disturbances, the model (25) can be reconstructed as Equation (26), where ∆M21, ∆M22, ∆M23 represent model uncertainties. ∆η̇ and ∆η̈, considered as a portion of the model uncertainty, denote the residuals from the temporary variation of the robotic arm parameters η̇, η̈. In order to build the observer, we reconstruct the attitude dynamic Equation (26) as Equation (27) and introduce the lumped disturbance as Equation (28). Combining Equation (27) and Equation (28), we get Equation (29), where the variable p̈I, whether measured directly from the sensors or obtained by velocity differentiation, is too noisy to use, so the TD is introduced here to estimate the system acceleration for noise alleviation. Additionally, ṗI and Ψ̇ measured from the sensors are also somewhat noisy; in that case, the TD is used to reduce noise, too. Moreover, the terms M21, C21 and G2 can be obtained in advance according to the presented UAV model. As for the dynamics model (29), considering the feedback linearization method, we can reformulate the control input as Equation (30). Combining Equation (29) and Equation (30), we get Equation (31). When building the STESO, it is supposed that every channel is independent of the others. Therefore, only one channel is presented here, and the other two are completely identical. As for the model Equation (31), the one-dimensional model for the STESO design is provided as Equation (32). Introduce a new extended state vector ζ = [ζ1 ζ2]T, where ζ1 = JiΨ̇i and ζ2 = d∗ri (i = ϕ, θ, ψ). Reconstruct the model (32) as Equation (33), where χ denotes the derivative of d∗ri, assumed to satisfy |χ| < f+. This means that the derivative of the lumped disturbance is bounded. Then, according to the work in [30], build the STESO for the observable system (33) as Equation (34), where z1 and z2 are estimates of ζ1 and ζ2, respectively, and e1 = ζ1 − z1 and e2 = ζ2 − z2 denote the estimation errors. The system estimation errors e1 and e2 converge to zero within finite time under suitable observer gains α1, α2. Moreover, the details of the convergence analysis and the parameter selection rules can be found in [30]. Saturation Function To mitigate the chattering problem, we replace the sign function, which usually expresses the discontinuous control in the observer, with a saturation function. The traditional STESO Equation (34) can then be rewritten as Equation (35), where kf is the adaptive proportional factor of the switching gain generated from the fuzzy logic rules, which will be introduced in the next part, and sat(·) represents the saturation function whose curve is shown in Figure 3, where e is the difference between the real system and the estimated one and ke is the output limit of the saturation function. We can adjust the sliding mode effect by regulating the values of e and ke. Obviously, the chattering would be reduced by increasing e and decreasing ke. If e is big enough and ke is small enough, the high-frequency chattering can be avoided; however, the robustness of the sliding mode method would also be reduced. Therefore, the tuning of these parameters is a tradeoff between the chattering alleviation performance and the system robustness, and e and ke should be chosen according to the real application. Adaptive Switching Gains with Fuzzy Logic Rules The fuzzy logic controller was proposed many years ago [31]. In this part, the fuzzy rules are designed according to fuzzy control theory to effectively optimize the switching gain based on the states of the sliding-mode surface.
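Before detailing the fuzzy adaptation, the saturated super-twisting update of Equation (35) can be sketched for one channel. The Euler discretisation and the placement of the adaptive factor kf on both switching terms are assumptions here; the defaults ε = 0.0001 and ke = 0.05 follow the parameter list in the simulation section.

```python
import numpy as np

def sat(e, eps=1e-4, k_e=0.05):
    """Saturation replacing sign(): linear inside |e| < eps, clipped to +/-k_e."""
    return float(np.clip(e * (k_e / eps), -k_e, k_e))

def fasteso_step(z, zeta1, u_bar, alpha1, alpha2, k_f, dt):
    """One Euler step of the saturated super-twisting ESO (cf. Eq. (35)).
    z = [z1, z2]: estimates of the channel state and the lumped disturbance."""
    e1 = zeta1 - z[0]
    dz1 = z[1] + u_bar + k_f * alpha1 * np.sqrt(abs(e1)) * sat(e1)
    dz2 = k_f * alpha2 * sat(e1)              # disturbance estimate update
    return np.array([z[0] + dt * dz1, z[1] + dt * dz2])
```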
Normally, for the purpose of enhancing the anti-disturbance ability of the observer and guaranteeing the generation of the sliding mode, the sliding mode switching gain is often chosen to be large, but this further increases the chattering noise in the disturbance estimation. Therefore, it is feasible to introduce an intelligent method to effectively estimate the switching gain according to the sliding mode arrival condition and thereby alleviate the system chattering problem. For example, when the system state is far from the sliding surface, which means the absolute value of the difference |e| is relatively large, the proportional factor kf should be enlarged to drive the system state back. Similarly, when the system state is close to the sliding surface, the proportional factor kf should be smaller. The proposed fuzzy logic system in this work consists of one input variable and one output variable, which are quantized first. Let |e| ∈ {ZR, PS, PM, PB} denote the input variable and kf ∈ {ZR, PS, PM, PB} the output variable. Each rank is depicted by a triangular membership function, as shown in Figures 4 and 5, where the fuzzy language is defined as ZR (zero), PS (positive small), PM (positive middle), and PB (positive big). Then define the fuzzy rules as Rule 1: If |e| is PB, then kf is PB. Rule 2: If |e| is PM, then kf is PM. Rule 3: If |e| is PS, then kf is PS. Rule 4: If |e| is ZR, then kf is ZR. Finally, as for the fuzzy outputs, according to the work in [32], the center-of-gravity algorithm is adopted as the defuzzification method to convert the fuzzy subset duty cycle changes to real numbers, where µout represents the resulting output membership function. It takes every rule into account by performing the union of the resulting output membership functions µout,i of each Rule i (i = 1, ..., 4), i.e., the maximum operation over them. Attitude Controller In order to achieve high-precision attitude stabilization in the presence of external wind and model uncertainties, a backstepping controller combined with FASTESO is built here for the UAV attitude system. The main objective of this part is to guarantee that the attitude state q and the angular rate ω converge to the reference values qd and ωd in real time. As this part is identical to our previous work [30] except for the UAV model, which does not impact the controller design, we do not describe it in detail. According to [30], the control signal vector ur is designed as Equation (37), where d̂r is the disturbance estimate from FASTESO. Additionally, as in FASTESO, the TD is also adopted here to process several variables in Equation (37) to reduce noise. Moreover, Kb1, Kb2 are controller gains to be designed, and ωe, ω̇e, and qev are defined in Equation (38), details of which can be found in [30]. Simulation In this section, the performance of the proposed FASTESO-based control method is validated on the PX4/Gazebo platform [33], which provides simulations of physics close to the real world. A manually designed scalable disturbance, which includes sudden changes at some special points, is defined in Equation (39) and adopted in the simulations. Moreover, its amplitude can be adjusted by a. The main nominal coefficients of the quadrotor base are generated by the online toolbox of Quan and Dai [34] and given in Table 1. Table 1. Quadrotor base coefficients used in simulation.
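Returning to the fuzzy adaptation described above, the scheme reduces to triangular memberships plus centre-of-gravity defuzzification. The sketch below assumes both universes (|e| and kf) are normalised to [0, 1], which is an implementation choice rather than a detail given in the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership peaking at b on support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

# Shoulder sets at the ends so that ZR peaks at 0 and PB peaks at 1.
SETS = {"ZR": (-1/3, 0.0, 1/3), "PS": (0.0, 1/3, 2/3),
        "PM": (1/3, 2/3, 1.0), "PB": (2/3, 1.0, 4/3)}

def fuzzy_kf(abs_e, n=201):
    """Rules 'if |e| is L then k_f is L'; centre-of-gravity defuzzification."""
    y = np.linspace(0.0, 1.0, n)
    mu_out = np.zeros(n)
    for a, b, c in SETS.values():
        fire = float(tri(np.array([abs_e]), a, b, c)[0])  # rule firing strength
        mu_out = np.maximum(mu_out, np.minimum(fire, tri(y, a, b, c)))
    return float((y * mu_out).sum() / (mu_out.sum() + 1e-12))
```

A large |e| fires PB and pushes kf toward 1, while a small |e| fires ZR and shrinks kf, which is exactly the qualitative behaviour the rules prescribe.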
CASE 1 (Observers Performance Comparison) In this section, several comparison simulations on the observers' disturbance estimation performance are conducted. As shown in Figure 6, both FASTESO and ASTESO have good estimation performance under the scalar a = 10. The resulting observer gain of FASTESO is described in Figure 7. Nevertheless, ASTESO starts to break down when the disturbance is larger, as can be seen in Figure 8. Under a = 20, ASTESO fails to follow the larger disturbance, but the proposed FASTESO still performs well. This is because the fixed parameter of ASTESO is not big enough to estimate a disturbance with the amplitude in Figure 8. As shown in Figure 9, the adaptive gain of FASTESO also shows why it is effective under disturbances with different amplitudes. To summarize, this part shows that the proposed FASTESO can estimate a wider range of disturbances thanks to the fuzzy logic rules for the adaptive observer gain. Various Observers Comparison To demonstrate the effectiveness of introducing the saturation function into the SMO, a simulation similar to the last part is conducted with FASTESO under an applied disturbance with a = 5, shown in Figure 10. Meanwhile, the 2nd-order linear ESO (ESO2) and STESO are also included for comparison. The parameters of the mentioned observers are listed as follows. FASTESO: α1 = diag(0.4, 0.4, 0.5); α2 = diag(0.7, 0.7, 0.9); e = 0.0001; ke = 0.05. ESO2: k1 = diag(4.5, 4.5, 6.5); k2 = diag(22, 22, 28); k3 = diag(5, 5, 8). STESO: α1 = diag(0.2, 0.2, 0.3); α2 = diag(0.3, 0.3, 0.4). BAC: Kb1 = diag(5, 5, 8); Kb2 = diag(3, 3, 4). From the response results in Figure 10, we can see that FASTESO performs excellently on estimation, even at the special points with sudden changes. Although the curve under ESO2 usually follows the actual value, it overestimates near the peak. As for STESO, it can follow the fast variation of the disturbance; however, it has a terrible chattering problem, which is also the main reason for the introduction of the saturation function in this work. To summarize, FASTESO performs well over the whole range of disturbances (a = 5, 10, 20). Meanwhile, it tracks varied disturbances better than the other traditional observers. CASE 2 (Composite Comparison) To further illustrate the effectiveness of the whole proposed control strategy, we show comprehensive simulation results following Section 5.1.2. Moreover, a single backstepping controller without any observer is also tested on the quadrotor UAV manipulator system for comparison. The three-axis components of the disturbance torque dr = [drϕ, drθ, drψ]T are equal, shown in Figure 10, and applied to the UAV simultaneously. The result curves of both attitude and angular rate are shown in Figures 11-16. We can easily see that, without an observer, the single BAC has the biggest attitude offset and angular rate fluctuation, which shows its poor robustness under disturbances. Specifically, the attitude curves even follow the applied torque disturbance, and the angular rate changes dramatically at the special points. As for BAC+ESO2, although the estimation value follows the disturbance well to some extent, it shows drastic changes in both attitude and angular rate at the peak point owing to the overestimation near the peak. As for BAC+STESO, in spite of the good performance of STESO in following the disturbance, it has a terrible chattering phenomenon and severely degrades the behavior of the UAV system.
Compared with the other configurations, the UAV under BAC+FASTESO achieves the best performance in both attitude offset and angular-rate fluctuation under the varied torque disturbance.

Experiment

In this section, to verify the effectiveness of the whole proposed method, the quadrotor UAV manipulator is assigned a hovering task beside an electric fan, which generates wind at several gear levels as a torque disturbance acting on the UAV system. BAC+FASTESO is used for the disturbance-rejection test, while a plain BAC without any observer is run for comparison. The experimental scene is shown in Figure 17. The whole proposed control scheme is implemented in Pixhawk [35]. The parameters are chosen as follows. For BAC+FASTESO: K_b1 = diag(3, 3, 5); K_b2 = diag(2, 2, 2.5). For BAC: K_b1 = diag(6, 6, 9); K_b2 = diag(3.5, 3.5, 5). The fan is placed so as to apply the external torque mainly in the roll channel. The results of UAV hovering under first- and second-gear wind are shown in Figures 18-23. The attitude of the hovering UAV changes with the wind gear. Moreover, with the help of FASTESO, the attitude offset is reduced compared with the plain BAC, and the angular-rate fluctuation is reduced as well; this intuitively verifies the effectiveness of FASTESO. Additionally, Figures 22 and 23 show the control output generated by the BAC alone in the two configurations, plain BAC and BAC+FASTESO, respectively. With FASTESO the control signal stays around zero, whereas without any observer the control signal is strongly perturbed by the external disturbance. Moreover, thanks to the disturbance estimation and compensation provided by FASTESO, the controller gains in BAC+FASTESO can be kept smaller while the whole system remains stable, which also contributes to the better performance of BAC+FASTESO. Meanwhile, the estimate of the lumped disturbance, including external wind and model uncertainties, is shown in Figure 20. Although the ground truth of the torque disturbance generated by the fan and of the model uncertainties is unknown, so the estimate in Figure 20 cannot be verified directly, Figures 20 and 22 are almost identical, which also demonstrates the effectiveness of FASTESO to some extent.

Conclusions

In this study, a new observer named FASTESO is proposed, which incorporates the saturation function, a TD and fuzzy logic rules. Combined with a backstepping controller, the whole control scheme for the quadrotor UAV manipulator system is built to reject the lumped disturbance, including external wind and model uncertainties. The effectiveness of the whole control structure is verified in several simulations and experiments. In the future, we will focus on separating the estimate of the model uncertainties from that of the external disturbance.
6,195
2020-05-27T00:00:00.000
[ "Engineering", "Computer Science" ]
Applying aspiration in local search for satisfiability

The Boolean Satisfiability problem (SAT) is a prototypical NP-complete problem, which has been widely studied due to its significant importance in both theory and applications. Stochastic local search (SLS) algorithms are among the most efficient approximate methods available for solving certain types of SAT instances. The quantitative configuration checking (QCC) heuristic is an effective approach for improving SLS algorithms on the SAT problem, resulting in an efficient SLS solver for SAT named Swqcc. In this paper, we focus on combining the QCC heuristic with an aspiration mechanism, and design a new heuristic called QCCA. On top of Swqcc, we utilize the QCCA heuristic to develop a new SLS solver dubbed AspiSAT. Extensive experiments show that, on random 3-SAT instances, the performance of AspiSAT is much better than that of Swqcc and Sparrow, an influential and efficient SLS solver for SAT. In addition, we further enhance the original clause weighting schemes employed in Swqcc and AspiSAT, and thus obtain two new SLS solvers called Ptwqcc and AspiPT, respectively. The experimental results show that both Ptwqcc and AspiPT outperform Swqcc and AspiSAT on random 5-SAT instances, indicating that both the QCC and QCCA heuristics are able to cooperate effectively with different clause weighting schemes.

Introduction

The Boolean satisfiability (SAT) problem is one of the most studied NP-complete problems, and is of significant importance in both theory and practice [1]. The SAT problem has a broad range of applications in various fields, such as mathematical logic, inference, machine learning, constraint satisfaction, VLSI engineering and computing theory [2]. Similarly, many real-world problems, including testing [3], formal verification [4,5], synthesis [6], nano-fabric cell mapping [7] and various routing problems [8], can be encoded into SAT and then solved by efficient SAT solvers [9-12]. Given a propositional formula in conjunctive normal form (CNF), the SAT problem is to decide whether there exists an assignment of Boolean variables that makes the propositional formula evaluate to true. Although it is important to solve SAT instances encoded from industrial problems, solving random SAT instances, especially random 3-SAT instances, is also of great interest. The CC heuristic underlying the Swcc solver, however, works only in the greedy mode. Moreover, the authors developed an SLS solver named Swqcc [33], which is more efficient than Swcc. However, the performance of Swqcc on random 3-SAT instances is still not satisfactory. In this situation, to enhance the efficiency of local search, Salhi proposed an aspiration mechanism and integrated it with tabu search [34]. Similarly, Cai successfully used the aspiration mechanism to improve the efficiency of the neighboring-variables-based CC heuristic for solving SAT problems [35]. In this paper, we combine the QCC heuristic and the aspiration mechanism in an effective way, and propose a new heuristic dubbed QCCA. Based on the QCCA heuristic, we also develop a new SLS solver called AspiSAT for solving SAT. Our experimental results show that the performance of AspiSAT on random 3-SAT instances exceeds that of Swqcc and Sparrow, which is an influential and effective SLS solver for SAT. Moreover, we enhance the clause weighting methods employed in both Swqcc and AspiSAT, resulting in two new SLS solvers, Ptwqcc and AspiPT, respectively.
The related experiments demonstrate that Ptwqcc and AspiPT perform much better than Swqcc and AspiSAT on random 5-SAT instances. The remainder of our paper is organized as follows. In Section 2, we provide the necessary preliminaries. Section 3 presents a brief review of the stochastic local search algorithm. Section 4 reviews configuration checking heuristics. In Section 5, we combine the quantitative configuration checking heuristic with the aspiration mechanism. In Section 6, we propose the stochastic local search algorithm based on the QCCA heuristic. In Section 7, we evaluate the performance of AspiSAT, Swqcc and Sparrow on random 3-SAT instances and structured instances. Further empirical analyses of the probability and threshold weighting method, and an evaluation of AspiSAT and Ptwqcc, are presented in Section 8. Finally, Section 9 concludes the paper and lists some future work.

Preliminaries

Given a set of n Boolean variables V = {x_1, x_2, …, x_n}, and the set of 2n literals associated with the variables in V, i.e., L = {x_1, ¬x_1, x_2, ¬x_2, …, x_n, ¬x_n}, a clause is a disjunction of literals. In the classic k-SAT problem, each clause consists of a fixed number k of literals. A propositional formula F can be expressed in conjunctive normal form, i.e., F = c_1 ∧ c_2 ∧ … ∧ c_m, where m is the number of clauses and c_i (1 ⩽ i ⩽ m) is a clause in F. In this paper, we use the notation V(F) for the set of all variables in the formula F, the notation C(F) for the set of all clauses in F, and the notation r = m/n for the clause-to-variable ratio of F. Two different variables are called neighboring variables if they appear simultaneously in at least one clause, and the notation N(x) = {y ∈ V(F) | y and x are neighboring variables} represents the set of all neighboring variables of the variable x. We also define the notation CL(x) = {c ∈ C(F) | the variable x appears in c} for the set of all clauses in which the variable x appears. A mapping α: V(F) → {true, false} is called an assignment. If α maps every variable to a Boolean value, then the assignment is complete. For SLS algorithms for solving SAT, a candidate solution is a complete assignment. Given a complete assignment, any clause of the formula F has two possible states: satisfied or unsatisfied. A clause is satisfied if and only if at least one literal of the clause is true under the assignment α; otherwise, the clause is unsatisfied. Given a formula F, an assignment α satisfies F if and only if α satisfies all the clauses in F. The SAT problem is to decide whether there exists a complete assignment that satisfies all clauses in F. Since the three SLS solvers proposed in this paper (namely AspiSAT, Ptwqcc and AspiPT) are all dynamic local search algorithms, we give here a brief introduction to the related concepts of dynamic local search. In dynamic local search, each clause c ∈ C(F) in the formula F is associated with a positive integer as its weight, and we use the notation weight(c) to denote the weight of the clause c. The average weight of all clauses is denoted by w̄. We use the notation cost(F, α) to represent the sum of the weights of all unsatisfied clauses in the formula F under the assignment α. For any variable x in the formula F, when the variable x is flipped, so that the assignment α becomes a new assignment β, we define score(x) = cost(F, α) − cost(F, β). In this paper, a clause c is large-weighted if and only if weight(c) ⩾ 2.
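A small Python sketch of the dynamic-local-search quantities just defined (clause state, cost(F, α) and score(x)); the toy formula and the unit weights at the end are hypothetical.

```python
from typing import Dict, List

Clause = List[int]  # a clause as a list of literals: v for x_v, -v for ¬x_v

def satisfied(clause: Clause, alpha: Dict[int, bool]) -> bool:
    """A clause is satisfied iff at least one literal is true under alpha."""
    return any(alpha[abs(l)] == (l > 0) for l in clause)

def cost(clauses: List[Clause], weight: Dict[int, int],
         alpha: Dict[int, bool]) -> int:
    """cost(F, alpha): total weight of the clauses unsatisfied under alpha."""
    return sum(weight[i] for i, c in enumerate(clauses)
               if not satisfied(c, alpha))

def score(x: int, clauses, weight, alpha) -> int:
    """score(x) = cost(F, alpha) - cost(F, beta), beta = alpha with x flipped."""
    beta = dict(alpha)
    beta[x] = not beta[x]
    return cost(clauses, weight, alpha) - cost(clauses, weight, beta)

# Toy formula (x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ x3) with unit clause weights.
F = [[1, -2, 3], [-1, 2, 3]]
w = {0: 1, 1: 1}
a = {1: False, 2: True, 3: False}
print(cost(F, w, a), score(3, F, w, a))  # clause 0 unsatisfied; flipping x3 fixes it
```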
The basic framework

The basic framework of an SLS algorithm for solving SAT is as follows:

• Step 1: For the CNF formula F, the algorithm randomly generates a complete assignment to all variables in F;
• Step 2: The algorithm performs the search process in an iterative manner: in each search step (also known as an iteration), the algorithm picks a variable according to a heuristic and flips the selected variable (i.e., inverts its truth value);
• Step 3: The algorithm repeats Step 2 until it finds a complete assignment that satisfies the formula F, or the number of steps performed exceeds the preset step limit;
• Step 4: Once the search process terminates, if the algorithm has found an assignment that satisfies F, it reports this assignment; otherwise, it reports 'Unknown'.

The pseudo code for a typical stochastic local search algorithm for SAT can be seen in Algorithm 1, where α is a variable assignment and G is a set of variables of the given formula F.

Greedy mode and diversification mode

SLS algorithms for solving SAT usually alternate between two modes [36]: the greedy mode and the diversification mode. In the greedy mode, SLS algorithms prefer to flip variables that reduce the number of unsatisfied clauses. In the diversification mode, SLS algorithms tend to explore the search space in order to better escape local optima, so a random strategy is usually employed to accomplish this task. Li et al. proposed a heuristic based on the concept of the promising decreasing variable (PDV), and developed an SLS solver, G2WSAT [37], using this PDV-based heuristic. The PDV-based heuristic has been widely used by winning SLS solvers in international SAT Competitions (e.g., G2WSAT [37], adaptG2WSAT [38], gNovelty+ [39], Sparrow [40] and EagleUP [41]). Among these solvers, Sparrow [40] achieves the best performance on SAT instances. Compared with the improvements in the greedy mode, more recent work has focused on improving heuristics in the diversification mode. Most such improvements belong to the Novelty family, for instance Novelty [42], Novelty+ [43], Novelty++ [37], Novelty+p [38] and adaptNovelty+ [44]. Unlike the Novelty series, the probability-distribution-based heuristic used by the Sparrow solver [40] is a recent breakthrough in the diversification mode.

Review of configuration checking heuristics

As mentioned before, the cycling problem is a major bottleneck limiting the performance of SLS algorithms, and configuration checking (CC) was proposed for handling this problem. CC was originally proposed to improve the performance of SLS algorithms for the minimum vertex cover problem. Owing to its generality, the CC heuristic has also led to an SLS solver called Swcc for solving SAT. According to the experiments reported in the literature [14], the performance of Swcc on random 3-SAT instances exceeds that of TNM [45], the winning solver of the random track of the SAT competition, which indicates from a practical perspective that the CC heuristic is effective at dealing with the cycling problem. The most important concept in CC heuristics is the definition of the configuration, to which we return after the following sketch.
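A minimal Python rendering of the four-step framework above (Algorithm 1); pick_variable stands in for the heuristic of Step 2 and is a placeholder, not the paper's pseudo code.

```python
import random

def sls_solve(clauses, variables, pick_variable, max_steps=10**6):
    """Skeleton of the generic SLS loop: random initial assignment,
    then flip one heuristically chosen variable per search step."""
    alpha = {x: random.choice((True, False)) for x in variables}   # Step 1
    for _ in range(max_steps):                                     # Steps 2-3
        unsat = [c for c in clauses
                 if not any(alpha[abs(l)] == (l > 0) for l in c)]
        if not unsat:
            return alpha                 # Step 4: satisfying assignment found
        x = pick_variable(alpha, unsat)  # heuristic-specific choice
        alpha[x] = not alpha[x]          # flip the selected variable
    return None                          # Step 4: report 'Unknown'
```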
In the context of the SAT problem, the CC heuristic defines a configuration for each variable appearing in the formula F, which captures the circumstance information of the corresponding variable. Formally, the configuration of a variable x is a truth-valued vector, denoted by configuration(x). Cai et al. gave the first definition of configuration, where configuration(x) is composed of the truth values (i.e., true or false) of all neighboring variables of x (i.e., the variables in N(x)) [14]. The core idea of the CC heuristic is to avoid flipping any variable whose configuration has not changed since its last flip [14]. In the literature [33], Luo et al. refined this notion and gave a new definition of configuration based on clause states. The experiments reported there showed that Swqcc, with the newly defined configuration, performs better than Swcc, which adopts the original definition. This paper also focuses on the clause-states-based configuration, so we give its formal definition as follows.

Definition 1. Given a formula F in CNF and a complete assignment α, the configuration of a variable x is a vector, denoted by configuration(x), composed of the states (i.e., satisfied or unsatisfied) of the clauses in the set CL(x) of all clauses in which the variable x appears.

In [33], Luo et al. not only redefined the configuration but also quantified the changes of configuration, proposing the quantitative configuration checking (QCC) heuristic. The QCC heuristic uses the notation confvariation(x) to denote the number of configuration changes since the last flip of the variable x [33]. For each variable x, the initial value of confvariation(x) is 1; the specific update operations for confvariation(x) can be found in [33]. It is worth noting that the literature [33] also applies a smoothing mechanism to confvariation(x); based on preliminary experiments, we do not smooth confvariation(x) in this paper. As mentioned in [14], previous heuristics never considered the circumstance information of a variable when selecting a variable to be flipped, taking only a number of variable properties (e.g., score [21], break [46] and age [47]) into account. The family of CC heuristics, by contrast, considers both the variable properties and the circumstance information of the variables when selecting a variable to be flipped. This is the essential difference between the family of CC heuristics and the previous heuristics.

Combining the quantitative configuration checking heuristic with the aspiration mechanism

Although the QCC heuristic improves on the original CC heuristic, according to the implementation details described in [33] it, like the original CC heuristic, still makes SLS algorithms ignore a number of variables with relatively large score(x) when reaching local optima. The aspiration mechanism is an effective way to handle this problem, and has been successfully applied to tabu search and to the original CC heuristic. The aspiration mechanism renders the CC strategy more flexible by allowing variables with large scores to be flipped. Without aspiration, variables may be flipped mistakenly in the diversification mode.
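One plausible way to maintain the clause-states-based configuration and the confvariation counters of Definition 1 is sketched below. The exact update operations are specified in [33], so the incremental bookkeeping and the reset value after a flip are assumptions of this illustration.

```python
def clause_state(clause, alpha):
    """True iff the clause is satisfied under alpha."""
    return any(alpha[abs(l)] == (l > 0) for l in clause)

def on_flip(x, alpha, clauses, CL, confvariation):
    """Illustrative bookkeeping after flipping x: whenever the state of a
    clause containing x changes, the clause-states configuration of every
    other variable in that clause has changed, so its confvariation counter
    is incremented; the flipped variable's own counter restarts (the reset
    value 0 is an assumption, not taken from [33])."""
    before = {i: clause_state(clauses[i], alpha) for i in CL[x]}
    alpha[x] = not alpha[x]
    for i in CL[x]:
        if clause_state(clauses[i], alpha) != before[i]:
            for l in clauses[i]:
                if abs(l) != x:
                    confvariation[abs(l)] += 1
    confvariation[x] = 0
```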
Such mistaken flips delay the local search from moving to promising regions of the search space; the aspiration mechanism corrects them and thus avoids this stagnation [19]. In this work, we focus on combining the QCC heuristic with the aspiration mechanism in an effective way, and propose a new heuristic dubbed QCCA (Quantitative Configuration Checking with Aspiration). In the QCCA heuristic, an important concept is the significant decreasing (SD) variable [35]. Due to its importance, we give its formal definition.

Definition 2. Given a formula F in CNF and a complete assignment α, a variable x is an SD variable if and only if score(x) ⩾ κ, where the threshold κ is a relatively large integer.

In this work, when solving random 3-SAT instances, the threshold κ is set to the average clause weight w̄; when solving other instances, the threshold κ is set to 2. Since the QCCA heuristic is derived from the QCC heuristic, the concepts and definitions used in the QCCA heuristic are consistent with those of the QCC heuristic. According to the literature [33], the QCC heuristic maintains an important set G of candidate variables; in this paper, the QCCA heuristic maintains the candidate variable set G in the same way as the QCC heuristic does. Moreover, to be consistent with the QCC heuristic, the QCCA heuristic selects the variable to be flipped in either the greedy mode or the diversification mode. Which mode is used depends on whether the candidate variable set G and the SD variable set are empty: if G or the SD variable set is non-empty, the QCCA heuristic works in the greedy mode; otherwise, it works in the diversification mode. The two modes work as follows.

1. In the greedy mode, the QCCA heuristic first checks whether G is empty. If G is non-empty, it selects the variable with the largest score in G to be flipped; otherwise (G is empty), it activates the aspiration mechanism, i.e., it selects the SD variable with the largest score to be flipped. If more than one variable has the largest score, the one with the maximum confvariation is selected.

2. In the diversification mode, the QCCA heuristic first randomly picks an unsatisfied clause c, and then selects the variable with the largest confvariation in c to be flipped. If more than one variable has the largest confvariation, the least recently flipped one is selected.

Smoothed clause weighting method

The clause weighting method, especially the smoothed clause weighting method, significantly improves the performance of SLS algorithms on SAT instances, and has been adopted by many advanced SLS solvers [40,41,48,51]. AspiSAT adopts the smoothed clause weighting (SCW) method used by Swqcc, which resembles the SAPS weighting method [48]. In the SCW method, each clause has a weight, weight(c), which is a positive integer. In the initialization phase, the weight of each clause is set to 1. Whenever the SLS algorithm activates the SCW method, the weight of each unsatisfied clause is increased by 1. In addition, the SCW method applies a smoothing mechanism periodically.
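Before detailing the SCW smoothing rule, the two-mode QCCA selection just described can be summarized in Python as follows; the data structures (score, confvariation and last_flip maps) are assumed representations, not the paper's.

```python
import random

def qcca_pick(G, sd_vars, unsat_clauses, score, confvariation, last_flip):
    """Variable selection of the QCCA heuristic, as described above.
    G: candidate variable set; sd_vars: SD variables (score >= kappa)."""
    if G:                                    # greedy mode, CC-respecting
        best = max(score[x] for x in G)
        ties = [x for x in G if score[x] == best]
        return max(ties, key=lambda x: confvariation[x])
    if sd_vars:                              # greedy mode, aspiration
        best = max(score[x] for x in sd_vars)
        ties = [x for x in sd_vars if score[x] == best]
        return max(ties, key=lambda x: confvariation[x])
    c = random.choice(unsat_clauses)         # diversification mode
    best = max(confvariation[abs(l)] for l in c)
    ties = [abs(l) for l in c if confvariation[abs(l)] == best]
    return min(ties, key=lambda x: last_flip[x])  # least recently flipped
```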
The main rule used by the SCW smoothing mechanism to maintain clause weights is as follows.

Rule 1. When the average clause weight w̄ exceeds a fixed threshold δ, the weight of each clause in the formula is smoothed: for each clause c ∈ C(F),

weight(c) := ⌊γ · weight(c)⌋ + ⌈(1 − γ) · w̄⌉,

where γ is a real-valued number with 0 < γ < 1.

In the AspiSAT algorithm, AspiSAT activates the SCW method when the algorithm reaches a local optimum.

AspiSAT algorithm

In this section, we present a new SLS algorithm dubbed AspiSAT, which is based on the QCCA heuristic. We list the pseudo code of the AspiSAT algorithm in Algorithm 2 and describe its details as follows. In the initialization phase, the algorithm randomly generates a complete assignment α as the initial solution, and the weight of each clause is set to 1. Then, for each variable x, the algorithm calculates score(x) according to the initial assignment α. Also, for each variable x, the property confvariation(x), which measures the frequency of changes of x's configuration, is initialized to 1. After that, the algorithm puts all variables with score > 0 into the candidate variable set G. The set G is maintained during the search process (Lines 14 and 20 in Algorithm 2), and flipping a variable in G decreases the total weight of all unsatisfied clauses. When the initialization phase is complete, the algorithm performs the search process in an iterative manner: it selects and flips a variable in each search step (also known as an iteration) until it finds a satisfying assignment or the number of search steps exceeds the step limit. In each search step, the algorithm selects the variable to be flipped according to the selection mechanisms of the greedy and diversification modes of the QCCA heuristic. It is worth noting that when, in the current search step, the candidate variable set G and the SD variable set are both empty, the algorithm is considered to have reached a local optimum and activates the SCW method to update the clause weights. After the execution of the SCW method, the algorithm selects a variable to be flipped in the diversification mode. In each step, the algorithm flips the selected variable and then updates the candidate variable set G as well as the variable properties, such as score and confvariation. Once the search process has finished, if the algorithm has found a satisfying assignment, it reports this assignment as a solution to the formula; otherwise, it reports 'Unknown'.

Experimental evaluation of the AspiSAT algorithm

In this section, we first briefly describe the testing benchmarks and the experimental setup. Then, we evaluate the performance of AspiSAT, Swqcc [33], Sparrow [40] and gNovelty+GCwa [52] on random 3-SAT instances and structured instances. The experiment is divided into four parts; the last two parts are described right after the following sketch. In the first part, we compare the performance of AspiSAT, Swqcc, Sparrow and gNovelty+GCwa on large-scale random 3-SAT instances (2500 ⩽ #var ⩽ 50000) from the random track of the international SAT competition. In the second part, we compare the performance of AspiSAT, Swqcc and Sparrow on huge-scale random 3-SAT instances (55000 ⩽ #var ⩽ 100000).
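A sketch of the SCW update described above: every unsatisfied clause is bumped at activation, followed by the Rule 1 smoothing whenever the average weight exceeds δ. The in-place list representation of clause weights is an assumption of this illustration.

```python
import math

def scw_update(weights, unsat_idx, delta, gamma):
    """Smoothed clause weighting (SCW), activated at a local optimum:
    bump every unsatisfied clause by 1; then, if the average clause weight
    exceeds the threshold delta, smooth every clause by Rule 1:
        weight(c) <- floor(gamma * weight(c)) + ceil((1 - gamma) * w_bar)."""
    for i in unsat_idx:
        weights[i] += 1
    w_bar = sum(weights) / len(weights)
    if w_bar > delta:
        weights = [math.floor(gamma * w) + math.ceil((1 - gamma) * w_bar)
                   for w in weights]
    return weights
```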
In the third part, in order to show the significant performance gap between AspiSAT and Swqcc, we compare the performance of AspiSAT and Swqcc on larger-scale random 3-SAT instances (110000 ⩽ #var ⩽ 150000). In the fourth part, we compare the performance of AspiSAT and Sattime [53] on satisfiable structured instances; we note that Sattime is well known as an efficient SLS solver for structured SAT instances. Finally, we summarize and analyze the results of these comparative experiments.

Testing benchmarks

In this experiment, we set up four testing benchmarks. The first benchmark contains the large-scale random 3-SAT instances used in the random SAT track of the international SAT11 competition [49]; the reason why we did not adopt the medium-sized random 3-SAT instances used in the SAT competition is that, for modern SLS algorithms, those instances are too simple and can be solved quickly. The second and third testing benchmarks consist of random 3-SAT instances generated by the fixed-clause-length model: the second includes random 3-SAT instances with 55000 ⩽ #var ⩽ 100000, and the third contains random 3-SAT instances with 110000 ⩽ #var ⩽ 150000. The fourth testing benchmark is composed of selected satisfiable instances used in the crafted track of the international SAT11 competition. We do not consider unsatisfiable instances because their unsatisfiability cannot be established by any SLS algorithm. All instances in the first three testing benchmarks share the same clause-to-variable ratio of 4.2, and they are all satisfiable; hence these instances can be used to test the performance of SLS algorithms.

Competitors and experimental setup

In this experiment, the AspiSAT solver is implemented in C++. Based on preliminary experiments, for AspiSAT the smoothing threshold δ is set to 200 + (#var + 250)/500, and the parameters β and γ are set to 0.3. In order to make our comparison fair, the parameter settings used by the Swqcc solver [33] are the same as those used in the AspiSAT solver. The Sparrow solver and the Sattime solver are the versions from the international SAT11 competition [50]. It is worth noting that Sparrow [40] is one of the best-performing SLS solvers for random SAT instances, while Sattime [53] is one of the best-performing SLS solvers for structured SAT instances. The gNovelty+GCwa solver [52] is an effective local search solver for SAT. All our experiments were conducted on a workstation with an Intel(R) Core(TM) i7-2620 CPU clocked at 2.7 GHz and 7.8 GB of memory, running GNU/Linux.

Evaluation criteria

In this experiment, the evaluation criteria we used are similar to those of the international SAT competition: we compare the average run time and the success rate of each algorithm on each set of instances. We set a cutoff time for each solver run: 1000 seconds for the first testing benchmark, 2000 seconds for the second, 3000 seconds for the third, and 2000 seconds for the fourth.
For the first testing benchmark, each solver performs 100 runs on each instance; for the second, 10 runs; for the third, 1 run; and for the fourth, 10 runs per instance. A solver run is said to be successful if and only if the solver finds a satisfying assignment; otherwise it is said to have failed.

Evaluation results

Discussion of the experimental results on the first testing benchmark. Table 1 shows the performance of AspiSAT, Swqcc, Sparrow and gNovelty+GCwa on the first testing benchmark. According to the results, in terms of averaged run time, AspiSAT and Swqcc significantly outperform Sparrow and gNovelty+GCwa. The averaged run time of gNovelty+GCwa even exceeds 1000.0 seconds on 5 instance families (k3-v25000, k3-v30000, k3-v35000, k3-v40000 and k3-v50000). Similarly, in terms of success rate, AspiSAT achieves 100% on every instance family, while Swqcc has a success rate below 100% on 2 instance families (k3-v40000 and k3-v50000), and the success rates of Sparrow on 6 instance families (k3-v2500, k3-v25000, k3-v30000, k3-v35000, k3-v40000 and k3-v50000) are below 100%. As for gNovelty+GCwa, its success rates on all 10 instance families are below 100%. In particular, on the last instance family, which is also the most difficult one to solve (k3-v50000), the success rates of AspiSAT, Swqcc, Sparrow and gNovelty+GCwa are 100%, 99.8%, 73.3% and 0.0%, respectively. This shows that the performance of Sparrow and gNovelty+GCwa is significantly lower than that of AspiSAT and Swqcc. Based on the experimental results in Table 1, we conclude that AspiSAT performs best on the first testing benchmark.

Discussion of the experimental results on the second testing benchmark. We also compared the performance of AspiSAT, Swqcc and Sparrow on large-scale random 3-SAT instances (55000 ⩽ #var ⩽ 100000); the experimental results are summarized in Table 2. According to these results, AspiSAT surpasses Swqcc and Sparrow in terms of both success rate and averaged run time. From the perspective of success rate, AspiSAT achieves a success rate higher than 98% on every instance family of the second testing benchmark (there are 10 instance families in total, and on 6 of them its success rate is 100%). Swqcc reaches a 100% success rate on only 3 instance families, and its success rate is lower than that of AspiSAT on the remaining families. The success rates of Sparrow on these instance families range from 51.6% to 96.3%, which is significantly lower than those of AspiSAT and Swqcc. In particular, on the largest instance family (k3-v100000), the success rate of AspiSAT is 98.4%, while the success rates of Swqcc and Sparrow are only 91.4% and 51.6%, respectively. In addition, in terms of averaged run time, AspiSAT needs less than 700 seconds on every instance family; the averaged run time of Swqcc ranges from 201.1 to 967.8 seconds, while that of Sparrow ranges from 573.6 to 1583.6 seconds. In particular, on the k3-v100000 instance family, the averaged run time of AspiSAT is 693.4 seconds, while those of Swqcc and Sparrow are 967.8 and 1556.4 seconds, respectively. The experimental results show that AspiSAT performs best on the second testing benchmark.
Discussion of the experimental results on the third testing benchmark. In order to better illustrate the performance gap between AspiSAT and Swqcc, and hence between the QCCA heuristic and the original QCC heuristic, we further explore the performance of our AspiSAT solver and the Swqcc solver on larger-scale random 3-SAT instances (110000 ⩽ #var ⩽ 150000). Table 3 reports the results of this empirical comparison and indicates that AspiSAT performs clearly better than Swqcc. The performance gap between AspiSAT and Swqcc in terms of both success rate and average run time shows that the improvement of the QCCA heuristic over the original QCC heuristic is substantial, which indicates the effectiveness of the aspiration mechanism. We note that the AspiSAT solver could also be combined with the probability distribution heuristic adopted by the Sparrow solver, and the performance might then be further improved.

Discussion of the experimental results on the fourth testing benchmark. It is widely recognized that, for structured SAT instances, complete solvers based on conflict-driven clause learning (CDCL) show the best performance. However, it is also very interesting to test the performance of SLS solvers on structured SAT instances, and Sattime is an efficient SLS solver for such instances. To demonstrate the robustness of the AspiSAT solver, we compare the performance of AspiSAT, Swqcc and Sattime on the fourth testing benchmark. According to the experimental results reported in Table 4, AspiSAT clearly achieves performance comparable to Sattime on the fourth testing benchmark, and even performs better than Sattime on a number of instance families. Comparing AspiSAT with Swqcc, the AspiSAT solver performs better than or comparably to Swqcc. More specifically, out of 7 instance families in total, AspiSAT achieves a better success rate than Swqcc on 3 instance families, and the same success rate on the other 4. The experimental results show that the QCCA heuristic also achieves improvements over the original QCC heuristic on structured SAT instances. We note that Sattime applies reasoning mechanisms before local search when solving structured SAT instances; in order to better isolate the effect of the QCCA heuristic, we did not integrate reasoning mechanisms into the implementation of AspiSAT. By combining effective reasoning mechanisms, the performance of AspiSAT on structured SAT instances could be further enhanced.

Summary. The above experimental results show that AspiSAT performs better than Swqcc, Sparrow and gNovelty+GCwa on random 3-SAT instances, which indicates the effectiveness of the QCCA heuristic. Moreover, the QCCA heuristic improves on the original QCC heuristic on both random 3-SAT instances and structured SAT instances.

Improving the clause weighting method for the AspiSAT and Swqcc solvers

In this section, by improving the original SCW weighting method of the AspiSAT and Swqcc solvers, we develop two new SLS solvers based on AspiSAT and Swqcc, respectively, AspiPT and Ptwqcc, which show improved performance on random 5-SAT instances.
Our experimental results show that AspiPT and Ptwqcc perform significantly better than AspiSAT and Swqcc on random 5-SAT instances, while the performance of AspiPT and Ptwqcc on random 7-SAT instances is highly comparable to that of AspiSAT and Swqcc.

Probability and threshold weighting method

In this subsection, we present a probability and threshold weighting (PTW) method, which is similar to the PAWS weighting method [51]. The PTW method has two components: a weight-increasing component and a weight-decreasing component. During the search, the original PAWS method activates the weight-decreasing component at a fixed period. Unlike PAWS, the PTW method activates the weight-decreasing component in a probabilistic manner: whenever PTW is activated, it decides which component to call depending on the probability parameter sp and the threshold θ. The PTW method is executed as follows. First, a random number p is drawn. When p < sp and the number of large-weighted clauses is greater than θ, PTW performs the weight-decreasing component: for each large-weighted clause, the clause weight is decreased by 1. Otherwise, PTW executes the weight-increasing component: for each unsatisfied clause, the clause weight is increased by 1. We replace the original SCW weighting method with the proposed PTW weighting method in AspiSAT and Swqcc, resulting in new SLS solvers called AspiPT and Ptwqcc, respectively.

Empirical evaluation of AspiPT and Ptwqcc

In this section, we evaluate the performance of AspiPT, Ptwqcc, AspiSAT and Swqcc on random 5-SAT and random 7-SAT instances. The random 5-SAT testing benchmark we used contains all large-scale random 5-SAT instances from the international SAT competition (10 instances per instance family). The random 7-SAT testing benchmark includes random 7-SAT instances with #var = 150 and #var = 200 from the international SAT competition (10 instances per instance family). We now introduce the parameter settings for AspiPT and Ptwqcc. For random 5-SAT instances, the probability parameter sp in AspiPT and Ptwqcc is set to 0.75 and the threshold θ is set to 8; for random 7-SAT instances, sp is set to 0.9 and θ is set to 10. In addition, since AspiPT uses the aspiration mechanism, we set its parameter κ to 2. In our experiment, each solver performs 10 runs on each instance, and the cutoff time for each run is set to 1000 seconds. Table 5 reports the experimental results of AspiPT, Ptwqcc, AspiSAT and Swqcc on random 5-SAT and random 7-SAT instances. As can be seen, the performance of AspiPT and Ptwqcc on random 5-SAT instances is significantly better than that of AspiSAT and Swqcc. We note that Swqcc and AspiSAT cannot solve random 5-SAT instances with #var ⩾ 1250 within the cutoff time, while AspiPT and Ptwqcc can solve random 5-SAT instances with at least #var = 2000. In addition, the performance of AspiPT and Ptwqcc is highly comparable to that of AspiSAT and Swqcc on random 7-SAT instances. The experimental results show that the QCCA heuristic and the QCC heuristic cooperate well with different clause weighting methods (the SCW and PTW methods).
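The PTW update just described admits a direct sketch; as above, the list representation of clause weights is an assumption of this illustration.

```python
import random

def ptw_update(weights, unsat_idx, sp, theta):
    """Probability-and-threshold weighting (PTW): with probability sp, and
    only if more than theta large-weighted clauses (weight >= 2) exist,
    decrease every large-weighted clause by 1; otherwise increase every
    unsatisfied clause by 1."""
    large = [i for i, w in enumerate(weights) if w >= 2]
    if random.random() < sp and len(large) > theta:
        for i in large:          # weight-decreasing component
            weights[i] -= 1
    else:
        for i in unsat_idx:      # weight-increasing component
            weights[i] += 1
    return weights
```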
Conclusion

In this paper, we propose a new heuristic, QCCA, which effectively combines the QCC heuristic and the aspiration mechanism. The QCCA heuristic is able to improve the performance of SLS algorithms for solving the SAT problem. Based on the QCCA heuristic, we develop a new SLS solver, AspiSAT, for solving SAT. Extensive experiments show that the AspiSAT solver performs better than Swqcc, Sparrow and gNovelty+GCwa on a broad range of random 3-SAT instances. Moreover, for structured SAT instances, AspiSAT outperforms Swqcc in terms of success rate, and the performance of AspiSAT is comparable to that of Sattime. Furthermore, we replace the original SCW weighting method in AspiSAT and Swqcc with the PTW weighting method, resulting in two new SLS solvers dubbed AspiPT and Ptwqcc, respectively. Our experiments show that AspiPT and Ptwqcc perform significantly better than AspiSAT and Swqcc on random 5-SAT instances. These results show that the QCCA heuristic and the QCC heuristic are able to cooperate well with different clause weighting methods. For future work, we plan to integrate the probability distribution heuristic into AspiSAT in order to further improve its performance. We would also like to redefine the concept of configuration, in order to propose new and more effective heuristics, based on the QCC and QCCA heuristics, for solving other types of SAT instances.
8,092
2020-04-23T00:00:00.000
[ "Computer Science" ]
OrthoDB in 2020: evolutionary and functional annotations of orthologs

Abstract: OrthoDB provides evolutionary and functional annotations of orthologs, inferred for a vast number of available organisms. OrthoDB is leading in the coverage and genomic diversity sampling of Eukaryotes, Prokaryotes and Viruses, and the sampling of Bacteria is further set to increase three-fold. The user interface has been enhanced in response to the massive growth in data. OrthoDB provides three views on the data: (i) a list of orthologous groups related to a user query, which are now arranged to visualize their hierarchical relations, (ii) a detailed view of an orthologous group, now featuring a Sankey diagram to facilitate navigation between the levels of orthology, from more finely-resolved to more general groups of orthologs, as well as an arrangement of orthologs into an interactive organism taxonomy structure, and (iii) a newly added gene-centric view, showing the gene functional annotations and the pair-wise orthologs in example species. The OrthoDB standalone software for delineation of orthologs, Orthologer, is freely available. Online BUSCO assessments and mapping of user-uploaded data to OrthoDB enable interactive exploration of related annotations and generation of comparative charts. OrthoDB strives to predict orthologs from the broadest coverage of species, as well as to extensively collate available functional annotations and to compute evolutionary annotations such as evolutionary rate and phyletic profile. OrthoDB data can be accessed via SPARQL RDF or the REST API, downloaded, or browsed online from https://orthodb.org.

INTRODUCTION

OrthoDB is a leading resource of evolutionary and functional annotations of orthologs (1). Orthology enables formulation of the most specific functional hypothesis for descendant genes (2,3), and is therefore the preferred way of tentatively characterizing genes in newly sequenced organisms. Orthology is also the cornerstone of comparative evolutionary studies. Initially coined for a pair of species, the term orthologs referred to genes that derive from a single gene in the last common ancestor (LCA) of the two species (4). Generalization of the term to more than two species, however, required the LCA to be specified and led to the concepts of groups of orthologs, or orthogroups (5), and levels of orthology (6,7). Although orthologs are clearly defined evolutionarily as genes descending from a single gene of the LCA of the considered organisms, practical delineation of orthologs in extant species can be challenging, as there are limits to the accuracy with which gene and species genealogies can be reconstructed. Large-scale delineation of gene orthology is an even more demanding and challenging task, as there is a trade-off between accuracy and scalability of methods for making evolutionary inferences. The demand for automated methods of ortholog prediction is growing, and the challenges are exemplified by the numerous approaches proposed (6,8-16). Orthologer, the software behind the delineation of orthologs in OrthoDB, is freely available for standalone computations. The concept of orthology is inherently hierarchical, as each phylogenetic clade or subclade of species has a distinct common ancestor, leading to more finely-resolved orthologs for more closely related species. OrthoDB has explicitly emphasized this since its inception (7), defining levels of orthology at each major radiation of the species taxonomy.
Moving beyond the original pair-wise view of orthology is, however, not straightforward, and many users find navigation to the most relevant level of orthology confusing. This prompted us to enhance the OrthoDB web interface to improve usability and simplify access to the ever-growing database. Evolutionary and functional annotations of orthologs, predicted across thousands of genomes from all forms of life including viruses, remain a key feature of OrthoDB. Functional annotations of genes were extensively collated from major resources including UniProt (17) and NCBI (18) gene records, as well as InterPro (19) and Gene Ontology (GO) (20). Notably, we collated and made searchable references to KEGG (21) pathways and to Online Mendelian Inheritance in Man (OMIM®) (22), which link to human diseases. All data were processed to assemble consolidated and non-redundant per-gene annotation records, presented as short one-line descriptions. Consequent aggregation of the corresponding gene-level annotations to the orthogroup level allowed us to compile (a) one-line orthologous group descriptors that concisely but precisely outline functional knowledge in human-readable language, as described in an earlier publication (23), and (b), whenever possible, assignments of the functional categories of COG (24), GO, KEGG pathways and the enzyme EC number. Such high-level functional descriptors are informative for comparative studies and metagenomics. Functional annotation of genes is complicated and may contain errors; discordant annotations should be considered with caution, and OrthoDB makes such errors in the underlying data apparent. Evolutionary annotations were computed for each orthologous group from the available genomics data and sequence alignment metrics. As detailed earlier in (23), the metrics include: the 'phyletic profile', which reflects gene universality (the proportion of species with orthologs) and duplicability (the proportion of multi-copy versus single-copy orthologs); the 'evolutionary rate', which reflects the relative constraints on protein sequence conservation or divergence; and the 'sibling groups', which reflect the sequence uniqueness of the orthologs. The universality of a gene family hints at a functional load that is widely necessary and basal, while lineage-restricted genes may underlie lineage-specific adaptations. Duplicability is also indicative of the type of molecular function, e.g. members of a signal-transduction pathway or a protein complex may evolve under single-copy control (25). The 'evolutionary rate' is a relative measure of sequence divergence, where faster-than-average evolution may indicate positive selection. The 'sibling groups' allow navigation to gene families possibly having similar molecular functions. These annotations, which provide an evolutionary perspective, remain unique to OrthoDB. The OrthoDB resource is public, including both the data and the data processing software. Optional registration allows authenticated users to upload their own proteomic data, for example from freshly sequenced genomes, to perform online BUSCO (26) analysis and to map the data to the current OrthoDB release. This enables the user to map existing functional annotations to the new genes, as well as to generate user-tailored comparative charts depicting the total gene count, the fraction of common genes, the fraction of the most conserved single-copy genes, etc.
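As a toy illustration of the phyletic-profile metrics described above, the following computes universality and duplicability from a species-to-copy-number map; OrthoDB's exact formulas may differ, so treat this as a sketch under those assumptions.

```python
def phyletic_profile(orthogroup_counts, n_species_in_clade):
    """Toy phyletic-profile metrics for one orthogroup.
    orthogroup_counts: {species: number of ortholog copies in the group}.
    Universality  = fraction of clade species with at least one ortholog.
    Duplicability = fraction of those species carrying multiple copies."""
    present = [n for n in orthogroup_counts.values() if n >= 1]
    universality = len(present) / n_species_in_clade
    duplicability = (sum(1 for n in present if n > 1) / len(present)
                     if present else 0.0)
    return universality, duplicability

# Hypothetical orthogroup sampled in a 5-species clade.
counts = {"sp1": 1, "sp2": 2, "sp3": 1, "sp4": 0, "sp5": 3}
print(phyletic_profile(counts, 5))  # (0.8, 0.5)
```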
COVERAGE OF ORGANISMS

In 2020, OrthoDB continues to provide the worldwide leading coverage of organisms. It notably includes Viruses and is set to further increase the sampling of Bacteria three-fold in release 11. The statistics of the analyzed organisms, their taxonomic relations, and the genome assembly accession numbers are shown and searchable under the 'Advanced' section of the user interface. The orthology levels, referring to the last common ancestors from which extant orthologs evolved, are defined according to the NCBI Taxonomy (27). Protein-coding gene translations are retrieved mostly from RefSeq and NCBI complete genomes (28). The number of sequenced genomes continues to grow exponentially, and it is important to cover the emerging diversity of sequences uniformly. This will not only facilitate studies of these organisms, but will also empower resolution of complex ortholog genealogies with multiple gene duplications and losses. Sampling of species diversity was shown to be a major factor affecting the accuracy of inferred gene orthology, besides the quality of the underlying genomes and their annotations (29). OrthoDB collects available complete genomes, identifies well-sampled taxonomic units having over 96% pair-wise genomic identity using MASH (30), and then selects the most annotated and complete representative genome for each taxonomic unit.

WEB INTERFACE

The OrthoDB user interface has been enhanced to respond to the massive growth in available data. There are now three views on the data: (i) a sorted list of ortholog groups resulting from a free-text search of all identifiers and textual description records for the user query, (ii) a detailed view of an ortholog group and (iii) a gene-centric view. Since the last OrthoDB publication: (i) The retrieved list of ortholog groups is now arranged to visualize their hierarchical relations. Previously, when a user searched by a keyword, an identifier or a complex query, OrthoDB returned a list of ortholog groups sorted only by textual relevance (i.e., a count of the number of query matches per group). The list remained unsorted if a single gene identifier pinpointed one gene in one organism, e.g. if a UniProt id or NCBI gid was queried. Since even simple queries by a gene identifier return several ortholog groups corresponding to different levels of orthology, for example across Eukaryotes, vertebrates, mammals etc., such an unsorted list can be confusing. In the revised interface we opted for double sorting, first by textual relevance, then by taxonomic level, from the more general to the more specific. The hierarchy is visualized by a ladder-like shifted list of ortholog group banners. This results in a more biologically intuitive output when OrthoDB is queried by a particular gene identifier, for example via a URL coming from NCBI linkouts (Figure 1B). (ii) The detailed view of an ortholog group now features a Sankey diagram of evolutionarily related groups, where the flow width is proportional to the number of genes in each group. This naturally facilitates visual navigation between the levels of orthology (Figure 1C). We also arranged the member genes by organism taxonomy (Figure 1D), which can be interactively expanded or collapsed. There are now many ortholog groups containing hundreds to thousands of genes, making them cumbersome to browse. Since many users are only interested in a small subset of available species, or in specific taxonomic clades, this rearrangement is critical for OrthoDB usability: it hides most of the unneeded data, while remaining intuitive and transparent for orthologs in other organisms.
The default choice of expanded species can be customized by users in the 'Advanced' settings panel (Figure 1E). Each gene is further annotated with a wealth of collated functional descriptions and cross-references to other public resources; these are hidden by default and can be viewed by clicking on '>>' (where the size of the chevrons reflects the amount of available annotations). (iii) We added a gene-centric view, providing a list of pair-wise orthologs in example species in addition to the available annotations. Although OrthoDB has explicitly emphasized groups of orthologs and their hierarchical nature since its inception (7), the notion of pair-wise orthology (i.e. between two species) is commonly used, and we hope this gene-centric view will help bridge the two concepts. The pair-wise orthologs in example species are arranged by the phylogenetic distance to the organism in which the gene matching the query was found. They are in turn linked to the corresponding groups of orthologs at the closest level of orthology, to enable navigation to orthologs in the other organisms.

ORTHOLOGER SOFTWARE

Orthologer, the software used to delineate orthologs in OrthoDB, is freely available for standalone computations (see https://www.orthodb.org/orthodb_userguide.html#standalone-orthologer-software for details). It is based on pair-wise assessments of protein sequence homology between complete genomes using MMseqs2 (31) and subsequent clustering, as previously described in (23). The latest update to Orthologer adds an optional mode to perform hierarchical computation of orthology guided by a user-provided organism taxonomy.

DATA AVAILABILITY

As for previous versions of OrthoDB, we provide data files for bulk download, one file per level of orthology, as well as the underlying amino acid gene translations. To retrieve substantial subsets of OrthoDB data, or to access the database programmatically, we provide a REST API, documented at https://www.orthodb.org/orthodb_userguide.html#api, that returns data in JSON, FASTA or TAB formats. All data are distributed under the Creative Commons Attribution 3.0 License from http://www.orthodb.org/. The RDF SPARQL interface was introduced in 2016 and is gaining momentum. Adopting the URIs of UniProt proteins and Ensembl genes, so as to be compatible with both the UniProt and Ensembl SPARQL endpoints, it enables very elaborate queries and provides a number of clickable links to Ensembl Genomes, NCBI, InterPro and GO resources. Users can also navigate to OrthoDB records by following links from the FlyBase 'Orthologs' section, the UniProt 'Phylogenomic databases' section, or the NCBI 'Additional links / Gene LinkOut' section.

CONCLUSIONS AND PERSPECTIVES

The growing number of sequenced genomes increases the power of comparative analyses, but it also presents challenges regarding the scalability of methods and the presentation of data to end users. OrthoDB strives to uniformly sample the available genomic space and to refine the accuracy of ortholog delineations.
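As an illustration of the REST API mentioned in the Data Availability section, a minimal Python client is sketched below. The endpoint path and parameter names are assumptions modelled on the documented interface and should be verified against https://www.orthodb.org/orthodb_userguide.html#api before use.

```python
import json
import urllib.parse
import urllib.request

BASE = "https://www.orthodb.org"

def search_groups(query, level=None):
    """Free-text search for ortholog groups; returns the parsed JSON reply.
    The '/search' path and 'query'/'level' parameter names are assumed."""
    params = {"query": query}
    if level is not None:
        params["level"] = level  # NCBI taxon id of the orthology level
    url = f"{BASE}/search?{urllib.parse.urlencode(params)}"
    with urllib.request.urlopen(url) as r:
        return json.loads(r.read().decode("utf-8"))

# Hypothetical example: groups matching 'p53' at the vertebrate level (taxid 7742).
# print(search_groups("p53", level=7742))
```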
2,781.8
2020-11-16T00:00:00.000
[ "Biology", "Computer Science", "Environmental Science" ]
Pointwise control of the linearized Gear-Grimshaw system

In this paper we consider the problem of controlling pointwise, by means of a time-dependent Dirac measure supported at a given point, a coupled system of two Korteweg-de Vries equations on the unit circle. More precisely, by means of spectral analysis and Fourier expansion we prove, under general assumptions on the physical parameters of the system, a pointwise observability inequality which leads to pointwise controllability when two control functions are used. In addition, with a uniqueness property proved for the linearized system without control, we are also able to show pointwise controllability when only one control function acts internally. In both cases we can find, under some assumptions on the coefficients of the system, the sharp time of controllability.

Introduction

Wave phenomena occur in many branches of mathematical physics, and due to their wide practical applications this has become one of the most important scientific research areas. During the past several decades, many scientists have developed mathematical models to explain wave behavior. The Korteweg-de Vries (KdV) equation u_t + u_xxx + uu_x = 0 was first proposed as a model for the propagation of unidirectional, one-dimensional, small-amplitude long water waves in a channel. A few of its many other applications include internal gravity waves in a stratified fluid, waves in a rotating atmosphere and ion-acoustic waves in a plasma. Starting in the latter half of the 1960s, the mathematical theory of such nonlinear dispersive wave equations came to the fore as a major topic within nonlinear analysis. Since then, physicists and mathematicians have been led to derive sets of equations describing the dynamics of waves in specific physical regimes, and much effort has been expended on various aspects of the associated initial and boundary value problems. For instance, since the first coupled KdV system was proposed by Hirota and Satsuma [18,19], coupled KdV systems have been studied extensively and some important coupled KdV models have been derived. In particular, general coupled KdV models have been applied in different fields, such as to waves in a shallow stratified liquid, described by the system (1.1) (see the sketch after this introduction), where u = u(x, t) and v = v(x, t) are real-valued functions of the real variables x and t, and a_1, a_2, a_3, b_1, b_2 and r are real constants with b_1 > 0 and b_2 > 0. System (1.1) was proposed by Gear and Grimshaw [13] as a model describing strong interactions of two long internal gravity waves in a stratified fluid, where the two waves are assumed to correspond to different modes of the linearized equations of motion. It has the structure of a pair of KdV equations with linear coupling terms and has been the object of intensive research in recent years. We refer to [3] for an extensive discussion of the physical relevance of the system in its full structure.

1.1. Setting of the problem. In this paper we are mainly concerned with the study of the pointwise controllability of the linearized Gear-Grimshaw system (1.2) posed on the unit circle T (see the sketch after this introduction), where a, c, d, r are given positive constants, δ_{x_0} denotes the Dirac delta function centered at a given point x_0 ∈ T, and f, g are the control functions. More precisely, the purpose is to determine whether one can force the solutions of these systems to have certain desired properties by choosing appropriate control inputs.
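The display equations for (1.1) and (1.2) are not reproduced above. As a point of reference only, the following LaTeX sketch records the standard form of the Gear-Grimshaw system found in the literature, together with a linearized periodic system consistent with the parameters (a, c, d, r) and the Dirac controls described in the text. Both displays are assumptions, not quotations from this paper.

```latex
% Standard nonlinear Gear--Grimshaw system (assumed form of (1.1)):
\begin{align*}
  u_t + u u_x + u_{xxx} + a_3 v_{xxx} + a_1 v v_x + a_2 (uv)_x &= 0,\\
  b_1 v_t + r v_x + v v_x + v_{xxx} + b_2 a_3 u_{xxx} + b_2 a_2 u u_x
    + b_2 a_1 (uv)_x &= 0.
\end{align*}
% Assumed form of the linearized, pointwise-controlled system (1.2) on T:
\begin{align*}
  u_t + u_{xxx} + a v_{xxx} &= f(t)\,\delta_{x_0}(x),\\
  c v_t + r v_x + v_{xxx} + d u_{xxx} &= g(t)\,\delta_{x_0}(x),
    \qquad x \in \mathbb{T},\; t > 0.
\end{align*}
```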
Consideration will be given to the following fundamental problem arising in control theory, as proposed by Haraux in [15].

Pointwise control problem: Given x_0 ∈ T, T > 0 and (u_0, v_0), (u_T, v_T) in L^2(T) × L^2(T), can we find appropriate f(t) and g(t) in a certain space such that the corresponding solution (u, v) of (1.2) satisfies u(T) = u_T and v(T) = v_T?

If we can always find control inputs driving the system from any given initial state (u_0, v_0) to any given terminal state (u_T, v_T), then the system is said to be pointwise controllable.

1.2. State of the art. As far as we know, the internal controllability problem for the system (1.1) remains open. By contrast, the study of its boundary controllability properties is considerably more developed. Indeed, the first result was obtained in [28], for the model posed on a periodic domain with r = 0. In this case, a diagonalization of the main terms allows one to decouple the corresponding linear system into two scalar KdV equations and to use previous results available in the literature. Later on, Micu et al. [29] proved the local exact boundary controllability property for the nonlinear system posed on a bounded interval. Their result was improved by Cerpa and Pazoto [9] and by Capistrano-Filho et al. [7]. Considering a different set of boundary conditions, the same boundary control problem was also addressed by the authors in [8]. We note that the results mentioned above were first obtained for the corresponding linearized systems by applying the Hilbert Uniqueness Method (HUM) due to J.-L. Lions [25], combined with some ideas introduced by Rosier in [31]. In this case the problem is reduced to proving the so-called "observability inequality" for the corresponding adjoint system; the controllability result is then extended to the full system by means of a fixed point argument. The internal stabilization problem has also been addressed; see, for instance, [5,10,30] and the references therein. Since controllability and stabilization are closely related, one may expect some of the available results to have counterparts in the context of the control problem, but this issue is open. In particular, when the model is posed on a periodic domain, Capistrano-Filho et al. [5] designed a time-varying feedback law and established the exponential stability of the solutions in Sobolev spaces of any positive integral order by using a Lyapunov approach. This extends an earlier theorem of Dávila [10], also obtained in H^s(T) for s ≤ 2. The proof follows the ideas introduced in [24] for the scalar KdV equation, using the infinite family of conservation laws of that equation. These conservation laws lead to the construction of suitable Lyapunov functions that give the exponential decay of the solutions. In [5] the Lyapunov approach was made possible by the results established by Dávila and Chavez [11], who proved that, under suitable conditions on the coefficients, the system also has an infinite family of conservation laws.

1.3. Main results. As mentioned before, no results are available in the literature on the internal controllability of the Gear-Grimshaw system. In this work we use spectral analysis and Fourier series to prove some pointwise controllability results for the system (1.2). Fourier series are considered to be very useful in linear control theory (see, e.g., [32] and its references).
In particular, a classical generalization of Parseval's equality, given by Ingham [20], and its many recent variants are very efficient in solving many control problems where other methods do not seem to apply. An outline of this theory is presented in [1,23,26]. We also prove some new results concerning the use of harmonic analysis in the framework of dispersive systems. In this spirit, we derive the controllability of the linearized Gear-Grimshaw system posed on the unit circle T. As was pointed out earlier by Haraux and Jaffard [15,16,17] in their study of some other systems, for non-periodic boundary conditions the controllability properties may depend heavily on the location of the observation or control point. One of our main results provides a sharp positive answer to the controllability issue raised at the beginning of this introduction.

Theorem 1.1. For almost all quadruples (a, c, d, r) ∈ (0, ∞)^4 the following property holds. For any fixed x_0 ∈ T, T > 0 and (u_0, v_0), (u_T, v_T) ∈ L²(T) × L²(T), there exist control functions f, g ∈ L²_loc(R) such that the solution of (1.2) satisfies the final conditions u(T) = u_T and v(T) = v_T.

We prove this theorem by applying the Hilbert Uniqueness Method (HUM) due to J.-L. Lions [25] (see also Dolecki and Russell [12]), which reduces the controllability property to the observability of the homogeneous dual problem

u_t + u_{xxx} + a v_{xxx} = 0,
c v_t + r v_x + d u_{xxx} + v_{xxx} = 0.   (1.3)

More precisely, Theorem 1.1 will be obtained as a consequence of

Theorem 1.2. For almost all quadruples (a, c, d, r) ∈ (0, ∞)^4 the following properties hold, with H := L²(T) × L²(T).

(i) Given any (u_0, v_0) ∈ H, the system (1.3) has a unique solution (u, v) ∈ C_b(R, H), and the linear map (u_0, v_0) ↦ (u, v) is continuous.

(ii) The energy of the solution, defined by the formula E(t) := ½ ∫_T (d|u(t, x)|² + ac|v(t, x)|²) dx, does not depend on t ∈ R.

(iii) For every solution and x_0 ∈ T the functions u(·, x_0) and v(·, x_0) are well defined in L²_loc(R).

(iv) For every non-degenerate bounded interval I there exist two positive constants α, β such that

α ‖(u_0, v_0)‖²_H ≤ ∫_I (|u(t, x_0)|² + |v(t, x_0)|²) dt ≤ β ‖(u_0, v_0)‖²_H

for all solutions of (1.3) and for all x_0 ∈ T.

By applying a general method [22], analogous to HUM, Theorem 1.2 will also imply the pointwise exponential stabilizability of (1.2) (Theorem 1.3), including the well-posedness of the corresponding closed-loop problem and a uniform exponential decay estimate valid for all solutions and all t ≥ 0.

Another relevant result of this work is a uniqueness theorem when only one function, u(·, x_0) or v(·, x_0), is observed. It is known that Fourier series and, in particular, Ingham type inequalities are very efficient in solving many control problems when the frequencies satisfy a uniform gap condition. However, in some control problems only some weakened gap conditions are satisfied; see, for instance, [1,4,21]. Under such assumptions we may still get some weakened observability theorems. Theorem 1.4 is obtained by applying a general result of this type.

(i) Similarly to Theorem 1.2, Theorem 1.4 implies some exact controllability and exponential stabilizability results by acting in only one of the equations.

(ii) The above results remain valid for the scalar KdV equation. We will not present the proofs because they are similar to (and simpler than) the proofs given in this paper.

(iii) If we require more regularity on the initial data, say (u_0, v_0) ∈ H²(T) × H²(T), then the results obtained for the linear system allow us to prove the local controllability of the nonlinear system by means of a fixed point argument. The proof is similar to that of [6, Theorem 2.2], and hence it will be omitted.

The plan of the present article is the following.
In Section 2 we present some known and new vectorial Ingham type theorems which will form the basis of the proofs of our observability and uniqueness theorems. Then Theorems 1.2, 1.1, 1.3 and 1.4 will be proved (in strengthened forms) in Sections 3, 4, 5 and 6, respectively.

A review of Ingham type theorems

Given a family Ω = (ω_k)_{k∈K} := {ω_k : k ∈ K} of real numbers, we consider functions of the form

x(t) = Σ_{k∈K} c_k e^{iω_k t}, t ∈ I,

where I is some given bounded interval. We recall the definition of the upper density D^+ = D^+(Ω) of Ω. For each ℓ > 0 we denote by n^+(ℓ) the largest number of exponents ω_k that we may find in an interval of length ℓ, and then we set

D^+(Ω) := lim_{ℓ→∞} n^+(ℓ)/ℓ.

It can be shown (see, e.g., [1, p. 57] or [23, p. 174]) that this limit always exists. It follows from the definition that D^+ is subadditive:

D^+(Ω_1 ∪ Ω_2) ≤ D^+(Ω_1) + D^+(Ω_2)

for any families Ω_1 and Ω_2. If Ω is uniformly separated, i.e., if

γ(Ω) := inf{|ω_k − ω_n| : k ≠ n} > 0,

then D^+ ≤ 1/γ, and hence D^+ < ∞. Henceforth, unless stated otherwise, by a complex family we mean a square summable family of complex numbers.

First we recall a classical theorem of Ingham and Beurling [20,2].

Theorem 2.1. Assume that Ω is uniformly separated, and let I be a bounded interval.
(i) The sums x(t) above are well defined in L²_loc(R) for every complex family (c_k)_{k∈K}.
(ii) There exists a positive constant β(I) such that ∫_I |x(t)|² dt ≤ β(I) Σ_{k∈K} |c_k|² for every complex family (c_k)_{k∈K}.
(iii) If |I| > 2πD^+(Ω), then there also exists a positive constant α(I) such that α(I) Σ_{k∈K} |c_k|² ≤ ∫_I |x(t)|² dt for every complex family (c_k)_{k∈K}.

In fact, the constants α(I) and β(I) depend only on the length of I. The inequalities in (ii) and (iii) are called direct and inverse inequalities, respectively.

Remark 2.2. If A : X → R and B : X → R are two nonnegative functions defined on some set X, then we will write, following Vinogradov, A ≪ B if there exists a constant c > 0 such that A(x) ≤ cB(x) for all x ∈ X, and A ≍ B if both A ≪ B and B ≪ A hold. For example, in the estimates (ii) and (iii) above the integral and the sum may be considered as real-valued functions defined on the vector space of the complex families (c_k)_{k∈K}, and we may write the inequalities in the form

∫_I |x(t)|² dt ≪ Σ_k |c_k|² and Σ_k |c_k|² ≪ ∫_I |x(t)|² dt,

respectively, if we do not want to indicate explicitly the constants α(I) and β(I).

We will need an extension of Theorem 2.1 to the more general situation where Ω has a finite upper density but is not necessarily uniformly separated. If D^+(Ω) < ∞, then we may enumerate the elements of Ω into an increasing sequence (ω_k), where k runs over a (finite or infinite) subset of Z formed by consecutive integers. Assume for simplicity that k runs over all integers. Given an arbitrary bounded interval I of length |I|, by [1, Proposition 1.4] or [23, Proposition 9.3] there exists a positive integer M such that Ω satisfies the weakened gap condition

γ_M := inf_k (ω_{k+M} − ω_k) > 0.

Fix 0 < ε ≤ γ_M arbitrarily. For each maximal chain ω_k, ..., ω_n satisfying ω_{j+1} − ω_j < ε for j = k, ..., n − 1 (note that it has at most M elements), we introduce the divided differences e_k(t), ..., e_n(t) by Newton's formulas (see [1] or [23] for more details), and we rewrite the usual exponential sums in the form

x(t) = Σ_k c_k e^{iω_k t} = Σ_k b_k e_k(t).   (2.5)

There always exists a positive constant θ such that Σ_k |b_k|² ≤ θ Σ_k |c_k|² for all complex families (c_k)_{k∈K}. On the other hand, there exists a positive constant η such that Σ_k |c_k|² ≤ η Σ_k |b_k|² for all complex families (c_k)_{k∈K} if and only if Ω is uniformly separated.

Theorem 2.3. Assume that D^+(Ω) < ∞, let I be a bounded interval of length |I| > 2πD^+(Ω), choose an arbitrary ε ∈ (0, γ_M], and introduce the functions e_k(t) according to (2.5). Then there exist two positive constants α(I) and β(I) such that

α(I) Σ_k |b_k|² ≤ ∫_I |x(t)|² dt ≤ β(I) Σ_k |b_k|²

for all complex families (c_k)_{k∈K}. As before, the constants α(I) and β(I) depend only on the length of I.

Remark 2.4. (i) Mehrenberger [27] proved that 2πD^+ is the critical length for the validity of (iii). (ii) If the sequence (ω_k) has a uniform gap γ > 0, then choosing ε ∈ (0, γ] every maximal chain is a singleton, so that b_k = c_k for all k. In this special case Theorem 2.3 reduces to Theorem 2.1.

We will also need a vectorial extension of Theorem 2.1 (i), (ii).

Theorem 2.5.
Let Ω = (ω_k)_{k∈K} be a uniformly separated family of real numbers, and let (Z_k)_{k∈K} be a uniformly bounded family of vectors in some Hilbert space H such that

∆ ≤ ‖Z_k‖²_H for all k ∈ K   (2.6)

with some constant ∆ > 0. Then:
(i) the sums Z(t) := Σ_{k∈K} c_k Z_k e^{iω_k t} are well defined in L²_loc(R; H) for every complex family (c_k)_{k∈K};
(ii) for every bounded interval I we have ∫_I ‖Z(t)‖²_H dt ≪ Σ_{k∈K} ‖Z_k‖²_H |c_k|² ≍ Σ_{k∈K} |c_k|².

Proof. This is an easy adaptation of the usual proof of Theorem 2.1 (ii); see, e.g., the proof of [23, Theorem 6.1, p. 90].

Finally, we will also need a vectorial generalization of a powerful estimation method of Haraux [14]. Let us consider a finite number of families of real numbers:

ω_{j,k}, k ∈ K_j, j = 1, ..., J,   (2.7)

and corresponding nonzero vectors Z_{j,k} in some Hilbert space H. The following proposition is a special case of [23, Theorem 6.2]:

Theorem 2.6. Assume that the families (2.7) are uniformly separated. Furthermore, assume that there exist finite subsets F_j ⊂ K_j, a bounded interval I_0, and two positive constants α′, β′ such that

α′ Σ_j Σ_{k∈K_j\F_j} |c_{j,k}|² ≤ ∫_{I_0} ‖Σ_j Σ_{k∈K_j\F_j} c_{j,k} Z_{j,k} e^{iω_{j,k}t}‖²_H dt ≤ β′ Σ_j Σ_{k∈K_j\F_j} |c_{j,k}|²   (2.8)

for all complex families vanishing on the exceptional sets F_j. Finally, assume that for each exceptional index (j, k) such that k ∈ F_j, the exponent ω_{j,k} has a positive distance from the sets {ω_{j,n} : n ∈ K_j, n ≠ k} and {ω_{ℓ,n} : n ∈ K_ℓ} for all ℓ ≠ j. Then for each bounded interval I of length > |I_0| there exist two positive constants α(I) and β(I) such that

α(I) Σ_j Σ_{k∈K_j} |c_{j,k}|² ≤ ∫_I ‖Σ_j Σ_{k∈K_j} c_{j,k} Z_{j,k} e^{iω_{j,k}t}‖²_H dt ≤ β(I) Σ_j Σ_{k∈K_j} |c_{j,k}|²

holds for all complex families (c_{j,k}).

Remark 2.7. The assumption (2.8) is stronger than the assumption (2.6) of Theorem 2.5. Indeed, applying (2.8) to one-element sums we obtain that α′/|I_0| ≤ ‖Z_{j,k}‖²_H for all j and k ∈ K_j \ F_j. Since F_1 ∪ ··· ∪ F_J is a finite set and the vectors Z_{j,k} are different from zero, we may choose two other positive constants α′′ ≤ α′ and β′′ ≥ β′ such that α′′/|I_0| ≤ ‖Z_{j,k}‖²_H ≤ β′′/|I_0| for all j and k ∈ K_j. Hence (2.6) is satisfied with ∆ = α′′/|I_0|.

Pointwise observability

Given four positive constants a, c, d, r, we consider the following system of linear partial differential equations with 2π-periodic boundary conditions:

u_t + u_{xxx} + a v_{xxx} = 0,
c v_t + r v_x + d u_{xxx} + v_{xxx} = 0.

It will be more convenient to write u(t)(x) := u(t, x), and to work on the unit circle T without boundary conditions, i.e., to rewrite our system in the form

u_t + u_{xxx} + a v_{xxx} = 0, c v_t + r v_x + d u_{xxx} + v_{xxx} = 0 on T × R, (u, v)(0) = (u_0, v_0).   (3.1)

Let us write (3.1) in the abstract form Z′ = AZ, Z(0) = Z_0, with Z := (u, v) and the linear operator

A(u, v) := −(D³u + a D³v, c^{−1}(d D³u + r Dv + D³v)), D(A) := H³(T) × H³(T),

where Z′ is the time derivative and D is the spatial derivative. The well-posedness of (3.1) in most cases follows from the following result.

Proposition 3.1. If ad ≠ 1, then A is an anti-adjoint operator in H for the Euclidean norm given by ‖(u, v)‖²_H := ∫_T (d|u|² + ac|v|²) dx, and hence it generates a group of isometries in H.

Proof. It is clear that −A ⊂ A*. Indeed, for any (ϕ, ψ), (u, v) ∈ D(A), we obtain by integrating by parts the equality ⟨A(u, v), (ϕ, ψ)⟩_H + ⟨(u, v), A(ϕ, ψ)⟩_H = 0. It remains to show that D(A*) = D(−A), i.e., that each (ϕ, ψ) ∈ D(A*) belongs to D(A). Pick any (ϕ, ψ) ∈ D(A*). Then there exists a constant C > 0 satisfying for all (u, v) ∈ D(A) the inequality |⟨A(u, v), (ϕ, ψ)⟩_H| ≤ C‖(u, v)‖_H. Choosing v = 0 and u ∈ C^∞(T) we hence infer that the distributional derivative ϕ_{xxx} + aψ_{xxx} belongs to L²(T). Similarly, choosing u = 0 and v ∈ C^∞(T) we obtain that the distributional derivative dϕ_{xxx} + rψ_x + ψ_{xxx} belongs to L²(T). Combining the two relations we obtain that (1 − ad)ψ_{xxx} + rψ_x ∈ L²(T), and in case ad ≠ 1 this implies that ψ ∈ H³(T). Combining this property with the relation ϕ_{xxx} + aψ_{xxx} ∈ L²(T) we conclude that ϕ_{xxx} ∈ L²(T), and therefore ϕ ∈ H³(T).

In order to establish the well-posedness of (3.1) in the general case we determine the eigenvalues and eigenfunctions of the operator A.

Lemma 3.2. There exists an orthogonal basis of H formed by eigenfunctions of A of the form e^±_k(x) := e^{−ikx} Z^±_k, k ∈ Z, with Z^±_k ∈ C². Seeking eigenfunctions in this form leads, for each k, to a homogeneous 2 × 2 linear system; there exist non-trivial solutions if and only if

2cω^±_k = (c + 1)k³ − rk ± k √(4acd k⁴ + [(c − 1)k² + r]²).
If k ≠ 0, then ω^−_k ≠ ω^+_k, and two corresponding non-zero eigenvectors (one for each eigenvalue) are given by the formula

Z^±_k := (2ac, 2c(ω^±_k − k³)/k³).   (3.4)

If k = 0, then both eigenvalues are equal to zero, and two linearly independent eigenvectors are given for example by the formula Z^+_0 := (1, 0) and Z^−_0 := (0, 1). Furthermore, we have Z^±_k → Z^± as |k| → ∞, where Z^+ and Z^− are orthogonal non-zero vectors with |Z^±| ≥ 2ac. According to Remark 2.2, the notation Z^±_k ≍ 1 means that there exist two positive constants θ and η such that θ ≤ |Z^±_k| ≤ η for all k.

Proof. The orthogonality follows from the orthogonality of the functions e^{−ikx} in L²(T) and from the orthogonality relations ⟨Z^+_k, Z^−_k⟩ = 0, k ∈ Z, for the inner product associated with the norm of H. The latter equalities may be checked directly: we have

4a²c²d + 4ac³ (ω^+_k − k³)(ω^−_k − k³)/k⁶ = 0,

because (ω^+_k − k³)(ω^−_k − k³) = −(ad/c)k⁶ by the Vieta formulas for the characteristic equation. The limit relations readily follow from (3.4) because k^{−2} → 0. Since Z^± ≠ 0, they imply the property Z^±_k ≍ 1.

Next we establish the well-posedness of (3.1). Let us denote by C_b(R, H) the Banach space of bounded continuous functions R → H for the uniform norm.

Theorem 3.3.
(i) Given any (u_0, v_0) ∈ H, the system (3.1) has a unique solution (u, v) ∈ C_b(R, H), and the linear map (u_0, v_0) ↦ (u, v) is continuous.
(ii) The energy of the solution, defined by the formula E(t) := ½‖(u(t), v(t))‖²_H, does not depend on t ∈ R.
(iii) The solution is given by the explicit formula

(u(t), v(t)) = Σ_{k∈Z} (c^+_k e^{iω^+_k t} Z^+_k + c^−_k e^{iω^−_k t} Z^−_k) e^{−ikx}   (3.6)

with suitable square summable complex coefficients c^±_k satisfying the equality

(u_0, v_0) = Σ_{k∈Z} (c^+_k Z^+_k + c^−_k Z^−_k) e^{−ikx},   (3.7)

and hence the relation

‖(u_0, v_0)‖²_H ≍ Σ_{k∈Z} (|c^+_k|² + |c^−_k|²).   (3.8)

Proof. If ad ≠ 1, then the theorem follows from Proposition 3.1 and Lemma 3.2. The following alternative proof works even if ad = 1. The functions (2π)^{−1} e^{−ikx}, k ∈ Z, form an orthonormal basis in L²(T). Furthermore, by Lemma 3.2 the non-zero vectors Z^±_k are orthogonal in C², and Z^±_k ≍ 1. Hence the functions e^±_k(x) := e^{−ikx} Z^±_k, k ∈ Z, form an orthogonal basis in H, and e^±_k ≍ 1. By a standard method, the theorem will be proved if we show that for any given square summable sequences (c^±_k) the series in (3.6) converges in C_b(R, H). Since C_b(R, H) is a Banach space, it suffices to check the uniform Cauchy criterion. Setting

S_n(t) := Σ_{|k|≤n} (c^+_k e^{iω^+_k t} Z^+_k + c^−_k e^{iω^−_k t} Z^−_k) e^{−ikx},

the following equality holds for all n > m > 0 and t ∈ R:

‖S_n(t) − S_m(t)‖²_H = Σ_{m<|k|≤n} ‖(c^+_k e^{iω^+_k t} Z^+_k + c^−_k e^{iω^−_k t} Z^−_k) e^{−ikx}‖²_H.

Using the above-mentioned orthogonality properties and the relation Z^±_k ≍ 1, we hence infer that

‖S_n(t) − S_m(t)‖²_H ≪ Σ_{m<|k|≤n} (|c^+_k|² + |c^−_k|²).

The Cauchy property follows by observing that the last expression is independent of t, and that it converges to zero as n > m → ∞ by the convergence of the numerical series Σ_k (|c^+_k|² + |c^−_k|²). The above proof also yields (by taking m = −1) the equalities (3.6)-(3.8).

Now we turn to the question of observability. We need some additional information on the eigenvalues.

Lemma 3.4. For every k ∈ Z we have ω^+_{−k} = −ω^+_k and ω^−_{−k} = −ω^−_k.

Lemma 3.5. As |k| → ∞ we have

ω^+_k = Ak³ + O(k) and ω^−_k = Bk³ + O(k), where 2cA := (c+1) + √((c−1)² + 4acd), 2cB := (c+1) − √((c−1)² + 4acd).   (3.9)

We always have A > 0, while B has the same sign as 1 − ad. If ad = 1, then B = 0 and ω^−_k = −rk/(c+1) + O(1/k).

Proof. Since ω^+_{−k} = −ω^+_k, it suffices to consider the case k → ∞. Since √(4acdk⁴ + [(c−1)k² + r]²) = k²√((c−1)² + 4acd) + O(1), we obtain (3.9); this implies the first two cases of the lemma because B and 1 − ad have the same sign. If ad = 1, then (3.9) may be refined to the relation ω^−_k = −rk/(c+1) + O(1/k).

Corollary 3.6. We have D^+({ω^±_k : k ∈ Z}) = 0 if ad ≠ 1, and D^+({ω^±_k : k ∈ Z}) = (c+1)/r if ad = 1.

Proof. By using the definition of the upper density, this follows from Lemmas 3.4 and 3.5.

The rest of this section is devoted to the proof of the following theorem.

Theorem 3.7. Assume that

ω^+_k ≠ ω^+_n and ω^−_k ≠ ω^−_n whenever k ≠ n.   (3.10)

Fix x_0 ∈ T arbitrarily, and consider the solutions of (3.1).
(i) The direct inequality ∫_I (|u(t, x_0)|² + |v(t, x_0)|²) dt ≪ ‖(u_0, v_0)‖²_H holds for all non-degenerate bounded intervals I.
(ii) If ad ≠ 1, then the inverse inequality

‖(u_0, v_0)‖²_H ≪ ∫_I (|u(t, x_0)|² + |v(t, x_0)|²) dt   (3.11)

also holds for all non-degenerate bounded intervals I.
(iii) If ad = 1, then (3.11) holds for all bounded intervals I of length |I| > 2π(c + 1)/r, and it fails if |I| < 2π(c + 1)/r.

(Here we use the notation ≪ introduced in Remark 2.2: the two sides of the direct and inverse inequalities may be considered as real-valued functions of the initial data (u_0, v_0) of the problem (3.1).)

Remark 3.8. The assumption (3.10) will not be needed in the proof of the direct inequality. We will show in Lemma 3.9 below that (3.10) is satisfied for almost all (a, c, d, r) ∈ (0, ∞)^4.

We proceed in several steps.

Proof of the direct inequality in Theorem 3.7.
Fix an arbitrary bounded interval I, and consider an arbitrary solution written in the form (3.6). Using the Young inequality, applying Theorem 2.1 (ii) to each of the two exponential families, and finally using the relation Z^±_k ≍ 1 from Lemma 3.2, we obtain

∫_I (|u(t, x_0)|² + |v(t, x_0)|²) dt ≪ Σ_{k∈Z} (|c^+_k|² + |c^−_k|²).

Using (3.8) we conclude that the direct inequality holds.

Proof of a weakened version of the inverse inequality in Theorem 3.7. We fix a positive number K whose value will be specified later, and we consider only solutions of the form (3.6) satisfying c^±_k = 0 whenever |k| ≤ K, i.e., functions of the form

(u(t), v(t)) = Σ_{|k|>K} (c^+_k e^{iω^+_k t} Z^+_k + c^−_k e^{iω^−_k t} Z^−_k) e^{−ikx}   (3.13)

with square summable complex coefficients c^±_k. Using the Young inequality and applying Theorem 2.5 (ii), we may replace the vectors Z^±_k by their limits Z^± at the price of an error term controlled by the deviation δ_K, which tends to zero as K → ∞. Since Z^+ and Z^− are orthogonal, and |Z^±| ≥ 2ac by Lemma 3.2, the resulting main term is bounded from below by a fixed multiple of the corresponding exponential sums.

Assume that ad ≠ 1. If K is sufficiently large, say K ≥ K_0, then by Corollary 3.6 we may apply Theorem 2.1 (iii) to conclude, with some constants α(I, K), β(I, K) > 0, an estimate of the form

∫_I (|u(t, x_0)|² + |v(t, x_0)|²) dt ≥ (β(I, K) − α(I, K)δ²_K) Σ_{|k|>K} (|c^+_k|² + |c^−_k|²).

Let us observe that if K ≥ K_0, then β(I, K) ≥ β(I, K_0) and α(I, K) ≤ α(I, K_0), because we have fewer complex families to consider for K than for K_0. Therefore we have

∫_I (|u(t, x_0)|² + |v(t, x_0)|²) dt ≥ (β(I, K_0) − α(I, K_0)δ²_K) Σ_{|k|>K} (|c^+_k|² + |c^−_k|²)

for every K ≥ K_0 and for all complex families (c^±_k)_{|k|>K}. Since δ_K → 0 as K → ∞, we may choose a sufficiently large K ≥ K_0 such that β(I, K_0) − α(I, K_0)δ²_K > 0. Then using (3.8) we conclude that under the assumption ad ≠ 1 the inverse inequality holds for all functions of the form (3.13). If ad = 1, then by Corollary 3.6 we may repeat the last reasoning for every bounded interval of length |I| > 2π(c + 1)/r.

Proof of the inverse inequality in Theorem 3.7. Thanks to our assumption (3.10) and to Corollary 3.6, we may apply Theorem 2.6 to infer the inverse inequality from the weakened inverse inequality established in the preceding step.

Proof of the lack of the inverse inequality if ad = 1 and |I| < 2π(c+1)/r. For any fixed positive integer K, by Corollary 3.6 and Remark 2.4 (i) we may find square summable complex numbers c^−_k for which the integral over I of the corresponding exponential sum is arbitrarily small compared with Σ |c^−_k|². Moreover, since the removal of finitely many frequencies does not alter the upper density, we may also assume that these coefficients vanish for |k| ≤ K. Then, applying the Young inequality and then Theorem 2.5 (ii) with a single family, we obtain an estimate in which the factor in front of the sum Σ |c^−_k|² tends to zero as K → ∞. This estimate shows that the inverse inequality (3.11) does not hold.

Theorem 1.2 (iv) now follows from Theorem 3.7 and from the following lemma, showing that the hypothesis (3.10) holds for almost all choices of the parameters a, c, d, r.

Lemma 3.9. For almost every quadruple (a, c, d, r) ∈ (0, ∞)^4, we have ω^+_k ≠ ω^+_n and ω^−_k ≠ ω^−_n whenever k ≠ n.

Proof. For any fixed (a, c, d) ∈ (0, ∞)^3 and for any (k, n) ∈ Z² with k ≠ n, the polynomial equation (3.16) vanishes for at most three values of r. Therefore, for any fixed (a, c, d) ∈ (0, ∞)^3, we have ω^+_k ≠ ω^+_n and ω^−_k ≠ ω^−_n for all k ≠ n, for all but countably many values of r. The lemma follows by applying Fubini's theorem.

Pointwise controllability

In this section we study the pointwise controllability of the system

u_t + u_{xxx} + a v_{xxx} = f(t) δ_{x_0}, c v_t + r v_x + d u_{xxx} + v_{xxx} = g(t) δ_{x_0},   (4.1)

where a, c, d, r are given positive constants, δ_{x_0} denotes the Dirac delta function centered at a given point x_0 ∈ T, and f, g are the control functions. We will prove the following theorem.

Theorem 4.1. Fix x_0 ∈ T arbitrarily, and choose a, c, d, r such that (3.10) is satisfied. Set T_0 := 2π(c + 1)/r if ad = 1, and T_0 := 0 otherwise. Given T > T_0 arbitrarily, for every (u_0, v_0), (u_T, v_T) ∈ H there exist control functions f, g ∈ L²_loc(R) such that the solution of (4.1) satisfies the final conditions u(T) = u_T and v(T) = v_T.

Remark 4.2. In case ad = 1 the system is not controllable for any T < T_0.
This follows from Theorem 3.7 and from a general theorem of duality between observability and controllability; see, e.g., [12].

By a general argument of control theory, it suffices to consider the special case of null controllability, i.e., the case where u_T = v_T = 0. We prove this by applying the Hilbert Uniqueness Method (HUM) of J.-L. Lions [25] as follows. Fix (u_0, v_0) ∈ H arbitrarily. Choose (ϕ_0, ψ_0) ∈ D(A) arbitrarily, and solve the homogeneous adjoint system (3.1) with initial data (ϕ_0, ψ_0). Then solve the non-homogeneous problem with zero final states (instead of the non-zero initial states in (4.1)) by applying the controls

f := −ϕ(·, x_0) and g := −ψ(·, x_0),   (4.3)

and denote by Λ(ϕ_0, ψ_0) the initial state of the solution so obtained. It then follows from the direct inequality that Λ is bounded, and from Theorem 3.7 that the associated quadratic form is coercive. Applying the Lax-Milgram Theorem we conclude that Λ is an isomorphism of H onto itself; in particular, it is onto.

Pointwise stabilizability

In this section we work in an abstract framework: a well-posed linear system

U′ = AU, U(0) = U_0,   (5.1)

in a Hilbert space H, observed through an operator B with values in another Hilbert space G, under the following hypotheses:

(i) A generates a strongly continuous group of operators in H;
(ii) B is a bounded linear operator from D(A) into G;
(iii) there exist a non-degenerate bounded interval I and a positive constant c_I such that the solutions of (5.1) satisfy the inequalities ∫_I ‖BU(t)‖²_G dt ≤ c_I ‖U_0‖²_H.

The operator B is usually called an observability operator: we may think that we can observe only BU and not the whole solution U. Under the assumptions (i), (ii), (iii) we may define the solutions of the dual problem by transposition,

V′ = −A*V + B*W,   (5.2)

where H′, G′ denote the dual spaces of H, G, and A*, B* denote the adjoints of A and B. The operator B* is usually called a controllability operator: we may think that we can act on the system by choosing a control W. Motivated by a formal computation, by a solution of (5.2) we mean a function V : R → H′ satisfying the identity

⟨V(s), e^{sA}U_0⟩ = ⟨V(0), U_0⟩ + ∫_0^s ⟨W(t), B e^{tA}U_0⟩ dt

for all U_0 ∈ H and for all s ∈ R. Then for every V_0 ∈ H′ and W ∈ L²_loc(R; G′), (5.2) has a unique solution. Moreover, the function V : R → H′ is weakly continuous.

The Hilbert Uniqueness Theorem states that if the inverse inequality of (iii) also holds:

(iv) there exist a bounded interval I′ and a constant c′ > 0 such that the solutions of (5.1) satisfy the inequality c′ ‖U_0‖²_H ≤ ∫_{I′} ‖BU(t)‖²_G dt,

then the system (5.2) is exactly controllable in the following sense:

Theorem 5.1. Assume (i), (ii), (iii), (iv), and let T > |I′| (the length of I′). Then for all V_0, V_1 ∈ H′ there exists a function W ∈ L²_loc(R; G′) such that the solution of (5.2) with V(0) = V_0 satisfies V(T) = V_1.

We may of course assume that W vanishes outside the interval (0, T). Under the assumptions of the theorem we may also construct feedback controls yielding arbitrarily fast decay rates:

Theorem 5.2. Assume (i), (ii), (iii), (iv), and fix ω > 0 arbitrarily. There exist a bounded linear map F : H′ → G′ and a constant M > 0 such that the closed-loop problem V′ = −A*V + B*FV, V(0) = V_0, has a unique weakly continuous solution V : R → H′ for every V_0 ∈ H′, and the solutions satisfy the estimates ‖V(t)‖_{H′} ≤ M e^{−ωt} ‖V_0‖_{H′} for all t ≥ 0.

Let us observe that Theorems 5.1 and 5.2 have the same assumptions. These assumptions have been verified during the proof of Theorem 1.1. Therefore we may also apply Theorem 5.2 to the Gear-Grimshaw system, and Theorem 1.3 follows.

Use of one control

In this section we establish a variant of Theorem 3.7 when we observe only one of the functions u(·, x_0) and v(·, x_0). Such observations do not allow us to determine the initial data in (3.1) completely. Indeed, if (u, v) solves (3.1), then for any constant C the couple (u, v + C) also solves (3.1) with some other initial data, so that the observation of the component u may allow us to determine v only up to an arbitrary additive constant. An analogous situation occurs when observing v. Theorem 6.2 and Corollary 6.3 below will show that, up to this indeterminacy, the determination of the solutions is possible by observing only one component.
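The gap structure that drives this one-observation analysis can be inspected numerically from the closed-form eigenvalue expression of Section 3. The Python sketch below (numpy assumed; the quadruple (a, c, d, r) is an arbitrary illustrative choice, not a value from the paper) shows that each family is uniformly separated on its own, while the two families can come close to each other, which is precisely the weakened-gap situation handled with M = 2:

```python
import numpy as np

# Closed-form eigenvalues 2c*omega_k^± = (c+1)k^3 - rk ± k*sqrt(4acd k^4 + ((c-1)k^2 + r)^2)
a, c, d, r = 1.0, 2.0, 0.5, 1.3     # illustrative parameters
k = np.arange(1, 801, dtype=float)

disc = np.sqrt(4 * a * c * d * k**4 + ((c - 1) * k**2 + r) ** 2)
omega_plus = ((c + 1) * k**3 - r * k + k * disc) / (2 * c)
omega_minus = ((c + 1) * k**3 - r * k - k * disc) / (2 * c)

# within-family gaps grow with k (cubic growth), so each family alone
# is uniformly separated:
print("min gap, plus family :", np.diff(omega_plus).min())
print("min gap, minus family:", np.diff(omega_minus).min())

# across families, omega^+_k may approach omega^-_n for k != n: the
# clustering that forces the weakened gap condition with M = 2
dist = np.abs(omega_plus[:, None] - omega_minus[None, :])
print("closest cross-family pair:", dist.min())

# note: for ad = 1 the minus family degenerates to linear growth
# ~ -rk/(c+1), giving upper density (c+1)/r and critical time 2*pi*(c+1)/r
```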
Since the solutions of (3.1) are given by the formulas

u(t, x) = Σ_{k∈Z} (c^+_k z^+_{k,1} e^{iω^+_k t} + c^−_k z^−_{k,1} e^{iω^−_k t}) e^{−ikx},
v(t, x) = Σ_{k∈Z} (c^+_k z^+_{k,2} e^{iω^+_k t} + c^−_k z^−_{k,2} e^{iω^−_k t}) e^{−ikx},

where, following the notation (3.12), z^±_{k,1} and z^±_{k,2} denote the two components of the vectors Z^±_k, we need an Ingham type theorem for the family {ω^±_k : k ∈ Z}. It does not have a uniform gap, because ω^+_0 = ω^−_0 = 0 and because ω^+_k may be close to ω^−_n for many couples (k, n), but it satisfies the weakened gap condition of Theorem 2.3 with M = 2. Given a positive number ε, we consider in the set Ω := {ω^±_k : k ∈ Z} the following equivalence relation: x ∼ y if x = y, or if there exists a finite sequence x_1, ..., x_n in Ω such that x = x_1, x_n = y, and |x_{j+1} − x_j| < ε for j = 1, ..., n − 1.

Lemma 6.1.
(i) We have z^±_{k,j} ≍ 1, j = 1, 2.
(ii) For almost every quadruple (a, c, d, r) ∈ (0, ∞)^4, the conditions (6.2) and (6.3) hold.
(iii) Assume (6.2) and (6.3). If ε is sufficiently small, then each equivalence class of Ω has one or two elements. Moreover, if it has two elements, then one of them belongs to {ω^+_k : k ∈ Z}, and the other one belongs to {ω^−_k : k ∈ Z}.

Proof. (i) This follows from the explicit expression (3.4) of these vectors.

(ii) In view of Lemma 3.9 we only need to consider the property (6.3); this is done by adapting the proof of Lemma 3.9.

(iii) For any fixed ε > 0, by Lemmas 3.4 and 3.5 there exists a sufficiently large positive integer K such that |ω^+_k − ω^+_n| ≥ 2ε and |ω^−_k − ω^−_n| ≥ 2ε whenever |k|, |n| > K and k ≠ n. Then each equivalence class in the restricted set {ω^±_k : |k| > K} has at most two elements. Indeed, if two elements are equivalent, then they have to belong to the different families (ω^+_k) and (ω^−_k), say |ω^+_k − ω^−_n| < ε. Then we infer from our choice of K that |ω^+_k − ω^−_m| > ε for all m ≠ n and |ω^+_m − ω^−_n| > ε for all m ≠ k, so that no other exponent is equivalent to ω^+_k or ω^−_n. This property remains valid if we change ε to a smaller positive value. Indeed, each one-point equivalence class remains the same, while the others either remain the same or split into two one-point equivalence classes.

Using this and Lemma 6.1, the theorem follows by applying Theorem 2.3 with M = 2.

Corollary 6.3. Fix x_0 ∈ T, let I be a suitable bounded interval, and consider the solutions of (3.1).
(i) If u(t, x_0) = 0 for all t ∈ I, then u = 0 and v is an arbitrary constant function.
(ii) If v(t, x_0) = 0 for all t ∈ I, then v = 0 and u is an arbitrary constant function.

Similarly, if v(t, x_0) = 0 for all t ∈ I, then we obtain that c^+_0 − c^−_0 = 0, and c^±_k = 0 for all k ∈ Z*. This implies that v = 0 and u is an arbitrary constant function.

We end this paper by proving two variants of Theorem 4.1 in which we apply only one control. Let us observe that if f = 0 in (4.1), then ∫_T u(t, x) dx does not depend on t ∈ R, because

d/dt ∫_T u dx = −∫_T (u_{xxx} + a v_{xxx}) dx = 0.
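The HUM machinery used above reduces, after truncation to finitely many modes, to a finite moment problem for exponential sums. The following toy Python computation (the frequencies and target moments are made up for illustration; they are not the eigenvalues or data of this paper) solves such a moment problem by inverting the Gram matrix of the exponentials, which is exactly where an Ingham-type observability inequality guarantees invertibility with controlled norms:

```python
import numpy as np

# Moment problem: find f in L^2(0, T) with  ∫_0^T f(t) e^{-i w_j t} dt = m_j.
T = 2.0
w = np.array([0.0, 1.1, 2.9, 5.2, 9.7])                 # illustrative exponents
m = np.array([1.0, 0.0, -0.5, 0.2, 0.1], dtype=complex)  # illustrative targets

# Minimal-norm control f(t) = sum_k c_k e^{i w_k t}; the coefficients solve
# G c = m, with the Gram matrix G[j, k] = ∫_0^T e^{i (w_k - w_j) t} dt.
diff = w[None, :] - w[:, None]
G = np.where(diff == 0.0, T,
             (np.exp(1j * diff * T) - 1.0) / np.where(diff == 0.0, 1.0, 1j * diff))
coef = np.linalg.solve(G, m)    # invertible precisely when the exponential
                                # family is observable on (0, T)

# numerical check of the prescribed moments
t = np.linspace(0.0, T, 200_001)
f = np.exp(1j * np.outer(t, w)) @ coef
dt = t[1] - t[0]
moments = (f[:, None] * np.exp(-1j * np.outer(t, w))).sum(axis=0) * dt
print("max moment error:", np.abs(moments - m).max())
```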
8,313.2
2018-08-10T00:00:00.000
[ "Mathematics" ]
Integrated Object-Based Image Analysis for semi-automated geological lineament detection in southwest England

Regional lineament detection for mapping of geological structure can provide crucial information for mineral exploration. Manual methods of lineament detection are time consuming, subjective and unreliable. The use of semi-automated methods reduces this subjectivity by applying a standardised method of searching. Object-Based Image Analysis (OBIA) has become a mainstream technique for land-cover classification; however, the use of OBIA methods for lineament detection is still relatively under-utilised. The Southwest England region is covered by high-resolution airborne geophysics and LiDAR data that provide an excellent opportunity to demonstrate the power of OBIA methods for lineament detection. Herein, two complementary but stand-alone OBIA methods for lineament detection are presented, both of which enable semi-automatic regional lineament mapping. Furthermore, these methods have been developed to integrate multiple datasets to create a composite lineament network. The top-down method uses threshold segmentation and sub-levels to create objects, whereas the bottom-up method segments the whole image before merging objects and refining these through a border assessment. Overall lineament lengths are longest when using the top-down method, which also provides detailed metadata on the source dataset of each lineament. The bottom-up method is more objective and computationally efficient and only requires user knowledge to classify lineaments into major and minor groups. Both OBIA methods create a similar network of lineaments, indicating that semi-automatic techniques are robust and consistent. The integration of multiple types of spatial data to create a comprehensive, composite lineament network is an important development and demonstrates the suitability of OBIA methods for enhancing lineament detection.

Introduction

Mapping of geological structures can be both time-consuming and challenging in the field, particularly in areas of poor outcrop exposure. Structures such as strike-extensive faults can be particularly difficult for a geologist to map in the field due to partial exposure and subtle topographic variations. Lineament detection can aid the mapping of geological structure. A lineament is a mappable rectilinear or curvilinear linear feature of a surface, distinct from adjacent patterns, which may represent a subsurface phenomenon (O'Leary et al., 1976). Remotely sensed data, including satellite imagery and airborne geophysical data, are commonly used for regional lineament mapping. Classical lineament extraction techniques include manually digitising linear features. At the regional scale, optical imagery is commonly used; however, this is time consuming and subjective and therefore lacks reproducibility (Masoud and Koike, 2006; Scheiber et al., 2015). Semi-automated approaches, such as those of Grebby et al. (2012) and Koike et al. (1995), address these limitations. Algorithms used with potential field data, such as those by Lee et al. (2012) and Šilhavý et al. (2016), employ curvature and hillshade enhancements, respectively. Most recently, the LINDA software has been developed as a comprehensive tool for lineament detection for spectral and potential field data, and incorporates the STA algorithm (Masoud and Koike, 2017). Object-Based Image Analysis (OBIA) is a powerful tool for analyzing spatially correlated groups of pixels. It utilises image segmentation algorithms to group contiguous pixels into image objects.
The advantages of image objects are that the grouped pixels can be assessed together to measure texture and geometry and have corresponding summary statistics, whilst being linked geospatially through a topology (Lang, 2008). The generation of representative image objects is of paramount importance to OBIA and therefore some user knowledge is required for a successful analysis (Blaschke et al., 2004). The idea of user knowledge refining image objects is considered to be the change from an object-based workflow to an object-oriented workflow (Baatz et al., 2008). For simplicity, we retain the term object-based in this paper. Despite the adoption of OBIA techniques within the geosciences in the early 2000s, geological studies using OBIA techniques are largely restricted to classification studies (e.g. Grebby et al., 2016) and not to lineament detection. Previous studies using OBIA for lineament detection include Marpu et al. (2008), Mavrantza and Argialas (2008) and Rutzinger et al. (2007), who applied the method to Synthetic Aperture Radar (SAR), Landsat and LiDAR DTM data, respectively. The first application of OBIA techniques to airborne geophysics was reported by Middleton et al. (2015), who utilised airborne magnetic data over the Enontekiö area of northern Finland.

The objective of this study is to develop semi-automated OBIA methods for geological lineament detection that integrate airborne geophysical and remote sensing datasets to create a composite lineament network. The top-down OBIA method of Middleton et al. (2015) is developed further to integrate multiple datasets and is compared to a new approach presented here which uses a bottom-up OBIA approach. The Southwest (SW) England region is used as a case study due to its complex structural geology and the availability of high-resolution airborne geophysical datasets. Final lineament maps are contrasted with a regional fault map compiled from the 1:50 000 British Geological Survey mapping districts in the study area.

The Southwest England region

The Tellus South West project (www.tellusgb.ac.uk) flew high-resolution regional airborne surveys over SW England, the coverage of which is outlined in Fig. 1. The geology of SW England provides an excellent opportunity to test OBIA methods to detect lineaments in a region where the structural geology is well documented at multiple scales. The temperate climate of SW England limits well-exposed areas to coastal wave-cut platforms. Therefore, detailed structural studies are focussed along coastal sections, and accurate field mapping inland is problematic due to soil cover. The high-resolution regional airborne geophysical surveys provide an excellent dataset for the detection of geological lineaments. SW England is one of the most prospective areas within the UK for both metalliferous and industrial minerals and deep geothermal energy. The occurrence and exploitation of these resources are strongly influenced by regional fault networks, and detailed lineament maps will help inform future exploration activity.

Previous lineament studies in SW England

Early attempts at regional lineament detection in SW England were conducted on Landsat MSS data (Moore and Camm, 1982). The study aimed to identify major structural lineaments relating to tin-tungsten mineralisation. Due to the coarse resolution of the imagery, the observations were only appropriate for 1:500 000 scale mapping, although significant NW-SE structures were clearly identifiable.
A further study was conducted by James and Moore (1985) to investigate the temporal variations between different scenes and the incorporation of eastward-looking SAR data from Seasat imagery to enhance structural lineaments, although spatial resolution remained low. Regional lineament detection using Landsat TM satellite imagery was first attempted by Smithurst (1990) and demonstrated good agreement with mapped major structures. A more detailed study by Rogers (1997) used two Landsat TM scenes which were reprocessed to 150 m pixels and applied four directional filters (N, E, NE, SE). Lineaments were manually digitised based on four criteria: a width/length ratio >0.5; an orientation within ±30° of the directional filter; a length of more than four pixels; and a qualitative strike-length significance (Rogers, 1997). The study was effective but was hindered by cloud cover and rejected lineaments <4 pixels long, therefore neglecting significant structures <600 m in length. A more recent study mapping strike-slip faults along the north Cornwall coast at Westward Ho! demonstrated that, in well-exposed areas, rapid structural mapping from aerial photography is a valuable approach (Nixon et al., 2011).

Geological setting

The bedrock geology of SW England comprises Devonian-Carboniferous sedimentary rocks, metamorphosed to sub-greenschist facies during the Variscan Orogeny and intruded by a Permian granite batholith (Fig. 1). The sedimentary successions were deposited in six E-W-trending extensional basins which were subsequently inverted during the Variscan Orogeny (Leveridge and Hartley, 2006). The Variscan Orogeny resulted in two phases of NNW-directed thrusting which formed an ENE-WSW to E-W-trending fault system (Alexander and Shail, 1995, 1996; Leveridge et al., 2002; Leveridge and Hartley, 2006). Steeply-dipping NW-SE and subordinate NNE-SSW strike-slip faults (Dearman, 1963) developed during the Variscan collision (Leveridge et al., 2002); some NW-SE faults may have been inherited from the pre-Variscan extensional regime (Roberts et al., 1993; Shail and Leveridge, 2009). The batholith was emplaced in an Early Permian post-orogenic extensional setting and the exposed plutons have a spatial correlation with major NW-SE faults (Dearman, 1963; Alexander and Shail, 1996). The extensional reactivation of ENE-WSW to E-W striking Variscan thrusts was accompanied by the creation of new steeply-dipping conjugate extensional faults, which often host granite-related W-Sn-Cu mineralisation, and was followed by minor Permian intraplate shortening episodes (Shail and Alexander, 1997). Subsequent latest Permian to Triassic ENE-WSW extension reactivated older NW-SE faults and created new NW-SE to N-S faults, "cross-courses", that cut earlier granite-related mineralisation and locally host epithermal basinal brine mineralisation (Scrivener et al., 1994; Shail and Alexander, 1997). The regional fracture network had essentially formed by the late Triassic. However, several major NW-SE faults in the east of the region underwent Oligocene sinistral reactivation to form pull-apart basins (Holloway and Chadwick, 1986; Gayer and Cornford, 1992) and their infill is locally deformed in a presumed Miocene dextral strike-slip regime (Holloway and Chadwick, 1986). Whilst the structural evolution and development of regional fault networks in SW England is complex, the dominant structural trends are ENE-WSW to E-W and NW-SE, with subordinate NE-SW and N-S elements.
Materials and methods

The airborne geophysical data used in this study were selected to demonstrate the strength of the method for integrating multiple datasets to produce a composite lineament network. The methods herein describe data pre-processing steps and analysis in the eCognition software (v9.3, Trimble, Germany). Post-processing was also necessary for quality assessment of the data and to create and manipulate metadata. Full details of the software used in these steps can be found in the Supplementary Information (S1). The two OBIA methods presented here can be downloaded individually as an eCognition Rule Set from the University of Exeter at http://empslocal.ex.ac.uk/obld2.

Airborne geophysical datasets

Data from the Tellus South West project provide a new opportunity to accurately determine lineaments at high resolution across SW England. The airborne data were acquired in two surveys; the LiDAR data were collected in summer 2013 and the magnetic and radiometric data in late 2013 to early 2014 (Beamish and White, 2014). The surveys have near-complete coverage, except for an inset within the magnetic and radiometric data which was not collected over the Tamar Estuary at Her Majesty's Naval Base (HMNB) Devonport (Fig. 1). The magnetic and radiometric data were collected simultaneously by CGG Airborne Survey (Pty). Final quality assurance and quality control of the data were undertaken by the British Geological Survey (Beamish and White, 2014). The data were gridded using a minimum-curvature algorithm and supplied at a pixel resolution of 40 m. The survey parameters are summarised in Table 1. In this study, the Total Magnetic Intensity (TMI) from the magnetic data (corrected using the International Geomagnetic Reference Field) and the Total Count channel from the radiometric data were used. The LiDAR data were collected as part of a separate survey at 1 point/m² with 25 cm vertical accuracy. The data were acquired by the British Antarctic Survey, processed by Geomatics (Environment Agency), and overseen by the Centre for Ecology and Hydrology (CEH). Some areas of low-quality data acquisition exist in urban areas and dense vegetation (Gerard, 2014). The data were supplied in raster format as a DTM with a pixel resolution of 1 m, and are used here under an Open Government Licence (Ferraccioli et al., 2014).

Table 1. Survey parameters for the Tellus South West airborne geophysical survey, compiled from Beamish and White (2014) and Gerard (2014).

The initial airborne geophysical data from the Tellus South West project are illustrated in Fig. 2A-C. The three datasets each add different information to the analysis. The LiDAR data capture lineaments that are manifest in the geomorphology of the region. Lineaments derived from magnetic data may indicate the presence of geological structure extending at depth in the crust. The radiometric data, specifically the Total Count channel used here, sense lineaments that may have acted as fluid conduits resulting in mineral alteration; lower values therefore represent the leaching of radiogenic elements.

Lineament Detection

The two different OBIA methods described here represent a novel approach to geological lineament detection whereby airborne geophysical and remote sensing datasets can be integrated in order to produce a comprehensive and composite lineament network. The methods primarily use the eCognition software (v9.3, Trimble, Germany) and the Cognitive Network Language (CNL). The CNL provides a variety of tools for OBIA workflows.
The workflow for the two stand-alone OBIA methodologies using top-down and bottom-up segmentation techniques is summarised in Fig. 3.

Pre-processing

Initial pre-processing was conducted in the R project software (https://cran.r-project.org) using the raster package (v2.6-7, Hijmans et al., 2017). All datasets were resampled to the same extent and resolution (40 m pixels) using a bilinear operator before being clipped to the coastline. The dashed rectangular area missing in the magnetic and radiometric data over HMNB Devonport was not removed from the LiDAR data (Fig. 1). The magnetic and LiDAR data underwent further pre-processing steps in the Oasis Montaj programme (v8.5, Geosoft, Canada) to remove noise, including cultural artefacts relating to buildings and infrastructure as well as minor corrugations across flight lines. A smoothing algorithm was applied in Oasis Montaj to mitigate the artefacts from densely vegetated and urbanised areas in the LiDAR DTM. As a result, the topographic expression of anthropogenic features such as road cuttings was drastically reduced and smoothed. The smoothed product was carefully compared to the original data to ensure minimal loss of geological information.

The final pre-processing step applies the TDR function to enhance the continuity of the data. The TDR transform uses the arctangent of the ratio of the vertical derivative to the total horizontal derivative,

TDR = arctan(VDR / THDR),   (1)

and therefore normalizes the data to the range −π/2 to π/2 (Miller and Singh, 1994). The normalization allows comparison of the data, and the ratio of the derivatives acts as a gain control to ensure minor gradient changes are not lost. The TDR transform is applied to all three datasets, as illustrated in Fig. 2D-F. For the DTM and Total Count data, the vertical derivative is calculated using convolution rather than a Fourier transform. The TDR transform is commonly used to identify the edges of isolated magnetic bodies at the zero contour of the transformed data (Beamish and White, 2011; White and Beamish, 2011). The TDR transform is applied here to search for minima; these are considered an effective proxy for lineaments related to fault zones. It is assumed the minima are generated through preferential erosion (LiDAR DTM), breakdown of magnetic minerals (magnetic data) or leaching of radioelements (radiometric data). It is acknowledged that in some geological environments lineaments may be represented by maxima (indurated rocks or magnetic and radiometric highs). Some knowledge of the geology is key to deciding whether to search for minima or maxima, or both. Full details of the pre-processing steps can be found in the Supplementary Information (S2).

Line extraction

The line extraction algorithm in the CNL was used to create a newly derived lineness raster from each input dataset. The algorithm uses a user-defined, rectangular kernel filter to assess the similarity of pixels. The kernel was set to search at regular interval angles of 5°. The parameters for line extraction were optimised to be the same across all datasets to produce three lineness rasters with similar attributes that could be subsequently integrated. The resultant raster has a range of 0-255; higher values indicate a higher probability that a lineament exists. A detailed description of the algorithm within eCognition is provided in the Supplementary Information (S1).
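A minimal stand-in for these two derived products can be prototyped outside eCognition. The Python sketch below (numpy and scipy assumed) is an illustration only; it is not the CNL line extraction algorithm or the Oasis Montaj workflow. It implements Equation (1) with a Fourier-domain vertical derivative (the textbook operator for potential fields, rather than the convolution estimate the study uses for the DTM and Total Count grids), plus a crude lineness raster that correlates the negated TDR grid with thin line kernels rotated in 5° steps so that TDR minima score highly:

```python
import numpy as np
from scipy.ndimage import correlate, rotate

def tilt_derivative(grid, cell=40.0):
    """TDR = arctan(VDR / THDR) after Miller and Singh (1994).
    THDR from finite differences; VDR via the Fourier-domain |k| multiplier."""
    gy, gx = np.gradient(grid, cell)
    thdr = np.hypot(gx, gy)
    ky = 2 * np.pi * np.fft.fftfreq(grid.shape[0], d=cell)
    kx = 2 * np.pi * np.fft.fftfreq(grid.shape[1], d=cell)
    kk = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
    vdr = np.real(np.fft.ifft2(np.fft.fft2(grid) * kk))
    return np.arctan2(vdr, thdr)          # values in [-pi/2, pi/2]

def lineness(tdr, n_angles=36, length=7):
    """Best response of -TDR to thin line kernels at 5-degree steps,
    rescaled to 0-255 (higher = more lineament-like), echoing the text."""
    base = np.zeros((length, length))
    base[length // 2, :] = 1.0 / length   # horizontal line kernel
    best = np.full(tdr.shape, -np.inf)
    for ang in np.arange(0.0, 180.0, 180.0 / n_angles):
        kern = rotate(base, ang, reshape=False, order=1)
        best = np.maximum(best, correlate(-tdr, kern, mode="nearest"))
    lo, hi = best.min(), best.max()
    return np.uint8(255 * (best - lo) / (hi - lo))
```

Summing the three per-dataset lineness rasters and rescaling to 0-255 then gives the kind of composite raster used by the integration steps described below.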
Top-down segmentation

A top-down methodology involves the segmentation of an image into smaller and smaller objects and the downward propagation of object levels in the CNL (Diamant, 2004). Determining object relationships across different levels, such as super-objects, is advantageous for manipulating specific parts of the original object (such as the end) and is used in this instance to preferentially grow objects at their ends. Lineament detection through top-down segmentation was achieved using the multi-threshold segmentation and chessboard segmentation tools in the CNL. The workflow is based on the Object-Based Lineament Detection (OBLD) algorithm of Middleton et al. (2015), which is further developed here to integrate all three input datasets to create a composite lineament network. The OBLD algorithm is discussed in full within Middleton et al. (2015). The general workflow for lineament detection using top-down OBIA methods with multiple datasets is summarised in Fig. 3, and the segmentation steps and use of object-levels for extending objects are illustrated in Fig. 4A1-4.

Fig. 3. Workflow for creating a lineament network using both OBIA methodologies. The split in the flowchart indicates the slightly different path for each method rather than parallelised analysis.

The integration steps developed here for the top-down OBIA method involve the merging of separate image object sets from each dataset. A complementary merged lineness raster is created by summing the individual lineness rasters and normalising to 0-255. Objects are extended in a similar fashion using a new sub-level (L0) under the existing data, as is the case in Middleton et al. (2015), and further segmented using the chessboard segmentation tool. However, in this step, extensions are determined from the merged lineness raster and therefore could be based on values derived from a different dataset. An additional requirement is that object-ends are only extended if in proximity to another object-end. A final cleaning step removes any spurious objects that are below a given area or asymmetry threshold.

Bottom-up segmentation

The bottom-up method for lineament detection involves the segmentation of the whole image, using the multi-resolution segmentation tool, into many differently sized objects which are subsequently merged (Dragut et al., 2010; Eisank et al., 2014). Merging can be based on spectral, statistical, textural or geometric properties and is highly versatile. In the bottom-up method the integration step occurs prior to segmentation. As before, a composite lineness raster is created by summing the lineness rasters and normalising to 0-255. Fig. 4B1-5 illustrates how the composite lineness raster is segmented using the multi-resolution segmentation algorithm and how the resulting objects are subsequently merged using the spectral difference algorithm. The resulting objects are then classified, based on the mean pixel value in the lineness raster, into major and minor lineaments according to a threshold defined by the user. The threshold is defined heuristically based on user knowledge of the region, and is thus case-specific. Major and minor lineaments are refined using a border assessment to merge segments of a lineament which may contain both major and minor components into a continuous object set of either major or minor objects. Initially, major lineament objects are expanded by one pixel to minimise single-pixel borders with minor lineament objects.
Minor lineament objects which have the majority (60%) of their relative border in contact with major objects are converted into major lineament objects. Major lineament objects are shrunk by growing unclassified objects into the major object where the composite lineness raster is below the heuristically defined threshold for a major lineament. The remaining major lineament objects are then merged. Major objects are converted into minor objects where the majority (60%) of the relative border is in contact with a minor lineament. Cleaning steps remove objects with an area below a given threshold (in this case 50 pixels for major, and 30 pixels for minor lineaments) and also remove pixels at the edges of objects that fall below a threshold in the composite lineness raster; this is tailored to major and minor objects.

Vectorization and metadata

The final step using the CNL was to convert objects to polylines and export these to a shapefile with the corresponding metadata. The polylines were vectorized into skeleton polylines of the objects they represent. Skeleton polylines are most efficient at capturing objects that represent a complex lineament network. The metadata for the top-down segmentation method include binary fields detailing the source dataset for each lineament (extension, LiDAR DTM, magnetic, radiometric). The end-user can manipulate these fields to fully describe the source data for the lineament. For the bottom-up segmentation method, metadata were restricted to the classes of either major or minor lineaments.

Post-processing

Post-processing of the metadata involved the calculation of the orientation and length of polylines and was conducted in the ESRI ArcGIS software. The computed polyline length, orientation and a direction (N, NE, E, or SE) were included in the metadata of the vector file. The polyline data for both top-down and bottom-up methods were interrogated through database queries in the GIS. The data from the top-down method were found to contain 29 artefact polylines associated with the extension step of the method. These artefacts, defined as disconnected polylines with lengths <40 m, were removed from the final lineament dataset. Full details of the post-processing steps can be found in the Supplementary Information (S3).

Results

The polyline data for both top-down and bottom-up methods were assessed for line length and orientation. A qualitative analysis of the data is presented, as well as a comparison with the existing regional map compiled from the 1:50 000 DiGMap data from the British Geological Survey. The polyline outputs for the top-down and bottom-up methods are presented in Fig. 5.

Statistical analysis

Summary statistics for each polyline dataset are presented in Table 2 and are graphically displayed using boxplots (Fig. 6) and half-rose diagrams (Fig. 7) for polyline lengths and orientations, respectively. It is apparent from Table 2 that the mean polyline lengths are significantly skewed by outlying data in both datasets. Therefore, the median is considered a more appropriate measure of average polyline length. Due to the positively skewed distribution, the interquartile range, demarcated by the box in Fig. 6, is used to reflect the dispersion within each dataset. The bottom-up method shows less dispersion compared to the top-down method, indicating a more consistent polyline length. Nevertheless, the top-down method produces a longer median polyline length. The distribution of orientation is presented as half-rose diagrams in Fig. 7.
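These length and orientation attributes are simple to reproduce programmatically. The sketch below (Python with numpy assumed; the endpoint array is synthetic and purely illustrative, not the study's data) derives the polyline length, undirected azimuth, the four-direction class used in the metadata, and Table 2-style robust statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical (n, 2, 2) array of polyline (start, end) coordinates in metres
lines = rng.uniform(0, 10_000, size=(500, 2, 2))

dx = lines[:, 1, 0] - lines[:, 0, 0]
dy = lines[:, 1, 1] - lines[:, 0, 1]
length = np.hypot(dx, dy)
azimuth = np.degrees(np.arctan2(dx, dy)) % 180.0     # undirected, 0-180 deg

# robust summary, as the skewed distributions in Table 2 require
q1, med, q3 = np.percentile(length, [25, 50, 75])
print(f"median length {med:.0f} m, IQR {q3 - q1:.0f} m")

# four-direction class (N, NE, E, SE) used in the polyline metadata;
# 45-degree sectors centred on 0, 45, 90 and 135 degrees
sector = (((azimuth + 22.5) % 180.0) // 45.0).astype(int)
for name, n in zip(["N", "NE", "E", "SE"], np.bincount(sector, minlength=4)):
    print(name, n)
```

Binning the azimuths more finely (e.g. 8 or 16 sectors, as in the TD/BU dataset labels of Table 2) gives the counts plotted in the half-rose diagrams.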
It is apparent from the rose diagrams that both methods capture a similar trend in the orientation of lineaments. The dominant trends range from E through to SE, with subordinate NE and N-S trends. Between the two methods there is little difference in the overall distribution, although the top-down method appears to emphasise SE-trending lineaments more than the bottom-up method.

Fig. 5. Final lineament networks produced by the two OBIA methods. The bottom-up dataset is represented as major and minor lineaments based on the metadata. Black lines represent the coast, the rectangular inset area of HMNB Devonport and the eastern limit of the study area. This figure includes OS data © Crown Copyright and database right (2018).

Table 2. Summary statistics for the polyline length (in metres) of each dataset. The mean is not considered a representative centre value due to the high skew. The range, median and kurtosis describe the non-normal distribution, whilst the IQR provides a better comparison of dispersion than the standard deviation. SD = standard deviation, IQR = interquartile range. The datasets are denoted as TD (top-down) and BU (bottom-up), where the numbers 8 and 16 refer to the number of compass directions used in post-processing.

Discussion

Two stand-alone OBIA methods for integrated lineament detection have been presented. Both methods can create similar data products but include different metadata, which are further enhanced to include lineament length and orientation during post-processing. The initial use of line extraction in the CNL demonstrated by Middleton et al. (2015) is effective, especially where the TDR transform is applied to airborne magnetic data. This study extends the use of OBIA methods to the TDR-transformed Total Count channel from airborne radiometric data and to elevation data from an airborne LiDAR DTM.

Application of the Tilt Derivative

Image analysis often requires an effective enhancement to extract detailed information from an input dataset. The TDR transform has previously been applied for edge detection where the zero-contour corresponds to the edge of magnetic bodies and where the signal does not overlap with nearby sources (Beamish and White, 2011; White and Beamish, 2011). The use of TDR minima for detecting weakness zones in the bedrock, such as shear zones or fault zones, was described for worming techniques by Airo and Leväniemi (2012). The selection of minima does not preclude the use of maxima, and the approach can be adapted for geological environments where fault zones may have enhanced magnetic, radiogenic or hardness properties. The detection of geological lineaments from TDR minima of airborne magnetic data using OBIA methods was first applied by Middleton et al. (2015) and compared to a worming technique similar to that of Airo and Leväniemi (2012). Middleton et al. (2015) also applied the TDR transform to airborne LiDAR DTM data but did not incorporate it into the semi-automated analysis. Here, the TDR transform has been successfully applied to airborne magnetic and LiDAR DTM data, in addition to the Total Count channel of airborne radiometric data. It is necessary to calculate the vertical derivative for the TDR transform using convolution, rather than a Fourier transform, for the LiDAR DTM and Total Count data. The resulting composite lineament networks are the first to use the TDR transform in this manner, and to successfully integrate multi-sourced data into a semi-automated analysis for geological lineament detection.
Top-down vs. bottom-up segmentation

Top-down OBIA methods for geological lineament detection are effective at capturing the regional lineament network. The method does require user knowledge to define appropriate thresholds for segmentation, although these can be readily determined through a heuristic approach. Extending objects is computationally intensive and adds significant processing time to the top-down method. Furthermore, assigning object-ends from super-objects in a sub-level results in only two nodes for extension, which can be problematic for multi-branched objects. Additional extension after integrating objects is more sensitive, as it requires another object-end to exist within proximity to the one being extended. There is also no control over the direction of growth. Finally, the output metadata provide information on the data source of the lineament, which can be useful for geological interpretation.

In contrast, the bottom-up OBIA methodology requires less user knowledge, since the whole image is segmented and then merged objectively. The method takes a slightly different approach by integrating the different lineness rasters into a composite lineness raster prior to segmentation. It also uses a border assessment approach to merge major objects into more extensive structures where they are interspersed with minor objects, creating more robust lineaments. Some information is lost through integrating the lineness rasters; however, this, coupled with the border assessment step, makes the bottom-up method much more computationally efficient compared to the top-down method. Although some detail is lost during integration, metadata can be produced by defining thresholds for major and minor lineament classes.

Comparison with existing geological mapping

Previous regional maps are often compiled for specific projects and tailored to focus on specific faults. The most complete regional map can be compiled from the 1:50 000 DiGMap data from the British Geological Survey (Fig. 8). This map incorporates fault data from thirty mapping districts in the region that were originally mapped at 1:10 000 scale in the field and later collated and reduced to 1:50 000. The regional 1:50 000 fault map is markedly inconsistent. Some districts have a high density of faults (Chulmleigh), whereas others are nearly completely devoid of them (Bodmin and Ivybridge). The poor outcrop exposure inland invariably makes conventional geological mapping difficult. However, regional variability is also influenced by lithostratigraphy (Chulmleigh has a lithostratigraphy very favourable for the recognition of fault separation and geomorphological features) and by changes in historical emphasis and mapping techniques (data for the Bodmin and Ivybridge districts were acquired over a century ago). Faults in upland granite areas are generally under-represented. The lineament datasets created from OBIA methods (Fig. 5) are more comprehensive and consistent than the existing regional fault map (Fig. 8). Furthermore, the lineaments generated capture similar, but more distinct, structural trends. OBIA lineaments are comparatively shorter in length, although they have not been subjected to cartographic drafting as is the case with the 1:50 000 data.

Assessment of the final lineament networks

The final lineament maps presented in Fig. 5 demonstrate two stand-alone OBIA methods for detecting lineaments. Both methods provide comprehensive and consistent lineament maps which are created semi-automatically with minimal prior knowledge by the user.
By integrating data from multiple sources, a composite lineament network is created based on geophysical, geological and geomorphological properties. The results from the two OBIA methods produce similar overall lineament patterns despite their fundamental differences. The extracted lineament networks are therefore considered to be robust and an accurate representation of the regional geological structure. Statistical analysis of the orientation of lineaments from both methods concurs with the trends recorded from field studies by Alexander and Shail (1995, 1996).

Previous attempts at using OBIA methods for lineament detection have only been applied to a single input dataset. These include applications to SAR (Marpu et al., 2008), Landsat (Mavrantza and Argialas, 2008) and LiDAR DTM data (Rutzinger et al., 2007), which restrict the mapping to lineaments that are only present at the surface. Conversely, Middleton et al. (2015) used airborne magnetic data to determine whether lineaments are related to structures present in the deeper crust. The OBIA methods developed here are the first to integrate three multi-sourced input datasets for both surficial and subsurface lineament detection to create a more comprehensive, composite lineament network.

The two contrasting OBIA methods presented here have different applications. For regional datasets, both methods can create a semi-automated lineament map. However, the bottom-up method provides a computationally efficient lineament map which sacrifices polyline lengths and output metadata. The metadata from the top-down method provide useful information on whether a lineament has a geomorphological expression (LiDAR DTM), may pertain to a deeper structure (magnetic data) or may have been leached during alteration (radiometric data). Where a lineament has been detected in all three datasets, it is considered to be more certain and likely to be significant. Irrespective of the OBIA method selected, the common pre- and post-processing steps are crucial to ensuring the quality of the final lineament maps. Careful pre-processing can mitigate the effect of artefacts in the input data, whereas thorough post-processing will remove any spurious lineaments generated during detection. There is, of course, scope to develop an algorithm that finds a better middle ground between the two OBIA methodologies. Revised methods could attempt to optimise lineament lengths for the bottom-up method or reduce the computational time of the top-down method.

Conclusions

Two stand-alone OBIA methods for the detection of geological lineaments from airborne geophysics and remote sensing datasets, to create a composite lineament network, are presented. OBIA methods for lineament detection are highly effective and can make use of top-down and bottom-up segmentation techniques. The use of the TDR transform, and the assumption that lineaments are represented by minima in the magnetic, radiometric and LiDAR DTM data, is valid for the detection of geological lineaments in SW England, but does not preclude the algorithm from detecting maxima where appropriate. By applying the TDR transform to the radiometric Total Count and LiDAR DTM data using convolution to calculate the vertical derivative, comparable datasets can be produced with similar properties for lineament detection. The further development of a top-down OBIA method for geological lineament detection allows the effective integration of multi-sourced datasets. The newly developed bottom-up OBIA method provides a
Fig. 8. The BGS 1:50 000 linear data for faults in SW England. All faults have been grouped into observed or inferred, with the number of objects noted in the legend. It is apparent that much of the BGS linework is based on inferred faults, which are a likely combination of both geological and geomorphological information, resulting in longer interpreted lengths than observed faults. The map also illustrates the inconsistencies across different mapping districts such as Chulmleigh (C), Ivybridge (I) and Bodmin (B). Furthermore, an inset is included to illustrate the orientation of all mapped (observed and inferred) faults. This figure is based on BGS 1:625 000 digital data and includes OS data ©Crown Copyright and database right (2018). Both OBIA methods produce similar composite lineament networks, which capture the orientations of existing mapped structures, and are therefore considered to be a robust representation of the regional structural geology. By integrating data from multiple sources, a comprehensive and composite lineament network is created based on geophysical, geological and geomorphological properties. The top-down method provides metadata that can aid geological interpretation and help determine the certainty and significance of a lineament. The bottom-up method is more computationally efficient but produces shorter lineament lengths and less detailed metadata.
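The orientation statistics discussed above (and shown in the Fig. 8 inset) can be reproduced in outline. The sketch below is a hedged example rather than the study's code: it assumes each lineament is available as a polyline of (x, y) vertices and computes a length-weighted azimuth histogram of the kind used for rose diagrams.

```python
import numpy as np

def segment_azimuths(polyline):
    """Azimuths (degrees, folded to 0-180) of each segment of an (N, 2) polyline."""
    pts = np.asarray(polyline, dtype=float)
    d = np.diff(pts, axis=0)
    az = np.degrees(np.arctan2(d[:, 0], d[:, 1]))   # angle measured from north
    return az % 180.0   # a lineament has orientation, not direction

def orientation_histogram(polylines, bin_width=10.0):
    """Length-weighted orientation histogram over all lineaments."""
    azimuths, weights = [], []
    for line in polylines:
        pts = np.asarray(line, dtype=float)
        azimuths.append(segment_azimuths(pts))
        weights.append(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    bins = np.arange(0.0, 180.0 + bin_width, bin_width)
    hist, _ = np.histogram(np.concatenate(azimuths), bins=bins,
                           weights=np.concatenate(weights))
    return bins, hist

# Example: a single roughly NE-SW trending lineament
bins, hist = orientation_histogram([[(0.0, 0.0), (1.0, 1.0), (2.0, 2.1)]])
```

Weighting by segment length prevents short, noisy lineaments from dominating the apparent structural trends.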
7,565.6
2019-02-01T00:00:00.000
[ "Geology", "Environmental Science" ]
ECOLOGICAL PROBLEMS OF ENTERPRISES OF THE ALCOHOL INDUSTRY . The issue of industrial water treatment and food waste disposal is particularly relevant for the alcohol industry. The problem of wastewater treatment and waste accumulation in alcohol production stems from high chemical and biological oxygen demand, specific colour and odour, a large amount of suspended substances, and low pH. Therefore, the choice of promising wastewater treatment technologies is of paramount importance for ensuring the ecological safety of the environment. The purpose of the paper is 1) to analyze the state of the alcohol industry in Ukraine in recent years and the methods of its waste disposal; 2) to investigate and identify effective methods of waste disposal in the alcohol industry. Introduction At present, the concepts of balanced use of natural resources and of ecologically safe, energy-efficient development of industrial enterprises are being gradually introduced. Their major purpose is to ensure that human needs for resources do not conflict with the priorities of environmental protection and human health. At the same time, the capacity growth of the alcohol industry at the present stage of food industry development leads to the formation of enormous volumes of wastewater (the production of 1 litre of ethanol is accompanied by the formation of 12-14 litres of wastewater). Due to the unsatisfactory current state of treatment facilities and the low efficiency of treatment and utilization processes in Ukraine, wastewater is one of the main factors of hydrosphere pollution, and sludge is one of the significant factors of lithosphere pollution, which in turn creates a number of environmental risks. Under conditions of constant deterioration of the environment, improving the quality of wastewater treatment before discharge, and improving the organoleptic and other characteristics of the formed sediments before their disposal, are urgent tasks today. Thus, there arises a scientific and applied problem of creating new, and improving existing, environmentally friendly technological processes for the treatment and disposal of industrial and municipal wastewater, which will ensure the rational use of available renewable resources. Theoretical, practical, and methodological issues related to the use of food industry waste and its social and economic consequences are the subject of scientific interest of many domestic and foreign scholars [1]. Examining the economic consequences of the waste impact on the world's natural resources, T. Zinchuk notes that all countries of the world, regardless of their resource endowment, feel the same negative impact of human production activities on the ecological balance, the environment and natural resources, whose state deteriorates dramatically and leads to the gradual loss of food security [2]. Problems of the rational use of renewable resources and of the creation of new, and improvement of existing, ecologically safe technological processes for the purification and utilization of industrial and municipal sewage are reflected in the works of S. Shamansky. In particular, he asserts that treatment plants are a potential source of additional raw materials, which are now considered waste, as well as of non-traditional energy sources whose potential is not fully used, primarily due to the shortcomings of modern wastewater treatment technologies [3].
Current world trends in the use of alcohol products allow several interrelated problems of a social and environmental nature to be solved. Thus, technologies aimed at improving fuel quality through the use of high-octane oxygen-containing additives to gasoline have recently become increasingly important, which allows blended gasoline to be classified as a biofuel. Solving this problem will simultaneously address several problems in consumer industries, namely the energy and refining industries. Moreover, the use of high-octane oxygen-containing additives to gasoline in the production of biofuels will significantly improve the environmental situation, especially in cities [4]. Ukraine annually consumes about 200 million tons of fuel and energy resources and is an energy-deficient country: it covers only about 53 % of its energy needs and imports 75 % of the required volume of natural gas and 85 % of crude oil and petroleum products. Such a structure of the fuel and energy complex is economically impractical, makes Ukraine's economy dependent on oil- and gas-exporting countries, and is a threat to its energy and national security. It is also known, based on assessments of global oil reserves, that the era of their depletion is approaching. Naturally, this will significantly exacerbate the energy problems of most countries. Therefore, the world economy pays great attention to the use of a renewable raw material, ethyl alcohol, as an energy source in the form of biofuel. Biofuel, which includes high-octane oxygen-containing additives to gasoline, is an energy resource of biological origin whose major feature is its renewability [5]. The goals of the article are to analyze the current state and the main environmental aspects of the development of the alcohol industry, including in Ukraine, and to identify promising areas for the utilization and use of waste from the alcohol industry of Ukraine as secondary raw materials, using the example of Ukrspyrt State Enterprise. The aim of the study is to find effective methods of utilization and the levers of state regulation that can direct the enterprises of the alcohol industry towards energy efficiency and the implementation of socially important environmental and economic tasks. Experimental part The alcohol industry is raw-material oriented. Alcohol is used in more than 150 industries. The raw material base for alcohol production consists of molasses, defective sugar, grain, and potatoes. Usually, distilleries are located in small settlements. Most alcohol in Ukraine is produced from sugar industry waste. The alcohol industry of Ukraine includes 79 state-owned production sites, 41 of which (only 12 operating) are part of Ukrspyrt State Enterprise, the leading producer of food alcohol. The total loss of the enterprises that form the alcohol industry reached UAH 25.7 million in 2018; only 8 of the 21 state-owned enterprises made a profit, and 11 enterprises are in bankruptcy [6]. The total production capacity of Ukrspyrt State Enterprise is over 36 million decalitres per year. In January 2020, Ukrspyrt State Enterprise officially sold 440.60 thousand decalitres of alcohol, whereas over the same period of the previous year only 255.17 thousand decalitres were sold [7]; this will lead to a roughly proportional, almost twofold change in the volume of effluents. Vodka and alcoholic beverages make up the largest share in the structure of sales of rectified ethyl alcohol by distilleries [7].
In terms of raw material consumption, alcohol production is the largest biotechnological production in the world, and ethanol is the third largest in terms of gross product value. The varieties of alcohol (ethyl alcohol, technical bioethanol, food alcohol, technical alcohol, medical alcohol, alcoholic beverages) depend on the type of raw material, the yeast strains, the quality of distillation, the degree of dilution, denaturation, etc. Today, all fuel ethanol is produced biotechnologically by yeast fermentation of either sugars (sugar cane) or starch-containing raw materials (mostly corn). The main wastes of alcohol production, depending on the substrate, are post-alcohol molasses bard, post-alcohol grain bard and post-yeast molasses bard with pH = 4.5-7.0, which after evaporation is disposed of as feed, fertilizer, feed additive, feed yeast, or in the production of drugs (acidin, glutamate), etc. Per 1,000 dal of alcohol, 140 m³ of grain bard or 12,000 dal of molasses bard is formed, on which fodder saccharomycetes are produced. The production of 1 litre of ethanol is accompanied by the formation of 12-14 litres of wastewater [8]. According to the data given in the research of T. Melnychenko, V. Kadoshnikov, K. Zhebrovska, O. Petrenko and O. Puhach, "Introduction of advanced technologies of waste utilization at the objects of alcohol industry - a guarantee of environmental protection", the composition and amount of wastewater differ significantly at different enterprises. Table 1 shows the characteristics of wastewater from alcohol enterprises that use molasses as raw material [9]. Traditionally, fresh bard in its natural form was used for fattening cattle on collective farms and fattening farms. Glycerin (only at the Lokhvytsya Distillery), glutamic acid and monosodium glutamate, betaine, medical acid, feed yeast, and feed concentrate of vitamin B12 (KMB-12) were obtained from it by processing [9]. Characterization of wastewater is necessary to determine the method of treatment, the possibility of discharge into reservoirs, and the presence of valuable or toxic impurities. The methods for wastewater treatment, after which the water is submitted for further treatment in compliance with all water quality requirements provided by law, are then determined. The composition of wastewater and its properties are evaluated from the results of sanitary-chemical analysis, which consists of a number of physical, physicochemical and sanitary-bacteriological determinations. The great diversity of effluent compositions and the impossibility of analyzing every polluting component lead to the need to choose indicators that describe certain properties of the water without identifying individual substances. Such indicators are called group (total) indicators. A complete sanitary-chemical analysis includes the following indicators: temperature, colour, odour, transparency, suspended solids by volume and mass, permanganate oxidation, chemical oxygen demand (COD), biochemical oxygen demand (BOD), pH, dry residue, dense residue and loss on ignition, surfactants, petroleum products, dissolved oxygen, microbial count, Escherichia coli bacteria (ECD), helminth eggs, nitrogen (total, ammonium, nitrite, nitrate), phosphates, chlorides, sulfates, heavy metals and other toxic elements. In addition to these indicators, the mandatory tests of a complete sanitary-chemical analysis at municipal treatment plants include the determination of the specific impurities entering the drainage system of settlements from industrial enterprises.
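The effluent figures quoted above lend themselves to a simple scaling estimate. The sketch below merely restates the article's ratios (12-14 litres of wastewater per litre of ethanol; roughly 140 m³ of grain bard per 1,000 dal of alcohol, with 1 dal = 10 litres); the production volume in the example is the sales figure cited earlier, used purely for illustration.

```python
# Rough effluent estimate from the ratios quoted in the text.
DAL_TO_LITRES = 10.0                        # 1 decalitre = 10 litres
WASTEWATER_PER_L_ETHANOL = (12.0, 14.0)     # litres of wastewater per litre
GRAIN_BARD_M3_PER_1000_DAL = 140.0          # m3 of grain bard per 1,000 dal

def effluent_estimate(alcohol_dal):
    """Return (min_wastewater_m3, max_wastewater_m3, grain_bard_m3)."""
    litres = alcohol_dal * DAL_TO_LITRES
    ww_min = litres * WASTEWATER_PER_L_ETHANOL[0] / 1000.0   # litres -> m3
    ww_max = litres * WASTEWATER_PER_L_ETHANOL[1] / 1000.0
    bard = alcohol_dal / 1000.0 * GRAIN_BARD_M3_PER_1000_DAL
    return ww_min, ww_max, bard

# Example: 440,600 dal (the January 2020 sales figure cited above)
print(effluent_estimate(440_600))   # ~52,900-61,700 m3 of wastewater
```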
Requirements for the composition and properties of enterprise wastewater for its safe disposal through the sewerage network are set out in Appendix 1 to the "Rules of wastewater acceptance of enterprises in communal and departmental sewerage systems of settlements of Ukraine", approved by the order of the State Construction Committee of Ukraine of February 19, 2002, N 37 [10]. In Ukraine, the technology of wastewater treatment at distilleries developed by the Department of Ecology of UkrNDIspyrtbioprod provides for: - purification of luter (spent rectification) water at bio-treatment plants and its return to production; - treatment of post-alcohol grain bard by centrifugation or vacuum filtration to obtain a pellet of 25 % dry matter, which is dried on a drum or disk dryer with a yield of 7 tons of protein feed per 1,000 dal of alcohol; - cooling of the liquid fraction (100 tons of filtrate per 1,000 dal of alcohol) to 35-45 °C and two-stage anaerobic-aerobic purification with biogas recovery [8]. Fig. 1 shows the basic technological scheme of the most effective mechanical and biological treatment of industrial and domestic sewage to produce pure, ecologically safe, biologically high-grade water. For mechanical treatment, wastewater 3 passes through the grids 4, where coarse mechanical impurities are retained, then through the sand trap 5, where sand is separated, and finally enters the primary settling tanks 6, where, under gravity, everything heavier than water settles to the bottom. The settled sediment is then pumped into the methane tanks 1 for fermentation, with the release of methane gas, and after a full fermentation period is discharged onto the sludge sites with drainage 2; everything lighter than water rises to the surface, where it is collected by special devices into a hopper and also goes to the methane tank. At all stages of the water's passage through the treatment plant, biological processes take place in it, but the most pronounced biological treatment occurs at the second, biological stage. In a bioreactor with a biofilm, the biomass of aquatic organisms 7, which grows during water purification, is separated in the secondary settling tanks 8, from where it is fed either to the methane tanks 1 or to the sludge sites 2. The final step of water purification is its complete disinfection, that is, the destruction of epidemically dangerous organisms and vibrios in the water. For this purpose chlorination, irradiation with ultraviolet light and, less often, ozonation are used. When treated wastewater is disinfected with chlorine, it is held for 20-30 min in the contact tanks 10 and then discharged into open water. Some scientists consider the chlorination of wastewater absolutely unacceptable. Therefore, a reliable scheme has been developed that includes treatment with flocculants and coagulants in the apparatus 11, settling 12, filtration through sand 13 and, finally, through activated carbon 14. The sludge in this scheme is concentrated on the filters 15 and sent to landfill. Fig. 1.
Schematic diagram of mechanical, biological and chemical wastewater treatment: 1 - methane tank; 2 - sludge sites; 3 - wastewater; 4 - grid; 5 - sand trap; 6 - primary settling tank; 7 - bioreactor (aeration tank with biofilm); 8 - secondary settling tank; 9 - capacity for chlorination; 10 - contact tank; 11 - capacity for flocculation and coagulation; 12 - settling tank; 13 - sand filter; 14 - filter with activated carbon; 15 - sludge thickener [11]. During methane fermentation of alcohol bard (4.2 % dry matter), 22 volumes of gas are formed per volume of fermentation liquid, together with 2.5 % of acids (0.46 % formic, 0.79 % acetic, 0.86 % propionic, 0.39 % butyric, etc.) [8]. Due to the formation of a large volume of post-alcohol bard (PSB; 12-15 dm³ per 1 dm³ of alcohol), simple methods of its utilization, such as treatment in filtration fields, use as raw feed for cattle, or fermentation, do not solve the problem completely. Since bard has a fairly high moisture content (90-95 %), the use of physical and chemical methods, namely drying to obtain dry DDGS feed, requires expensive equipment and high energy consumption. The cultivation of feed yeast biomass and the methods of aerobic and anaerobic fermentation also have several disadvantages (formation of culture fluid, swelling of activated sludge, inability of the system to reduce high BOD or COD), which call for high-quality integrated solutions. The cultivation of feed yeast biomass on PSB requires the installation of additional facilities for purification of the culture fluid. To dispose of PSB by anaerobic digestion, it must be pre-treated (separation of the liquid and solid phases, ozonation, etc.) and supplemented with substances that stabilize the pH and provide nutrients, which increases the cost of the process. Results of the investigation Examples of methods used to treat wastewater from the alcohol industry are adsorption, wet oxidation, ozone treatment and ion exchange, removal of chromophores using hydrogenation and biological batch reactors, and colour (adsorbed organic halogens) removal by advanced oxidation processes. There are several ways to remove colour from the industrial wastewater of distilleries, but melanoidins are difficult for microorganisms to break down; as a result, the effluent of conventional treatment systems still contains high levels of colour, melanoidins and COD. Many of the disadvantages of the above methods are overcome by electrocoagulation. The energy consumption required for wastewater treatment in the food industry amounts to 10-15 % of total energy consumption, and it is mainly spent on aerobic treatment, in which organic compounds are oxidized. By contrast, anaerobic methane fermentation yields biogas with a methane content of 50 to 70 %, thus saving electricity compared with aerobic treatment methods. Wastewater treatment technology in France: 15 food industry enterprises in France are equipped with anaerobic treatment systems for wastewater containing organic contaminants, using methane tanks with capacities from 25 to 5,000 cubic metres. The world's largest methane tank operates on the principle of an anaerobic filter and has a capacity of 13 thousand cubic metres (a rum plant in Puerto Rico). The method of methanogenic wastewater fermentation is appropriate in the food industry due to the high concentration of contaminants in relatively small volumes of water, the soluble nature of the organic contaminants, and the favourable temperature of the wastewater (30-35 °C). India is the largest producer of ethanol in Asia.
Most distilleries in India use the anaerobic sludge blanket reactor or its variants, with subsequent settling / filtration and aerobic treatment processes. This process is able to remove approximately 70 % of COD and 80-90 % of BOD. Wastewater from the settling tanks / filters is called biodigested effluent (BDE). BDE still contains very high levels of COD and BOD and is therefore further treated aerobically. Although large plants around the world use complex treatment technologies, including physical, chemical and biological methods, small enterprises (plants) suffer from the lack of proper treatment systems due to financial constraints. Analyzing the research of M. Potapova published in the article "Modern methods of processing and disposal of grain post-alcohol bard", we see that one of the promising methods of PSB utilization is anaerobic fermentation. It has been argued that co-fermentation of raw bard with bird droppings makes it possible to balance the nutrients, to bring the pH to the level required for methanogenesis, and to achieve the desired C:N ratio. Under co-fermentation conditions, a biogas yield of 265 cm³/g COD with a methane content of 72 ± 2 % is achieved. It is planned to develop a mathematical model for controlling the technological process of obtaining biogas from grain bard depending on environmental conditions and the PSB : manure ratio [2]. Analyzing the activity of Ukrspyrt State Enterprise, we have found that one of the products of the industry is dehydrated, denatured bioethanol, which is used for the production of gasoline, biofuels, ethyl tert-butyl ether, and functional additives to motor fuel. Dehydrated bioethanol is used as a component of automotive fuel for vehicles with petrol engines; it meets the requirements of EN 228 and can be blended in any proportion. The organization of the production of alternative motor fuel with bioethanol content allows the use of an environmentally friendly component in motor fuel, which significantly reduces emissions of toxic substances, reduces dependence on energy imports, and improves the overall environmental situation in the country [9]. Currently, Ukrspyrt produces 150 thousand tons of bioethanol per year and is carrying out the reconstruction of enterprises, which will supply the Ukrainian market with this product. It is known that 15 distilleries are ready to produce biofuels, and with the start of full-scale bioethanol production Ukraine could establish exports of this product [12]. However, modern bioethanol production also requires wastewater disposal. Conclusions By increasing its capacity, the alcohol industry increases the volume of waste, including the volume of wastewater that needs to be treated effectively before being released into the environment. At the factories of the alcohol industry, essentially all wastewater is treated at the factory treatment facilities (stations). Wastewater treatment is necessary before transferring wastewater to biochemical treatment plants, before discharge into reservoirs, or before regeneration in the plant's circulating water supply system. Biological methods of purification from organic substances play an important role at the enterprises of the alcohol industry. These methods are based on the use of microorganisms that convert organic compounds into nutrients and energy sources; the organic compounds are broken down by microbial oxidation in aerobic treatment or by fermentation in anaerobic treatment.
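The co-fermentation figures cited above (a biogas yield of 265 cm³ per gram of COD, with 72 ± 2 % methane) permit a rough estimate of methane recovery. The sketch below only restates that arithmetic; the daily COD load used in the example is hypothetical.

```python
# Back-of-the-envelope methane recovery from the co-fermentation figures above.
BIOGAS_YIELD_M3_PER_KG_COD = 0.265   # 265 cm3/g is equivalent to 0.265 m3/kg
METHANE_FRACTION = 0.72              # 72 +/- 2 % methane in the biogas

def methane_from_cod(cod_removed_kg):
    """Cubic metres of methane produced for a given mass of COD removed."""
    biogas_m3 = cod_removed_kg * BIOGAS_YIELD_M3_PER_KG_COD
    return biogas_m3 * METHANE_FRACTION

# Example: 1 tonne of COD removed per day (hypothetical load)
print(f"{methane_from_cod(1000.0):.0f} m3 CH4/day")   # ~191 m3/day
```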
The analysis of wastewater is required to determine the method of treatment, the possibility of release into water bodies, and the presence of valuable or toxic impurities. Effective modern methods can remove 85-95 % of organic pollution from wastewater; the treated water retains only small amounts of surfactants, dissolved mineral salts and other compounds. Oxygen-free (anaerobic) wastewater treatment in methane tanks is widely used in the food industry to produce biogas for energy and bioorganic fertilizers (activated sludge). One of the most effective environmental directions for the alcohol industry is the introduction of a set of measures that will ensure the production of alcohol and its by-products with the lowest content of harmful substances and prevent the violation of the ecological balance in the environment; this is realized through the development and implementation of the latest environmentally friendly technologies for production and for the disposal of its waste.
4,526.2
2020-01-01T00:00:00.000
[ "Engineering" ]
Biomechanical Effects of Various Bone-Implant Interfaces on the Stability of Orthodontic Miniscrews: A Finite Element Study Introduction Osseointegration is required for prosthetic implants, but the various bone-implant interfaces of orthodontic miniscrews are of great interest to the orthodontist. There is no clear consensus regarding the minimum amount of bone-implant osseointegration required for a stable miniscrew. The objective of this study was to investigate the influence of different bone-implant interfaces on the miniscrew and its surrounding tissue. Methods Using finite element analysis, an advanced approach representing the bone-implant interface was adopted, and different degrees of bone-implant osseointegration were implemented in the FE models. A total of 26 different FE analyses were performed. The stress/strain patterns were calculated and compared, and the displacement of the miniscrews was also evaluated. Results The stress/strain distributions change with the various bone-implant interfaces. In the scenario of 0% osseointegration, a rather homogeneous distribution was predicted. After 15% osseointegration, the stresses/strains were gradually concentrated in the cortical bone region. The miniscrew experienced the largest displacement under the no-osseointegration condition. The maximum displacement decreases sharply from 0% to 3% and then tends to become stable. Conclusion From a biomechanical perspective, it can be suggested that orthodontic loading could be applied to miniscrews after about 15% osseointegration without any loss of stability. Introduction Miniscrews have been extensively applied in orthodontic treatment as temporary anchorage devices because of their ease of placement, low cost, minimal anatomic limitations, and enhanced patient comfort. The existing evidence suggests a success rate of more than 80% for miniscrews [1]. Likewise, Albogha and Takahashi have reported a success rate ranging from 77.7% to 93.43% in their study [2]. However, the failure of a miniscrew may have dramatic consequences and remains difficult for orthodontists to anticipate [3]. Since the failure of a miniscrew necessitates additional surgical interventions and prolonged treatment time, investigating the mechanical stability of miniscrews becomes imperative. The biomechanical properties of the bone-implant interface are the key determinants of miniscrew stability. Initially, when the miniscrew is placed into bone, retention of the implant is provided by mechanical locking. Later, with the progression of bone formation around the implant, bioactive retention can be achieved via physicochemical bonding. It is clinically evident that full osseointegration is a prerequisite for successful prosthetic (or dental) implants [4,5]. Nevertheless, some fibrous tissue formation at the bone-implant interface would be acceptable, because orthodontic loading has to be applied as early as possible and the miniscrew must be easily removable at the end of treatment [3]. That is to say, partial bone-implant osseointegration of the miniscrew might be permissible for orthodontic treatment. Therefore, the effect of different degrees of bone-implant osseointegration on stability is of great interest from the orthodontist's point of view. The objective of this study was to investigate the influence of different implant-bone interface conditions on the biomechanics of an orthodontic miniscrew and its surrounding tissue with the use of finite element analysis (FEA).
FEA is particularly suitable for biological structure analysis as it allows great flexibility in dealing with geometrically complex domains composed of multiple materials [2,[6][7][8]. In the present study, an advanced approach representing the bone-implant interface was adopted [9], wherein different percentages of bone-implant osseointegration were implemented in the FE models, and the biomechanical behavior of the miniscrew and the supporting tissue with the various bone-implant interfaces was predicted and compared. Materials and Methods The geometry of the partial maxilla, including both a premolar and a molar, was obtained from the dental hospital, and computed tomography images captured at 0.5 mm intervals were processed with Mimics software (Materialise NV, Leuven, Belgium) and Geomagic Studio software (Geomagic Company, NC, USA). Maxillary trabecular bone was modeled as a solid structure within the cortical bone, which had an average thickness of 2 mm based on the CT images. Likewise, the periodontal ligament (PDL) was modeled from the external geometry of the tooth roots with a thickness of 0.20 mm. The implant was structured as a threaded endosseous miniscrew (8 mm length, 1.3 mm diameter, 0.1 mm thread ridge, 60-degree thread angle, and 0.5 mm thread pitch) using the commercial CAD software SolidWorks (SolidWorks Corp., Dassault Systemes, Concord, MA, USA). The miniscrew was inserted into the maxillary bone between the premolar and molar at a distance of 3 mm from the alveolar crest, as shown in Figure 1. The entire model was imported into the finite element package ANSYS Workbench (Swanson Analysis System Co., Houston, TX, USA). The finite element model was meshed using 10-node solid tetrahedral elements (Figure 2(a)). Following a convergence test [7], 0.5 mm was determined to be the appropriate element size for bone and tooth, and a finer size (0.2 mm) was selected to accommodate the small features in the model (e.g., PDL and miniscrew). The detailed element assignment is listed in Table 1. The contacts among the tooth, the related bones, tissue, and ligaments are defined in Table 2. For a realistic representation, different amounts of bone-implant osseointegration were implemented, ranging from 0% to 100% (Figure 3). Existing studies have found that a small gap exists between the implant and the peri-implant bone [9,10]. To evaluate the effect of different bone-implant interfaces, the simulation method developed by Lian et al. [9] was used in the present study. Hence, based on the histological image (Figure 2(c)) [11], it was assumed that a 0.1 mm (100 μm) thick layer of mixed tissue exists around the miniscrew, constituting a blend of bony tissue and soft tissue, to simulate varying bone-implant contact (Figure 2(b)). An ad hoc APDL (ANSYS Parametric Design Language) routine was developed to set the different degrees of bone-implant osseointegration. As shown in Figure 3, a certain percentage of mixed-tissue elements were selected randomly and assigned the properties of bony tissue; the remaining elements within the mixed tissue were designated as soft tissue. In this study, a total of 13 different percentages of bone-implant osseointegration were considered (0%, 1%, 2%, 3%, 4%, 5%, 10%, 15%, 20%, 25%, 50%, 75%, and 100%). The mesial and superior maxillary bone surfaces were fixed in all directions as the boundary conditions (Figure 1(b)).
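The ad hoc APDL routine described above is not reproduced in the paper, but its core idea (randomly converting a given percentage of mixed-tissue elements to bony tissue) can be sketched in a few lines. The following Python snippet is an illustrative analogue, not the authors' code; the element IDs and material labels are hypothetical.

```python
import numpy as np

def assign_osseointegration(element_ids, percent_bony, seed=0):
    """Randomly label a given percentage of mixed-tissue elements as bone.

    Mirrors, in spirit, the APDL routine described in the text: a random
    subset of the 0.1 mm mixed-tissue layer elements receives bony-tissue
    properties, while the rest remain soft tissue.
    """
    rng = np.random.default_rng(seed)
    element_ids = np.asarray(element_ids)
    n_bony = round(len(element_ids) * percent_bony / 100.0)
    bony = set(rng.choice(element_ids, size=n_bony, replace=False).tolist())
    return {int(eid): ("bony" if int(eid) in bony else "soft")
            for eid in element_ids}

# Example: the 15% osseointegration case over 2,000 interface elements
labels = assign_osseointegration(np.arange(2000), percent_bony=15)
```

Repeating the assignment with a different seed gives a different random realisation of the same osseointegration percentage, which is one way to check that results are not an artefact of a particular element selection.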
To consider the loading effects of different clinical applications [9,12], two different kinds of orthodontic load (a traction force and a revolving torque) were applied at the head of the screw (Figures 1(c) and 1(d)). The traction force was applied at a 30-degree declination to the occlusal plane (Figure 1(c)), and the revolving torque was applied in the clockwise direction (Figure 1(d)). Fully anisotropic elastic properties were used for both cortical and trabecular bone [13,14], as listed in Table 3. A nonlinear elastic stress-strain behavior of the PDL was employed in the FE models, following the approach proposed by Toms et al. [14]. The miniscrew and dentin were considered homogeneous, isotropic, and linearly elastic (Table 4). Results In the present study, a total of 26 analyses were performed, covering the 13 different degrees (from 0% to 100%) of bone-implant osseointegration under two different kinds of orthodontic load (traction force and revolving torque). The FE-simulated results for the force and torque loads in the peri-implant (mixed) tissue are shown in Figures 4 and 5, respectively. For ease of observation, equivalent stresses/strains in the cross section of the FE models are displayed. As shown in the figures, the stress and strain distributions in the mixed tissue change with the various bone-implant interfaces. Initially, in the scenario of 0% osseointegration, a rather homogeneous equivalent stress/strain distribution was predicted. Remarkable stress/strain concentrations then appeared in the peri-implant tissue with the 1% osseointegration interface. After 15% osseointegration, the stresses and strains were gradually concentrated in the cortical bone region rather than in the trabecular bone region. It is worth noting that, whichever orthodontic load is applied, there is a significant change over the first seven osseointegration degrees (0%~10%), but the range of variation reduces markedly after 15% osseointegration. Figures 6 and 7 show the equivalent stress and strain in the surrounding bone under the application of traction force and revolving torque, respectively. As evident from Figure 6, under traction force the stress induced in the cortical bone was much higher than that in the trabecular bone. With the change in bone-implant interfaces, the stress distribution gradually concentrated in the bone around the neck of the miniscrew. The strain distribution showed a trend similar to the stress; however, in the initial phase (0%~15%), the maximum strain was located in the trabecular bone rather than the cortical bone. Furthermore, the equivalent stress and strain distributions under revolving torque are shown in Figure 7; their changing trend is much the same as under traction force. At the beginning (0% osseointegration), the stress was distributed longitudinally along the whole miniscrew. With the integration of bone and implant, the stress became highly concentrated at the neck of the miniscrew. Similarly, the strain distribution was concentrated in the trabecular bone initially and later shifted towards the neck of the miniscrew as the bone-implant osseointegration percentage changed. Figure 8 quantitatively illustrates the relationship between the degree of bone-implant osseointegration and the biomechanical characteristics of the bone-implant complex.
Figures 8(a) and 8(b) represent the change in average equivalent stress and strain in the mixed (peri-implant) tissue, respectively. It is evident from Figure 8(a) that the stress increases progressively up to 10% osseointegration, decreases slightly, and then increases again. As shown in Figure 8(b), the strain decreases significantly at the beginning and then tends towards stability until 100% osseointegration is achieved. Figures 8(c) and 8(d) show how the equivalent stress changes with increasing osseointegration in the cortical bone and trabecular bone regions, respectively. Initially the stress increases sharply, then drops, followed by a gradual increase again after 15% osseointegration. As shown in Figure 8(d), the graph exhibits patterns similar to those in Figure 8(c), but no turning point is observed at 15% osseointegration in the trabecular bone. As for the displacement (Figure 8(e)), the miniscrew experienced the largest displacement under the condition of no osseointegration (0%). The maximum displacement decreases sharply from 0% to 3% and tends to become stable after completion of approximately 3~4% osseointegration. Discussion In this study, finite element models were generated to investigate the effect of different implant-bone interface conditions on the mechanical stability of the miniscrew. From 0% to 100%, 13 different degrees of bone-implant interface were simulated. The stress/strain patterns generated by the miniscrews in the surrounding tissue were calculated and compared, and the displacement of the miniscrew was also evaluated. In dental biomechanics, almost all current FE models simulate different bone-implant interfaces by employing frictional contact analysis [2]. In a typical FE model built by Yang and Xiang [15], three different contact types were used to represent the integration quality at the implant-bone interface: the "bonded" type simulates full osseointegration; the "no separation" type indicates imperfect osseointegration; and the "frictionless" contact implies no osseointegration. As a more advanced technique for simulating partial contact, a random algorithm was developed by Gracco et al. to set some of the nodes localized at the bone/implant interface as tie constraints, with the remaining part of the interface set as frictional contact [16]. However, although contact analyses with different frictional coefficients can be used to assess the biomechanical effects of many different implant-bone complexes, the specific frictional coefficients are still difficult to determine by existing biomechanical testing [17]. To overcome the limitations of the existing methodologies, an alternative method was proposed by Lian et al. [9]. In this method, it is assumed that a small layer of tissue surrounding the implant consists of a mix of hard and soft tissue. According to the observations of previous histological studies [18,19], this assumption has proved to be reasonable. Therefore, considering the progressive change of the tissue surrounding the miniscrew, this alternative method was advanced from a 2D simulation to 3D to reproduce the different bone-implant interfaces in our FE models. To date, the minimum level of bone-implant osseointegration for clinical success in orthodontics has not been clearly defined.
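For reference, the equivalent stress reported throughout Figures 4-8 is the von Mises measure, the standard scalar stress that FE packages such as ANSYS output for this kind of comparison. The snippet below gives the generic textbook formula from the six components of a symmetric stress tensor; it is not code from the study.

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Equivalent (von Mises) stress from six stress components (same units)."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Example: uniaxial tension of 10 MPa gives an equivalent stress of 10 MPa
print(von_mises(10.0, 0.0, 0.0, 0.0, 0.0, 0.0))
```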
From the biomechanical viewpoint, the minimum amount of bone-implant osseointegration required can be inferred from our analytical results. For the equivalent stress and strain (Figures 4-7), the implant-bone interface conditions significantly affected the stress/strain distributions in the surrounding tissue when osseointegration was less than 15%. Further analysis (15%~100%) reveals that, even though stress/strain concentrations appear around the implant neck region, the overall changes in the stress/strain distributions from 15% to 100% osseointegration are negligible. Besides, according to the progressive biomechanical characteristics of the bone-implant complex (Figure 8), the minimum amount of bone-implant osseointegration may vary between 2% and 10%. Some previous findings also support our results: Deguchi et al. implied that implants with as little as 5% bone osseointegration at the bone-implant interface can successfully withstand orthodontic force [20], and the study by Woods suggested that 2.2 percent BIC may be sufficient for light force [21]. However, at low degrees of bone-implant osseointegration (0~15%), our results show that the presence of connective (soft) tissue at the implant-bone interface may result in an increase of stress/strain magnitudes in the trabecular bone compared with full osseointegration. Because of the occurrence of fibrous tissue, the miniscrew cannot be tightly held by the alveolar bone, leading to miniscrew instability. The surrounding trabecular bone might be damaged by the changing mechanical environment induced by the miniscrew, and excessive implant displacement may cause loosening, dislocation, or even loss of the implant. Therefore, it can be inferred that orthodontic loading can be applied to the miniscrew after completion of 15% osseointegration without compromising stability. That is to say, less than 15% osseointegration might be a risk factor for implant stability and hence should be avoided. Now, a critical question arises: what should the latency period be to achieve the minimum percentage of bone-implant osseointegration during orthodontic treatment? In previous animal/histological studies, several investigations have addressed the healing time of orthodontic miniscrews; however, no study has been conducted specifically to answer this question, and the existing results are inconclusive about the proper timing of orthodontic force application. A histological study by Ramazanzadeh et al. concluded that healing time has no significant effect on miniscrew stability, but only a comparison of bone-implant contact between 4 weeks and 8 weeks was made in that study [22]. In another study, by Oltramari-Navarro et al., similar histomorphometric results were observed for the immediate and delayed orthodontic load groups, but it is important to note that the immediate group presented a higher failure rate (50%) than the delayed group [18]. Likewise, the results of an animal study by Zhao et al.
indicated that early loading may decrease the osseointegration of miniscrews, and the same investigators suggested that a 4-week healing time is recommended before orthodontic loading [23]. [Table 3 footnote: Gij represents shear modulus (GPa); νij represents Poisson's ratio; for one bone type the 1-direction is radial, the 2-direction is tangential (circumferential), and the 3-direction is axial (longitudinal); for the other, the 1-direction is inferosuperior (the axis of transverse isotropy symmetry, with the smallest Young's modulus value), the 2-direction is mediolateral, and the 3-direction is anteroposterior.] Figure 3: Different degrees of the bone-implant osseointegration interfaces implemented in the FE models, varying from 0% to 100%. Deguchi et al. also concluded that a minimal healing period of 3 weeks is required for orthodontic loading [20]. Above all, the existing animal experiments present some useful conclusions; however, their results remain limited when it comes to understanding how the various bone-implant interface conditions affect miniscrew stability. The research limitations and suggestions for future research should be pointed out. Firstly, additional animal research is required to answer the above-mentioned question: if the exact time for achieving 15% osseointegration of a miniscrew could be confirmed, the appropriate time of miniscrew loading could be effectively ensured for orthodontists. Secondly, the bone remodeling process was not considered in the simulation. In fact, bone remodeling occurs around the implant during the healing period, so the progressive process of bone remodeling should be included in further simulations to investigate the mechanical stability of orthodontic miniscrews. Finally, the nonlinear material properties of the mixed tissue (hard and soft tissue) should be considered in the FE analysis. Because large deformations can be observed in this simulation, the incorporation of nonlinear properties could provide more accurate and reliable results. Conclusions Within the limitations of this study, it can be suggested that orthodontic force can be applied to the miniscrew after completion of approximately 15% osseointegration, which is more beneficial for the mechanical stability of the miniscrew. Under this condition, the miniscrew can be tightly held in place by the surrounding tissue and employed as orthodontic anchorage without compromising implant stability. [Figure 8(e) residue: maximum displacement plotted against percentage of osseointegration (0%-100%) for cortical bone.] For clinical application of the results simulated in our study, a specifically designed study is required to confirm the appropriate time of orthodontic loading in the future. Conflicts of Interest The authors declare that there is no conflict of interest regarding the publication of this paper.
4,153.8
2017-06-19T00:00:00.000
[ "Engineering", "Medicine" ]
Global processes and their impact on the development of the macro-regional economy . Globalization is a process of worldwide economic, political and cultural integration and unification. The authors determined that the processes of globalization are driven by economic feasibility. At the same time, since the effects of globalization are, as a rule, distributed unevenly and cause contradictions in the financial sector, excessive integration and interdependence of countries exacerbate crises in social development. It is noted that, owing to the evolution of social development, all states must participate in the processes of globalization; however, a policy should be pursued to reduce the impact of global problems both at the level of an individual country and worldwide. The negative impact of global processes can be avoided by systematizing them and analyzing the causes of the problems in detail. The causes of global problems were studied, and the problems were systematized according to their main features. It is concluded that states face the difficult task of finding new forms of interaction, or of transforming and improving existing elements of interaction between various international institutions, and of finding mutually beneficial compromises. Introduction At present, integration in the field of socio-economic development is increasingly being carried out at the world level. Economic and social interactions are transforming into global processes. As a result, new technologies and ways of organizing production are inevitably created and developed within the framework of new technological models. The process of globalization is natural and inevitable from the point of view of the evolution of the socio-economic development of society [1]. The growth in the volume of high-tech products makes it necessary to accelerate the build-up of innovative potential, which becomes possible as part of the creation of global chains that combine the capabilities of various participants in the innovation process at both the national and international levels. The high share of research and development costs in the cost of innovative products creates the need to increase production volumes, which, as a rule, exceed the capacity of one or even several countries. Therefore, globalization solves not only the problem of forming the innovative potential needed for creating and producing a product, but also that of securing a market large enough to ensure the payback of investment costs. Along with this, globalization can act as a tool for solving global environmental problems by pooling resources and creating international institutions that ensure the development and implementation of common environmental standards. At the same time, globalization also carries political, social and environmental risks, influencing all spheres of society. Globalization has a very strong impact on the world economy. At present, countries cannot protect themselves from its attendant problems, since they cannot bypass them in any case; all states, one way or another, are involved in this process [2,3]. Problem statement Along with economic expediency, excessive integration and interdependence often exacerbate crisis phenomena in social development, since the effects of globalization are, as a rule, distributed unevenly and cause contradictions in the financial sector.
The increase in risks in the global financial market causes periodic global financial crises. At the International Economic Forum in Davos (Switzerland), the United Nations member countries considered the next phase transition of the world economy in 2016. The main issues in the field of sustainable development were the protection of the environment and the fight against climate change. At the same time, goals were formulated to ensure economic growth and the elimination of poverty, increasing prosperity while protecting the planet from the negative consequences of economic activity [4,5]. The level of social organization and of political and environmental consciousness often does not keep pace with humanity's active transformation of its environment. The formation of a single sociocultural space has meant that local contradictions and conflicts have acquired a global scale [6]. Global problems are generated by the contradictions of social development and by the sharply increased scale of the impact of human activity on the surrounding world, and they are also associated with the uneven socio-economic, scientific and technological development of countries and regions. The solution of global problems requires the development of international cooperation [7]. Research questions The globalization of social development is a combination of various types of global processes: technospheric, socio-natural, biospheric, economic, etc. From the point of view of economic feasibility, globalization simplifies the access of participating countries to the world market. Thanks to the introduction of comprehensive standards and of information available at the global level to all participants in the world market, the activities of world financial institutions and of the governments of the leading countries become transparent. Accordingly, globalization increases the accessibility of financial information, foreign resources, and managerial and technological experience to all market participants [5,8]. At the same time, globalization can act as an instrument of neo-colonial policy on the part of the leading countries, as well as an instrument of economic and political pressure on individual countries, turning globalization into a way of redistributing resources and income in their interests. Under these conditions, protecting the interests of the countries participating in globalization requires studying the factors that give rise to the corresponding risks. The creation of mechanisms and institutions for protecting their interests should be based on the identified factors, which will make it possible not only to resolve the emerging contradictions between individual countries but also to ensure the global stability of the world economic system. Research purpose The purpose of the study is to examine the causes of, and to systematize, the problems of global processes, as well as to identify ways to reduce their negative impact both on the world community and on an individual state. Research methods The interdependent activities of world market participants in various fields (especially the technospheric and socio-natural) can have a negative impact on the environment and create natural-climatic challenges and the threat of man-made disasters [9]. The challenges and problems of global processes are considered as a planetary set of interrelated problems covering all countries and all aspects of people's lives.
Global problems are dynamic; their occurrence is due to the objective processes of integration of international cooperation of countries at both the economic and the civilizational level. The effective solution of global problems requires uniting the efforts of all mankind. Global processes have both positive and negative features, and they do not always pose a threat to humanity, as they ensure its dynamic development in all areas. However, the desire to maximize the income of the leading countries in the economic sphere and, as mentioned above, the uneven distribution of income between the countries participating in the world market contribute to the emergence of global problems [3]. At present, the countries participating in global processes are exposed to global political, socio-economic and environmental risks, as they are forced to accept global challenges. It can be said that global processes affect the development of all states simultaneously and of each state separately. No single state can stop the process of globalization, since the leading countries, which have the greatest influence on world global processes, are always interested in strengthening their leading positions and in influencing the development of other, less developed countries [10]. Findings Owing to the evolution of social development, all states should participate in the processes of globalization, but a policy should be pursued to reduce the impact of global problems at the level of an individual country and around the world. The negative impact of global processes can be avoided, to some extent, through their systematization and a detailed analysis of the causes of the problems [11,12]. According to theoretical provisions, global problems are classified according to the main features shown in Table 1. The study of the causes, and the systematization, of the problems of global processes will reduce their negative impact on society. The study shows that the global problems of our time cover many areas of the development of society and concern the vital interests of all people on the planet [13,14]. Their solution is possible only if the efforts of all countries are combined. A consequence of the growth of military-political problems in recent years, and of the inability of the world community to reach agreement on key security issues due to differences in political and religious views and the formation of extremist values, was the adoption in Russia of the Decree of the President of the Russian Federation of June 2, 2020. This Decree defines the following main military threats and dangers: the build-up by a potential adversary, in territories adjacent to Russia, of groupings of general-purpose forces that include means of delivering nuclear weapons; the deployment, by countries that consider Russia a potential adversary, of ballistic missiles and hypersonic weapons; the creation and placement in space of anti-missile defense and strike systems; the possession by a state of nuclear weapons or other types of weapons of mass destruction that can be used against Russia and its allies; the uncontrolled proliferation of nuclear weapons, their delivery vehicles and the equipment for their manufacture; and the deployment of nuclear weapons on the territories of non-nuclear states. The threat of nuclear war has made it necessary to limit nuclear tests and armaments at the international level. At the present stage of human development, it has become clear that war cannot be a way to solve international problems.
Military actions not only lead to mass destruction and loss of life, but also generate retaliatory aggression.
2,241.4
2023-01-01T00:00:00.000
[ "Economics", "Political Science" ]
The Multitasking Surface Protein of Staphylococcus epidermidis: Accumulation-Associated Protein (Aap) ABSTRACT The stratum corneum is the outermost layer of the epidermis and is thus directly exposed to the environment. It consists mainly of corneocytes, which are keratinocytes in the last stage of differentiation, having neither nuclei nor organelles. However, they retain keratin filaments embedded in a filaggrin matrix and possess a lipid envelope which protects the body from desiccation. Although the desiccated, nutrient-poor, and acidic nature of the skin makes it a hostile environment for most microorganisms, this organ is colonized by commensal microbes. Among the classic skin commensals are Propionibacterium acnes and coagulase-negative staphylococci (CoNS), with Staphylococcus epidermidis as a leading species. An as-yet-unanswered question is what enables S. epidermidis to colonize skin so successfully. In their recent article, P. D. Fey and his colleagues (P. Roy, A. R. Horswill, and P. D. Fey, mBio 12:e02908-20, 2021, https://doi.org/10.1128/mBio.02908-20) have brought us one step closer to answering this question. Microorganisms that colonize the human skin as commensals must be physiologically equipped to live in this inhospitable habitat over long periods of time and must also be tolerated by the host immune system. Bacteria that stimulate the immune system too much are hardly found as compatible commensals on the skin in the absence of irritation (1). The first step to skin colonization is, however, strong binding of bacteria to skin cells. P. D. Fey and colleagues (2) found that in Staphylococcus epidermidis the cell wall-anchored protein Aap mediates strong adherence to glycan moieties on corneocytes, making this protein seemingly responsible for the permanent colonization of the skin by coagulase-negative staphylococci (CoNS). Aap was first described by the group of G. Peters (3,4). The team worked out the role of this surface protein in intercellular aggregation and biofilm formation and accordingly named it accumulation-associated protein (Aap). By heterologous expression of Aap on the surface of Lactococcus lactis, Macintosh et al. (5) demonstrated that Aap mediates binding to corneocytes independently of other adhesive proteins. Like most cell wall-anchored surface proteins, Aap is divided into various domains, each with a different function: the amino-terminal signal peptide is followed by the A domain, which comprises a variable number of 16-amino-acid repeats followed by a conserved globular L-type lectin subdomain (6). The A domain is followed by the B domain, the proline/glycine-rich region and, at the very carboxy terminus, the cell wall anchor sequence LPXTG. Although the domain responsible for Aap-mediated adherence to corneocytes turned out to be the lectin subdomain of the A region, the ligand of Aap on the surface of the corneocytes remained unknown until recently (2). The stratum corneum (SC) barrier is rich in glycans, proteoglycans, and corneodesmosome glycoproteins, although the glycan pattern and quantity may change during SC maturation (7). Therefore, Fey and colleagues speculated that glycan ligands could be an ideal target for Aap-mediated adherence of S. epidermidis to corneocytes. Evidence for this assumption came from the following observations: a) a recombinant A domain was able to block binding of three different S. epidermidis strains to corneocytes, suggesting that the target sites were occupied by the A domain, thus preventing S.
epidermidis binding; this also ruled out other surface proteins, teichoic acids, or the polysaccharide intercellular adhesin (PIA) (8,9) as potential receptors on the surface of the bacterial cells. b) Knocking out PIA and SdrF (Serine-aspartate repeat protein F) expression had no effect on the S. epidermidis adherence. c) Antiserum raised against the A or B domain of Aap decreased bacterial binding to corneocytes. d) Deglycosylation of the corneocytes' surface with various deglycosylases decreased adhesion to corneocytes. e) Antibody-mediated blocking of the highly expressed corneocyte proteins loricrin and cytokeratin-10 had no effect on the adherence of S. epidermidis. The deglycosylation experiments were the highlights of this work because they brought us closer to the target structures. The most common sugar tags on human skin surface proteins are N-linked glycosylation at Asn-X-Ser/Thr motifs and O-linked glycosylation at Ser/ Thr residues (10). Treatment with both peptide-N-glycosidase (PNGase) and O-glycosidase reduced adherence of S. epidermidis by approximately 50%, suggesting that both of these glycan linkages are expressed and serve as binding targets. Pretreatment of corneocytes with glucosidases that remove 5-N-acetylneuraminic acid, fucose, and galactose also decreased adherence while removal of N-acetylglucosamine and mannose residues did not affect adherence. These results suggest that the binding of S. epidermidis to corneocytes is due to the interaction of Aap's lectin domain with glycoproteins and possibly glycolipids. This is per se not surprising since lectins are carbohydrate binding proteins, i.e., they specifically bind sugar groups that are part of more complex molecules. Interestingly, however, Fey and colleagues observed that, although not directly involved in ligand binding, the B domain enhanced binding. They speculate that the B domain may contribute to reduce the electrostatic repulsive forces between the bacterial cell surface and the host surface or increase Aap flexibility. An alternative plausible explanation could be that by omission of the B domain, the distance of the lectin domain from the cell wall is shortened so much that the L-lectin domain becomes partly shielded by the peptidoglycan scaffold of the bacterial cell wall and thus cannot capture the glycan structures on the corneocytes' surface. A similar situation was described for a cell wall-anchored enzyme (a lipase) wherein a gradual shortening of the cell wall-spanning sequence of the enzyme negatively affected its folding to an active conformation and was correlated with a decreasing activity of the cell-bound lipase (11). Of note, to fully perform its intercellular aggregation of S. epidermidis, the N-terminal region of Aap needs to be processed since it apparently interferes with the Zn 21 -dependent dimerization of the B domain (12). The processing is carried out by the S. epidermidis specific metalloprotease SpeA (Sep1) (13,14). Only the resultant truncated Aap isoforms can efficiently promote cellular aggregation and/or biofilm accumulation through B-domain-mediated intercellular adhesion. The inhibition of biofilm formation by the A domain is attributed to its N-terminal region, not to the lectin subunit (2,13). Together with the report on the aggregation of the colonizing bacterial cells on the skin (15), these findings suggest that the A-lectin domain of Aap mediates adherence to the host ligand. 
Its cleavage is subsequently required to enable B-domain-mediated cellular aggregation, which in turn increases skin colonization. The downregulation of SepA expression by the global transcription regulator SarA may allow S. epidermidis to modulate the aggregation process in response to the environmental conditions (2). Bioinformatic analyses revealed that Aap-homologous adhesins with a functional lectin-like domain are also present in other commensal staphylococcal species, including S. hominis, S. haemolyticus, S. saprophyticus, and S. capitis. Of note, the Aap-like protein sequences found in S. simulans and S. warneri lacked the A-lectin domain, explaining why these species showed decreased or peripheral adherence to corneocytes (2). This is in agreement with a recent microbiome analysis of the human skin microbiota in which samples were taken from the forearm (antecubital fossa) of healthy subjects (16). The most abundant staphylococcal species included S. aureus, S. epidermidis, S. capitis, S. hominis, and S. simulans, followed by S. saccharolyticus, S. haemolyticus, S. pseudintermedius, S. xylosus, and S. warneri. Strikingly, S. aureus was almost as abundant as S. epidermidis, highlighting the need to reconsider the assumption that the primary habitat of S. aureus is the nasal cavity. S. aureus controls adhesion and clumping through the activity of the ArlRS two-component regulatory system and its downstream effector MgrA. Inactivation of the ArlRS-MgrA cascade has been shown to inhibit S. aureus adhesion to a diverse array of host molecules through overexpression of a subset of surface proteins (Ebh, SraP, and SasG). Turning off this cascade contributes to the virulence defect of such S. aureus strains and might allow these strains to adapt to the skin during colonization, where Ebh, SraP, and SasG may act as alternative adhesins (17, 18). The Aap orthologue SasG seems to be the main contributor to adherence to corneocytes, as MgrA-negative mutants showed significant binding to corneocytes compared with the wild-type S. aureus and with mutants lacking both mgrA and sasG (2). This brings us to the final point, namely, the old question of what distinguishes pathogenic from nonpathogenic staphylococcal species, or rather, what distinguishes S. aureus from S. epidermidis (19). There is no doubt that S. aureus is an aggressive pathogen due to its numerous virulence factors with cell-destructive power. S. epidermidis, on the other hand, is much less virulent, but it is a master of adhesion; there is hardly any implant material to which S. epidermidis has not been found to adhere (20). In this respect, S. epidermidis, although not aggressive, is a persistent commensal. The dangerousness of S. aureus is that it can be both: it can be an aggressive and toxin-producing pathogen that causes acute infections, and, on the other hand, S. aureus can mutate to commensal nontoxigenic variants with high expression of adherence factors and increased antibiotic resistance, which are less distinguishable from S. epidermidis and may cause chronic infections.
Figure 1 legend (fragment). Due to the diversity and abundance of transmembrane glycoproteins, proteoglycans, and glycolipids, the corneocyte surface is highly decorated with various glycan moieties, which represent the target binding site for Aap. Aap binds to the glycan structures via its lectin subdomain (glycan binding). (C) In the truncated isoforms, the B domain of Aap becomes unhindered and mediates a strong Zn²⁺-dependent intercellular adhesion between S. epidermidis cells, leading to the formation of a biofilm and microcolonies.
In summary, the findings by Roy et al. demonstrate the predominant role of Aap in the colonization of the skin and its corneocytes by S. epidermidis. Aap fulfills two tasks at the same time. The lectin domain mediates binding to glycan structures of skin cells, while the truncated isoforms, with their exposed B domain, cause cross-linking between S. epidermidis cells, leading to biofilm and microcolony formation (as illustrated in Fig. 1). We surmise that Aap is a multitasking protein, and its additional functions have yet to be deciphered.
2,322.6
2021-09-14T00:00:00.000
[ "Biology" ]
An Overview of Research on the Definition and Formation of Cities Urbanization, as the essential way to achieve modernization and to solve the issues of agriculture, rural areas and farmers, provides strong support for promoting coordinated regional development. Sorting out and summarizing research on the definition and formation of cities can offer rural areas some ideas for finding their own road to urbanization. Introduction Urbanization is the only way to modernization, an effective way to solve the problems of agriculture, rural areas and farmers, and strong support for promoting coordinated regional development. The definition and formation of cities are the basis of urbanization-related research. Without a clear definition of the concept of the city, the development of the city is hard to implement; and without a comprehensive understanding of the formation of the city, the driving force of urbanization development cannot be fully grasped. So it is particularly important to clearly define the concept of "city" and to explore the causes of the formation of cities. Definition of City There are many definitions of the city, but there is no final conclusion at present; the main views include the following. One view is that a city is a place with defensive walls. This view obviously confuses the "city wall" with the "city". At the same time, it neglects the different processes by which cities emerged in the East and in the West. Another standpoint holds that a city comprises a city wall and a market, that is, city walls with defensive functions and markets for commodity exchange. This view ignores the political, religious and other factors in the formation of cities. Because "city" and "civilization" both derive from the Latin civitas, some scholars equate the city with civilization. Song Junling held that the city is the form of existence of civilized human beings and the main carrier of human civilization. However, this view is inconsistent with the facts. For example, the civilization of the Maya lowlands in Central America did not have recognized cities. With further research, the errors of the above three viewpoints have gradually been exposed. At present, the more popular viewpoints emphasize the centralization of cities. The Encyclopedia Britannica defines the city as a relatively permanent and highly organized place with a concentrated population, bigger and more significant than a town or village. Standards for the Formation of Cities As for the formation criteria of cities, scholars at home and abroad have put forward different criteria according to their different disciplinary backgrounds. From the perspective of archaeology, the British archaeologist Childe proposed ten criteria for the formation of cities. First, the population of the city is denser than that of surrounding settlements. Second, the population structure is diversified. Third, taxes and surplus wealth are concentrated in the organs of power. Fourth, there are large-scale public buildings. Fifth, social hierarchical differentiation has emerged. Sixth, writing has been invented. Seventh, geometry and astronomy are further developed. Eighth, there are full-time artists. Ninth, foreign trade has emerged. Tenth, the way of living is no longer based on consanguinity [2]. Guliangyev, a Soviet scholar, also put forward eight urban criteria from the perspective of archaeology. One is the emergence of rulers and their palaces. The second is the emergence of large temples and religious areas. Third, the most important palaces, temples and buildings are separated from civilian houses.
Fourth, religious areas are obviously different from residential areas. Fifth, luxurious mausoleums and tombs appear. Sixth, large-scale works of art have been produced. The seventh is the formation of writing (inscriptions and stone carvings). Eighth is a number of other signs, such as large squares, a large amount of residential and public housing, a denser population, and so on. The Japanese scholar Qianqiu Ono synthesized the views of archaeology and history and divided the criteria for the formation of ancient cities into seven aspects. First is the establishment of primitive state organizations and kingship. The second is a dense population. Third is the differentiation of social classes and the specialization of professions. Fourth is the emergence of large monumental buildings. Fifth is the invention of writing and metal objects and the development of science and technology. Sixth is the emergence of knowledge-based activities pursued in spare time, made possible by the production of surplus products. Seventh is the emergence of commerce and the development of trade organizations [3]. Gao, Songfan put forward three urban criteria from the geographical point of view. First, the city is a complex of multiple functions (at least two kinds). Second, it is a place where population, handicraft industry, trade, wealth, construction and public facilities are concentrated. Third, it has a high population density, with residents mainly engaged in non-agricultural occupations [4]. Xu, Hong proposed three urban criteria from the perspective of archaeology. First, as the center of power of a state, the city has the function of a political, economic and cultural center in a certain region; the monarch, as a symbol of power, arises from it. In archaeology, this is shown by the existence of large-scale rammed-earth construction relics, which include ceremonial buildings such as palace base sites and altars. Second, because of the differentiation of social strata and the division of industry, the city has a complex composition of residents; the development of non-agricultural production activities made the city the first non-self-sufficient society in human history, while the characteristics of political cities and underdeveloped commercial trade also made the city manifest itself mainly as the center of social material wealth and a center of consumption. Third, the urban population is relatively concentrated, but at the initial stage urban-rural differentiation is not very distinct, and the degree of population density is not an absolute indicator for judging whether a settlement is a city or not [5]. Wu, Liangfu proposed seven urban criteria from the perspective of urban planning. 1) Cities gather a certain number of people. 2) Cities are dominated by non-agricultural activities, which are different from rural social organizations. 3) Cities function as regional centers in politics, economy and culture. 4) Cities require relative concentration to meet the needs of residents in production and life. 5) Cities must provide the necessary material facilities and strive to maintain a good ecological environment. 6) The city is a social entity that coordinates its operation according to common social goals and various needs. 7) Cities have the mission of inheriting traditional culture and developing it continuously [6]. Mao, Xi also put forward ten criteria for the formation of a city. He believed that if two thirds of the criteria were met, a settlement could be judged to be a city.
The criteria are as follows. First, political, religious and cultural centers are formed in the region. Second, settlements emerge whose military defense functions have been strengthened, many of them marked by the construction of city walls. Third is the formation of the state and civilization. Fourth, the number and density of the agglomerated population are larger than those of the villages in the region. Fifth, the composition of the population is different from that of the countryside. Sixth, the settlement is a center of wealth accumulation and consumption. Seventh, large settlements and buildings appear. Eighth is the emergence of metal objects. Ninth, writing is invented and science takes shape. Tenth, markets and trade form. In my opinion, because the reasons for the formation of various cities differ, their criteria also differ, but the following two conditions must be included. First, the number and density of the aggregated population are greater than those of the adjacent areas. Second, the settlement should be the political, economic and cultural center of a certain region. Reasons for the Formation of Cities According to the definitions and formation criteria of cities above and the historical materials excavated by archaeology, it can be inferred that cities first appeared in Mesopotamia (about 3500 BC), the Nile River Basin (about 3500 BC), the Indus River Basin (about 2500 BC), and the Yellow River Basin (about 1500 BC) [7]. Liu, Tao [8], Wang, Shengxue [9], Chen, Heng [10], Chen, Chun [11] and others have analyzed the reasons for the emergence of cities, which mainly include the following. First, a surplus of agricultural products can support a non-agricultural population such as officials, craftsmen and businessmen, eventually forming a city, such as the capital of the Aztecs [11]. Second, cities formed through the development of the handicraft industry, which can absorb more people, such as Jingdezhen in China [12]. Third, cities formed through commodity exchange, which objectively requires a stable place and market that gradually develops into a city, such as Quanzhou in China [9]. Fourth, cities formed through the promotion of religion; the scale of religion may affect the development of such cities, as with Mecca [11]. Fifth, cities appeared because of military defense: owing to the need for defense, a tribe builds defensive fortifications around its residence, which gradually form a city, such as Jiuquan in China [12]. Sixth, cities formed for political reasons: rulers centralize financial and material resources in one place, as in ancient Rome [11]. The above analysis can provide some useful approaches for rural areas seeking a suitable way of urbanization. Conclusion In this paper, research on the definition and formation of cities has been sorted out and summarized, which can provide a reference for rural areas to realize local urbanization by using their own advantages and to effectively promote the process of urbanization. However, in order to formulate specific countermeasures for urbanization development in a certain region, its own conditions still need to be studied further, in depth and fully, and the advanced experience of other regions should be drawn upon. Conflicts of Interest The author declares no conflicts of interest regarding the publication of this paper.
2,256.8
2019-06-03T00:00:00.000
[ "Geography", "Sociology", "Economics" ]
Electromagnetic surface waves guided by the interface of a metal and a tightly interlaced matched ambidextrous bilayer The existence and characteristics of electromagnetic surface waves (ESWs) whose propagation is guided by the planar interface of a metal and a tightly interlaced matched ambidextrous bilayer (TIMAB) were theoretically investigated, a TIMAB being a periodic unidirectionally nonhomogeneous material whose unit cell consists of one period each of two structurally chiral materials that are identical except in structural handedness. Thus, the structural handedness flips in the center of the unit cell. A canonical boundary-value problem was formulated and a dispersion equation was solved, the ESWs being classified as surface-plasmon-polariton (SPP) waves. Flipping the structural handedness once in the unit cell can greatly enhance the number of possible SPP waves, one or more of which may be superluminal. Introduction Electromagnetic surface waves (ESWs) propagate guided by the interface of two distinct mediums with different constitutive relations [1-3]. The distinguishing characteristic of an ESW is that its amplitude generally decays in the direction orthogonal to the interface on both sides. Depending on the constitutive properties of both partnering mediums, the ESW is given a different name [3-5]. ESWs have attracted a lot of attention because their fields have appreciable magnitudes only in a stripe of finite width, making them useful when the dimensions of a guiding structure are small and strong field confinement is needed [6-9]. ESWs can theoretically propagate indefinitely along the interface when the partnering mediums are nondissipative [10, 11], and they can propagate for several wavelengths when the mediums are moderately dissipative [12-15]. Typically, only one ESW can propagate in a given direction in the interface plane at a specified frequency if both partnering mediums are homogeneous [1-3], but there are exceptions [16]. However, if one or both of the partnering mediums are periodically nonhomogeneous in the direction normal to the interface, it is possible to trigger the propagation of more than one ESW at the same time, each with its own propagation characteristics [17-23]. In some instances it has been observed that two ESWs propagate with complex-valued wavenumbers that are very close to each other, giving rise to a sort of doublet [24, 25]. The simultaneous triggering of several ESWs is interesting for enhancing reliability and detection accuracy in measurements, but also for the possibility of sensing multiple analytes with the same device [26]. In addition, superluminal ESWs can exist [16, 27], which is attractive for the excitation of Čerenkov radiation [28-31].
Experiments have been conducted with structurally chiral materials exhibiting a helicoidal variation of the relative permittivity direction along a fixed axis (identified as the z axis of a Cartesian coordinate system in this paper) that is oriented normal to the planar interface in the present context [19, 32, 33]. Fabricated using well-known methods of physical vapor deposition [34, 35], these solid-state structurally chiral materials are called chiral sculptured thin films (CSTFs) [36]. A CSTF is either structurally right-handed or structurally left-handed, but not both. The planar interface of a metal and a CSTF can guide multiple ESWs, called surface-plasmon-polariton (SPP) waves, in a given direction at a specified frequency [19, 26, 32, 33, 37]. Since chiral smectic liquid crystals (CSLCs) and cholesteric liquid crystals (CLCs) [38, 39] are structurally chiral materials with a similar helicoidal variation of the relative permittivity direction along a fixed axis, experimental and theoretical findings about SPP-wave propagation guided by a planar metal/CSTF interface are also applicable to metal/CSLC and metal/CLC interfaces. CLCs and CSLCs have a 135-year-old history going back to 1888, when Reinitzer reported the discovery of temperature-dependent color effects in cholesteryl benzoate (C34H50O2), an ester of cholesterol and benzoic acid [40, 41]. The history of CSTFs began 65 years ago with the fabrication of a fluorite CSTF by Young and Kowal in 1959 [42, 43], though research on these materials was dormant until its revival in 1995 [36, 44]. A new type of structurally chiral material was proposed about 15 years ago [45] and first fabricated using physical vapor deposition [36] in 2022. Called a tightly interlaced matched ambidextrous bilayer (TIMAB), this material differs from CSTFs, CSLCs, and CLCs in one attribute: structural handedness. Whereas the structural handedness of a CSTF/CSLC/CLC is invariant along the z axis, the structural handedness flips in the center of the unit cell of a TIMAB, because each unit cell consists of one period each of two structurally chiral materials that are identical except in structural handedness, as depicted in figure 1. As a consequence, a TIMAB exhibits a bandgap for both left- and right-circularly polarized incident plane waves [46], whereas a CSTF/CSLC/CLC exhibits a bandgap for either left- or right-circularly polarized incident plane waves [47-55]. This difference has been shown both theoretically and experimentally [46]. In our quest to distinguish TIMABs from CSTFs, CSLCs, and CLCs in their optical response characteristics, here we theoretically examine the propagation of SPP waves guided by a planar metal/TIMAB interface, and contrast it with the propagation of SPP waves guided by the planar interface of a metal and a CSTF [19, 26, 32, 33, 37]. This paper is organized as follows. First, the canonical boundary-value problem is formulated to yield the dispersion equation for SPP-wave propagation. Next, this dispersion equation is numerically solved to find the complex-valued wavenumbers of SPP waves. Then, the wavenumbers are analyzed to extract the phase speed and propagation distance of every SPP wave guided by the metal/TIMAB interface. The paper concludes with some remarks on the significance of the obtained results.
An exp(−iωt) dependence on time t is used, with ω as the angular frequency and i = √−1 as the imaginary unit; k₀ = ω√(ε₀μ₀) is the free-space wavenumber and λ₀ = 2π/k₀ is the free-space wavelength, with μ₀ as the free-space permeability and ε₀ as the free-space permittivity; c₀ = 1/√(ε₀μ₀) is the speed of light in free space; and η₀ = √(μ₀/ε₀) is the intrinsic impedance of free space. Vectors are represented in boldface; dyadics are doubly underlined; and the Cartesian unit vectors are denoted by u_x, u_y, and u_z. Column vectors are in boldface and bracketed, and matrices are doubly underlined and bracketed. Finally, the asterisk denotes the complex conjugate, and the superscript T the matrix transpose. Constitutive parameters The geometry of the canonical boundary-value problem is schematically illustrated in figure 2. The half-space z < 0 is occupied by a metal with complex-valued relative permittivity ε_m, whereas the TIMAB occupies the half-space z > 0. The relative permittivity dyadic ε_TIMAB(z) of the TIMAB is periodic along the z axis with
ε_TIMAB(z ± 4Ω) = ε_TIMAB(z), (1)
so that the TIMAB period is 4Ω. In the first unit cell 0 ⩽ z ⩽ 4Ω we have
ε_TIMAB(z) = S(z) · ε_ref · S⁻¹(z), with υ = h for 0 ⩽ z ⩽ 2Ω and υ = sh for 2Ω ⩽ z ⩽ 4Ω. (2)
In this equation, the reference relative permittivity dyadic is
ε_ref = ε_a u_z u_z + ε_b u_x u_x + ε_c u_y u_y, (3)
where ε_a, ε_b, and ε_c are the three principal relative permittivity scalars. The rotation dyadic S(z) = S_z(z) · S_y(χ) is a product of two dyadics, with
S_z(z) = u_z u_z + (u_x u_x + u_y u_y) cos(πυz/Ω) + (u_y u_x − u_x u_y) sin(πυz/Ω) (4)
denoting a rotation by an angle πυz/Ω about the z axis and
S_y(χ) = u_y u_y + (u_x u_x + u_z u_z) cos χ + (u_z u_x − u_x u_z) sin χ (5)
denoting a rotation by angle χ about the y axis. In equations (4) and (5), the parameter υ = 1 corresponds to structural right-handedness whereas υ = −1 corresponds to structural left-handedness. The TIMAB is structurally right-handed (resp. left-handed) in the region 0 ⩽ z ⩽ 2Ω if h = +1 (resp. h = −1) in equation (2). Also, s = −1, so that the TIMAB has the opposite structural handedness in the region 2Ω ⩽ z ⩽ 4Ω relative to that in the region 0 ⩽ z ⩽ 2Ω. In contrast, s = 1 for CSTFs, CSLCs, and CLCs, so that their period is 2Ω along the z axis. Electromagnetic fields Let us consider an SPP wave propagating in the xy plane along the generic direction identified by the unit vector u_prop = u_x cos ψ + u_y sin ψ. After defining u_s = −u_x sin ψ + u_y cos ψ and the wavevector
k_m = q u_prop − α_m u_z, with k₀² ε_m = q² + α_m², (6)
where q is the complex-valued SPP wavenumber, the electric and magnetic field phasors in the metallic half-space are expressed as [3]
E(r) = [A₁ u_s + A₂ (α_m u_prop + q u_z)/(k₀ √ε_m)] exp(i k_m · r), z ⩽ 0, (7a)
H(r) = η₀⁻¹ [A₁ (α_m u_prop + q u_z)/k₀ − A₂ √ε_m u_s] exp(i k_m · r), z ⩽ 0, (7b)
where A₁ and A₂ are unknown coefficients to be determined. Note that Im(α_m) > 0, in compliance with the condition of attenuation as z → −∞. In the half-space z ⩾ 0 occupied by the TIMAB, the electric and magnetic field phasors can be written as
E(r) = e(z) exp(i q u_prop · r) (8a)
and
H(r) = h(z) exp(i q u_prop · r). (8b)
The Cartesian components e_{x,y}(z) and h_{x,y}(z) of the functions e(z) and h(z), respectively, in accordance with the source-free Maxwell equations, can be arranged in a 4×4 matrix ordinary differential equation as [3, 36]
d[f(z)]/dz = i [P(z)] [f(z)], (9)
where
[f(z)] = [e_x(z), e_y(z), h_x(z), h_y(z)]ᵀ (10)
is a column vector. A detailed expression for the 4×4 matrix [P(z)] is provided in the appendix. Finally, [f(4Ω)] can be obtained after solving equation (9) numerically using the piecewise uniform approximation technique [36] as
[f(4Ω)] = [Q(4Ω)] [f(0)], (11)
where [Q(4Ω)] is the characteristic matrix of the unit cell of the TIMAB. For the fields to decay as z → ∞, [f(0⁺)] must be a combination of the two Floquet eigenmodes of [Q(4Ω)] whose eigenvalues have magnitudes smaller than unity, i.e.,
[f(0⁺)] = B₁ [v₁] + B₂ [v₂], (12)
where [v₁] and [v₂] are the corresponding eigenvectors and B₁ and B₂ are unknown coefficients.
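To make the piecewise uniform approximation concrete, here is a minimal Python sketch (ours, not code from the paper) that assembles the characteristic matrix [Q(4Ω)] of equation (11) by chaining matrix exponentials of equation (9) across thin slices of the unit cell. The helper p_matrix, standing in for the appendix expression of [P(z)], and the parameterization of the handedness flip inside eps_tensor are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

Omega = 162e-9   # half-pitch of each chiral sublayer [m], as in the numerical section

def eps_tensor(z, h=+1, s=-1,
               eps_abc=(6.65313 + 0.0429696j, 7.35561 + 0.050978j, 6.53285 + 0.042055j),
               chi=np.deg2rad(50.0)):
    """Relative permittivity dyadic of Eq. (2) at height z in the first unit
    cell 0 <= z <= 4*Omega; the handedness parameter flips from h to s*h at
    z = 2*Omega (one consistent parameterization, assumed here)."""
    eps_a, eps_b, eps_c = eps_abc
    v = h if z <= 2 * Omega else s * h
    a = np.pi * v * z / Omega                 # helicoidal rotation angle of Eq. (4)
    Sz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
    Sy = np.array([[np.cos(chi), 0.0, -np.sin(chi)],
                   [0.0, 1.0, 0.0],
                   [np.sin(chi), 0.0, np.cos(chi)]])
    eps_ref = np.diag([eps_b, eps_c, eps_a])  # (x, y, z) ordering of Eq. (3)
    R = Sz @ Sy
    return R @ eps_ref @ R.T                  # rotations are orthogonal, so inverse = transpose

def unit_cell_transfer_matrix(p_matrix, q, psi, n_slices=324):
    """Characteristic matrix [Q(4*Omega)] of Eq. (11), computed with the
    piecewise uniform approximation: the unit cell is cut into thin slices
    (324 slices of 2 nm, as in the paper), [P] is frozen at each mid-plane,
    and the per-slice solutions expm(i*[P]*dz) of Eq. (9) are chained.
    p_matrix(z, q, psi) must return the 4x4 matrix [P(z)] of the appendix."""
    dz = 4 * Omega / n_slices
    Q = np.eye(4, dtype=complex)
    for j in range(n_slices):
        z_mid = (j + 0.5) * dz
        Q = expm(1j * p_matrix(z_mid, q, psi) * dz) @ Q   # later slices act last
    return Q
```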
Dispersion equation Boundary conditions on the interface z = 0 require the continuity of the tangential components of the field phasors, i.e.,
[f(0⁻)] = [f(0⁺)], (13)
where
[f(0⁻)] = A₁ [w₁] + A₂ [w₂] (14)
from equations (7a) and (7b), the column vectors [w₁] and [w₂] collecting the tangential components e_x, e_y, h_x, and h_y of the two metal-side polarization states. Substitution of equations (12) and (14) in equation (13) leads to the matrix equation
[Y] [A₁ A₂ B₁ B₂]ᵀ = [0 0 0 0]ᵀ, (15)
where [Y] = [[w₁] [w₂] −[v₁] −[v₂]]. A nontrivial solution exists only if
det[Y] = 0. (16)
This is the dispersion equation that must be solved to evaluate the SPP wavenumber q. Not only do its solutions depend on the constitutive parameters of the metal and the TIMAB as well as on ω and Ω, but also on the direction of propagation in the xy plane as delineated by the angle ψ. A convenient way to solve the dispersion equation is to use the search method [56]. Numerical results Let us now present the solutions of equation (16) in terms of the normalized wavenumber q̃ = q/k₀ as functions of the normalized angle ψ/π, limiting the range of data presented to 0 ⩽ ψ/π ⩽ 1. Note that if q is a solution for a certain ψ, then −q is the solution for ψ + π, in view of the dependences of the matrix [P] on ψ and q and of u_prop on ψ. Once q is known, the phase speed v_ph = ω/Re(q) and the propagation distance Δ_prop = 1/Im(q) of that SPP wave can be calculated. It is worth pointing out that the solutions reported here are the ones that we were able to find; this does not mean that other solutions cannot exist. In this connection, the search for solutions was restricted to a reasonable limited range: 0.8 ⩽ Re(q̃) ⩽ 3. For all calculations, we fixed λ₀ = 633 nm. The metal was taken to be silver (ε_m = −14.461 + i1.1936) [57], corresponding to a skin depth of about 26.5 nm at the chosen wavelength. The TIMAB was supposed to have h = 1, Ω = 162 nm, ε_a = 6.65313 + i0.0429696, ε_b = 7.35561 + i0.050978, ε_c = 6.53285 + i0.042055, and χ = 50° [54]. For calculating the transfer matrix [Q(4Ω)], the period 4Ω of the TIMAB was subdivided into 324 homogeneous layers, each of thickness 2 nm. We verified that this value is adequate for ensuring the convergence of the piecewise uniform approximation technique [36]. We also comment on the spatial profiles along the z axis of the components of the time-averaged Poynting vector for representative solutions. With the Cartesian coordinate system defined by the unit vectors u_prop, u_s, and u_z, the components P_prop(0,0,z) = P(0,0,z) · u_prop, P_s(0,0,z) = P(0,0,z) · u_s, and P_z(0,0,z) = P(0,0,z) · u_z were computed by solving equation (15) after choosing B₂ = 1. Branches 3 and 5 possess both Re(q̃) and Δ_prop very close to each other for ψ/π ∈ [0.333, 0.422], suggesting the existence of a doublet, as already found in previous research [24, 25]. The same conclusion follows from examining branches 6 and 7 in the angular range ψ/π ∈ [0.389, 0.489]. Even more interestingly, branches 3 and 5 intersect in figure 3 (right) at ψ/π = 0.4, so that the solutions on the two branches have identical propagation distances but slightly different phase speeds. Likewise, branches 6 and 7 intersect in figure 3 (right) at ψ/π = 0.446.
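Continuing the sketch above, the dispersion equation (16) can be solved with the search method the authors cite: scan Re(q̃) over the range [0.8, 3], flag local minima of |det[Y]|, and polish each candidate in the complex plane. Everything below is our illustrative reconstruction, not the authors' code: the assembly of [Y] follows equations (7), (12), and (14), p_matrix is again the assumed appendix expression, and the silver permittivity is the value quoted in the text.

```python
import numpy as np
from scipy.optimize import newton

lam0 = 633e-9
k0 = 2 * np.pi / lam0
c0 = 299792458.0
eta0 = 376.730313668           # intrinsic impedance of free space [ohm]
eps_metal = -14.461 + 1.1936j  # silver at 633 nm [57]

def dispersion_det(qt, psi, p_matrix):
    """Determinant of the 4x4 boundary-condition matrix [Y] of Eq. (15)
    for a normalized wavenumber qt = q/k0."""
    q = qt * k0
    alpha_m = np.sqrt(k0**2 * eps_metal - q**2)
    if alpha_m.imag < 0:
        alpha_m = -alpha_m     # enforce decay of the fields as z -> -infinity
    cp, sp = np.cos(psi), np.sin(psi)
    rt = np.sqrt(eps_metal)
    # Tangential (e_x, e_y, h_x, h_y) at z = 0- for the two metal-side
    # polarization states of Eqs. (7a) and (7b).
    w1 = np.array([-sp, cp, (alpha_m / k0) * cp / eta0, (alpha_m / k0) * sp / eta0])
    w2 = np.array([(alpha_m / (k0 * rt)) * cp, (alpha_m / (k0 * rt)) * sp,
                   rt * sp / eta0, -rt * cp / eta0])
    # TIMAB side: the Floquet modes of [Q(4*Omega)] that decay as
    # z -> +infinity have eigenvalues of magnitude below unity.
    Q = unit_cell_transfer_matrix(p_matrix, q, psi)
    vals, vecs = np.linalg.eig(Q)
    v = vecs[:, np.argsort(np.abs(vals))[:2]]
    Y = np.column_stack([w1, w2, -v[:, 0], -v[:, 1]])
    return np.linalg.det(Y)

def find_spp_branch_points(psi, p_matrix, n_grid=440):
    """Search method [56], sketched: candidates are local minima of |det Y|
    on a real grid, then polished by the secant iteration of
    scipy.optimize.newton, which accepts complex iterates."""
    grid = np.linspace(0.8, 3.0, n_grid)
    mags = np.array([abs(dispersion_det(x, psi, p_matrix)) for x in grid])
    roots = []
    for j in range(1, n_grid - 1):
        if mags[j] < mags[j - 1] and mags[j] < mags[j + 1]:
            try:
                roots.append(newton(dispersion_det, grid[j] + 0.01j,
                                    args=(psi, p_matrix), tol=1e-12, maxiter=200))
            except RuntimeError:
                pass           # candidate did not converge; discard it
    return roots               # v_ph = c0/Re(qt), Delta_prop = 1/(k0*Im(qt))
```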
Metal/TIMAB interface Further insight is provided by an examination of the spatial profiles of the time-averaged Poynting vector for two SPP waves with the same Δ_prop. In figure 4 the Cartesian components P_prop,s,z(0,0,z) are plotted against z for the SPP waves on the branches labeled 6 and 7 in figure 3 at ψ/π = 0.446. Both SPP waves have the same Δ_prop but differ in Re(q̃); i.e., both decay at the same rate along the direction of propagation but have different phase speeds. According to figure 4, the time-averaged Poynting vector decays along the z axis differently for the two SPP waves. Of the two SPP waves, the one with the lower phase speed, i.e., with the higher Re(q̃), has a much slower decay along the z axis in the TIMAB half-space. Indeed, the spatial profiles of the Cartesian components are presented for 50 structural periods (z ∈ [0, 200Ω]), and the SPP wave on branch 6 has substantial power density at z = 200Ω in figure 4 (left), in comparison to the SPP wave on branch 7 in figure 4 (right). In addition to the 13 branches in figure 3, there is a 14th branch. As is clear from figure 5, every solution on branches 14 and 14′ is superluminal, because Re(q̃) < 1 implies v_ph > c₀. However, the propagation distance Δ_prop on branches 14 and 14′ stays in the range of a few micrometers. This type of solution exists also in the ψ-range where six solutions have been identified in figure 3, thus bringing to seven the multiplicity of solutions in that ψ-range.
Figure 6. Variations of the components P_prop(0,0,z) (black solid lines), P_s(0,0,z) (blue dashed-dotted lines), and P_z(0,0,z) (red dashed lines) of P(0,0,z) with z in the TIMAB half-space, for the SPP wave on branch 14 guided by the silver/TIMAB interface when ψ/π = 0.733.
Figure 6 provides the spatial profiles of the Cartesian components P_prop,s,z(0,0,z) for the superluminal SPP wave on branch 14 at ψ/π = 0.733. The phase speed is very high (actually, greater than c₀) and the decay along the z axis is very slow. A comparison of figures 4 and 6 seems to indicate that the higher the phase speed, the higher the decay rate. However, that is not a general conclusion. As a counter-example, we examined the fields for the silver/TIMAB interface on branch 11 for (a) ψ/π = 2/9 and (b) ψ/π = 19/45. We have q̃ = 1.721501 + i1.3946 × 10⁻² for case (a) but q̃ = 1.721501 + i1.3236 × 10⁻² for case (b). We found that the fields decay along the z axis four times faster for case (a) than for case (b), as is evident from figure 7. So, even with the same phase speed, we can have two distinct decay rates along the z axis in the TIMAB half-space. Comparison with metal/CSTF interface To better understand the role of interlacing in the existence of SPP waves, we compare here the results obtained for the silver/TIMAB interface with those for a silver/CSTF interface. The CSTF is identical to the TIMAB except that s = 1 in equation (2), so that the structural period of the CSTF is half that of the TIMAB. Figure 8 presents Re(q̃) and Δ_prop as functions of ψ/π for the metal/CSTF interface. There are four major differences between figures 3 and 8:
1. The number of solution branches shrinks to three for the metal/CSTF interface, compared to 14 for the metal/TIMAB interface.
2. For a specified ψ, the maximum number of SPP waves possible is just two for the metal/CSTF interface, compared to seven for the metal/TIMAB interface.
3. For a specified ψ, the metal/TIMAB interface can guide at least one SPP wave, but the metal/CSTF interface may not guide any SPP wave.
4. The metal/TIMAB interface can guide a superluminal SPP wave, but evidently the very closely related metal/CSTF interface cannot. Parenthetically, metal/CSTF interfaces can guide superluminal waves for other constitutive parameters [37].
To clarify the effect of the handedness flipping in the TIMAB, we compared the Poynting-vector profiles for an SPP wave guided by the metal/TIMAB interface and one guided by the metal/CSTF interface, choosing solutions with extremely close values of Re(q̃). Branch 11 in figure 3, when ψ/π = 0.176, has a solution q̃ = 1.726105 + i1.391177 × 10⁻², whereas branch 2 in figure 8, when ψ/π = 0.177, has q̃ = 1.726105 + i1.330370 × 10⁻². In figures 9(a) and (c) the Cartesian components P_prop,s,z(0,0,z) are plotted against z for the SPP wave on the branch labeled 11 in figure 3 at ψ/π = 0.176. In figures 9(b) and (d) the same is done for the SPP wave on the branch labeled 2 in figure 8 at ψ/π = 0.177. The SPP waves have the same Re(q̃) but differ in Δ_prop, thus having different decay rates along the direction of propagation but the same phase speed. According to figure 9, the time-averaged Poynting vector decays along the z axis differently for the two SPP waves. Of the two SPP waves, the one with the longer propagation distance Δ_prop has a much slower decay along the z axis in the CSTF half-space. Indeed, the spatial profiles of the Cartesian components are presented for 50 structural periods (z ∈ [0, 200Ω]), and the SPP wave on branch 2 for the metal/CSTF interface has substantial power density at z = 200Ω in figure 9(d), in comparison to the SPP wave on branch 11 for the metal/TIMAB interface in figure 9(c). Conclusions In this paper we have formulated the canonical boundary-value problem to obtain the dispersion equation of SPP waves guided by the planar interface of silver and a TIMAB. The dispersion equation has been numerically solved to find the complex wavenumbers q of all the SPP waves we could locate. The solutions for the silver/TIMAB interface have been compared with those for the silver/CSTF interface, in order to highlight the differences brought about by the interlacing of the partnering chiral material. Analysis of the numerical results shows that the interlacing in the TIMAB significantly increases the number of SPP waves that can be guided in a specific direction by the silver/TIMAB interface compared to the silver/CSTF interface. In addition, a metal/TIMAB interface can guide a superluminal SPP wave, but evidently the very closely related metal/CSTF interface cannot.
Figure 2. Schematic of the canonical boundary-value problem. The half-space z < 0 is occupied by a metal and the half-space z > 0 by a TIMAB.
Figure 3. Variations of (left) Re(q̃) and (right) Δ_prop with ψ/π of SPP waves guided by the planar silver/TIMAB interface. The solutions of the dispersion equation are organized in 13 numbered branches. Ranges of ψ/π with the highest number of SPP waves are shaded gray.
Figure 8. Variations of (left) Re(q̃) and (right) Δ_prop with ψ/π of SPP waves guided by the planar silver/CSTF interface. The solutions of the dispersion equation are organized in three numbered branches.
4,503.8
2024-05-31T00:00:00.000
[ "Physics" ]
Is the Rayleigh-Sommerfeld diffraction always an exact reference for high speed diffraction algorithms? In several areas of optics and photonics the behavior of electromagnetic waves has to be calculated with the scalar theory of diffraction by computational methods. Many of these high-speed diffraction algorithms, based on the fast Fourier transformation, are approximations of the Rayleigh-Sommerfeld diffraction (RSD) theory. In this article a novel sampling condition for the well-sampling of the Riemann integral of the RSD is demonstrated and the fundamental restrictions due to this condition are discussed. It is demonstrated that the restrictions are completely removed by a sampling below the Abbe resolution limit, and a very general unified approach for applying the RSD outside its sampling domain is given. © 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement OCIS codes: (050.1960) Diffraction theory; (110.1758) Computational imaging; (110.1650) Coherence imaging; (090.1995) Digital holography.
References and links
1. A. Sommerfeld, Optics, Lectures on Theoretical Physics, Vol. IV (Academic, 1954).
2. J. W. Goodman, Introduction to Fourier Optics, 4th ed. (W. H. Freeman, 2017).
3. M. Born and E. Wolf, Principles of Optics (Cambridge University, 2005).
4. Y. M. Engelberg and S. Ruschin, "Fast method for physical optics propagation of high-numerical-aperture beams," J. Opt. Soc. Am. A 21(11), 2135-2145 (2004).
5. V. Nascov and P. C. Logofătu, "Fast computation algorithm for the Rayleigh-Sommerfeld diffraction formula using a type of scaled convolution," Appl. Opt. 48(22), 4310-4319 (2009).
6. F. Shen and A. Wang, "Fast-Fourier-transform based numerical integration method for the Rayleigh-Sommerfeld diffraction formula," Appl. Opt. 45(6), 1102-1110 (2006).
7. F. Zhang, G. Pedrini, and W. Osten, "Reconstruction algorithm for high-numerical-aperture holograms with diffraction-limited resolution," Opt. Lett. 31(11), 1633-1635 (2006).
8. H. Scheffers, "Vereinfachte Ableitung der Formeln für die Fraunhoferschen Beugungserscheinungen," Ann. Phys. 434, 211 (1942).
9. E. Lalor, "Conditions for the validity of the angular spectrum of plane waves," J. Opt. Soc. Am. 58(9), 1235-1237 (1968).
10. K. Matsushima, "Shifted angular spectrum method for off-axis numerical propagation," Opt. Express 18(17), 18453-18463 (2010).
11. A. Ritter, "Modified shifted angular spectrum method for numerical propagation at reduced spatial sampling rates," Opt. Express 22(21), 26265-26276 (2014).
12. T.-C. Poon and J.-P. Liu, Introduction to Modern Digital Holography with MATLAB (Cambridge University, 2014).
13. P. Picart and J.-C. Li, Digital Holography (Wiley, 2012).
14. M.-K. Kim, Digital Holographic Microscopy (Springer, 2011).
15. E. Wolf and E. W. Marchand, "Comparison of the Kirchhoff and the Rayleigh-Sommerfeld theories of diffraction at an aperture," J. Opt. Soc. Am. 54(5), 587-594 (1964).
16. M. W. Farn and J. W. Goodman, "Comparison of Rayleigh-Sommerfeld and Fresnel solutions for axial points," J. Opt. Soc. Am. A 7(5), 948-950 (1990).
17. R. J. Marks II, The Joy of Fourier (Baylor University, 2006).
18. H. Gross, Handbook of Optical Systems, Volume 1: Fundamentals of Technical Optics (Wiley-VCH, 2005).
19. E. Kreyszig, Introductory Functional Analysis with Applications (John Wiley & Sons, 1989).
Introduction The scalar theory of diffraction can be used in many different applications like wave propagation, digital holography, holographic microscopy, diffraction imaging, biomedical imaging, and diffractive optics. One exact method for the scalar diffraction calculation is the Rayleigh-Sommerfeld diffraction (RSD). In contrast to approximations such as Fresnel or Fraunhofer diffraction, the RSD gives an exact solution for the output field of a given input field [1-3]. However, to the best of our knowledge there is no general analytical solution for the calculation of the RSD. Therefore, numerical methods have to be used. All numerical simulations are in principle based on a sampling of the analogue continuous field. Thus, with usual computational power, only high-speed algorithms make the utilization of the diffraction theory possible. These high-speed algorithms use approximations of the RSD integral, such as a quadratic phase [2, 4] or a frequency cut in the convolution RSD [5-7] and in the angular spectrum method (ASM) [6, 8-11]; the latter is only valid for small propagation distances [2, 12, 13]. Another restriction of the ASM is the identical sampling spacing in the input and output planes [14]. In order to solve this problem, some methods have been developed. One proposed method uses interpolation [5]; however, this causes a new numerical error dependent on the interpolation method. In another one, the object extension is limited [7]. Contrary to the Kirchhoff solution of the diffraction problem [2, 3, 15], the mathematical solution of the RSD is not inconsistent when the observation point is close to the diffracting screen. Thus, the accuracy of the high-speed alternatives, even for very short propagation distances, could be verified by referencing them against exact solutions given by the RSD. Since there exist only a few analytical solutions [16], they have to be compared with a discretized RSD. In this work, the RSD is treated as a Riemann integral, which has to be discretized. However, here we will demonstrate that the use of the discretized RSD can cause enormous aliasing. We will give the boundary conditions for the usage of the discretized RSD and show possibilities to completely remove the aliasing. Since in this paper we are presenting a method for the computer-based calculation of the Rayleigh-Sommerfeld diffraction integral without aliasing, throughout the paper we assume the object and all other images to be already discretized. Figure 1 shows a typical setup for the calculation of the diffraction problem. The diffraction of a discretized input plane 1, generated by coherent illumination of an analogue object, is reproduced by a discretized sensor like a CCD camera in plane 2 or 3. Sampling condition for the Rayleigh-Sommerfeld diffraction According to the Rayleigh-Sommerfeld diffraction integral, the field distribution in an output plane 2 at the distance z₁₂, parallel to the input plane, is [1-3]
u₂(r₂) = (1/2π) ∬_Ω u₁(r₁) (z₁₂/R) (1/R − ik) (e^{ikR}/R) dx₁ dy₁, with R = |r₂ − r₁|, (1)
where u₁(r₁) is the field distribution in the input (object) plane 1 and k is the wave number. The two vectors r₁ and r₂ are position vectors in planes 1 and 2, respectively, and Ω is the integration area.
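As a concrete illustration of Eq. (1) treated as a Riemann integral, the following brute-force Python sketch (our illustration, with assumed function and parameter names, not code from the paper) sums the discretized kernel directly. It is exact within the scalar theory but scales as O(N⁴) for an N×N grid, which is why the high-speed approximations discussed above exist.

```python
import numpy as np

def rsd_propagate(u1, dx, z, lam):
    """Direct Riemann-sum evaluation of the first Rayleigh-Sommerfeld
    integral, Eq. (1), for a square input field u1 sampled with spacing dx,
    propagated over the distance z (wavelength lam).  The output plane is
    sampled identically to the input plane.  For the reverse transform the
    paper simply uses the negated distance."""
    k = 2.0 * np.pi / lam
    n = u1.shape[0]
    x = (np.arange(n) - n // 2) * dx
    X1, Y1 = np.meshgrid(x, x, indexing="ij")       # input-plane coordinates
    u2 = np.zeros_like(u1, dtype=complex)
    for a in range(n):                              # loop over output-plane pixels
        for b in range(n):
            R = np.sqrt((x[a] - X1) ** 2 + (x[b] - Y1) ** 2 + z ** 2)
            kernel = (z / R) * (1.0 / R - 1j * k) * np.exp(1j * k * R) / R
            u2[a, b] = np.sum(u1 * kernel) * dx * dx / (2.0 * np.pi)
    return u2
```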
Please note that, since all computer algorithms can only work with discretized data, the input in plane 1 has to be discretized for the computational calculation, although in the real world the object would be analogue. Sometimes the object can be defined analytically, e.g., by a plane wave or a finite chirp function. In such cases the requirement for the numerical treatment is to discretize all terms of the diffraction method and the object itself in accordance with the Nyquist criterion [2]. In most cases, however, the image is discretized by a CCD camera. Consequently, for a numerical treatment of Eq. (1), only the propagation-dependent harmonic term exp(ik|r₂ − r₁|) has to be sampled, according to the well-known Nyquist sampling criterion [17]: the sampling frequency must at least correspond to twice the highest frequency contained in the harmonic term. If we consider the phase of the propagation term as
φ(x₁, y₁) = kR = k [(x₂ − x₁)² + (y₂ − y₁)² + z₁₂²]^{1/2}, (2)
with the transversal Cartesian coordinates (x₁, y₁) in the input plane and (x₂, y₂) in the output plane, the local spatial frequency in the x-direction is
f_x = (1/2π) |∂φ/∂x₁| = |x₁ − x₂| / (λR). (3)
This spatial frequency is a monotonically increasing function of the transverse separation |x₁ − x₂|; x₁fp and x₂fp are the distances of the farthest point from the center in the x-direction in the relevant computational plane 1 or 2 with non-negligible amplitude value (see Fig. 1). Due to zero values of the amplitude around the boundary, the maximum width can be smaller than the real width of the computational domain. Thus, it follows for the maximum spatial frequency
f_x,max = (x₁fp + x₂fp) / (λ [(x₁fp + x₂fp)² + z₁₂²]^{1/2}). (4)
The sampling frequency f_s is related to the sampling spacing via
f_s = 1/δx₁. (5)
The sampling according to the Nyquist criterion requires f_s ⩾ 2 f_x,max. Therefore, the sampling condition for the numerical treatment of the RSD can be written as
z₁₂² ⩾ [(2δx₁/λ)² − 1] (x₁fp + x₂fp)². (6)
Consequently, for a given sampling spacing δx₁, δy₁ in the object plane and a maximum size x₁fp, y₁fp of the object and x₂fp, y₂fp of its image in the output computational plane, there is a critical minimum propagation distance below which the output plane cannot be calculated without aliasing:
z₁₂ ⩾ z_cx = (x₁fp + x₂fp) [(2δx₁/λ)² − 1]^{1/2}. (7)
An analogous derivation results in a similar condition for the y-direction:
z₁₂ ⩾ z_cy = (y₁fp + y₂fp) [(2δy₁/λ)² − 1]^{1/2}. (8)
Thus, the condition for the minimum propagation distance (critical distance) is
z_c = max(z_cx, z_cy). (9)
The critical distance is the minimum distance at which an output field (image) can be numerically calculated by the RSD without violating the Nyquist criterion for the interplay of the sampling conditions of the input and output planes and the distance. The reconstruction of the object from the diffracted image is only possible if the Nyquist theorem is fulfilled in the reverse direction too. This results in equations analogous to Eqs. (7)-(9) with the indices 1 and 2 reversed. Thus, the total critical distance z_c for a forward and reverse transformation of the field has to be at least
z_c = max(z_c^(1→2), z_c^(2→1)). (10)
Aliasing for z < z_c In this subsection, the aliasing due to the violation of the aforementioned sampling condition is shown. A detailed discussion of aliasing is more appropriate and required in methods dealing with the reduction of aliasing, or in studies investigating the domains with strong or weak aliasing; in those cases the aliasing can provide invaluable information about the advantages and capabilities of the developed methods. Since in section 4 a method for preventing the occurrence of aliasing in the RSD is shown, aliasing is not discussed in depth here, but is used to compare the RSD method suffering from aliasing with our aliasing-free proposed approach. To show the aliasing, we have used an amplitude object (constant phase) as described in Figs. 2(a) and 2(b) and calculated its diffraction pattern at a distance z < z_c.
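A small helper makes the critical-distance check of Eqs. (7)-(10) concrete; this is our sketch with assumed parameter names, not code from the paper. Note how a sampling spacing below λ/2 makes the bracketed factor negative, so that no finite critical distance remains, anticipating the sub-Abbe sampling discussed further below.

```python
import math

def critical_distance(lam, dx1, dy1, x1fp, x2fp, y1fp, y2fp):
    """Critical minimum propagation distance z_c of Eqs. (7)-(9).  Below
    z_c the discretized RSD kernel violates the Nyquist criterion and
    aliases.  Returns 0 when the sampling spacing is below lam/2, in which
    case every propagation distance is admissible."""
    def zc_1d(delta, width_sum):
        bracket = (2.0 * delta / lam) ** 2 - 1.0
        return width_sum * math.sqrt(bracket) if bracket > 0.0 else 0.0
    return max(zc_1d(dx1, x1fp + x2fp), zc_1d(dy1, y1fp + y2fp))

# For the round trip of Eq. (10), evaluate the same expression with the
# sampling spacings of planes 1 and 2 interchanged and take the maximum.
# Hypothetical numbers, for illustration only (not the values behind Fig. 2):
# critical_distance(0.633, 2.0, 2.0, 50.0, 50.0, 50.0, 50.0)  # ~624 (um)
```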
For the sake of simplicity, but without loss of generality, the phase distributions are compared indirectly. Thus, the magnitude and real part of the complex amplitude will be discussed throughout the paper. As can be seen in Fig. 2, for this object both values are identical because it has a zero phase. The color bar shows the normalized amplitude and real-part values. The simulated distance between the object and the image plane is z₁₂ = 20 μm, which, according to Eqs. (7)-(9), is much smaller than the critical distance z_c = 534 μm. The calculated amplitude and real part of the diffracted field u₁₂ at the distance z₁₂ can be seen in Figs. 3(a) and 3(b), respectively. The error due to the violation of the sampling condition can be seen by a reconstruction of the original object from the diffracted image u₁₂. Thus, the reverse RSD (for the distance −z₁₂) has to be applied. Here the term "reverse" instead of "inverse" transform will be used in order to avoid confusion. The result can be seen in Figs. 3(c) and 3(d). A pattern similar to the original object occurs, but with very strong aliasing. Especially the nonzero values of the field in areas which originally exhibit zero values lead to completely wrong results. The correlation coefficient r between u₁ and u₁₂₁, i.e., the field after transforming from position 1 to 2 and back to 1, is r = 0.72 and r = 0.75 for the magnitude of the complex field and its real part, respectively. The obviously strong deviation from the input is a consequence of the insufficient propagation distance. If the sampling condition is satisfied for the numerical calculation of the RSD, there will be no loss of information due to the transformation. For an object with a fixed sampling spacing the critical propagation distance is fixed. Accordingly, if the propagation distance is longer than z_c, a correct treatment of the computational RSD should be achieved for the given sampling spacing. In Figs. 4(a) and 4(b) the diffracted image is numerically calculated by the RSD for a propagation distance z₁₃ = 730 μm, which satisfies the sampling condition. To confirm the correctness of the field u₁₃ in plane 3, as a necessary condition, the reverse transform is used to reconstruct the input object in the input plane, as reported in Figs. 4(c) and 4(d). The correlation coefficient is r = 0.97 for both the magnitude and the real part of the complex amplitude; a small part of the whole information is lost by the spatial limitation of the computational plane (the numerical aperture is smaller than one). Comparing the correlation coefficients for the reconstructed object in Fig. 3 and Fig. 4 shows a more than 20% improvement. Therefore, at the distance z₁₃ almost the whole information of the object plane is preserved. However, in some cases the RSD might be applied as a reference for different algorithms at a propagation distance outside of the sampling domain. Additionally, a good object reconstruction by a forward and reverse calculation is just a necessary, but not a sufficient, condition for testing the sampling of a diffraction algorithm. It does not necessarily mean that the output corresponds to the expected result of the diffraction theory. In other words, a combination of an arbitrary propagation operator (physical or nonphysical) with its inverse always results in an identity operator, and consequently the reconstruction of the input is expected automatically.
Thus, in the next subsection a sampling spacing which always satisfies the sampling condition, and which can be used as a reference for the diffraction, will be presented, and in section 4 a general procedure which makes the RSD a feasible method for arbitrary propagation distances will be shown. Sampling spacing below the Abbe resolution limit for fine structures larger than the Abbe limit According to inequality (6), the left side is always positive; on the right side, the factor (x₁fp + x₂fp)² is always positive, whereas the factor (2δx₁/λ)² − 1 can change its sign. For a sampling spacing lower than half of the wavelength, δx₁ < λ/2, it becomes negative, and consequently the inequality is fulfilled for all propagation distances z. Thus, the sampling condition is always satisfied if the sampling spacing of the harmonic term is smaller than the Abbe resolution limit. It should be emphasized that this condition only holds for the harmonic term. As will be shown in section 4, this does not contradict the Abbe resolution limit. According to the discussion above, substructures smaller than the Abbe limit could be resolved and consequently be reconstructed in the sense of the Nyquist criterion. However, "reconstruction" means different things with respect to diffraction (restricted by the Abbe limit) and with respect to sampling theory (restricted by the Nyquist criterion). In the context of sampling theory, a direct reconstruction of the original field after sampling is possible, whereas in diffraction theory the reconstruction of the original field is indirect, since it takes place after a propagation over the distance z. Thus, the sampled data are exposed to the diffraction effect and consequently restricted by the Abbe limit. Eventually, a sampling below the wavelength does not lead to a breaking of the Abbe rule in our approach, but it enables the calculation of a diffracted image without violating the RSD sampling condition. Figures 5(a) and 5(b) show the diffracted field, calculated as in Fig. 3, except that for this simulation the object was sampled with a sampling spacing below the Abbe limit. However, the structures in the object are still larger than the Abbe limit. The reconstructed object is shown in Figs. 5(c) and 5(d). The correlation coefficient between the reconstructed and the input object is r = 0.997 for both the magnitude and the real part of the complex amplitude. The minor loss of information is just due to the limited aperture. Thus, the sampling of the harmonic term with a sampling spacing smaller than the Abbe limit can be used as a reference for the evaluation of the quality of the numerical calculations. Sampling spacing below the Abbe resolution limit for fine structures smaller than the Abbe limit To investigate the effect of sampling below the Abbe limit for sub-Abbe structures, the input area, the output area and the sampling spacing have been rescaled (10 and 20 times smaller than the object in Fig. 2), so that both the object's fine structures and the sampling spacing are below the Abbe limit. In Figs. 6(a)-6(d) the rescaled object and the reconstructed object are shown, respectively. As can be seen, due to the violation of the Abbe resolution limit, the fine structures of the object in Fig. 6 cannot be resolved anymore. The calculated correlation coefficients are r = 0.88 and r = 0.89 for the magnitude and the real part of the complex amplitude, respectively.
This field u₁₂ was calculated for a sampling spacing below the Abbe limit and can be used as a reference, as discussed in subsection 3.3. In Fig. 9 the reconstruction of the object, u₁₃₂₃₁, by the use of the composed operator for the forward (plane 1 → 3 → 2) and backward (plane 2 → 3 → 1) propagation is presented. The correlation coefficients for the magnitude and the real part of the complex amplitude are r = 0.97. Again, the capability of the proposed approach can be seen. Conclusion In this paper the numerical treatment of the Rayleigh-Sommerfeld diffraction in its Riemann integral form was investigated in detail. A sampling condition for the numerical calculation was derived. As has been shown, for a fixed sampling spacing, fixed widths of the input and output planes and a fixed wavelength, the allowed propagation distance is restricted to a minimum value. However, the restriction can be completely removed if the sampling spacing (not the structure in the object) is smaller than the Abbe limit. As has been shown, this yields the maximum obtainable information in the output plane under consideration of the limited computational domain, and it was therefore used as a reference. Moreover, a very general approach for the calculation of the output field for arbitrary propagation distances was presented. This operator is based on a combination of forward and reverse RSD transforms and leads to very high correlation coefficients of r = 0.97. An improvement of about 30% in the magnitude of the complex amplitude and of about 45% in the real part confirms the reliability of the new operator. A comparison of the results of the below-Abbe-limit sampling with the results of the composed operator is an additional verification of both methods and of the consistency of the theoretically derived sampling condition for the RSD. The developed approach can be used as a reference for the testing of high-speed algorithms and other methods which are based on approximations of the exact scalar diffraction theory. One of the most important advantages of this method over other methods is that it does not need any changes in the sampling spacings δx₁ and δx₂ to prevent aliasing.
4,421.2
2017-09-26T00:00:00.000
[ "Physics" ]
Induction of Poly(ADP-ribose) Polymerase in Mouse Bone Marrow Stromal Cells Exposed to 900 MHz Radiofrequency Fields: Preliminary Observations Background. Several investigators have reported increased levels of poly(ADP-ribose) polymerase-1 (PARP-1), a nuclear enzyme which plays an important role in the repair of damaged DNA, in cells exposed to extremely low doses of ionizing radiation which do not cause measurable DNA damage. Objective. To examine whether exposure of cells to nonionizing radiofrequency fields (RF) is capable of increasing the messenger RNA of PARP-1 and its protein levels in mouse bone marrow stromal cells (BMSCs). Methods. BMSCs were exposed to 900 MHz RF at 120 μW/cm² power intensity for 3 hours/day for 5 days. PARP-1 mRNA and its protein levels were examined at 0, 0.5, 1, 2, 4, 6, 8, and 10 hours after exposure using RT-PCR and Western blot analyses. Sham-exposed (SH) cells and cells exposed to ionizing radiation were used as unexposed and positive control cells, respectively. Results. BMSCs exposed to RF showed significantly increased PARP-1 mRNA expression and protein levels after exposure, while such changes were not observed in SH-exposed cells. Conclusion. Nonionizing RF exposure is capable of inducing PARP-1. Introduction Damage to the genetic material (DNA) due to normal endogenous metabolic processes occurs in cells at a rate of up to 1,000,000 molecular lesions per cell per day [1]. Genotoxic agents are known to potentiate these lesions, which include alterations in bases and single- and double-strand breaks (SSBs and DSBs), resulting in structural damage that can alter or eliminate the ability of the cells to transcribe the gene that the affected DNA encodes. In order to deal with conditions under which the DNA is vulnerable to injury, an elaborate and complex set of surveillance mechanisms has evolved in eukaryotic cells to reverse and/or remove potentially deleterious damage. These include a cascade of signal transduction processes consisting of multiple interconnected pathways that transmit the damage signals and trigger responses to repair the DNA, arrest the cell cycle, and induce apoptosis [2]. There is ample evidence that poly(ADP-ribose) polymerase (PARP), a family of nuclear enzymes in eukaryotic cells, plays an important role in genomic stability by regulating DNA repair, gene transcription, cell cycle progression, chromatin function, and cell death. Among these nuclear enzymes, PARP-1 is the most abundant and acts as a "molecular nick sensor" to signal the cells about strand breaks in the DNA and to assist in their repair [3-11]. Numerous investigators have demonstrated that extremely low doses of ionizing radiation (IR), in the absence of measurable induction of DNA damage, were able to alleviate the DNA damage induced by subsequent exposure to a high dose of IR or other similar genotoxic agents in animal and human cells, suggesting that efficient DNA repair mechanism(s) might be playing a role in such cells [12]. The evidence for one such mechanism was provided by the significantly increased PARP-1 mRNA expression and protein levels in mice and in cultured mouse lymphoma cells exposed to a nongenotoxic dose of IR; this increase was negated when the mice were injected with, and the cells were treated with, 3-aminobenzamide (3-AB), a potent inhibitor of PARP-1 [13, 14]. To the best of our knowledge, there were no published reports about whether nonionizing radiofrequency field (RF) exposure is capable of inducing PARP-1 in mammalian cells.
Nonetheless, the results from our more recent studies indicated that whole-body-exposed mice and cultured mouse bone marrow stromal cells (BMSCs) exposed to 900 MHz RF for several days showed significantly reduced levels of strand breaks in the DNA as well as faster kinetics of their repair when challenged with a genotoxic dose of γ-radiation or bleomycin (BLM), a radiomimetic chemotherapeutic drug [15]. Therefore, we conducted a preliminary investigation on BMSCs to examine whether 900 MHz RF (continuous wave) exposure at 120 μW/cm² power intensity for 3 hours/day for 5 days is capable of inducing PARP-1. The rationale for using this power intensity was our earlier observation of a significant survival advantage in lethally irradiated mice which were preexposed to 900 MHz RF at 120 μW/cm² power intensity compared to those which were preexposed to RF at 12 μW/cm² or 1200 μW/cm² [16]. The results obtained in this study were compared with those in sham-exposed (SH) control cells as well as in those exposed to 1.5 Gy γ-irradiation (GR, positive controls), and the data are discussed. Materials and Methods The experimental protocol was approved by the institutional animal care and ethics committee of Soochow University. Bone Marrow Stromal Cells (BMSCs). In the current in vitro study, cultured BMSCs were used. BMSCs exhibit multiple characteristics/traits of a stem cell population, including regulation of cytokine production and release of growth factors required for hematopoiesis, and are considered a model of hematopoiesis [17]. The collection of bone marrow cells and the culture of stromal cells were described in detail in our earlier study [18]. Briefly, 4 adult Kunming mice were initially purchased from the Animal Center in Soochow University (Suzhou, Jiangsu, China; the Animal Care/Use Ethical Committee of Soochow University, Suzhou, China, has reviewed and approved our handling of animals). After 7 days of quarantine, the animals were sacrificed by cervical dislocation. From each mouse, bone marrow was flushed with phosphate-buffered saline (PBS, Gibco, Shanghai, China) and a single cell suspension was prepared in complete IMDM medium (Iscove's modified Dulbecco's medium, Hyclone, Suzhou, China) containing 10% fetal bovine serum (FBS, Gibco, Shanghai, China), 100 units/mL penicillin, and 100 μg/mL streptomycin (Bio Basic, Hangzhou, China). For each mouse, aliquots of approximately 2 × 10⁵ cells in 3 mL medium were placed in 30 mm Petri dishes (Nunc, Shanghai, China) and cultured for 48 h in an incubator (Heal Force Bio-Meditech, Hong Kong, China) at a temperature maintained at 37 ± 0.5 °C with a humidified atmosphere of 5% carbon dioxide and 95% air. Then, for each mouse, the nonadherent cells were discarded and the adherent BMSCs were cultured further in fresh complete medium. Cultured BMSCs in passages 3-6, from a single mouse, were used in 3 independent investigations. On the day before starting the RF and SH exposures, aliquots of approximately 5 × 10⁵ BMSCs/mL (8 mL total) were seeded into 24 separate 100 mm Petri dishes and were left in the incubator at a temperature maintained at 37 ± 0.5 °C with a humidified atmosphere of 95% air and 5% carbon dioxide. On the next day, the medium was replaced in all Petri dishes; 16 Petri dishes were used for RF and SH exposure (8 Petri dishes each) for 3 hours/day for 5 days, while the other 8 were left in the incubator for an acute exposure to GR (at the end of the RF and SH exposures).
The medium in all dishes was changed once during this time, and the cells in all Petri dishes were confluent by 5 days. RF and SH Exposures. The exposure system was built in-house at Soochow University, Suzhou (Jiangsu, China), and described in detail earlier [16]. Briefly, it consists of a GTEM chamber (Gigahertz Transverse Electromagnetic Chamber; 5.67 m length, 2.83 m width, and 2.07 m height), a signal generator (SN2130J6030, PMM, Cisano sul Neva, Italy), and a power amplifier (SN1020, HD Communication, Ronkonkoma, NY). The continuous wave 900 MHz RF signal was generated, amplified, and fed into the GTEM chamber through an antenna (Southeast University, Nanjing, Jiangsu, China). The RF field inside the GTEM was probed using a field strength meter (PMM, Cisano sul Neva, Italy) to determine the precise position which provided the required 120 μW/cm² power intensity. The power was monitored continuously and recorded every 5 min in a computer-controlled data logging system, which indicated 12.178 ± 0.003 μW/cm² during the 3-hour RF exposure. The GTEM was installed in a room in which the temperature was maintained at 37 ± 0.5 °C (87% relative humidity, without CO₂), and the temperature inside the GTEM was similar during exposure of the cells to RF. For RF exposure, the BMSCs in 8 separate Petri dishes (arranged in two rows of 4 each, with all dishes touching each other) were placed on a nonconductive table/platform at a height of 100 cm at the precise location where the required power intensity of 120 μW/cm² was measured. The distance between the Petri dishes and the exposure unit (probe) was 18 cm. At the input 120 μW/cm² power intensity, with the direction of propagation of the incident field parallel to the plane of the medium, the estimated peak and average SARs were extremely low: 4.1 × 10⁻⁴ and 2.5 × 10⁻⁴ W/kg, respectively [19]. The RF exposure was 3 hours/day for 5 days. BMSCs in the other 8 separate Petri dishes were exposed in the GTEM chamber, without RF transmission, for 3 hours/day for 5 days, and these cells were used as SH-exposed control cells. Gamma Radiation (GR). The BMSCs in 8 Petri dishes (which were left in the incubator) were exposed to an acute dose of 1.5 Gy γ-radiation (Nordion, Ottawa, ON, Canada; dose rate: 0.5 Gy/min) from a ⁶⁰Co source which was located in another building. There was an interval of ∼10 minutes between irradiation of the cells and their transport to the laboratory. RT-PCR (mRNA Expression of PARP-1). Immediately after the RF and SH exposures (∼10 minutes after GR exposure), the cells in all Petri dishes were kept in the incubator at a temperature maintained at 37 ± 0.5 °C, with a humidified atmosphere of 95% air and 5% carbon dioxide. At different intervals, namely, 0, 0.5, 1, 2, 4, 6, 8, and 10 hours, the cells in separate dishes were collected, washed in phosphate-buffered saline (PBS, Gibco, Shanghai, China), and divided into 2 aliquots. The cells in one aliquot were utilized to extract total RNA using Trizol reagent (Tiangen Biotech, Beijing, China), while those in the other aliquot were used for protein extraction (see below). The cDNA was synthesized from the messenger RNA (mRNA) using the Thermo Scientific RevertAid First Strand cDNA Synthesis Kit (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions.
This was followed by RT-PCR amplification with an initial step of 2 minutes at 50 °C and 10 minutes at 95 °C, followed by 40 cycles of 15 s at 95 °C and 1 min at 60 °C (ABI Prism 7500 Sequence Detection System, Applied Biosystems, USA). The PCR products were stained with FastStart Universal SYBR Green Master (Roche Group, Basel, Switzerland) as a double-stranded DNA-specific fluorescent dye. PARP-1 expression was normalized by subtracting the mean GAPDH Ct value from the RF, SH, and GR Ct values (ΔCt). The fold change was calculated as 2^(−ΔΔCt), where ΔΔCt = ΔCt(treatment group) − ΔCt(control group). The results represented were averages (± standard deviation) from three independent experiments. Western Blot (PARP-1 Protein Levels). The membranes were incubated with primary antibodies, anti-PARP-1 and mouse monoclonal anti-GAPDH (Good Science, Shanghai, China), overnight at 4 °C and washed three times in TTBS. The membranes were further incubated with horseradish peroxidase-conjugated antibodies for PARP-1 and GAPDH (Beyotime, Shanghai, China) for 1.5 hours at room temperature. This was followed by washing the membranes three times with TTBS. The immunoreactive proteins on the membranes were detected with enhanced chemiluminescence reagents (Millipore Corporation) using a G:BOX Chemi XRQ (Syngene, UK). The blots were quantified by densitometry and normalized to GAPDH to correct for differences in protein loading in RF-, SH-, and GR-exposed cells. The results presented were averages (± standard deviation) from three independent experiments. Statistical Analysis. The results were subjected to analysis of variance (ANOVA) using Statistical Product and Service Solutions (SPSS) for Windows [20]. Comparisons were made between cells exposed to RF and SH, and between RF and GR, and a p value of <0.05 was considered a significant difference between the 2 groups. Results The PARP-1 mRNA expression profiles, ascertained from RT-PCR analyses, in BMSCs exposed to RF, SH, and GR are presented in Figure 1 and Table 1. The average coefficients of variability (CV) in PARP-1 mRNA levels in RF-, SH-, and GR-exposed cells were 6.2% (range: 1.8-9.7%), 7.3% (range: 0.0-11.9%), and 5.4% (range: 3.9-8.6%), respectively (the CV was taken into consideration while calculating the significance of differences between groups, i.e., the p values). The data indicated that the expression levels were significantly higher/upregulated in RF-exposed cells at 0 hours compared with those in SH- and GR-exposed cells, and this was sustained at 0.5, 1, 2, 4, 6, 8, and 10 hours after exposure. However, the levels decreased slowly over time but were significantly higher even at 10 hours after RF exposure compared to those in SH-exposed cells. Compared with SH-exposed cells, those exposed to GR had significantly higher mRNA expression levels at 0 (i.e., ∼10 minutes after exposure), 0.5, 2, and 4 hours, but the levels decreased over time at 6, 8, and 10 hours after exposure, where the difference between the two groups of cells was not significant.
Table 1: Poly(ADP-ribose) polymerase-1 (PARP-1) mRNA expression levels, relative to the housekeeping gene GAPDH, in mouse bone marrow cells at different times following exposure to 900 MHz radiofrequency fields (RF), sham (SH), and 1.5 Gy gamma radiation (GR).
The levels of PARP-1 protein, assessed from Western blot analysis, at all times examined, are presented in Figure 2 and Table 2.
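To make the fold-change arithmetic in the Methods concrete, the following minimal Python sketch (our own illustration, not code from the study; all Ct values are hypothetical placeholders) computes 2^(−ΔΔCt) from raw Ct values:

    # Minimal sketch of relative quantification by the 2^(-ddCt) method.
    # All Ct values below are hypothetical placeholders, not data from the study.

    def delta_ct(parp1_ct, gapdh_ct):
        # Normalize the target gene Ct against the housekeeping gene (GAPDH)
        return parp1_ct - gapdh_ct

    def fold_change(treated_parp1, treated_gapdh, control_parp1, control_gapdh):
        dd_ct = delta_ct(treated_parp1, treated_gapdh) - \
                delta_ct(control_parp1, control_gapdh)
        return 2 ** (-dd_ct)

    # Example: RF-exposed vs. sham-exposed cells (hypothetical Ct values)
    print(fold_change(24.1, 18.0, 25.6, 18.2))  # > 1 indicates upregulation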
The average coefficients of variability (CV) in PARP-1 protein levels in RF-, SH-, and GR-exposed cells were 7.1% (range: 5.3-8.9%), 3.5% (range: 0.0-6.9%), and 7.8% (range: 3.3-14.8%), respectively (the CV was taken into consideration while calculating the significance of differences between groups, i.e., the p values). The data showed a positive correlation with the mRNA expression levels in both RF- and SH-exposed cells. In GR-exposed cells, the PARP-1 protein levels were significantly higher than in SH-exposed cells at 0 hours (i.e., ∼10 minutes after exposure), 0.5 hours, and 1 hour but were similar and not significantly different between the two groups at 2, 4, 6, 8, and 10 hours after exposure. Discussion The poly(ADP-ribose) polymerase-1 (PARP-1) has been the focus of research for numerous investigators, since this nuclear enzyme has been shown to be involved in genomic stability, repair of DNA strand breaks, gene transcription, cell cycle progression, chromatin function, and cell death [3][4][5][6][7][8][9][10][11]. To the best of our knowledge, thus far, induction of PARP in cells exposed to nonionizing electromagnetic fields has not been reported in the scientific literature. However, the results from two recent investigations have provided evidence that a very low, nongenotoxic dose of IR was able to upregulate/increase the PARP-1 mRNA expression and its protein levels. The observations of Zhang et al. [13] included the following: (i) mice exposed to low dose IR (0.05 Gy ¹²C⁶⁺ ion beam) showed significantly increased PARP-1 enzyme activity and protein levels, while the incidence of chromosomal aberrations (CA) in spermatogonia and spermatocytes was similar to that in unexposed controls; (ii) mice exposed to a high dose (2 Gy ¹²C⁶⁺ ion beam) showed significantly increased CA and decreased levels of PARP-1 activity and protein; (iii) mice that received both low and high doses had significantly decreased CA and restored levels of PARP-1 activity and protein; (iv) the effects observed in mice which received low and high doses were blocked when they were additionally injected with 3-AB (immediately after the low dose). Thus, the authors suggested that the increased PARP-1 activity and protein might have played a role in decreasing the CA/genotoxicity in mice irradiated with low and high dose IR. In a more recent investigation, Cheng et al. [14] exposed cultured mouse lymphoma EL-4 cells to a low dose (0.075 Gy) of X-rays with or without high doses (1, 1.5, and 2 Gy). Some cells were also treated with 3-AB one hour before the low and high doses. The results indicated that the expression of PARP-1 and p53 mRNA and the protein levels, as well as the percentage of cells in apoptosis, were significantly increased in cells exposed to the low dose compared with those exposed to the high doses. In addition, treatment of the cells with 3-AB resulted in downregulation of PARP-1 and p53 and negated the effects induced by high dose X-rays. Thus, the authors concluded that PARP-1 and p53 may have played important roles in cells exposed to low dose X-rays. The results obtained in our current study, indeed, suggested that nonionizing 900 MHz RF exposure at 120 μW/cm² power intensity for 3 hours/day for 5 days was capable of increasing/upregulating the PARP-1 mRNA expression and its protein levels in BMSCs (compared to SH- and GR-exposed cells), and their decreased levels from 0 hours to 10 hours may be due to degradation over time. Such induction/degradation of PARP-1 mRNA expression and its protein was not observed in SH-exposed cells.
In GR-exposed BMSCs, there was an increase in PARP-1 mRNA expression and protein levels at 0 hours, that is, after the ∼10-minute intervening time between exposure and performing the assays. These increases may be due to the immediate response of the cells to the damage induced by GR exposure and/or to the "stress" during the transport of the cells from one building to the other. However, the subsequent decreases observed at 2 and 4 hours might be due to their use in the repair of GR-induced DNA strand breaks. There have been several reports indicating the involvement of PARP-1 in inducing apoptosis or programmed cell death [21][22][23][24][25], and PARP inhibitors are considered potential therapeutic agents for life-threatening diseases [26,27]. In the context of animal and human cells exposed to RF, there have been contradictory reports on the induction of apoptosis [28][29][30]. In our earlier investigation, we did not observe induction of apoptosis when HL-60 cells were exposed to 900 MHz RF [19]. In the current investigation, the extent of apoptosis was not assessed, but this needs to be examined further in view of the increased PARP-1 mRNA expression and PARP-1 protein levels in BMSCs. Exposure of animal and human cells to nonionizing RF may generate some "stress" which may cause undetectable DNA damage and may stimulate signal transduction pathways leading to the activation of cell defense mechanisms. The activated cell defenses provide the cells with the ability to resist higher-level damage induced by subsequent exposure to genotoxic agents. Such a defense, also referred to as an adaptive response (AR), has been reported in animal and human cells preexposed to RF (reviewed in [31,32]). Supporting data came from our most recent study, in which mice that were preexposed to RF and then challenged with a genotoxic dose of BLM showed significantly reduced levels of strand breaks in the DNA as well as faster kinetics of their repair [15]. The increased PARP mRNA and PARP protein levels observed in the current study provide mechanistic evidence for such DNA damage repair and thus for RF-induced AR. Nonetheless, this needs to be confirmed in appropriate RF-induced AR investigations, that is, in animal and human cells preexposed to RF and then challenged with genotoxic agents. Conclusion The overall observations in our investigation indicated that nonionizing 900 MHz RF exposure at 120 μW/cm² power intensity was capable of increasing/upregulating PARP-1 mRNA expression and its protein levels in BMSCs.
4,415.6
2016-04-14T00:00:00.000
[ "Medicine", "Biology", "Physics" ]
Modeling and Optimizing the Performance of an Industrial Trigeneration Unit: Trigeneration provides an effective means of power, heat, and cold production on-site. Proper design and well-managed operation of such units can bring substantial savings in consumed primary energy as well as in the amount of greenhouse gases released to the atmosphere, compared to separate production of all three media. The studied sub-MW-sized trigeneration unit comprises an internal combustion engine combined with an absorption chiller and a heat management system, delivering all three media to a nearby industrial facility. A mathematical model is developed based on available design and process data, a profit function is set up, and a subsequent sensitivity analysis of economic parameters is realized. The lowered efficiency of summer operation is analyzed and a suitable solution is proposed, with an estimated total investment cost of 114,000 € and an anticipated simple payback period of less than 2 years. Introduction The constant increase in energy prices forces consumers to operate more efficiently and in a more sustainable way. The industry sector in the European Union was responsible for 26.1% of total energy consumption in 2020, with electricity and natural gas making up more than 60% (6000 PJ) [1]. Therefore, it is important to reduce the use of energy and fuels. Additionally, due to the developing environmental crisis, it is crucial to decrease the production of greenhouse gas emissions. Trigeneration is combined cooling, heating, and power (CCHP) production from one or more energy sources. A common trigeneration unit consists of a cogeneration unit (CHP) in combination with an absorption chiller [2]. The sources of energy in CCHP vary from fossil fuels, e.g., oil, coal, or natural gas (NG), to renewables [3]. The choice also depends on whether the unit is installed in the residential sector, a small industrial sector, or a large industry. Design and management of a trigeneration unit is a challenging task because heat and cold consumption in both industrial and residential applications vary on an hourly or daily basis [4,5]. Alongside an increased overall efficiency, CCHP also offers a reduction in greenhouse gas emissions [6,7]. Usually, the overall efficiency of CCHP varies from 80% to 93% [8], and the efficiency of the CHP itself can reach 80% [3,6]. For comparison, a conventional power plant where no heat or cold is produced reaches an efficiency from 27% to 40% [9,10]. A decreased CCHP efficiency may result from off-design operation and may negate the expected economic and environmental benefits. An example of this is summer operation, when space heating is not required and the cold production unit is unable to consume the heat produced at the nominal CCHP load. This work analyzes the operation of an industrial trigeneration unit and proposes means of its improvement by debottlenecking the current design. A simple linear model is developed to describe the main features of the CCHP operation, yielding a profit function and enabling profit maximization.
Materials and Methods The analyzed trigeneration unit and the related heat and cold consumers located in an industrial facility are schematically depicted in Figure 1. Two identical internal combustion engines are operated in the CHP, each with a nominal power output of 250 kW and a heat output of 362 kW. The absorption chiller produces up to 618 kW of cold with a coefficient of performance of 0.836. Hence, it can consume all the heat obtainable from the combustion engines to produce chilled water. A plate heat exchanger with a rated capacity of 350 kW is installed as a heat killer, i.e., an emergency solution for summer conditions with low consumption of both heat and cold. According to the trigeneration unit operators, installation of the heat killer in the past partially helped to solve the problem, but the unit operation is still not optimal. The insufficient heat consumption results in high temperatures of the return water flowing to the CHP, which then results in an emergency shutdown of the CHP. This issue could be solved by decreasing the CHP electricity production, but the plant's goal is to produce as much electricity as possible. Thus, if the temperature of the return water is too high, the operation of the CHP at full power cannot be guaranteed. As a solution, an emergency air cooler enabling the disposal of all heat produced by the CHP at maximal power production (500 kW) is proposed and designed. The design procedure was adopted from [11,12] for summer on-site conditions: a temperature of 32 °C and an air pressure of 97 kPa. The associated total investment cost (TIC) estimation proceeded via the guidelines provided in [13]. The TIC was converted to current equipment prices by the latest known Chemical Engineering Plant Cost Index (CEPCI) value of 799.1, valid for March 2023 [14]. Hourly profit values obtainable by emergency air cooler operation were determined as the difference between the profit function values, Equation (1), valid for the actual operation (representative summer period data available from 1 July to 11 September) and for maximal power production. The seasonal profit was obtained by summing the hourly profit values over the whole representative summer period. The simple payback period was estimated as the ratio of the TIC to the seasonal profit.
The profit function was constructed to define the optimal trigeneration unit operation at various heat and cold demands, as shown in Equation (1):

PROFIT = PEE_CHP + PEE_CHILLER + C_heat + C_CO2,technology − C_NG − C_CO2,CHP (1)

where PROFIT is the calculated profit (€·h⁻¹), PEE_CHP is the income from electricity produced by the CHP (€·h⁻¹), PEE_CHILLER is the income from electricity saved in the electric chiller (€·h⁻¹) by producing cold in the absorption chiller instead, C_heat is the income from heat produced by the CHP (€·h⁻¹) substituting the heat production in a separate gas boiler in the technology, C_CO2,technology is the income from CO₂ emissions avoided in the separate gas boiler in the technology (€·h⁻¹), C_NG is the cost of natural gas consumed in the CHP (€·h⁻¹), and C_CO2,CHP is the cost of CO₂ emissions produced by the CHP (€·h⁻¹). Individual items in Equation (1) were calculated as the product of the calculated production (MWh·h⁻¹, t·h⁻¹) and the unit price (€·MWh⁻¹, €·t⁻¹) of the given energy or medium. Heat and cold production were estimated as functions of the power produced by the combustion engines based on their datasheets. The reduction of power consumption in the electric chiller, PEE_CHILLER, was estimated as the ratio of the cold production in the absorption chiller to the energy efficiency ratio of the electric chiller (which, according to the datasheet, equals 3.2). An emission factor of 2.75 t of CO₂ per 1 t of natural gas was considered in the calculations. Results Figure 2 shows values of the profit function for two extreme cases of CCHP operation: (a) at the minimum possible power production (130 kW), and (b) at the maximum power production (500 kW). The figure was constructed assuming the following model prices: 370 €·MWh⁻¹ for electricity, 35 €·MWh⁻¹ for NG, and 85 €·t⁻¹ for CO₂. As can be seen, the profit of the trigeneration unit increases both with increasing heat production for space heating and technology and with increasing heat delivery to the absorption chiller. However, producing cold in the absorption chiller is economically more attractive, regardless of power production. This leads to a defined hierarchy of optimal heat production and use, depicted in the form of a decision diagram for plant operators in Figure 3. This diagram is valid for the model prices of energies and media; however, it can be updated based on the profit function analysis for any combination of prices. The air cooler design yields two identical units with two fans per unit, a nominal cooling air flow per fan of 15 m³·s⁻¹, and a total nominal power consumption of 8 kW. The anticipated TIC reaches 114,000 € and, with the model prices of energies and media, the seasonal profit reaches almost 65,000 €. The resulting expected simple payback period is around 2 years.
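As a worked illustration of Equation (1) and the payback estimate, the following minimal Python sketch (our own; the hourly quantities passed to the function are hypothetical, while the model prices, the chiller energy efficiency ratio, the emission factor, the TIC, and the seasonal profit are the values quoted above) evaluates the hourly profit and the simple payback period:

    # Minimal sketch of the hourly profit function, Equation (1).
    # Model prices and factors are from the paper; the hourly quantities
    # and the boiler-side gas figure are hypothetical placeholders.

    P_EL, P_NG, P_CO2 = 370.0, 35.0, 85.0  # EUR/MWh, EUR/MWh, EUR/t (model prices)
    EER = 3.2      # energy efficiency ratio of the electric chiller (datasheet)
    EF_NG = 2.75   # t CO2 per t of natural gas

    def hourly_profit(e_chp, cold_abs, heat_chp, ng_chp_mwh, ng_chp_t, ng_boiler_t):
        # e_chp, cold_abs, heat_chp, ng_chp_mwh in MWh/h; ng_chp_t, ng_boiler_t in t/h
        pee_chp = e_chp * P_EL                    # income: CHP electricity
        pee_chiller = (cold_abs / EER) * P_EL     # income: electricity saved in electric chiller
        c_heat = heat_chp * P_NG                  # income: substituted boiler heat
        c_co2_tech = ng_boiler_t * EF_NG * P_CO2  # income: CO2 avoided in the gas boiler
        c_ng = ng_chp_mwh * P_NG                  # cost: gas burned in the CHP
        c_co2_chp = ng_chp_t * EF_NG * P_CO2      # cost: CHP CO2 emissions
        return pee_chp + pee_chiller + c_heat + c_co2_tech - c_ng - c_co2_chp

    # Hypothetical full-load hour: 0.5 MWh electricity, 0.6 MWh cold, no direct heat
    print(hourly_profit(0.5, 0.6, 0.0, 1.3, 0.09, 0.0))

    # Simple payback of the proposed air cooler (figures from the paper)
    tic, seasonal_profit = 114_000.0, 65_000.0
    print(tic / seasonal_profit)  # about 1.8 years, i.e., "around 2 years"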
Discussion Figure 2 shows the profit function values for the model prices, reflecting the prices currently valid for the industrial site within which the trigeneration unit is located. As can be recognized, the price ratio of electricity to natural gas exceeds 10, which favors power production maximization even with zero heat delivery. This means the unit may operate in a purely power-plant mode, as the favorable economics prevails over energy efficiency and environmental issues [15]. This fact is also reflected in Figure 3, which always recommends the cogeneration unit being in operation. Even if the on-site power demand is low, the excess power can be exported to the outer grid. The second question in the hierarchy is whether a fraction of the heat from the internal combustion engines could be used to produce cold in the absorption chiller. This way, the cold produced in the electric chillers in the technology would be replaced and the facility's power consumption would decrease. The least favorable, yet still viable, option, therefore placed lowest in the hierarchy, is to use the available heat for space heating and in the technology. As a result, the heat produced in the hot water boilers in the technology would be replaced and, thereby, a reduction in natural gas consumption would be achieved. The discussed situation underlines the pressing need to solve the issues coupled with the summer operation of the unit. During this season, only part-load operation of the unit is often possible. Eventually, it can even become impossible due to insufficient heat consumption. In the most extreme cases, the difference in the profit function may reach almost 100 €·h⁻¹ when comparing pure electricity production modes (see Figure 2). This explains the significant saving of almost 65,000 € which is achievable by air cooler commissioning and operation in summer, as it enables smooth full-load operation of the internal combustion engines even in the pure electricity production mode by disposing of all waste heat. Obviously, with significant power and natural gas price changes [16], the situation may become different, i.e., the true trigeneration mode of operation may be preferred to pure power production. Conclusions The presented study aims at the analysis and optimization of a sub-MW-sized trigeneration unit supplying an industrial facility with power, heat, and cold. The current layout of the unit hinders its efficient operation in summer due to the absence of a relevant heat consumer and the limited capacity of the "heat killer". The resulting part-load operation and frequent emergency shutdowns lower the operation profit and reduce the unit's expected lifetime. Optimal operation conditions are studied based on the developed model in the form of a linear profit function. The analysis yields a useful tool for plant operators, a decision diagram, to ensure the most economic plant operation based on a set of hierarchically arranged criteria. Basic design parameters of a new air cooler are obtained and the economics of its installation and summer operation is evaluated, yielding a two-unit design with four air fans in total, an internal power consumption of 8 kW, and an expected seasonal profit of almost 65,000 €. As a result, an attractive simple payback period of 2 years can be expected.
Figure 1. Depiction of the trigeneration unit layout and heat and cold consumers. ABS = absorption, ACC = accumulation, ADMIN BUILDING = offices, CHP = cogeneration unit, EL = electric, K = boiler, SOC = social. Colors: red and pink = hot water pipelines, blue = chilled water pipelines, green = cooling water, dark blue = natural gas and flue gases. Source: own elaboration.
Figure 2. Profit function values as a function of the heat production for space heating (CHS) and technology and the heat consumption in the absorption chiller for (a) minimal (130 kW) and (b) maximal (500 kW) power production. Abs = absorption, CHS = space heating. Source: own elaboration.
Figure 3. Decision diagram for optimal trigeneration unit operation valid for model prices of electricity. CHP = cogeneration unit (internal combustion engines). Source: own elaboration.
Funding: This study was supported by the Slovak Research and Development Agency under contract nos. APVV-18-0134 and APVV-19-0170, and by the Slovak Scientific Agency, grant no. VEGA 1/0511/21. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable.
2,770.6
2023-10-27T00:00:00.000
[ "Engineering", "Environmental Science" ]
Developing an IoT Identity Management System Using Blockchain: Identity (ID) management systems have evolved based on traditional data modelling and authentication protocols that are facing security, privacy, and trust challenges with the growth of the Internet of Things (IoT). Research surveys reveal that blockchain technology offers the special features of self-sovereign identity and cryptography that can be leveraged to address the issues of security breaches and privacy leaks prevalent in existing ID management systems. Although recent research studies are exploring the suitability of blockchain-based support for existing infrastructure, there is a lack of focus on the IoT ecosystem in secured ID management with data provenance of digital assets in businesses. In this paper, we propose a blockchain-based ID management system for computing assets in an IoT ecosystem comprising devices, software, users, and data operations. We design and develop a proof-of-concept prototype using a federated and distributed blockchain platform with smart contracts to support highly trusted data storage and secure authentication of IoT resources and operations within a business case scenario. Introduction Identity management is a method used to recognise and validate confidential data, as well as to determine and authorise user access to such data. Since there is no common identity management protocol for the Internet, several service providers are given the responsibility of providing identities and maintaining user credentials along with granting access rights [1,2]. This method is not efficient, as information is duplicated across different service providers, and the same centralised identity protocol could also be used by several organisations. Further, personal information is constantly being collected without the user's knowledge or consent for various purposes, such as profiling and data mining, and could be exploited. There is difficulty in maintaining the security and privacy of data, as evidenced by data breaches and malicious attacks over the Internet [3,4]. Hence, such a centralised identity management system has serious issues, as users are deprived of the identity and rightful ownership of data. In this paper, we propose the use of the new concept of self-sovereign identity, where a person's consolidated digital identity is owned and managed entirely by the individual. It includes verified attributes and cryptographically trusted parties, thereby enabling users to possess rights over and control of their own data and its usage. Such a federated and distributed digital identity management system is possible with the recent state-of-the-art blockchain-based identity management and authentication frameworks [5,6]. Blockchain provides a secure solution, as users can have ownership of their own data and can also allow or revoke any individual's consent for access [7,8].
In this digital world, the Internet of Things (IoT) that connects devices, individuals, and business services presents escalating security and privacy risks due to the current Internet digital identity management solutions [9]. IoT is gaining importance in the cyber world for various online transactions such as banking, shopping, tele-health, and many more day-to-day services. In such an IoT-intertwined cyber-physical system that primarily focuses on scalability, interoperability, and mobility, the growing need for digital identity and data privacy could be addressed using a blockchain platform. It is important for the owner/authorised user of an IoT device to know its identity and have secure access to their own data. Device democracy could be achieved using blockchain to enable self-management of data, as well as access control policies, for each IoT device [10]. Public blockchains, such as Bitcoin and Ethereum, create public addresses independently to represent the device, and the latest activities could be used for verification purposes [11]. Multiple virtual nodes in blockchains could represent each physical node. On the other hand, in private blockchains, the devices need to be authorised before being added into the blockchain network. Overall, there is a need for identity management in private blockchains. Blockchain open-source platforms, such as Hyperledger Fabric, provide identity management and the implementation of smart contracts for data transactions. However, there are limitations with such private and permissioned blockchain platforms [12,13]. The pros and cons of existing platforms form the motivation of this research to develop a blockchain-based distributed system as a proof-of-concept for IoT users to have full control of their own data and access control policies. Recent models in developing programmable smart contracts emphasise the use of blockchain-based approaches to support device democracy [14-16]. Smart contracts for data accountability and provenance tracking are based on the identity of the data controller or of particular data. Blockchain technology could also be used as a database for storing transactions and access control policies. For each IoT access request, the transaction for granting access could be broadcast to the blockchain network for either approving or rejecting the request, notifying the user accordingly. These approaches form the motivation of our proposed IoT ID management using blockchain technology. Overall, with the recent growth in IoT, applications, and services, the lack of a robust dynamic identity management solution calls for much research in this direction [17]. Many previous studies have focused on privacy-preserving approaches and their applications, particularly in healthcare scenarios [18,19]. In this paper, we propose and develop a blockchain-based identity management system for an organisation's IoT ecosystem, giving importance to device democracy, access control, data privacy, and trust. The main contributions of our paper are: (a) we present a novel blockchain-based modelling for IoT ID management to overcome the data privacy and security concerns associated with centralised architectures; (b) we demonstrate the application of smart contracts to enable data accountability and privacy-preserving transactions with access control policies; and (c) we show the implementation of our proposed model for a business case scenario as a proof-of-concept prototype.
The rest of the paper is organised as follows. In Section 2, we provide a literature review of recent works on blockchain-based identity management solutions and the lack of studies in the context of IoT. We describe a business case scenario requiring a blockchain sovereign identity solution for IoT and propose a suitable blockchain structured model in Section 3. We present the implementation of our proof-of-concept prototype for the IoT case scenario in Section 4. Finally, in Section 5, we summarise our work and the promising research plans for the future in building more features into IoT identity management systems. Literature Review In the cyber world of today's digital age, digital identity is of paramount importance, as there is a growing reliance on the Internet to perform online transactions involving multiple devices and communication protocols. We review the recent studies on various upcoming blockchain sovereign identity solutions. We present below a summary of the state-of-the-art blockchain-based identity solutions and their limitations, thereby identifying the gaps found in the literature. A blockchain-based personal data and identity management system (BPDIMS) is proposed in [20] based on human-centric personal data. The identity management using blockchain and smart contracts is based on the MyData initiative under the European Union's General Data Protection Regulation (GDPR). The BPDIMS approach is limited to establishing transparency and control over personal data alone and lacks consideration of IoT devices and the various interactions among different users of the system. To address this lacuna, our proposed approach considers the authorisation and identity management of IoT devices and the various users of the system. Our proposed approach develops the required smart contract rules to safeguard IoT resources and end-user authentication. An encrypted member authentication scheme is proposed in [21] to support the blockchain concept of identity management systems using a cryptographic membership authentication scheme. In this approach, a new transitively closed undirected graph authentication (TCUGA) scheme is used to enhance the security and efficiency of the proposed scheme. This scheme can dynamically add or remove nodes and edges, and the security of the proposed TCUGA is demonstrated in the standard blockchain-oriented model. However, this scheme does not deal with data minimisation, and node requests use a false-positive method to send certificates to other nodes in the system. Hence, network congestion from sending several certificates during transactions could be a potential practical bottleneck. To overcome this, our proposed approach uses blockchain technology, as it is efficient in creating encrypted hashes for secure digital identities, and uses the concept of smart contracts and group policy for member authentication.
Another blockchain-based identity management work in [22] proposed access control mechanisms within blockchain for edge computing. Their provision of data security is aimed at the industrial Internet of Things (IIoT) by including authentication, auditability, and confidentiality. In order to ensure IIoT security, their approach first embeds the generated implicit certificate into its identity and constructs the identity and certificate management mechanism based on blockchain. Secondly, an access control mechanism based on the Bloom filter is designed and integrated with identity management. However, it lacks a key agreement protocol and a certificate management mechanism. Further, there is a need for a key optimisation mechanism to improve performance for practical deployments. These drawbacks can be addressed in our proposed approach by creating smart contracts and business rules in order to incorporate efficient authentication of IoT users. A hybrid blockchain gateway solution is proposed and developed in [23] to support legal compliance and traditional identity management features and also to deal with issues caused by centralised trust systems in organisations. The solution establishes a secure and privacy-friendly middle ground between the blockchain and the mundane world (off-chain) using a hybrid solution that comprises a blockchain gateway and a blockchain framework. However, undefined and inappropriate interests, policies, and responsibilities between different agents or end-users may cause challenging issues for authentication and authorised users. This identified problem can be resolved in our proposed approach by managing user access to the blockchains through well-established smart contracts and group policy. A smart contract-based identity management system (DNS-IdM) [24] is proposed which enables users to maintain their identities associated with certain attributes, accomplishing the self-sovereign concept. The secure and trustworthy management of identities is maintained by the use of authenticated and unauthenticated blockchains along with smart contracts. However, no management policies are identified for maintaining and developing compliance with digital standards. This problem can be addressed in our proposed approach by embedding standard business rules/policies in a blockchain network in order to ensure the authentication of every IoT user. A smart contract on a blockchain is used in various domains to enable an architecture for a flexible solution where authentication no longer involves the credential service provider (CSP) [25]. In this approach, an identity management system (IDMS) is proposed using the features of federated identity management (FIM), which helps users to access multiple systems using a single login credential. In such an approach, a user can authenticate and transfer attributes to a relying party without the involvement of a CSP (thereby heightening privacy and reducing costs). One of the limitations of this work is that a user's identity can be revealed by creating a clone identity for another user: any prior knowledge about a user's identity attributes could be used to make this clone identity. In our proposed approach, this problem can be addressed, as a user's identity will be verified and access will be granted only for IoT devices with encrypted digital identities.
A comprehensive literature review on blockchain-based identity management systems has been conducted by [26]. Many possible challenges have been identified in order to outline the risks involved in blockchain-based identity management systems, reviewing recent state-of-the-art advances on the topic. Critical analyses and surveys have been conducted on blockchain-based identities in a professional environment. The abovementioned studies have provided some holistic guidelines that are mostly met with a fair amount of criticism and reservation in professional environments. In [27], an evaluation framework consisting of 75 criteria has been applied to assess 43 blockchain-based identity solutions and their state-of-the-art approaches. The outcome of the investigation relates to different features, prerequisites, market availability, readiness for enterprise integration, costs, and (estimated) maturity. However, the study lacks any suggestion of possible generic blockchain-based solutions for the challenges identified in the other research work considered in the survey. We find gaps in the literature exploring blockchain-based solutions specifically for the identity management of IoT. In [28], it has been argued that self-sovereign identity (SSI) solutions based on blockchain technology have a more technical motivation that obscures key challenges and long-term repercussions. With ubiquitous private data collection in the context of IoT, a range of ethical issues surrounding human identity have been identified. To address privacy and ethical concerns, any proposed approach should cater for the secure sharing of private and sensitive information and the identity of IoT users within a blockchain-based identity management system. Every digital asset within an IoT network requires trusted security and user access. In [29], end-to-end trust has been targeted by applying blockchain to IoT devices. Blockchain has been used by IoT devices to automatically register, organise, store, and share streams of data. However, the solution is restricted to providing end-to-end trust for trading only, and it identifies future research challenges in developing a trustworthy trading platform for IoT ecosystems. Other recent research has focused on designing the main functions of identity management, such as registration, authentication, and revocation, using lightweight considerations [30]. Modelling choices for blockchain-based data accountability and provenance tracking have been explored with respect to the design of smart contracts and for addressing performance and authentication issues [31]. The solution choices relate to managing transactions, such as authorisation and auditability properties, while adopting a public, consortium/semi-public, or private blockchain. The study was limited to considering contract design, implementation, and performance of the solution for the open-source Ethereum Virtual Machine (EVM) only.
Another blockchain solution platform, called Hyperledger Fabric, is a permissioned blockchain platform where the network members use transactions controlled by the chaincode, a software code installed and executed on its nodes for secure access to the shared ledger. The IoT identity management, along with transactions and data, is restricted to a separate subset of the network members called a channel. This way, the shared ledger of transactions of digital assets is maintained within the channel members only. The subnetworks of a Hyperledger Fabric network contain blockchains in the file system of the node and a database of the current states of all keys residing in memory to make chaincode interactions efficient [31]. However, different algorithms are used for reaching consensus, with ambitious scalability and performance targets. Another work focuses on efficient distributed computing power and network bandwidth for blockchain and smart contracts based on edge computing [22]. Similarly, recent works have proposed a blockchain-based framework for Software-Defined Cyber Physical Systems (SD-CPS) as a distributed resource management solution towards addressing the storage and computing issues of IoT devices [32,33]. These studies lack consideration of specific issues, such as ID management for a business IoT ecosystem. In recent years, the implications of blockchain technologies in business ecosystems, such as within the supply chain and power industries, have been explored [34][35][36]. Blockchain technology has also been considered as providing support infrastructure for security considerations in e-government services [37]. However, implementing a robust blockchain-based identity (ID) management system for the IoT ecosystem in businesses has been identified as future research [38,39]. Overall, there is a lack of studies in the literature on developing a trusted IoT ecosystem in a business scenario using a blockchain platform. In this work, we aim to take an initial step by proposing a blockchain-based design and implementation of a decentralised ledger with smart contracts for providing trusted ID management for an organisation's IoT ecosystem. The purpose is to develop blockchain-based ID management modelling for a simple IoT case scenario as a proof-of-concept prototype development. Our proposed solution aims to provide an integrated blockchain-based IoT ecosystem for the organisation's ID management of IoT computing resources, such as devices, software, users, and data assets.
Blockchain Based Modelling for an IoT Identity Management System We consider an organisation facing problems with the identity management system currently in use as our business case scenario. There is a lack of technical solutions to support the growth in computing resources, with several IoT devices connected to the organisation's network infrastructure. Their current identity management system does not automatically keep track of transactions of the usage or upgrading of the company's assets in a highly secure and federated or distributed trust framework. Currently, manual auditing and tracking of the transaction records for each computing resource or digital asset is susceptible to accountability issues and threats of unauthorised usage of information. Antivirus and firewall systems provide only minimal security for the IoT networks against certain threats. More recently, possible data breaches and unauthorised access by hackers breaking in through the network to manipulate information require attention. Further, the misuse of confidential information and the difficulty of keeping track of the users involved in IoT transactions form the key motivation for considering blockchain technology to protect the organisation's computing assets from hackers and unauthorised users. We model the ID management system for the organisation as the business case scenario in developing a blockchain-based proof-of-concept prototype. The main objectives of our proposed model are three-fold. Firstly, the blockchain identity management is modelled to store the necessary information and key transactions of all IoT devices and computing assets in blocks, which use strong cryptographic hashing to share the information securely within the system. The information stored in a block cannot be tampered with, since the system allows a user to obtain the information from the blockchain through automatic verification and matching of the hash that links one block to the next. Secondly, our model is designed to have provisions for business rules as smart contracts developed in a blockchain platform, with various agreements required to be met while performing the core transactions on the IoT computing assets. Thirdly, our distributed trust model addresses security, privacy, and identity theft issues by managing identity verification of not only IoT assets but also authorised users of various digital assets, and the recovery of lost identities.
In our business case scenario, we focus the core transactions on protecting the computing assets, such as computing hardware including IoT devices, the software running on them, the users operating these resources, and the digital asset backup/update transactions. In our blockchain identity management model for such an IoT landscape, the identity attributes for hardware resources, software, users, and digital asset transactions stored in each block will be verified in order to control the disclosure of the attributes, which depends upon acceptability. Our model would enable employees/authorised users to manage their identities and make use of the existing identity attributes [40]. In addition, a trusted compliance platform using smart contract modelling would be created to incorporate the business rules for enforcing privacy, security, and trust. A smart contract consists of executable code and memory, from where it is invoked by the blockchain identity (ID) management system, providing the required trust. The smart contract platform would be secured using cryptographic code, and the data or information stored on the blockchain would be audited automatically [41]. Our blockchain model does not store the business events in the contract. Instead, based on temporal and other smart status changes, the block's data will be updated through the previously observed events. For this purpose, we develop reactive policies to trigger events using smart contracts. Figure 1 provides a pictorial representation of a high-level modelling of our proposed blockchain ID management system. In this model, the different IoT resource owners or users (employees) shown in Figure 1 represent authorised users who can transfer access control rights and manage identities by implementing smart contracts within the blockchain platform. In both permissioned and permissionless blockchain solutions, the identity of IoT/computing resources, digital assets, and users can be stored and updated in specific blocks of the blockchain, and a strong cryptographic link (chain) connects one block to the next block securely. Thus, authenticated users have the right to update the ID information of an organisation's IoT assets stored in the blocks without compromising privacy. Users could self-generate identity attributes in a block and update them, or endorse other users for transactions via a smart contract. With the rise of adopting blockchain technology for ID management to eradicate trust issues, the organisation would require modelling of the business rules for the smart contract platform to handle the identity management process. Figure 2 shows a typical representation of a unique ID, denoted as "Computer ID", that represents each IoT resource and the related information stored in data blocks in a blockchain, with Block 1 acting as a Node (server). Information on each IoT device added to the network will be stored subsequently as Block 2, Block 3, and so on, as a chain, via the encrypted unique hash that links the blocks securely. For example, the blocks in Figure 1 store the attributes of an IoT device, such as computer ID, hardware type, ISP, VPN, and resources, and contain the encrypted block hash.
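As a minimal illustration of this hash-linked structure (a Python sketch of our own, not the system's actual implementation; the attribute values are hypothetical), each block stores the hash of its predecessor, so any tampering with a stored attribute breaks the chain:

    import hashlib
    import json

    def block_hash(block):
        # Hash the block's contents (attributes + previous hash) deterministically
        payload = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def add_block(chain, attributes):
        # Link the new block to the chain via the hash of the last block
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"attributes": attributes, "prev_hash": prev})

    def verify_chain(chain):
        # Recompute each link; a mismatch reveals tampering
        for i in range(1, len(chain)):
            if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
                return False
        return True

    chain = []
    add_block(chain, {"computer_id": "PC-001", "hardware": "IoT sensor",
                      "isp": "ExampleISP", "vpn": True})   # hypothetical values
    add_block(chain, {"computer_id": "PC-002", "hardware": "IoT gateway",
                      "isp": "ExampleISP", "vpn": False})
    print(verify_chain(chain))             # True
    chain[0]["attributes"]["vpn"] = False  # tamper with a stored attribute
    print(verify_chain(chain))             # False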
In blockchain-based ID management modelling, blocks use a cryptographic link (chain) to share the information securely within the system. Strong cryptography links every block, identified by a hash, allowing any authorised user to obtain the information from these blocks. The hash from the previous block needs to be verified, so consecutive blocks must carry matching hashes for the information to be released. Overall, public and private key pairs, which are correlated to the mined blockchain identity, are used for generating this. Therefore, our proposed model prevents attacks from malicious third-party identity providers and ensures user privacy and trust. Figure 3 provides an overview of how new blocks are added to the existing blockchain ledger when the transaction is verified by the network of blockchain nodes. Existing works have predominantly deployed smart contracts in a public blockchain, which makes the contract state variables and transactions publicly available, leading to anyone having access to all data related to the IoT. Rather than focusing on the pros and cons of blockchain platforms, their constraints, performance comparisons, and other related issues, our proposed blockchain system modelling for IoT ID management aims at ensuring the integrity of data provenance records in the system using smart contracts. Data ownership is maintained by deploying smart contracts on a blockchain as tokens representing the digital assets of the organisation [42,43]. In the next section, we provide a proof-of-concept implementation of our proposed blockchain model, coded in Solidity, a smart contract programming language, and deployed in Kaleido, a blockchain platform, as an enterprise system for the business case scenario. System Implementation In this section, we present a system implementation as a proof-of-concept prototype of our proposed blockchain-based IoT ID management for the business case scenario described in Section 3. With a growing number of IoT devices in the organisation's network, our blockchain ID management solution aims at identifying, authenticating, and authorising employees to have access to applications, systems, or networks by associating user rights and restrictions with established identities. ID verification costs corporations and governments billions of dollars annually and yet is prone to malicious attacks. Using open-source blockchain technologies, such as the Kaleido platform and the Solidity language, our model implementation provides cost-effective and secure identity and authentication for the organisation's IoT devices. Figure 4 shows the configuration of our ID management system for IoT resources in the blockchain platform Kaleido. The system involves blockchain-based implementation of ID management confined to four main categories of IoT-related resources of the business case scenario, as listed below:
1. Computer ID management, which includes all the IoT devices;
2. Software ID management, which includes various software resources associated with IoT;
3. User ID management, which includes employees of the organisation assigned to the IoT resources;
4. Data backup ID management, which includes digital operations, such as backup, recovery, and other transactions of the IoT resources.
Each ID management category listed above is composed of attributes and smart contracts. Figure 5 lists the above four data models implemented and deployed in the blockchain environment. Figure 6 illustrates the Computer ID block deployed in the Kaleido blockchain platform. An authorised user can create a blockchain identity and update attributes as an individual user or as a collective group of users, according to the business rules implemented as smart contracts. Figure 7 shows the configuration of our system with four smart contract projects in the blockchain environment. Some of the smart contract rules for the above four categories of the IoT ID management system are listed below:
1. Computer ID management smart contracts. A smart contract rule could be used to authorise transactions, such as the addition, update, or deletion of a Computer ID block in the blockchain. An example rule implemented makes checks before adding a new block into the blockchain when a new IoT device is procured in the organisation for an employee's use. Another rule could check that the number of connections made with computer IoT devices in the network does not exceed the threshold limit for optimum performance of the system.
2. Software ID management smart contracts. A typical smart contract could be associated with the software license of an IoT device's software resource or a common software package used in the organisation via the IoT network. An example smart contract rule implemented was an automatic notification to an authorised person to extend the license or to upgrade the software two weeks before the expiry date. Another rule could enforce load balancing of software upgrades in a phased manner to lessen the burden of software maintenance management.
3. User ID management smart contracts. It is a common practice to incorporate user login rules as a security measure for restricting unauthorised access when a certain number of login attempts fail. An example smart contract rule implemented verifies whether a user's login has failed three times, in which case user access is denied (a simplified sketch of this rule and the backup rule below is given at the end of this section). The implementation of this smart contract using the Solidity programming language within the blockchain environment is illustrated in Figure 8. In such instances, the user is denied access through the MAC address to the organisation's network and cannot make any further transactions with the system. However, smart contracts could also be implemented for an authorised person's user ID to be reinstated with a new password, resolving the denial-of-access issue for any legitimate user of the organisation.
4. Data backup ID management smart contracts. When a user attempts to back up a data storage or makes an update to the data records, a smart contract could allow such transactions only if there is no data breach. Further, if any data breach occurs, a smart contract could be created to assist the user in quarantining the data from future user transactions with the organisation's data records. An example smart contract implemented optimised the backup storage by checking the timestamp of the previous backup and archiving only those data records that were updated after the previous backup (this rule, together with the login rule above, is sketched below). Figure 9 shows the code of a data backup rule implemented in Solidity as a smart contract that was successfully deployed in the Kaleido blockchain environment. Further, a smart contract could allow data retrieval from the backup storage only when two security questions are answered correctly.

Overall, our blockchain-based ID management system for the IoT resources of an organisation was implemented as a proof-of-concept prototype. The purpose was to address some of the critical challenges of privacy and security in traditional ID management systems based on relational database modelling. Even though blockchain-based ID management solutions are emerging, developing an effective ID management system for IoT remains a challenge in terms of managing access controls, privacy, and trusted transactions. Our blockchain-based shared and distributed ledger stores the identity information of IoT resources and transactions that are verified through smart contracts to establish the required data provenance. Further, the latest mined transactions can be monitored in the blockchain platform, as shown in Figure 10. This helps to verify the status of each transaction: when it was mined and from which block, with a commit hash that maintains the required security using the blockchain encryption of the system. Hence, the system implementation of our proposed blockchain modelling of ID management of digital assets could provide a solution to the existing issues of identity theft, information leakage, and other malicious practices over the organisation's networked IoT resources. This paper reports the successful completion of a pilot research work with the limited scope mentioned above. Our blockchain-based proof-of-concept prototype for a business case scenario was tested and is suitable to be deployed on any EVM-compatible blockchain platform. Although some recent works [31,44] have developed proof-of-concept implementations of smart contracts in public permissionless blockchains, such as Ethereum and Ethereum Classic, the focus of this research work was on developing ID management for an IoT ecosystem in a business case scenario. The described model and system implementation could be deployed on any blockchain platform that supports scripting capabilities and other open-source software, including GitHub.
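The following minimal Python sketch illustrates the logic of two of the rules described above: denying access after three failed login attempts, and archiving only the records updated since the previous backup. The authors' actual implementation is in Solidity (Figures 8 and 9); this sketch only mirrors the rule logic, and all names in it (`MAX_ATTEMPTS`, `UserAccess`, `incremental_backup`) are our own illustrative choices.

```python
from datetime import datetime

MAX_ATTEMPTS = 3  # assumed threshold, per the login rule described in the text

class UserAccess:
    def __init__(self):
        self.failed_attempts = {}  # user id -> consecutive failures
        self.locked = set()

    def login(self, user: str, password_ok: bool) -> bool:
        if user in self.locked:
            return False  # denied: no further transactions allowed
        if password_ok:
            self.failed_attempts[user] = 0
            return True
        self.failed_attempts[user] = self.failed_attempts.get(user, 0) + 1
        if self.failed_attempts[user] >= MAX_ATTEMPTS:
            self.locked.add(user)  # an authorised admin must reinstate the ID
        return False

    def reinstate(self, user: str) -> None:
        # Models the admin rule that resolves the denial-of-access issue.
        self.locked.discard(user)
        self.failed_attempts[user] = 0

def incremental_backup(records: list[dict], last_backup: datetime) -> list[dict]:
    # Archive only records updated after the previous backup timestamp.
    return [r for r in records if r["updated_at"] > last_backup]
```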
Figure 11 shows the CPU utilisation of the proof-of-concept prototype deployed in the Kaleido blockchain platform, which indicates quite reasonable performance. The blockchain platform can be integrated with AWS CloudWatch to monitor the performance of the running system. Figures 10 and 11 give examples of such performance metrics, namely transaction monitoring and CPU time. In addition, monitoring log activities and other actionable system-wide insights could provide a unified view of the system's operational health. Further, Kaleido provides a seamless integration with AWS CloudWatch to visualise several key performance metrics that would help businesses to optimise resource utilisation quickly and aid in prompt and accurate diagnostics to resolve any problem that may occur. A recent study considered several metrics for measuring the performance of blockchain platforms, such as latency, throughput, computation, storage and communication costs, scalability, and other security and privacy comparison factors [45]. Studying the scalability of blockchain-based real-time communication in real-life business environments is becoming important [46]. Scalability, adaptability, and other blockchain-based business use case development will be the focus of future research work. In this context, business-oriented blockchain platforms, such as Hyperledger, supported by leading companies, are a noteworthy avenue for further investigation [42]. Our future study would consider evaluating the use of our business blockchain approach as a Hyperledger solution, comparing the performance of different consensus algorithms. Some existing studies have investigated Hyperledger's scalability and performance, with an ambitious achievement of thousands of transactions per second [32,33,47]. However, such studies are yet to deploy and evaluate scalability and performance in any blockchain platform with several related smart contracts added in real-life business scenarios. Recent studies have focused on proposing blockchain frameworks for specific business applications, such as supply chains in the distribution industry and forensics in vehicle management contexts [41,48]. Business-process-oriented blockchain development is gaining importance [49,50]. Intelligent automation of business processes using smart contracts is yet to be explored. Another future research direction would be to automate an Agile process of developing and maintaining smart contracts from business and security policies, applying their updates dynamically whenever changes occur. The future of the IoT ecosystem sees blockchain as an enabling technology in a distributed cyber-physical system to achieve the required privacy, trust, and data provenance targets for businesses [51][52][53]. Some such studies have considered comparison measures of power consumption, latency, and throughput of transactions in a blockchain tree, including concepts of sidechains and parallel data provenance, as well as the impact of trust and consensus algorithms [54,55]. A performance analysis of the running system in a real-life business would consider specific metrics for a deeper insight into the context to aid in quicker evaluation and diagnosis of any problem. Although this work does not duplicate existing works, the discussions highlighted in this paper open up many research directions for future investigation.
Conclusions and Future Work

In this paper, we have presented a proof-of-concept prototype of a blockchain-based IoT ID management system. We provided the data modelling for the ID management of IoT resources in an organisation as a business case scenario. The implementation of the system in a blockchain platform demonstrated its practical viability. In addition, the four different identities related to IoT resources, namely Computer (Device) ID, Software ID, User ID, and Data Backup ID, maintained in a blockchain along with customised smart contracts, established the required information security, privacy, and trust in a networked organisation.

This research is limited to establishing a design model and developing IoT ID management using blockchain technology for a business case scenario. There is a need to study its extendibility and adaptability for large-scale business operations. Future work would explore the scalability issue, which is one of the key limitations of blockchain technology. Given the non-availability of any existing work related to our business scenario, the restricted scope of this research with a proof-of-concept prototype would be expanded in the next phase of our ongoing research. Future work will also explore the performance and computation time of blockchain transactions and the running time of the proposed method in real-life business environments, compared with other emerging works. Several performance metrics would be considered to compare, evaluate, and benchmark the running of our system.

Figure 1. High-level modelling of the proposed blockchain ID management system.
Figure 2. Modelling ID management of IoT resources using blockchain.
Figure 3. Overview of verified transactions in a blockchain.
Figure 4. A configuration setup of ID management of IoT resources using a blockchain platform.
Figure 5. Data model of the IoT ID management system deployed in a blockchain environment.
Figure 6. Computer ID block deployed in the Kaleido blockchain platform.
Figure 7. Configuration of smart contracts for ID management in a blockchain environment.
Figure 8. Example implementation of a User ID smart contract in a blockchain platform using Solidity.
Figure 9. Example implementation of a Data Backup ID smart contract in a blockchain platform using Solidity.
Figure 10. Transaction monitoring in the implemented blockchain system.
Figure 11. CPU utilisation of the prototype deployment of the proposed blockchain system.
Maximum Power Extraction from a Standalone Photovoltaic System via a Neuro-Adaptive Arbitrary Order Sliding Mode Control Strategy with High Gain Differentiation

In this work, a photovoltaic (PV) system integrated with a non-inverting DC-DC buck-boost converter is considered for extracting maximum power under varying environmental conditions such as irradiance and temperature. In order to extract maximum power (via the maximum power transfer theorem), a robust nonlinear arbitrary order sliding mode based control is designed for tracking the desired reference, which is generated via a feed forward neural network (FFNN). The proposed control law utilizes some states of the system, which are estimated via a high gain differentiator and the well-known flatness property of nonlinear systems. This synthetic control strategy is named neuro-adaptive arbitrary order sliding mode control (NAAOSMC). The overall closed-loop stability is discussed in detail, and simulations are carried out in the Simulink environment of MATLAB to demonstrate the effectiveness of the developed control strategy. Finally, the developed controller is compared with the backstepping controller, which confirms its performance in terms of maximum power extraction, steady-state error, and robustness against sudden variations in atmospheric conditions.

Introduction

Worldwide electricity demand is rising continuously, which motivates researchers to focus on energy resources that are efficient, environment-friendly, and cost-effective [1]. So far, fossil fuels have been the major contributor to fulfilling the energy needs of the world. However, they have harmful impacts on the environment, causing greenhouse effects and global warming. Therefore, to overcome these limitations, it is necessary to exploit energy resources that emit less carbon than fossil fuels [2,3]. PV-based generation is a very suitable choice among the resources described above because of its environment-friendly nature. Moreover, solar energy is free and has a low maintenance cost [4]. The use of PV panels in the energy sector is growing rapidly, with an increase of 30% per year [5]. These PV systems are used, so far, in grid-connected and standalone configurations [6]. The PV system exhibits nonlinear electrical characteristics, which are affected by environmental conditions such as irradiance and temperature. The integral backstepping controller has been found robust and effective in maximum power extraction under abruptly changing meteorological conditions; however, a significant overshoot and steady-state error have been witnessed during its implementation. The pros and cons of the developed and existing techniques are summarized in Table 1. The main challenge faced during the implementation of the controller is that the tracking error needs to converge to zero within the settling time. Moreover, the nonlinear behavior of the PV system is also a major challenge under changing environmental conditions. To resolve the problems stated above, an adaptive nonlinear Sliding Mode Control (SMC) based method is presented, which is insensitive to parameter uncertainties and internal/external disturbances [30].

Table 1. Pros and cons of the developed and existing MPPT techniques.
- Evolutionary (genetic algorithm based) techniques: these need many parameters, such as crossover rate, mutation size, and chromosome selection, whose estimation is difficult.
- Artificial Intelligence (AI) techniques: fast tracking speed and low computation requirement, but they require a large memory size and need more training time to track the MPP.
- Nonlinear controllers: efficient in tracking the MPP and robust in extracting maximum power under changing atmospheric conditions, but a significant overshoot and steady-state error have been observed in their implementation.
In this study, a nonlinear arbitrary order SMC scheme is proposed to collect maximum power from a PV system using a DC-DC buck-boost converter. This strategy is illustrated in Figure 1. With a detailed analysis of the overall closed-loop stability, a Feed Forward Neural Network (FFNN) is designed to generate the desired peak voltage, which is tracked by the proposed control law. In addition, a high gain differentiator is used to observe the system's states, which are utilized by the proposed control algorithm. The results validate the applicability of the developed control law in terms of maximum power extraction, steady-state error, and robustness against abrupt variations in atmospheric conditions compared with conventional MPPT methods. The proposed control strategy finds potential in PV power applications where one may have PV output from a system with unknown nonlinear dynamics.

The rest of the manuscript is structured as follows. In Section 2, an equivalent mathematical model of a PV system is given. The detailed modeling of the DC-DC buck-boost converter is provided in Section 3. Section 4 derives the control algorithm for changing the duty cycle of the buck-boost converter topology to ensure maximum power extraction. The simulation results are compared with standard results from the literature in Section 5, which endorses the robustness of the developed controller. Finally, concluding remarks are presented in Section 6.

PV System Modeling

A PV system consists of modules that are connected in parallel and series. PV modules are the basic building blocks of a PV system, consisting of series and parallel combinations of PV cells. In general, a single diode model is used for the modeling and simulation of a PV system [31,32]. This model is composed of a series resistance $R_s$, a shunt resistance $R_{sh}$, a diode $D$, and a current source $I_{ph}$ (see Figure 2) [33]. The PV system output current $I_{pv}$ is mathematically modeled as

$I_{pv} = N_p I_{ph} - I_d - I_{sh}, \qquad (1)$

where $I_{sh}$ is the current going through the shunt resistance, $I_d$ is the current travelling through the diode, $N_p$ is the number of parallel-linked cells, and $I_{ph}$ is the source current, which is temperature and irradiance dependent:

$I_{ph} = \big[I_{sc} + K(T - T_{ref})\big]\frac{G}{G_{ref}}. \qquad (2)$

In the above expression, $T$ signifies temperature, $K$ denotes the temperature coefficient, $G$ denotes solar irradiance, and $I_{sc}$ denotes the short circuit current at the standard solar irradiance $G_{ref}$ and temperature $T_{ref}$. Shockley's model may be used to obtain the expression for the diode current, as shown below:

$I_d = N_p I_{rs}\left[\exp\!\left(\frac{V_d}{n V_t}\right) - 1\right], \qquad (3)$

where $n$ is the ideality factor of the diode, $I_{rs}$ is the reverse saturation current of the diode, and $V_t$ is the thermal voltage given by

$V_t = \frac{kT}{q}, \qquad (4)$

where $q$ is the electron charge, $T$ is the temperature, and $k = 1.38 \times 10^{-23}\,\mathrm{J/K}$ is Boltzmann's constant. The voltage across the diode $V_d$ is

$V_d = \frac{V_{pv}}{N_s} + \frac{I_{pv} R_s}{N_p}, \qquad (5)$

where $V_{pv}$ denotes the array output voltage, $N_s$ denotes the number of series-connected cells, and $R_s$ is the series resistance.
Incorporating (4) and (5) into (3), the expression for the diode current becomes

$I_d = N_p I_{rs}\left[\exp\!\left(\frac{V_{pv}/N_s + I_{pv} R_s/N_p}{n V_t}\right) - 1\right]. \qquad (6)$

The current through the shunt resistance can be mathematically expressed as

$I_{sh} = \frac{V_{pv} N_p/N_s + I_{pv} R_s}{R_{sh}}. \qquad (7)$

Considering (2), (6), and (7), the output current of the PV system can be expressed as

$I_{pv} = N_p I_{ph} - N_p I_{rs}\left[\exp\!\left(\frac{V_{pv}/N_s + I_{pv} R_s/N_p}{n V_t}\right) - 1\right] - \frac{V_{pv} N_p/N_s + I_{pv} R_s}{R_{sh}}. \qquad (8)$

The P-V and I-V curves of the PV system are shown in Figures 3 and 4 under varying irradiance and temperature. We have modeled the current of the PV system in (8). The detailed modeling of the non-inverted DC-DC buck-boost converter topology is discussed in the subsequent section.

Modeling of the Non-Inverted DC-DC Buck-Boost Converter Topology

DC-DC converters are normally used to change the current and voltage levels provided by photovoltaic modules to those required by electrical loads [34][35][36]. The non-inverted DC-DC buck-boost converter topology [37,38] increases or decreases the output voltage of the PV system in order to reach the maximum power point voltage $V_{mpp}$. To operate the system at $V_{mpp}$, the converter is controlled by periodically changing its duty cycle with the help of a controller, as proposed in the existing literature. The duty cycle of the DC-DC converter is defined as $u = t_{on}/T$, where $T = t_{on} + t_{off}$ is the total converter switching period. Figure 5 depicts the converter topology. In Figure 5, $V_{pv}$ represents the input voltage coming from the PV system [39]. The input capacitor $C_1$ is used to limit the ripples in the input voltage of the converter, while the output capacitor $C_2$ limits the ripples in the converter's output voltage. $D_1$ and $D_2$ are diodes, and $S_1$ and $S_2$ are Insulated Gate Bipolar Transistor (IGBT) switches. $R$ and $L$ represent the resistor and the inductor of the converter, respectively [40]. Before moving into further development of the converter model, the following assumptions are made.

• Diodes and switches are considered ideal, i.e., losses are negligible.
• Converter operation is considered in continuous conduction mode (CCM).
• CCM has two switching intervals per period.

In the first interval, both switches are turned on, the diodes operate in reverse bias, and the inductor is charged from the PV voltage. Applying Kirchhoff's laws for the first switching interval, the state space equations can be derived as

$C_1\frac{dV_{pv}}{dt} = I_{pv} - i_L, \qquad L\frac{di_L}{dt} = V_{pv}, \qquad C_2\frac{dV_{C2}}{dt} = -\frac{V_{C2}}{R}. \qquad (9)$

Similarly, applying Kirchhoff's laws for the second switching interval, in which the load draws current from the inductor, the diodes operate in forward bias, and both switches are turned off, the state space equations are

$C_1\frac{dV_{pv}}{dt} = I_{pv}, \qquad L\frac{di_L}{dt} = -V_{C2}, \qquad C_2\frac{dV_{C2}}{dt} = i_L - \frac{V_{C2}}{R}. \qquad (10)$

Using the principles of capacitor charge balance and inductor volt-second balance, the averaged model over both switching intervals of the non-inverted buck-boost converter with a resistive load can be presented as

$C_1\frac{dV_{pv}}{dt} = I_{pv} - u\,i_L, \qquad L\frac{di_L}{dt} = u V_{pv} - (1-u)V_{C2}, \qquad C_2\frac{dV_{C2}}{dt} = (1-u)i_L - \frac{V_{C2}}{R}. \qquad (11)$

The output load voltage $V_R$ of the non-inverted DC-DC buck-boost converter is computed as

$V_R = \frac{u}{1-u}V_{pv}. \qquad (12)$

Considering ideal power transfer, $P_{pv} = P_R$, and eliminating the losses, the relationship between the output impedance $R$ and the input impedance $R_{pv}$ is computed using (12) as

$R_{pv} = \left(\frac{1-u}{u}\right)^{2} R. \qquad (13)$

Denoting the average values of $V_{pv}$, $i_L$, and $V_{C2}$ by $x_1$, $x_2$, and $x_3$, respectively, and the duty cycle by $u$, the final state space representation is

$\dot{x}_1 = \frac{I_{pv}}{C_1} - \frac{u}{C_1}x_2, \qquad \dot{x}_2 = \frac{u}{L}x_1 - \frac{1-u}{L}x_3, \qquad \dot{x}_3 = \frac{1-u}{C_2}x_2 - \frac{x_3}{R C_2}. \qquad (14)$

We have modeled the PV system together with the non-inverting DC-DC converter that extracts maximum power when operated appropriately. Therefore, in the forthcoming sections, a control algorithm will be designed for changing the duty cycle to achieve maximum power extraction.
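As a quick numerical check of the single-diode model above, the following Python sketch solves the implicit equation (8) for the PV output current and sweeps the I-V curve to locate the maximum power point. All parameter values are illustrative assumptions, not the module values of Table 2, and the solver choice (bisection) is ours.

```python
import numpy as np

# Assumed illustrative parameters (not the paper's Table 2 values).
q, k = 1.602e-19, 1.381e-23   # electron charge [C], Boltzmann constant [J/K]
Ns, Np = 36, 1                # series / parallel cells
Isc, Ki = 8.0, 0.0032         # short-circuit current [A], temperature coefficient [A/K]
Gref, Tref = 1000.0, 298.15   # reference irradiance [W/m^2] and temperature [K]
Irs, n = 1e-9, 1.3            # diode reverse saturation current [A], ideality factor
Rs, Rsh = 0.2, 300.0          # series / shunt resistance [ohm]

def pv_current(Vpv, G, T, iters=60):
    """Solve the implicit single-diode equation (8) for I_pv by bisection."""
    Vt = k * T / q
    Iph = (Isc + Ki * (T - Tref)) * G / Gref
    def residual(I):
        Vd = Vpv / Ns + I * Rs / Np
        Id = Np * Irs * np.expm1(min(Vd / (n * Vt), 50.0))  # clip to avoid overflow
        Ish = (Vpv * Np / Ns + I * Rs) / Rsh
        return Np * Iph - Id - Ish - I
    lo, hi = -1.0, Np * Iph + 1.0   # the residual is monotone decreasing in I
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sweep the I-V curve at standard conditions and locate the MPP.
V = np.linspace(0.0, 26.0, 500)
I = np.array([pv_current(v, G=1000.0, T=298.15) for v in V])
P = V * I
print("approx. Vmpp = %.2f V, Pmpp = %.1f W" % (V[P.argmax()], P.max()))
```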
Proposed Control Strategy for Maximum Power Extraction

The primary objective of the proposed research work is that the output voltage $V_{pv}$ of the non-inverted buck-boost converter should track a reference voltage $V_{ref}$. $V_{ref}$ is characterized as the voltage at which maximum power is obtained at each operating point, so the maximum power of the actual system can be extracted by tracking $V_{ref}$. Reference voltage tracking is achieved by continuously varying the duty cycle $u$ of the converter using an output feedback controller. The proposed control law needs the $V_{ref}$ values, which are estimated via an FFNN in the presence of varying temperature and irradiance. In addition, the controller uses the inductor current $x_2$ as known data, which is generally not available in a practical scenario. Therefore, an appropriate state estimator, together with the flatness property, is proposed in this research. The latter part of this section focuses on the arbitrary order sliding mode control strategy; we proceed by first designing the FFNN for reference voltage generation.

Reference Voltage Trajectory via FFNN

The objective is to estimate $V_{ref}$ while considering the environmental parameters, i.e., temperature and irradiance, as inputs to the FFNN block [41]. For this task, a two-layer feed forward neural network is used, whose schematic diagram is shown in Figure 6. The input data are generated by varying the irradiance (first network input) from 600 to 1000 W/m² with an increment of 1 W/m², and the temperature (second network input) from 2 to 75 °C with an increment of 2 °C. Based on these available inputs, the activations of the hidden layer are computed as

$a_j = \sum_i w_{ji} p_i + b_{j0}, \qquad (15)$

where $p_i$ is the input of node $i$, $b_{j0}$ represents the respective bias, and $j = 1, 2, \ldots, j_0$ indexes the hidden layer neurons. The outputs of the activation function $f$ (chosen as tanh) at the hidden layer, invoking $a_j$, appear as

$y_j = f(a_j) = \tanh(a_j). \qquad (16)$

Similarly, the weights of the next layer are named $w_{kj}$ (a scalar weight between the $j$-th hidden layer node and the $k$-th output layer node), with $y_j$ as the inputs from the hidden layer on which the activation of the output layer operates:

$a_k = \sum_j w_{kj} y_j + b_k, \qquad (17)$

where $k = 1$ is the number of neurons in the output layer. The estimated output voltage $\hat{V}_{ref}$, a function of the inputs and the weights between the hidden layer and the output layer, is expressed as

$\hat{V}_{ref} = \sum_j w_{kj} y_j + b_k, \qquad (18)$

or

$\hat{V}_{ref} = \sum_j w_{kj} \tanh\!\Big(\sum_i w_{ji} p_i + b_{j0}\Big) + b_k. \qquad (19)$

Equation (19) can alternatively be expressed in vector form as

$\hat{V}_{ref} = \mathbf{w}_2^{\top}\tanh\!\big(W_1\mathbf{p} + \mathbf{b}_1\big) + b_2, \qquad (20)$

where $\mathbf{p} = [p_1\; p_2]^{\top}$ collects the two network inputs, $W_1$ and $\mathbf{b}_1$ are the hidden layer weights and biases, and $\mathbf{w}_2$, $b_2$ are the output layer weights and bias. There is a maximum-power voltage on the PV module characteristic curve for each temperature and irradiance level. This maximum-power voltage is taken as the desired $V_{ref}$, i.e., the target data during the training of the FFNN. The training technique used for updating the weights of the neural network at each iteration is the Levenberg-Marquardt algorithm.

FFNN Simulation Results

The network parameters used for the estimation of $V_{ref}$ are the number of iterations and the number of hidden layer neurons. The final structure of the FFNN has ten neurons in the hidden layer. The desired $V_{ref}$ produced by the FFNN for different values of temperature and irradiance is illustrated in the 3-D plane in Figure 7. The training of the NN is evaluated using the Mean Squared Error (MSE) depicted in Figure 8. There is a significant amount of error at the start, but the error reduces to the minimum possible value as the number of epochs increases. The best training performance, with the lowest MSE of $1.0836 \times 10^{-6}$, is reached at 1000 epochs.
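A minimal numerical sketch of the two-layer network just described (a tanh hidden layer of ten neurons and a linear scalar output) is given below. The paper trains with Levenberg-Marquardt in MATLAB; here plain gradient descent is used purely for illustration, and the synthetic target stands in for the MPP voltages used as training data. All numeric values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training grid: irradiance 600-1000 W/m^2, temperature 2-75 degC (as in the text).
G = rng.uniform(600, 1000, size=2000)
T = rng.uniform(2, 75, size=2000)
P = np.stack([G / 1000.0, T / 75.0])          # 2 x N normalised inputs
V_ref = 20.0 + 2.0 * P[0] - 3.0 * P[1]        # synthetic stand-in target [V]

j0 = 10                                        # hidden neurons, per the paper
W1, b1 = rng.normal(0, 0.5, (j0, 2)), np.zeros((j0, 1))
W2, b2 = rng.normal(0, 0.5, (1, j0)), np.zeros((1, 1))

lr = 0.05
for epoch in range(1000):
    A = np.tanh(W1 @ P + b1)                   # hidden activations, Eq. (16)
    V_hat = W2 @ A + b2                        # network output, Eq. (20)
    err = V_hat - V_ref                        # 1 x N residual
    # Backpropagation for the mean squared error loss.
    dW2 = err @ A.T / err.size
    db2 = err.mean(axis=1, keepdims=True)
    dA = (W2.T @ err) * (1 - A**2)
    dW1 = dA @ P.T / err.size
    db1 = dA.mean(axis=1, keepdims=True)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = np.mean((W2 @ np.tanh(W1 @ P + b1) + b2 - V_ref) ** 2)
print(f"final training MSE: {mse:.2e}")
```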
The regression graph for the $V_{ref}$ estimation is given in Figure 9. It shows a regression value of R = 1, indicating the close resemblance of the output data to the target data. The error histogram is calculated with 20 vertical bins. Figure 10 depicts the estimation error histogram associated with $V_{ref}$, which reveals that only a very small error occurs, nearly close to zero.

Arbitrary Order Sliding Mode Control Design

In this section, the NAAOSMC technique is proposed to obtain maximum power from the PV module. The output $u$ of the controller drives the duty cycle of the switches of the DC-DC converter. For designing the control strategy, the system (14) is converted into the controllable canonical form

$\dot{y}_1 = y_2, \qquad \dot{y}_2 = \varphi(y) + \gamma(y)\,u, \qquad (22)$

expressed in terms of the output and its derivative, which is convenient for designing a control strategy. Now, the tracking error is defined as the difference between the PV voltage and its reference, i.e.,

$e = y_1 - y_{ref}, \qquad (23)$

where $y_{ref} = V_{ref}$ and the output is the PV voltage,

$y_1 = x_1 = V_{pv}. \qquad (24)$

We achieve our objective by making the tracking error converge to zero. Taking the derivative of (23) and simplifying using (24), we get

$\dot{e} = y_2 - \dot{y}_{ref}. \qquad (25)$

As the reference voltage has a fixed value, $\dot{y}_{ref} = 0$. Taking the derivative of (25) gives

$\ddot{e} = \dot{y}_2. \qquad (26)$

Now, the sliding manifold $\sigma$ is characterized in terms of the error as

$\sigma = \dot{e} + \lambda e + \int_0^t f(\tau)\,d\tau, \qquad (27)$

where $\lambda$ is a positive constant and $f(t)$ is the forcing function of the arbitrary order sliding mode design (28). Taking the time derivative of (27),

$\dot{\sigma} = \ddot{e} + \lambda\dot{e} + f(t). \qquad (29)$

Now, substituting the expressions for $\dot{e}$ and $\ddot{e}$ and posing $\dot{\sigma} = 0$, we obtain the equivalent control law, which keeps the dynamics of the system on the sliding manifold $\sigma = 0$:

$u_{eq} = -\frac{\varphi(y) + \lambda\dot{e} + f(t)}{\gamma(y)}. \qquad (30)$

Since a practical system operates under uncertainties, the equivalent control law (30) alone is not able to enforce the sliding mode. The overall control law that enforces the sliding manifold is computed as

$u = u_{eq} + u_d, \qquad (31)$

where $u_d$ is given by

$u_d = -k_1\sigma - k_2\,\mathrm{sign}(\sigma). \qquad (32)$

Here, $k_1$ and $k_2$ are positive gain constants.

Stability Analysis

The objective here is to prove the stability of the zero dynamics of the system. The AOSMC law has been designed using the dynamics of the first two equations of the system (14). The dynamics of the third equation of (14) are clearly the internal dynamics of the given PV system, i.e.,

$\dot{x}_3 = \frac{1-u}{C_2}x_2 - \frac{x_3}{R_L C_2}. \qquad (33)$

Since the control input $u$ directly affects the control-driven states $x_1$ and $x_2$, the zero dynamics of the system are obtained by putting $u = x_1 = x_2 = 0$, so one gets

$\dot{x}_3 = -\frac{x_3}{R_L C_2}. \qquad (34)$

Since the typical parameters $R_L$ and $C_2$ are positive, (34) has its pole in the left half-plane at $-1/(R_L C_2)$. This shows that the system (34) has exponentially stable zero dynamics, validating the PV system's minimum phase nature. Furthermore, to ensure sliding mode enforcement, a Lyapunov function in terms of the sliding surface is chosen as

$V = \frac{1}{2}\sigma^2. \qquad (35)$

Taking the time derivative of (35), we get

$\dot{V} = \sigma\dot{\sigma}. \qquad (36)$

Now, incorporating the components of (30) and (32), together with a lumped perturbation term $d(t)$ that accounts for the uncertainties, we get

$\dot{V} = -k_1\sigma^2 - k_2|\sigma| + \sigma\,d(t), \qquad (40)$

which becomes negative definite when

$k_2 > \sup_t |d(t)|. \qquad (41)$

From (40) and (41), one can write

$\dot{V} \le -\eta|\sigma| = -\sqrt{2}\,\eta\,V^{1/2}, \qquad \eta = k_2 - \sup_t|d(t)| > 0. \qquad (43)$

The differential inequality (43) demonstrates finite-time convergence, forcing the function $V$ to zero, i.e., $\sigma \to 0$. This ensures that the sliding mode is established and the error converges to zero.

States Estimation via High Gain Differentiator

State estimation is the process of estimating the internal states of a real system from its measured outputs; it also involves the reduction of the chattering effect in the input signal. The method used for state estimation in this paper is the High Gain Differentiator (HGD).
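The following Python sketch simulates the sliding-surface logic derived above on a stand-in second-order plant of the canonical form (22). It assumes the reconstructed reaching law (32), sets the forcing function f(t) to zero for simplicity, and uses illustrative plant constants and gains rather than the paper's Table 3 values.

```python
import numpy as np

# Illustrative plant and gains (not the paper's Table 3 values).
a, b = 100.0, 2000.0           # stand-in phi(y) = -a*y2, gamma(y) = b
lam, k1, k2 = 200.0, 50.0, 20.0
dt, steps, y_ref = 1e-4, 30000, 20.0

y1, y2 = 0.0, 0.0              # output voltage and its derivative
for _ in range(steps):
    e, e_dot = y1 - y_ref, y2          # Eq. (23); y_ref is constant so e_dot = y2
    sigma = e_dot + lam * e            # sliding surface, Eq. (27) with f(t) = 0
    phi = -a * y2
    u_eq = -(phi + lam * e_dot) / b    # equivalent control, sets sigma_dot = 0
    u_d = -k1 * sigma - k2 * np.sign(sigma)   # reaching law, Eq. (32)
    u = np.clip(u_eq + u_d, 0.0, 1.0)  # the duty cycle is physically bounded
    # Integrate the stand-in canonical model y1_dot = y2, y2_dot = phi + b*u.
    y1, y2 = y1 + dt * y2, y2 + dt * (phi + b * u)

print(f"tracked voltage: {y1:.3f} V (reference {y_ref} V)")
```

The run drives the output to approximately the reference value; the small residual chatter around the surface is the expected effect of the discontinuous sign term in (32).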
HGD is quite useful in estimating the derivatives of the states ($x_1$, $x_2$, $x_3$, $I_{pv}$) of the given PV system. The HGD equations [42] for estimating the derivatives of the system states are

$\dot{\hat{y}}_1 = \hat{y}_2 + \frac{\alpha_1}{\epsilon}\,(y_1 - \hat{y}_1), \qquad \dot{\hat{y}}_2 = \frac{\alpha_2}{\epsilon^2}\,(y_1 - \hat{y}_1), \qquad (44)$

where $0 < \epsilon < 1$ and $\alpha_1$, $\alpha_2$ are the gains of the HGD. The derivatives estimated by the HGD are used further, by invoking the flatness property, for the recovery of the unknown states.

Definition 1. A flat system is a system whose states, parameters, and inputs can be expressed in terms of the flat outputs and their derivatives; for example,

$x_i = \psi_i\big(y, \dot{y}, \ldots, y^{(r)}\big), \qquad u = \chi\big(y, \dot{y}, \ldots, y^{(r)}\big), \qquad (45)$

where the $x_i$ and $u$ are the states and input of the system and $y, \dot{y}, \ldots, y^{(r)}$ are the flat output and its derivatives. In general, the measurement of $x_2$ is not available; therefore, using the HGD and the flatness property, one can estimate its value from the first equation of (14) as

$\hat{x}_2 = \frac{I_{pv} - C_1\,\dot{\hat{x}}_1}{u}. \qquad (46)$

Simulation Results and Discussion

The simulations are performed in the Simulink environment of MATLAB (R2018b) to check the applicability of the developed HGD-based AOSMC. The solar array system is connected to the load through the DC-DC buck-boost converter. The solar array used for the simulation in this study has 16 PV modules. The specification parameters of a single PV module are given in Table 2, and the specifications of the buck-boost converter and the designed constants of the proposed controller are listed in Table 3.

Table 3. MATLAB simulation parameters (parameters, values, units).

The results of this section are presented and discussed in two parts: the results under varying irradiance levels are given in Section 5.1, and the results under varying temperature levels are shown in Section 5.2.

Results under Varying Irradiance

To perform the simulation for varying irradiance levels, the temperature was kept constant at 25 °C, and the irradiance was changed as shown in Figure 11. The reference voltage $V_{ref}$ generated by the FFNN for the changing irradiance profile was tracked by the proposed AOSMC. It can be seen from Figure 12 that $V_{ref}$ was effectively tracked by our developed controller in 0.01 s with a slight overshoot of 3 V. Similarly, in Figure 13, the PV system output power is presented together with the reference power curves for the varying irradiance levels, which shows that the MPP was attained with minor oscillations within a short time of 0.02 s.

Results under Varying Outdoor Temperature

In this case, the solar irradiance is kept fixed at 1000 W/m², and the outdoor temperature is continuously changed; the obtained observations are depicted in Figure 14. It is clearly seen that the voltage generated by the FFNN for the changing outdoor temperature profile is successfully tracked by the developed AOSMC, although an overshoot of 7 V was observed (see Figure 15). Similarly, for varying temperature levels, the PV system output power is presented in Figure 16, which shows that the maximum power was obtained in 0.01 s without oscillations. From both of the above cases, it is evident that the proposed AOSMC can extract the maximum power, which is transmitted to the load with 98% efficiency under changing temperature and irradiance levels. To evaluate the performance of the developed controller, the backstepping controller [26] is considered as a benchmark under similar changing conditions.

Comparison Results under Varying Irradiance

A comparative evaluation of the developed controller against the backstepping controller was made, as shown in Figure 17, using the same irradiance variation as given in Figure 11.
We observed that the proposed controller tracked the MPP in 0.01 s, whereas the backstepping controller attained it in 0.025 s (see the zoomed view in Figure 17). It can also be noted that the developed controller's rise time was 0.002 s, which is less than that of the backstepping controller. The comparison of the solar array system output power is depicted in Figure 18, which shows that the proposed AOSMC had lower power loss and was 5% more efficient than the backstepping controller.

Comparison Results under Varying Temperature

In this case, using the same temperature variation as in Figure 14, a comparison was made between the proposed AOSM controller and the backstepping controller, as displayed in Figure 19. The figure shows that the proposed controller extracted the maximum power in 0.01 s, compared with 0.02 s for the backstepping controller during the changing temperature levels. The output power comparison of both controllers is shown in Figure 20. The proposed AOSMC transmits maximum power to the load with 97% efficiency, and thus outperforms the existing backstepping controller.

Conclusions

In this article, we proposed an HGD-based arbitrary order sliding mode nonlinear MPPT control design for a PV system. The PV module was connected to the load using a non-inverting DC-DC buck-boost converter. FFNNs were used to produce the reference voltage, and the proposed controller was then used to track it. The HGD technique was used to estimate the internal states of the system, and the observer also used the flatness property to recover the non-observable state. The simulation results, obtained in the Simulink environment of MATLAB, show that the proposed controller performs well under changing atmospheric conditions. The simulation results of the AOSMC were compared with those of the backstepping control technique under abrupt variations in temperature and irradiance. The results verify that the AOSMC outperforms the existing backstepping controller. Hence, we conclude that the proposed controller is validated in terms of efficiency and effectiveness, with improved robustness.
Recognition of damage-associated molecular patterns related to nucleic acids during inflammation and vaccination

All mammalian cells are equipped with large numbers of sensors for protection from various sorts of invaders, which, in turn, are equipped with molecules containing pathogen-associated molecular patterns (PAMPs). Once these sensors recognize non-self antigens containing PAMPs, various physiological responses, including inflammation, are induced to eliminate the pathogens. However, the host sometimes suffers from chronic infection or continuous injuries, resulting in the production of self-molecules containing damage-associated molecular patterns (DAMPs). DAMPs are also responsible for the elimination of pathogens, but promiscuous recognition of DAMPs through sensors against PAMPs has been reported. Accumulation of DAMPs leads to massive inflammation and continuous production of DAMPs; that is, a vicious circle leading to the development of autoimmune disease. From a vaccinological point of view, the accurate recognition of both PAMPs and DAMPs is important for vaccine immunogenicity, because vaccine adjuvants are composed of several PAMPs and/or DAMPs, which are also associated with severe adverse events after vaccination. Here, we review the roles of PAMPs and DAMPs upon infection with pathogens or inflammation, the sensors responsible for recognizing them, and their relationship with the development of autoimmune disease and the immunogenicity of vaccines.

INTRODUCTION

Host cells are equipped with numerous types of receptors to discriminate self from non-self. When cells are attacked by infectious pathogens, host cellular receptors such as Toll-like receptors (TLRs), nucleotide oligomerization domain (NOD)-like receptors (NLRs), retinoic acid-inducible gene-I (RIG-I)-like receptors (RLRs), C-type lectin receptors, and other non-classified receptors recognize pathogen-associated molecular patterns (PAMPs), small molecular motifs conserved amongst microbes. Through the recognition of PAMP molecules, innate immune responses are induced, and inflammatory cytokines are produced that aid in the elimination of the pathogens. However, in some circumstances host inflammatory responses can cause host cell death leading to tissue injury and the release of host cellular components into the extracellular environment. These cellular components can be considered "messengers" of danger; they are also known as "damage-associated molecular patterns" (DAMPs). DAMPs include lipids, sugars, metabolites, and nucleic acids such as RNA and DNA species. DAMPs are important for the elimination of pathogens, but are also implicated in the development of autoimmune disease and chronic inflammatory disease, and are used as adjuvants for vaccines. Interestingly, a high number of PAMP receptors also recognize endogenous DAMPs and can augment inflammatory responses against pathogens, whereas continuous inflammatory responses owing to impaired regulation of inflammatory signaling result in chronic inflammatory disease or autoimmune disease. Therefore, "bipolar sensors" for both PAMPs and DAMPs appear to be mostly responsible for dysregulated inflammation. Here, we describe the various types of DAMPs and their receptors, with a special focus on nucleic acids as DAMPs.

LIPOPOLYSACCHARIDE (LPS)

A representative lipid for the induction of inflammatory responses is LPS, a PAMP present in gram-negative bacteria.
Upon recognition by TLR4, LPS promotes the production of various inflammatory cytokines following bacterial infection (Table 1). However, Shi et al. reported that TLR4 also recognizes endogenous fatty acids and can activate inflammatory responses in adipocytes and macrophages (Shi et al., 2006). In addition, TLR4-deficient mice developed reduced inflammatory cytokine production in response to a high fat diet (Shi et al., 2006). Previous studies have revealed that saturated fatty acids are released from hypertrophied adipocytes in the presence of macrophages, and that the released fatty acids are sensed by macrophages in a TLR4-dependent manner, followed by excessive production of inflammatory cytokines such as tumor necrosis factor (TNF)-α (Suganami et al., 2007). Because the production of pro-inflammatory or inflammatory cytokines is dysregulated in obese adipose tissues, obesity can be thought of as a chronic inflammatory disease caused by fatty acids acting as DAMP molecules (Berg and Scherer, 2005).

SERUM AMYLOID A PROTEIN (SAA)

Some lipoproteins can also act as DAMP molecules. In 1982, Hoffman and Benditt revealed that the treatment of mice with LPS of Salmonella typhosa increased SAA levels (Hoffman and Benditt, 1982). According to several studies, SAA functions in cholesterol transport as well as in the production of proinflammatory cytokines, suggesting that SAA is a DAMP molecule that responds to bacterial endotoxins (Banka et al., 1995; He et al., 2003). In support of this, increased levels of SAA may be closely related to various diseases such as atherosclerosis, rheumatoid arthritis, and Crohn's disease (Chambers et al., 1983, 1987; Malle and De Beer, 1996). SAA binds to two receptors, TLR4 and TLR2, which also recognize bacterial PAMP molecules such as triacyl lipopeptides (in cooperation with TLR1) and diacyl lipopeptides or lipoteichoic acids (together with TLR6) (Schwandner et al., 1999; Takeuchi et al., 2001, 2002; Cheng et al., 2008; Hiratsuka et al., 2008) (Table 1). Recently, Loser et al. showed direct evidence that the locally produced DAMP molecules myeloid-related protein-8 (Mrp8) and Mrp14 induced autoreactive CD8+ T cells and systemic autoimmunity through TLR4 signaling in mice (Loser et al., 2010). Taken together, these findings suggest that TLR4 may be a key receptor in the discrimination of lipid PAMPs from lipid DAMP molecules, because promiscuous recognition of lipids via TLR4 unfortunately causes inflammatory disease. Although a consensus recognition structure for TLR4 has not yet been identified, antagonists of TLR4 signaling by lipid DAMPs might be candidate drugs for the treatment of chronic inflammatory disease.

SUGAR-RELATED DAMPs

Hyaluronic acid (HA) is a non-sulfated linear polysaccharide and a major component of the extracellular matrix. Weigel et al. revealed that HA is induced and degraded during inflammatory responses and that it functions in immune cell activation and new blood vessel formation (Weigel et al., 1986). Interestingly, small molecular weight HA (sHA), produced by the degradation of HA during inflammation, can induce the maturation of dendritic cells (DCs) for pathogen elimination (Termeer et al., 2002). Bone marrow-derived DCs from mice expressing non-functional TLR4 could not be activated by sHA, while DCs from TLR2-deficient mice retained the ability for sHA-mediated activation.
This suggests that sHA can act as a DAMP molecule signaling through TLR4 to induce DC maturation upon pathogen infection (Termeer et al., 2002). Consistent with this, excessive sHA levels appear to be closely associated with inflammatory autoimmune diseases such as rheumatoid arthritis, sarcoidosis, systemic sclerosis, and pancreatic cancer (Hallgren et al., 1985; Witter et al., 1987; Sugahara et al., 2006; Yoshizaki et al., 2008) (Table 1).

URIC ACID

Uric acid is a metabolite of purine nucleotides and free bases in humans and other primates, and it functions as an antioxidant to protect erythrocyte membranes from lipid oxidation (Kellogg and Fridovich, 1977). However, it was previously shown that soluble uric acid induced inflammatory cytokines such as monocyte chemoattractant protein-1 in rat vascular smooth muscle cells (Kanellis et al., 2003). Shi et al. also reported that uric acid is produced in ultraviolet-irradiated BALB/c 3T3 cells and activates DCs (Shi et al., 2003). In addition, high levels of uric acid in the blood are associated with the development of hyperuricemia and gout (Johnson et al., 2005), suggesting that it acts as a DAMP during cell injury and can induce inflammatory responses that are related to autoinflammatory diseases such as gout (Table 1). Receptors that recognize uric acid have been reported: Liu-Bryan et al. revealed that TLR2, TLR4, and their adaptor molecule MyD88 are important for uric acid-mediated inflammation (Liu-Bryan et al., 2005). In contrast, the uric acid-mediated activation of DCs was shown to be TLR4-independent, suggesting the possible existence of other receptors that recognize uric acid in addition to TLR2 and TLR4 (Shi et al., 2003). To solve this question, Martinon et al. demonstrated that uric acid can be sensed by another receptor, NOD-like receptor family, pyrin domain-containing 3 (NLRP3), inducing interleukin (IL)-1β production through caspase-1 activation (Martinon et al., 2006). NLRP3 is a member of the NLR family and a component of the inflammasome, a platform that induces IL-1β and IL-18 production. NLRP3 senses various types of pathogen infections or irritants such as Candida albicans, Legionella pneumophila, Listeria monocytogenes, malaria hemozoin, alum, silica, and asbestos, as well as uric acid (Kanneganti et al., 2006; Martinon et al., 2006; Dostert et al., 2008, 2009; Eisenbarth et al., 2008; Gross et al., 2009). Collectively, these results reveal that NLRP3 is a promiscuous receptor that senses PAMPs and DAMPs and can induce inflammatory responses.

ADENOSINE TRIPHOSPHATE (ATP)

ATP is an essential purine nucleotide required for almost all physical responses such as glucose metabolism, muscle contraction, biosynthesis, and molecular transfer. However, extracellular ATP from injured cells or non-apoptotic cells also serves as a danger signal through the activation of NLRP3 and caspase-1 (Communi et al., 2000). Previous detailed research has shown the importance of other ion channel molecules, namely P2X7 and pannexin-1, in inducing extracellular ATP-mediated caspase-1 activation followed by IL-1β maturation (Ferrari et al., 2006; Kanneganti et al., 2007). The formation of the NLRP3 inflammasome requires an adaptor molecule, apoptosis-associated speck-like protein containing a carboxy-terminal caspase recruitment domain (ASC).
ASC-deficient mice cannot activate caspase-1 and thus do not produce mature IL-1β following exposure to large amounts of ATP, suggesting that ATP-mediated IL-1β production is dependent on the NLRP3 inflammasome (Mariathasan et al., 2004). However, although extracellular ATP has been suggested to act as a DAMP molecule, the high amounts of extracellular ATP that act as DAMPs in vitro have not been clearly correlated with physiological conditions in vivo. Eckle et al. suggested that most extracellular ATP might be immediately hydrolyzed by ectonucleotidases (Eckle et al., 2007). Taken together, investigation into the roles of extracellular ATP in inducing pathological and immune responses in vivo may provide important clues regarding the mechanism underlying inflammation induced by DAMP molecule recognition or the development of inflammatory diseases.

NUCLEIC ACID-RELATED DAMPs

UNMETHYLATED CpG MOTIF AND GENOMIC DNA

As described above, uric acid and ATP are products of purine metabolism. Nucleic acid bases such as adenine and guanine are also purine metabolites. Nucleic acids exist in all organisms, including pathogens, and function as a store of genetic information for protein translation and synthesis. Bacterial genomic DNA can be recognized as a PAMP, as it contains unmethylated CpG motifs whose frequency is higher in genomic DNA derived from pathogens than in that of vertebrates. The earliest research related to bacterial genomic DNA as a PAMP was reported more than a hundred years ago. Bruns et al. investigated heat-killed gram-negative or gram-positive bacteria as an immunotherapeutic agent for cancer, termed Coley's toxin (Swain, 1895). Although LPS is a major factor mediating its anti-tumor effects, other factors may be connected with its physiological function, as gram-positive bacteria do not express LPS. A hundred years on from the discovery of Coley's toxin, several studies showed that bacterial DNA can activate natural killer (NK) cells or B cells, suggesting that the bacterial genomic DNA in Coley's toxin could contribute to its antitumor activity by stimulating NK cells (Shimada et al., 1986; Messina et al., 1991). Krieg et al. further revealed that bacterial genomic DNA contains unmethylated CpG motifs that can stimulate B cells and NK cells and induce inflammatory cytokine production. Interestingly, methylated bacterial DNA failed to stimulate immune cells, indicating that unmethylated CpG motifs may act as PAMP molecules (Krieg et al., 1995; Klinman et al., 1996). However, whether genomic DNA containing methylated CpG motifs is incapable of innate immune activation remains controversial. In 1962, Glasgow et al. reported that ultraviolet-inactivated vaccinia virus, a DNA virus, induced IFN production in mouse cells (Glasgow and Habel, 1962). In addition, Suzuki et al. showed that viral DNA, vertebrate DNA, and bacterial DNA induced the upregulation of major histocompatibility complex (MHC) class I expression and the type I IFN-related activation of transcription factors such as STAT3 in rat thyroid cells, suggesting that genomic DNA also activates innate immune signaling in a CpG-motif-independent manner (Suzuki et al., 1999). Interestingly, the structure of DNA strongly affects DNA-mediated innate immune activation. Double-stranded, right-handed B-form DNA, but not left-handed Z-form DNA, strongly induced type I IFN production. Genomic DNA has a high content of B-form DNA, indicating that it may also function as a PAMP or DAMP (Ishii et al., 2006).
Mitochondrial DNA has also been reported to function as a DAMP molecule. Zhang et al. reported that cellular injury causes the release of mitochondrial DNA and induces systemic inflammatory responses via p38 MAPK activation in a TLR9-dependent manner. In addition, trauma patients had higher amounts of mitochondrial DNA than did healthy volunteers, suggesting that mitochondrial DNA could be considered a marker of inflammatory disease. When the clearance of mitochondrial DNA by autophagy was inhibited, IL-1β production was augmented via the NLRP3 inflammasome through caspase-1 activation, indicating that the DAMP activity of mitochondrial DNA is regulated by autophagy to suppress erroneous activation of innate immunity (Nakahira et al., 2011). Indeed, it has been revealed that autophagy negatively regulates RNA-mediated type I IFN production, possibly to maintain cellular homeostasis (Jounai et al., 2007).

CORRELATION BETWEEN AUTOIMMUNE DISEASE AND DNA DAMPs

Both DNA and RNA can function as PAMPs and DAMPs, and are closely connected with inflammatory responses and the development of inflammatory disease. Direct evidence for DNA acting as a DAMP was shown using DNase-deficient mice. DNase I is present in extracellular compartments such as the sera and urine, and functions to degrade single-stranded DNA (ssDNA), double-stranded DNA (dsDNA), or chromatin, which are released from damaged or necrotic cells. Napirei et al. constructed DNase I-deficient mice and reported that they presented with the classical symptoms of systemic lupus erythematosus (SLE) and glomerulonephritis (Napirei et al., 2000). In addition, DNase II-deficient mice showed a similar phenotype to DNase I knockout mice. DNase II in the lysosomes of macrophages degrades DNA from apoptotic cells and nuclear genomic DNA from liver erythroblasts. Interestingly, DNase II-deficient mice presented with lethal anemia owing to high levels of type I IFN production, caused by the accumulation of non-degraded genomic DNA in liver macrophages (Yoshida et al., 2005). In support of this, DNase II and type I IFN receptor (IFNR-α/β) double knockout mice showed a non-lethal phenotype, but developed rheumatoid arthritis-like symptoms (Kawane et al., 2006), which could be attenuated by anti-TNF-α antibody treatment. This suggests that the accumulation of genomic DNA in macrophages induces inflammatory cytokines, including type I IFNs and TNF-α, and that the synergistic action of these inflammatory cytokines results in lethal systemic inflammation (Kawane et al., 2006). Furthermore, studies on DNase III, also known as TREX1, also revealed that DNA can function as a DAMP. TREX1 is the major 3′→5′ DNA exonuclease for DNA editing in DNA replication and DNA repair. Morita et al. showed that trex1-deficient mice had a reduced survival rate owing to high susceptibility to inflammatory myocarditis, although the null mice showed no spontaneous mutations or tumor development (Morita et al., 2004). To explain why trex1-deficient mice develop inflammatory myocarditis, Crow et al. demonstrated that mutations in the trex1 gene that abolish TREX1 enzyme activity are responsible for the development of Aicardi-Goutieres syndrome (AGS), a severe neurological brain disease with high levels of IFN-α in the cerebrospinal fluid or serum, suggesting that TREX1 is a suppressor of DNA DAMP-mediated inflammatory responses (Crow et al., 2006).
Furthermore, it was previously shown that the ablation of interferon regulatory factor 3 (IRF3) or IFN-α receptor 1 ameliorated the AGS-like symptoms in trex1-deficient mice (Stetson et al., 2008). Collectively, these findings suggest that the dysregulation of self-DNA results in severe inflammatory responses, such as high levels of type I IFNs, leading to autoinflammatory disease.

NUCLEIC ACID SENSORS

Host cells are equipped with numerous types of receptors to recognize nucleic acids as PAMPs or DAMPs. These receptors function to protect the host from pathogen infection, but may also cause autoimmune disorders by inducing the constitutive activation of inflammatory responses (Figure 1). In this section, we introduce the well-characterized nucleic acid sensors.

TLRs

A large body of research exists demonstrating the TLR-mediated sensing of nucleic acids. TLR3 preferentially senses double-stranded RNA (dsRNA) species, which can originate from some viruses, and TLR3 is associated with the induction of innate immunity in response to infection with West Nile virus, respiratory syncytial virus, and encephalomyocarditis virus (Wang et al., 2004; Groskreutz et al., 2006; Hardarson et al., 2007) (Figure 2). In addition, the artificial dsRNA poly(I:C) has been well characterized as a ligand for TLR3. Although pathogen-related dsRNAs act as PAMPs, Kariko et al. reported that host messenger RNA can be sensed by TLR3 to induce inflammatory responses (Kariko et al., 2004). RNA released from necrotic cells can also elicit type I IFN production, suggesting that host RNA might function as a DAMP upon cellular injury (Kariko et al., 2004). TLR7 and TLR8 recognize single-stranded RNA (ssRNA) and induce anti-viral innate immune responses against influenza virus or vesicular stomatitis virus (Lund et al., 2004) (Figure 2). Despite their common ligands, the cellular and tissue distribution of TLR7 expression contrasts with that of TLR8. Human TLR7 is highly expressed in plasmacytoid DCs, which preferentially induce type I IFN production, and is expressed at lower levels in myeloid cells. Conversely, the level of TLR8 expression is higher in monocytes and in monocyte-derived DCs than in plasmacytoid DCs (Hornung et al., 2002). Furthermore, mouse TLR8 did not respond to ssRNA, but human TLR8 did, suggesting that TLR8 might be inactivated in mice, although several papers have also linked mouse TLR8 with neuronal apoptosis and autoimmunity (Heil et al., 2004; Gorden et al., 2006; Ma et al., 2006). In addition to the recognition of PAMPs, Vollmer et al. revealed that promiscuous recognition through TLR7 or TLR8 contributes to the development of SLE with high levels of type I IFN and TNF-α production (Vollmer et al., 2005). Because the sera of SLE patients contain high levels of autoantibodies against self-antigens, such as small nuclear ribonucleoprotein particles (snRNPs) including ssRNA, TLR7 or TLR8 can recognize the immunocomplex of snRNPs with autoantibodies through Fc receptor-mediated internalization (Vollmer et al., 2005). Interestingly, TLR7 appears to be a specific sensor for the induction of type I IFN production from plasmacytoid DCs, whereas TLR8 is specific for TNF-α production from monocytes in SLE patients, suggesting that plasmacytoid DCs and monocytes collaborate to develop inflammatory responses in SLE via distinct sensors. TLR9 senses ssDNA containing unmethylated CpG motifs.
Previous studies have revealed that TLR9 recognizes genomic DNA from pathogens such as murine cytomegalovirus and herpes simplex virus type 1 or type 2 as PAMPs (Hemmi et al., 2000; Lund et al., 2003; Krug et al., 2004a,b) (Figure 2). With regard to the development of autoinflammatory disease, TLR9 has also been reported to recognize self-antigens complexed with autoantibodies. Leadbetter et al. revealed that autoreactive B cells were activated by a chromatin-autoantibody complex in a TLR9- and MyD88-dependent manner (Leadbetter et al., 2002). In addition, self-DNA-containing immune complexes, which are a well-characterized marker for SLE, were recognized by TLR9 through FcγRIIA-mediated internalization in plasmacytoid DCs (Means et al., 2005). Thus, immune complexes containing self-DNA may signal as DAMPs through TLR9, although extracellular receptors such as FcγRIIA may be required for the delivery of autoimmune complexes to the TLR9-localizing compartment.

FIGURE 1 | Autoimmune disorders may be induced by promiscuous sensing of nucleic acids.

As described previously, the subcellular localization of TLRs is important for the recognition of DNA, because TLR3, 7, 8, and 9 localize to the endosomal compartment. Previous studies identified three adaptor molecules, Unc93B1, PRAT4A, and gp96, which are important for the trafficking of TLRs to the sites where they sense their ligands. Unc93B1 functions to control the trafficking of TLRs 3, 7, and 9 from the endoplasmic reticulum (ER) to the endosome. PRAT4A is localized in the ER and acts as a regulator of the subcellular distribution of most TLRs except for TLR3. Gp96 is a member of the heat shock protein (HSP) 90 family and resides in the ER, where it controls the maturation of TLRs 2, 4, 5, 7, and 9 (Saitoh and Miyake, 2009). Because TLR7 and TLR9 are regulated by the same molecular machinery, the crosstalk between TLR7 and TLR9 may affect the sensing of auto-nucleic acids and the development of autoinflammatory disease. Christensen et al. showed that a deficiency of TLR9 results in malignant symptoms in a mouse model of lupus, despite downregulated levels of antibody production specific for DNA and chromatin (Christensen et al., 2005). In contrast, TLR7-deficient mice developed attenuated lupus symptoms (Christensen et al., 2006). In addition, a recent study revealed that TLR9 suppresses the progression of autoinflammatory disease by antagonizing TLR7, suggesting that TLR9 counteracts TLR7 upon the recognition of self-immunocomplexes containing ssRNA or ssDNA (Nickerson et al., 2010). Supporting an interaction between TLR7 and TLR9 in the development of autoimmune disease, Fukui et al. generated Unc93B1 D34A/D34A knock-in mice and showed that TLR9 competes with TLR7 for binding to Unc93B1 in the healthy state, whereas TLR7 is constitutively activated in autoinflammatory responses because TLR9 has a lower affinity for the mutant Unc93B1 D34A protein (Fukui et al., 2011).

RIG-I-LIKE RECEPTORS (RLRs)

Although TLRs can sense both non-self and self nucleic acids, fibroblasts and endothelial cells that do not express TLRs also produce type I IFNs in response to infection with pathogens, indicating the existence of other receptors that sense nucleic acids. Yoneyama et al. determined that a cytoplasmic DExD/H box RNA helicase, RIG-I, senses infection by RNA viruses as well as artificial dsRNA and induces innate antiviral immune responses mediated by type I IFNs (Yoneyama et al., 2004) (Figure 2).
In addition to RIG-I, melanoma differentiation-associated gene 5 (MDA5) and laboratory of genetics and physiology-2 (LGP2) were also identified; these receptors were classified as RLRs because their protein structures are similar to that of RIG-I (Yoneyama et al., 2005). To induce an anti-pathogen immune response, the CARD domains of RIG-I and MDA5 transmit downstream signals through homophilic interactions with the CARD adaptor molecule IFN-β promoter stimulator-1 (IPS-1, also known as MAVS, Cardif, or VISA) (Kawai et al., 2005; Meylan et al., 2005; Seth et al., 2005; Xu et al., 2005). The function of LGP2 is controversial. Some in vitro studies showed that LGP2 negatively regulates RIG-I- or MDA5-mediated innate immune responses by competing for binding to their RNA ligands (Yoneyama et al., 2005; Bamming and Horvath, 2009). However, in vivo studies using lgp2-deficient mice revealed that LGP2 is a cofactor of RLR-mediated innate immune signaling (Venkataraman et al., 2007; Satoh et al., 2010). RLRs sense pathogen-derived RNA species as PAMPs to induce type I IFN production, while MDA5 has been detected as an autoantigen in clinically amyopathic dermatomyositis patients (Sato et al., 2009; Nakashima et al., 2010). Although it is not clear how extracellular MDA5 is produced, the accumulation of immunocomplexes containing MDA5 is a marker for the frequency of rapidly progressive interstitial lung disease (Sato et al., 2009; Nakashima et al., 2010). Accompanying these observations, loss-of-function single nucleotide polymorphisms have been found in RIG-I and IPS-1 that are closely related to the development of autoimmune disease (Pothlichet et al., 2011), suggesting that inhibition of RLR signaling may be important in the progression of autoimmune disease. However, as described earlier, excessive production of inflammatory cytokines including type I IFNs appears to result in autoinflammatory disease, whereas the dysfunction of RLRs induces poor type I IFN production yet also leads to autoimmune disease (Nakashima et al., 2010; Pothlichet et al., 2011). One possibility to explain this phenomenon is that non-functional RLRs result in increased susceptibility to various types of virus infection, and the subsequent virus-mediated cell death may cause the release of DAMPs and signaling through DAMP receptors. In support of this possibility, the loss of MDA5 function increased the susceptibility of beta cells to viral infection with picornaviruses such as encephalomyocarditis virus-D and resulted in type 1 diabetes, a disease often caused by virus infection or autoimmunity (Colli et al., 2010; McCartney et al., 2011). Further analyses are required to elucidate the cross-talk between RLR signaling and the development of autoimmune disease.

ABSENT IN MELANOMA 2 (AIM2)-LIKE RECEPTORS (ALRs)

Although various NLR family members had been identified that can induce the activation of caspase-1 and the maturation of IL-1β, IL-18, and IL-33 in response to a wide range of PAMP and DAMP molecules, no sensor of intracellular dsDNA for IL-1β maturation was known. However, four research groups concurrently reported a role for the novel intracellular DNA sensor AIM2 in the activation of caspase-1 followed by IL-1β production (Burckstummer et al., 2009; Fernandes-Alnemri et al., 2009; Hornung et al., 2009; Roberts et al., 2009). AIM2 belongs to a family of hematopoietic interferon-inducible nuclear proteins with a 200-amino acid repeat (HIN-200), known as the p200 or PYHIN family.
Currently, four HIN-200 family molecules have been identified in humans, and six in mice. HIN-200 family molecules share similar structural features, including a pyrin domain at the NH2 terminus and a HIN-200 domain at the COOH terminus. Similar to the role of NLRP3 in IL-1β production, AIM2 nucleates oligomerization of an inflammasome upon DNA binding. The AIM2 inflammasome recruits ASC, an essential adaptor molecule, and induces inflammasome formation through homophilic interactions between the pyrin domain in AIM2 and that in ASC (Figure 2). The importance of the AIM2 inflammasome in PAMP recognition has been confirmed by infection experiments using aim2-deficient macrophages infected with Francisella tularensis, L. monocytogenes, vaccinia virus, herpes simplex virus-1, and mouse cytomegalovirus (Fernandes-Alnemri et al., 2010; Rathinam et al., 2010). A second ALR, interferon-inducible protein 16 (IFI16) in humans (a homologue of p204 in mice), has also been investigated as an intracellular dsDNA sensor. However, while AIM2 induces IL-1β production in response to intracellular dsDNA binding, IFI16 mediates type I IFN production upon recognition of intracellular dsDNA (Unterholzner et al., 2010). Although IFI16 also contains a pyrin domain, the pyrin domain of IFI16 is quite distinct from that of AIM2, as it has a lower affinity for ASC. Consistent with these different features, IFI16-mediated type I IFN production upon intracellular dsDNA stimulation was not affected by ASC deficiency, suggesting that the two HIN-200 family molecules separately regulate IL-1β and type I IFN production upon the recognition of intracellular dsDNA (Unterholzner et al., 2010). Although AIM2-mediated signaling appears to be distinct from IFI16-mediated type I IFN production, recent research has revealed that IFI16 negatively regulates the AIM2-mediated activation of caspase-1 (Veeranki et al., 2011). As increased inflammatory cytokine production is closely related to the development of autoinflammatory disease, the balance between AIM2-mediated innate immune signaling and IFI16 might be deregulated in patients with autoimmune disease. Roberts et al. identified p202 and AIM2 as cytosolic DNA-binding proteins in mice. p202 is another ALR molecule, but one without a pyrin domain, indicating an inability to bind ASC for inflammasome formation (Roberts et al., 2009). p202 appears to be a negative regulator of AIM2-mediated signaling, as the reduction of p202 results in higher AIM2-mediated activation of caspase-1 in response to intracellular DNA. However, elevated levels of p202 have been reported to induce SLE-like symptoms in mice (Rozzo et al., 2001). Interestingly, p202 levels vary among mouse strains, while AIM2 is expressed at the same level, indicating that p202 expression is tightly correlated with SLE development. Furthermore, Panchanathan et al. revealed that ablation of the aim2 gene leads to higher expression of p202 and type I IFNs in mice, and that aim2-deficient mice are prone to SLE (Panchanathan et al., 2010). Taken together, these findings suggest that mouse p202 might be functionally homologous to human IFI16. In support of this, expression levels of IFI16 and anti-IFI16 autoantibodies were dramatically increased in SLE patients, indicating that IFI16 has features similar to those of p202 (Mondini et al., 2006). A recent article described a correlation between psoriasis symptoms and AIM2 activation.
Psoriasis is a chronic autoinflammatory disease caused by increased IL-1β production leading to Th17 cell maturation (Ghoreschi et al., 2010). Dombrowski et al. observed increased levels of cytosolic DNA fragments, which can be sensed by AIM2, in skin lesions from psoriatic patients (Dombrowski et al., 2011). Interestingly, those DNA fragments, which might be released from damaged cells in psoriatic skin lesions, were internalized through binding to the antimicrobial peptide LL-37 (Dombrowski et al., 2011). Previous studies have shown that the complex of self-DNA with LL-37 can activate plasmacytoid DCs to produce type I IFNs, and this complex-mediated type I IFN production is closely related to skin lesion development in psoriasis (Nestle et al., 2005; Lande et al., 2007). AIM2 is an interferon-inducible gene, suggesting a model in which LL-37 complexes with self-DNA activate plasmacytoid DCs to produce type I IFNs, the subsequent upregulation of AIM2 leads to IL-1β production, and psoriatic skin lesions finally occur because of the increased levels of type I IFN as well as IL-1β production. HIGH MOBILITY GROUP BOX 1 (HMGB1) HMGB1 has been reported to be a major DAMP molecule. Goodwin et al. first identified HMGB1 in calf thymus chromatin as a non-histone DNA-binding protein (Goodwin et al., 1973). Later, Wang et al. showed that a mouse macrophage cell line released HMGB1 in response to LPS stimulation. In addition, LPS-treated mice developed increased serum levels of HMGB1, similar to human patients with sepsis, suggesting that HMGB1 acts as a DAMP molecule in sepsis (Wang et al., 1999). Accumulating evidence suggests that cellular injury results in the release of HMGB1, leading to inflammation (Abraham et al., 2000; Scaffidi et al., 2002). Consistent with these observations, numerous studies have shown a correlation between HMGB1 and autoimmune/inflammatory diseases such as atherosclerosis, diabetes, SLE, rheumatoid arthritis, and Sjögren syndrome (Taniguchi et al., 2003; Porto et al., 2006; Urbonaviciute et al., 2008; Devaraj et al., 2009). As described previously, higher serum levels of immunocomplexes of self-DNA with autoantibodies are a hallmark of SLE. Previous research has shown that HMGB1 is also contained in these immunocomplexes and can elicit inflammatory cytokine production, suggesting that HMGB1 may be a carrier of DNA DAMPs (Tian et al., 2007; Urbonaviciute et al., 2008). Furthermore, HMGB1 appears to bind promiscuously to numerous molecules such as LPS, IFN-γ, IL-1β, and CXCL12 to induce synergistic physiological responses (Sha et al., 2008; Youn et al., 2008; Campana et al., 2009). Moreover, HMGB1 can sense pathogen-derived nucleic acids, which induce type I IFN production (Yanai et al., 2009). Collectively, HMGB1 might be a promiscuous carrier that enhances innate immune responses against PAMPs and DAMPs. The receptors for HMGB1 have been investigated but are still controversial. A well-studied receptor for HMGB1 is the receptor for advanced glycation end products (RAGE). Similar to HMGB1, RAGE is a promiscuous receptor that can bind various ligands including DNA, RNA, SAA protein, HSPs, and prion protein, suggesting that RAGE may sense a variety of DAMP molecules in an HMGB1-dependent or -independent manner (Sims et al., 2010). Experiments with rage-deficient mice revealed that HMGB1-mediated DNA sensing requires RAGE for internalization of DNA complexes to produce type I IFNs via TLR9 (Tian et al., 2007).
Interestingly, RAGE associates with TLR9 upon recognition of A-type CpG-HMGB1 complexes, indicating a possible function for RAGE as a bridging molecule between the extracellular HMGB1-DNA complex and the TLR9-containing compartment (Tian et al., 2007). In contrast to this observation, HMGB1-nucleosome complexes can be sensed independently of RAGE. Instead of RAGE, TLR2 appears to be important for the recognition of HMGB1-nucleosome complexes, suggesting that the sensing machinery for the HMGB1-nucleosome complex might be distinct from that for the HMGB1-DNA complex, as the HMGB1-nucleosome complex could not elicit production of type I IFNs even though TNF-α and IL-10 were induced (Urbonaviciute et al., 2008). Furthermore, recent research identified a novel ligand for RAGE, complement C3a, that binds human stimulatory CpG DNA to induce type I IFNs in an HMGB1-independent manner, suggesting that RAGE-mediated DNA sensing may involve numerous ligands (Ruan et al., 2010). Although there are many varieties of HMGB1- or RAGE-mediated DNA recognition, both molecules are strongly associated with the induction of inflammation and the development of chronic inflammatory disease. DNA-DEPENDENT ACTIVATOR OF IFN-REGULATORY FACTORS (DAI) DAI has been identified as a molecule that recognizes intracellular DNA. Previous studies have revealed that DAI senses Z-type DNA; however, it may also bind B-type DNA and induce type I IFN production through associations with TBK1 and IRF3 (Takaoka et al., 2007). Interestingly, DAI-deficient mice responded normally to cytosolic dsDNA stimulation, suggesting that DAI may function as one of a number of DNA sensors in a cell type-specific fashion (Ishii et al., 2008). Currently, the function of DAI remains controversial, although the genetic adjuvanticity of DAI has been shown to induce strong cytotoxic T cell responses (Lladser et al., 2011). Although the ability of DAI to recognize DNA DAMPs has not yet been determined, DAI might link the development of autoimmune disease and host DNA immune complexes. HISTONES Histone H2B (H2B) is a component of chromatin, and Kobiyama et al. identified that H2B also functions to sense intracellular dsDNA. Previous reports showed that histones act as DAMPs, and that excessive intracellular dsDNA induces type I IFNs through H2B (Kobiyama et al., 2010). Consistent with this, H1 and H2 family histones are released from the nucleus after DNA damage and are translocated to mitochondria following the induction of apoptosis. In addition, H1, H2A, and H2B may act as antimicrobial proteins in certain animals, suggesting that H2B is an intracellular dsDNA sensor that recognizes dsDNA PAMPs and DAMPs (Kawashima et al., 2011). Histones may also be related to autoimmune diseases, as anti-histone antibodies have been detected in patients with such diseases. Further analyses are required to clarify the relationship between histones and autoimmune disease. Ku70 Ku70 functions in DNA repair, V(D)J recombination, and telomere maintenance. Zhang et al. showed that various DNA species induced the production of the type III interferon IFN-λ1 and identified Ku70 as a novel DNA sensor by pull-down assay from the nuclear fraction (Zhang et al., 2011a). While other DNA sensors are important for the production of type I IFNs, Ku70 appears to be important for type III IFN production through IRF1 and IRF7. Furthermore, Ku70-mediated type III IFN production appears restricted to intracellular DNA stimuli longer than 500 base pairs.
RNA POLYMERASE III As described above, RIG-I senses intracellular RNA species, but it may also participate in the recognition of intracellular dsDNA: siRNA-mediated knockdown of RIG-I in a human hepatoma cell line, Huh7, suppressed dsDNA-mediated type I IFN production. Subsequently, Chiu et al. showed that RIG-I senses the transcribed RNA byproducts of DNA templates that are generated by RNA polymerase III (as is the case for poly(dA·dT)·poly(dT·dA) and EBV genomic DNA) and induces production of type I IFNs (Chiu et al., 2009). An inhibitor of RNA polymerase III suppressed DNA-mediated type I IFN production, suggesting that RNA polymerase III acts as a distinct, indirect DNA sensor. However, RNA polymerase III-mediated dsDNA sensing is restricted to AT-rich DNA stimuli, i.e., sequences containing more dA·dT than dG·dC. DHX9 AND DHX36 Although the DExD/H box RNA helicase family contains RIG-I and MDA5, which function as RNA sensors, recent reports have revealed that two related helicases of this family (the DExDc family), DHX9 and DHX36, function as ssDNA sensors in plasmacytoid DCs. Interestingly, while DHX36 senses CpG-A, DHX9 senses CpG-B in a MyD88-dependent manner. This may suggest that ssDNA PAMPs or DAMPs are recognized by either DHX9 or DHX36, but recent research has shown that DHX9 also collaborates with IPS-1 to recognize dsRNA in myeloid DCs, indicating promiscuous sensing by DHX9 (Zhang et al., 2011b). LEUCINE-RICH REPEAT FLIGHTLESS-INTERACTING PROTEIN 1 (Lrrfip1) Some sensor molecules such as TLRs or NLRs share common structural motifs, such as leucine-rich repeats (LRRs), which are important for ligand recognition or protein-protein interactions. An LRR-containing molecule, Lrrfip1, has been reported to sense intracellular DNA or RNA. Interestingly, whereas other DNA sensors typically activate type I IFN-related transcription factors such as IRF3/7, or caspase-1 to induce maturation of IL-1β, Lrrfip1 stimulates β-catenin and CBP/p300 to enhance ifnb1 transcription, indicating a novel β-catenin-dependent pathway for type I IFN production upon cytosolic DNA sensing. Because Wnt/β-catenin signaling is also linked to tumor development, further analyses may clarify how Lrrfip1 regulates type I IFN signaling during tumor development. STING (STIMULATOR OF INTERFERON GENES PROTEIN) The major function of MHC class II is antigen presentation, while monoclonal antibodies against MHC class II can cause cell activation or apoptotic cell death. Jin et al. identified a novel tetraspanin family molecule, MPYS, associated with MHC-II-mediated cell death (Jin et al., 2008). Three research groups performing cDNA library screening to identify molecules associated with activation of the type I IFN promoter identified the same molecule, STING (also known as MITA and ERIS). STING is a novel adaptor molecule that activates innate immune signaling mediated by intracellular nucleic acid stimuli (Ishikawa and Barber, 2008; Zhong et al., 2008; Sun et al., 2009). Surprisingly, using STING-deficient mice, the Barber research group further revealed that STING is essential for the induction of type I IFN production following the sensing of cytosolic dsDNA. Based on their imaging analysis, STING appears to localize to the ER during the steady state, but translocates to the Golgi apparatus upon intracellular dsDNA stimulation to activate downstream molecules such as TBK1. This suggests that STING is an essential adaptor molecule for cytosolic dsDNA-mediated type I IFN production in mice.
Cyclic di-GMP (c-di-GMP) and cyclic di-AMP (c-di-AMP) are small molecules that function as bacterial second messengers and are important for cell survival, differentiation, colonization, and biofilm formation. Recent research has revealed that cytosolic delivery of c-di-GMP or c-di-AMP induces type I interferon (IFN) production from bone marrow macrophages, suggesting that c-di-GMP and c-di-AMP are bacterial PAMP molecules (McWhirter et al., 2009; Woodward et al., 2010). As type I IFN production by c-di-GMP or c-di-AMP requires their internalization, live invasive bacteria probably produce these second messenger molecules after internalization into cells. Recent reports have revealed that STING is a direct sensor of bacterial second messenger molecules such as c-di-GMP and c-di-AMP (Burdette et al., 2011; Jin et al., 2011). This raises the novel possibility that cytosolic dsDNA stimulation might generate c-di-GMP/c-di-AMP or related molecules that can be sensed by STING to induce type I IFN production. ADJUVANTICITY THROUGH DNA DAMPs Although DNA DAMPs are closely associated with the development of autoimmune disease, DNA DAMPs also contribute to the activation of acquired immune responses following vaccination with alum adjuvant. Previous studies have shown that genomic DNA from dying cells induces the maturation of antigen-presenting cells as well as antigen-specific antibody and cytotoxic T cell responses, suggesting that self-DNA DAMPs can activate innate immune responses that in turn induce acquired immune responses. Recently, Marichal et al. demonstrated that the adjuvanticity of alum was dependent on self-DNA released from cells at the alum inoculation site (Marichal et al., 2011). NLRP3 appears to be a key sensor in the induction of alum-mediated innate immunity, although alum adjuvanticity is only partially dependent on NLRP3. Intraperitoneal inoculation of mice with alum induced the recruitment of neutrophils, and the resulting alum deposits contained high amounts of genomic DNA. Because treatment with DNase I attenuated alum adjuvanticity, the alum-mediated release of genomic DNA may account for its potent adjuvanticity. In addition, knockout mouse experiments showed that the alum-mediated induction of antibody production is dependent on TBK1 and IRF3, suggesting that genomic DNA released upon alum inoculation drives adjuvanticity via the TBK1/IRF3 pathway, whereas alum-induced uric acid production acting through NLRP3 contributes less to alum adjuvanticity (Marichal et al., 2011). Furthermore, self-DNA released upon alum inoculation can activate inflammatory monocytes, and homodimers of IL-12p40 are more important than type I IFNs for alum adjuvanticity. Taken together, these findings suggest that self-DNA DAMPs are important for pathogen elimination, the development of autoimmune disease, and the adjuvanticity of alum. Further analyses are required to elucidate which types of cells release self-genomic DNA after adjuvant inoculation, and which sensors recognize extracellular genomic DNA. In addition to alum, there are several other licensed adjuvants, such as MF59®, AS03®, and AS04®. Both MF59® and AS03® are oil-in-water emulsions containing squalene. Although both adjuvants elicit antigen-specific antibody responses as well as cell-mediated immune responses, their mode of action has not been identified. Information on the receptors for, and the signaling induced by, these adjuvants is needed so that potential side effects can be anticipated more readily.
CONCLUDING REMARKS Many sorts of nucleic acid species exist in the environment. These species affect all organisms, influencing the evolution of organisms, the inflammatory response, and the emergence of drug-resistant microorganisms. To prevent pathogen infection, mammalian cells have equipped themselves with many kinds of sensors that recognize exogenous nucleic acid species as PAMPs, while those sensors are also stimulated by endogenous nucleic acid species as DAMPs. Dysfunction of the machineries sensing both PAMPs and DAMPs is strongly associated with chronic inflammatory disease or autoimmunity. In addition, both PAMPs and DAMPs underlie the action of vaccines, because most modern vaccines contain adjuvants composed of both PAMP- and DAMP-associated molecules. Therefore, the machinery responsible for sensing nucleic acid species should be further elucidated to help us understand chronic infection and the development of autoimmunity, identify the side effects of vaccines, and develop safe vaccine adjuvants.
MAVS polymers smaller than 80 nm induce mitochondrial membrane remodeling and interferon signaling Double-stranded RNA (dsRNA) is a potent proinflammatory signature of viral infection and is sensed primarily by RIG-I-like receptors (RLRs). Oligomerization of RLRs following binding to cytosolic dsRNA activates and nucleates self-assembly of the mitochondrial antiviral-signaling protein (MAVS). In the current signaling model, the caspase recruitment domains of MAVS form helical fibrils that self-propagate like prions to promote signaling complex assembly. However, there is no conclusive evidence that MAVS forms fibrils in cells or with the transmembrane anchor present. We show here with super-resolution light microscopy that MAVS activation by dsRNA induces mitochondrial membrane remodeling. Quantitative image analysis at imaging resolutions as high as 32 nm shows that in the cellular context, MAVS signaling complexes and the fibrils within them are smaller than 80 nm. The transmembrane domain of MAVS is required for its membrane remodeling, interferon signaling, and proapoptotic activities. We conclude that membrane tethering of MAVS restrains its polymerization and contributes to mitochondrial remodeling and apoptosis upon dsRNA sensing. Introduction Recognition of viral nucleic acids by innate immune receptors is one of the most conserved and important mechanisms for sensing viral infection. Many viruses deliver or generate double-stranded RNA (dsRNA) in the cytosol of the host cell. Cytosolic dsRNA is a potent proinflammatory signal in vertebrates. Endogenous dsRNAs are modified or masked through various mechanisms to prevent autoimmune signaling, and genetic deficiencies in these dsRNA modification pathways can cause autoimmune disorders [1][2][3]. Cytosolic dsRNA is primarily sensed by the RIG-I-like receptors (RLRs) RIG-I (DDX58), MDA5 (IFIH1), and LGP2 (DHX58) [4], which activate the mitochondrial antiviral-signaling protein (MAVS) [5][6][7][8]. RIG-I recognizes dsRNA blunt ends bearing unmethylated 5′-di- or triphosphates [9][10][11][12]. MDA5 recognizes uninterrupted RNA duplexes longer than a few hundred base pairs [11,13]. LGP2 functions as a cofactor for MDA5 by promoting the nucleation of MDA5 signaling complexes near dsRNA blunt ends [14,15]. Binding to dsRNA causes RIG-I to form tetramers and MDA5 to cooperatively assemble into helical filaments around the dsRNA [16][17][18][19]. RIG-I and MDA5 each contain two N-terminal caspase recruitment domains (CARDs). The increased proximity of the CARDs upon RLR oligomerization induces the CARDs from four to eight adjacent RLR molecules to form a helical lock-washer-like assembly [19,20]. These helical RLR CARD oligomers bind to MAVS, which has a single N-terminal CARD, via CARD-CARD interactions [5]. Binding of MAVS CARDs to RLR CARD oligomers nucleates the polymerization of MAVS CARD fibrils with amyloid-like (or prion-like) properties, including resistance to detergents and proteases [19][20][21]. MAVS polymerization is required for signaling, and the spontaneous elongation of MAVS fibrils following nucleation is thought to provide a signal amplification mechanism [21]. MAVS fibrils then recruit proteins from the TRAF and TRIM families to form multimeric signaling platforms, or signalosomes [21]. MAVS is localized primarily on the outer mitochondrial membrane [5] but can also migrate via the mitochondria-associated membrane (MAM) to peroxisomes [22], which function as an alternative signaling platform to mitochondria [23].
MAVS signalosomes activate both type I interferon (through IRF3) and NF-κB-dependent inflammatory responses [11,13,21]. Overexpression of MAVS induces apoptotic cell death, and this proapoptotic activity is dependent on its transmembrane anchor (TM) and mitochondrial localization, but independent of the CARD [24,25]. A loss-of-function MAVS variant is associated with a subset of systemic lupus patients [26]. In the current model of MAVS signaling, RLR CARD oligomers trigger a change of state in the CARD of MAVS, from monomer to polymeric helical fibril. MAVS fibrils grow like amyloid fibrils by drawing in any proximal monomeric MAVS CARDs [20]. This model is based partly on the observation that purified monomeric MAVS CARD spontaneously assembles into fibrils of 0.2-1 µm in length [19,21]. The fibrils, but not the monomers, activate IRF3 in signaling assays [21] with cell-free cytosolic extracts. Moreover, a purified MAVS fragment lacking the TM can, in its polymeric fibril form, activate IRF3 in crude mitochondrial cell extracts that contain endogenous wild-type MAVS [17]. However, this signaling model is based primarily on signaling assays and structural studies performed in a cell-free environment with soluble fragments of MAVS lacking the TM. MAVS has been reported to form rod-shaped puncta on the outer mitochondrial membrane upon activation with Sendai virus [27], but evidence that MAVS forms polymeric fibrils in cells remains inconclusive, and furthermore MAVS fibrils are not sufficient for signaling. Indeed, the MAVS TM is required for interferon induction and cell death activation [5,21,25], and several viruses including hepatitis C virus suppress type I interferon production by cleaving MAVS off the membrane [8,[28][29][30][31]. The sequence between the CARD and TM of MAVS, which represents 80% of the MAVS sequence, is also required for downstream signaling [32]. How this sequence and the TM function together with the CARD in cell signaling remains unclear. In the cellular context, MAVS CARD fibrils are subject to multiple physical constraints, including tethering to RLR-dsRNA complexes (at one end of the fibril) and to the mitochondrial membrane (via the TM of each MAVS molecule in the fibril). Here, we address the question of how the current model of MAVS signaling can be reconciled with these physical constraints, and with the requirement of the TM, for cell signaling. Imaging of MAVS signaling complexes by super-resolution light microscopy with effective optical resolutions of up to 32 nm reveals that in the cellular context MAVS signaling complexes are significantly smaller than expected: no more than 80 nm. Moreover, MAVS signaling is associated with remodeling of mitochondrial compartments and apoptosis, and both of these activities are dependent on the TM of MAVS. Our data indicate that MAVS forms smaller signaling complexes than previously thought [21,27]. MAVS activation by cytosolic RNA induces mitochondrial membrane remodeling The polymerization of purified soluble fragments of MAVS into helical fibrils is well documented [19,21]. In the cellular context, however, MAVS is tethered to the outer mitochondrial membrane via its TM, and binds via its CARD to oligomeric or polymeric RLR-dsRNA complexes [19,21,33]. To examine how the physical constraints imposed by membrane tethering and association with RIG-I or MDA5 may affect MAVS CARD fibril formation, we imaged cells containing active MAVS signaling complexes by super-resolution fluorescence microscopy.
Mouse embryonic fibroblasts (MEFs) and 3T3 cells were imaged by structured illumination microscopy (SIM) and stimulated emission depletion microscopy (STED). Because fluorescent proteins fused to the N terminus, C terminus, or juxtamembrane region of MAVS were not suitable for STORM, cells were labeled with a monoclonal antibody against a linear epitope within residues 1-300 of MAVS and a fluorescently labeled secondary antibody. An antibody with an overlapping epitope was shown previously to recognize MAVS in the fibril form in nonreducing semidenaturing electrophoresis [21]. Immunofluorescence of MAVS and TOM20, an outer mitochondrial membrane marker, showed that the two proteins localized to the same mitochondrial compartments (Fig. 1). We observed changes in the distribution of MAVS and in overall mitochondrial morphology (using the TOM20 marker) upon infection of 3T3 cells with a West Nile virus (WNV) replicon (Fig. 1A). Infection with the WNV replicon caused mitochondrial compartments to form more fragmented and less filamentous structures closer to the nucleus. Introducing the dsRNA mimic poly(I:C) into MEFs by electroporation (0.6-1 pg per cell) recapitulated the changes in MAVS distribution and mitochondrial morphology observed upon infection with the WNV replicon (Fig. 1B). These changes were also recapitulated by cotransfecting MEFs derived from MAVS knockout mice (MAVS KO MEFs) [34] with poly(I:C) RNA and a plasmid encoding MAVS (0.6-1 pg of each per cell, Fig. 1C). Mitochondrial remodeling associated with poly(I:C) treatment was quantified as statistically significant reductions in: (a) the distance from the nucleus (from an average of 9.0 to 4.5 µm, P = 2.0 × 10⁻⁴) (Fig. 1E); (b) the fraction of the cytosolic area occupied by mitochondria (from an average of 24.2% to 16.7%, P = 0.0015) (Fig. 1F); and (c) the length of mitochondrial compartments measured as unbranched segments of skeletonized TOM20 fluorescence (from an average of 2.42 to 1.98 µm, P = 0.0478) (Fig. 1G). The amount of MAVS plasmid and the electroporation method used in transfections were selected to yield MAVS expression levels in the physiological range (see below). Notably, no MAVS filaments longer than the resolution limit were observed. The resolution of SIM and STED was approximately 110 and 80 nm in the imaging (xy) plane, respectively (and 350 and 600 nm in z, respectively). These resolutions were also sufficient to resolve differences in the positions of individual MAVS and TOM20 protein complexes, so that the immunofluorescence signals from the two proteins formed an alternating pattern within mitochondrial compartments rather than strictly colocalizing (Figs 1A,C and 2A). We confirmed that transfection with poly(I:C) RNA induced translocation of IRF3 to the nucleus, which is a hallmark of interferon-β (IFN-β) signaling (Fig. 2B). Importantly, we showed that the level of MAVS expression in transfected MAVS KO MEFs was comparable to the physiological level of endogenous MAVS expression in wild-type MEFs (Fig. 3). A live-cell dual-luciferase reporter assay was used to confirm that IFN-β signaling is activated in MAVS KO MEFs transfected with the MAVS expression plasmid and poly(I:C) RNA (see below). Costaining for MAVS and MDA5 (Fig. 2C) showed an increased interaction between the two proteins, quantified as a reduction in the average distance between MAVS and MDA5 fluorescence (Fig. 2D) [35].
However, MDA5 staining remained predominantly cytosolic, and no significant increase in colocalization was detected with poly(I:C) treatment based on the Pearson correlation. Similarly, RIG-I was recently shown to partition into a MAVS-associated mitochondrial fraction and a cytosolic stress granule fraction [36]. In the absence of MAVS, mitochondria retained their filamentous morphology and failed to move toward the nucleus upon transfection with poly(I:C) (Fig. 1D). We note that the immunofluorescence and cell signaling data show relatively high levels of mitochondrial remodeling (Fig. 1), IRF3 nuclear translocation (Fig. 2B), and background signaling (see below) in cells transfected with a control plasmid instead of poly(I:C), which is likely attributable to IFN-β signal transduction by cytosolic DNA sensors. We conclude that activation of MAVS by cytosolic RNA sensing is associated with remodeling of mitochondria into more globular and perinuclear compartments. Mitochondrial remodeling from the healthy filamentous morphology to perinuclear globular compartments through mitochondrial fission events is a hallmark of apoptosis [37]. Indeed, MAVS was shown previously to promote apoptosis independently of its function in initiating interferon and NF-κB signaling [24]. STORM shows MAVS signaling complexes are smaller than expected Purified monomeric MAVS CARD spontaneously forms fibrils 0.2-1 µm in length [19,21]. The fibrils, but not the monomers, activate IRF3 in cell-free assays [21]. However, SIM and STED imaging of cells with actively signaling MAVS did not resolve any clearly apparent fibrils (Fig. 1). To determine whether MAVS forms fibrils too small to resolve by SIM or STED, we employed a higher resolution imaging modality, stochastic optical reconstruction microscopy (STORM), to image cells containing active MAVS signaling complexes. STORM can yield effective resolutions of 20 nm in the imaging plane [38]. MEFs were immunolabeled with an antibody against MAVS and a secondary antibody conjugated to Alexa Fluor 647, which was selected for its favorable blinking characteristics and high photon output. MAVS KO MEFs were cotransfected with MAVS and poly(I:C) RNA as described for SIM and STED imaging. No MAVS filaments longer than the resolution limit were observed in the STORM images (Fig. 4). Instead, most of the MAVS immunofluorescence was present in fluorescent foci with irregular shapes, varying in diameter from 30 to 80 nm. These dimensions coincide with the effective resolution of the STORM imaging (see below). Unexpectedly, despite the global change in mitochondrial morphology associated with poly(I:C) treatment, there was no significant difference in the shapes and size range of MAVS foci in cells transfected with poly(I:C) or with a control plasmid. Quantitative image analysis indicates MAVS fibrils are shorter than 80 nm Since our ability to visualize submicrometer MAVS fibrils is critically dependent on the effective resolution of the imaging experiment, an accurate measurement of the imaging resolution is necessary to determine the minimum fibril length that can be resolved. We measured the resolution of our STORM images with the Fourier ring correlation (FRC) method, using an FRC of 0.143 as the threshold to measure resolution [39]. This criterion is the widely accepted standard for resolution assessment in cryo-electron microscopy (cryoEM) [40][41][42].
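The published analysis used the MATLAB implementation of Nieuwenhuizen et al. (frc_analysis.m in the supporting information, see Materials and methods); purely to illustrate the idea, a minimal Python sketch of an FRC resolution estimate is given below. The function name and the assumption that two images have already been rendered from statistically independent halves of the localization data are ours, not part of the paper's pipeline.

```python
import numpy as np

def frc_resolution(img1, img2, pixel_nm, threshold=1 / 7):
    """Estimate resolution by Fourier ring correlation (FRC).

    img1, img2 : square images rendered from independent halves of the
    localization data; pixel_nm : pixel size in nanometres.
    Returns the resolution (nm) where the FRC first drops below `threshold`.
    """
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    n = img1.shape[0]
    y, x = np.indices((n, n)) - n // 2
    rings = np.hypot(x, y).astype(int)          # radial frequency index

    # ring-wise correlation of the two Fourier transforms
    num = np.bincount(rings.ravel(), (f1 * np.conj(f2)).real.ravel())
    d1 = np.bincount(rings.ravel(), (np.abs(f1) ** 2).ravel())
    d2 = np.bincount(rings.ravel(), (np.abs(f2) ** 2).ravel())
    frc = num / np.sqrt(d1 * d2)

    below = np.where(frc[: n // 2] < threshold)[0]
    if below.size == 0:
        return None                              # better than Nyquist
    spatial_freq = below[0] / (n * pixel_nm)     # cycles per nm
    return 1.0 / spatial_freq                    # resolution in nm
```

The 1/7 (0.143) threshold mirrors the cryoEM gold-standard criterion cited in the text.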
FRC curves calculated from our STORM images indicate that the resolution in the imaging plane ranged from 32 to 66 nm (Fig. 5A,B). Moreover, there was a correlation between the measured resolution of the images and the visual appearance of the MAVS foci. More specifically, images with the lowest resolutions (Fig. 4G,A) had more diffuse density, whereas images with the highest resolutions (Fig. 4E,I,C) had more pronounced foci, regardless of poly(I:C) treatment. This suggests that the appearance of clearly visible foci is determined by the imaging resolution rather than by the size of the imaged object. The calculated FRC resolution of the lowest resolution STORM image was 66 nm. This is consistent with the SIM and STED data (Fig. 1A,C), which showed no correlation between poly(I:C) treatment and the appearance of clearly distinguishable MAVS fluorescence foci. To quantify any subtle effects that poly(I:C) treatment might have on the clustering of MAVS foci, we performed cluster analysis on the STORM images using the pair correlation function. A hierarchical cluster model of two nested clusters was necessary to obtain a good fit to the pair correlation curve (Fig. 5C,D) [43,44]. The clustering analysis showed that the two cluster-size parameters of the nested cluster model were not significantly different in poly(I:C)-treated and control samples (Fig. 5E), confirming the lack of correlation between poly(I:C) treatment and the size of MAVS fluorescent foci. Reconstruction of fine structural features by super-resolution microscopy depends on the precision with which the position of each fluorophore can be localized, and on the density and spatial distribution of active fluorophores along the labeled sample [39]. In the case of immunolabeled samples, the primary and secondary antibodies increase the spacing between the molecule of interest and the fluorophore significantly (by 20-35 nm) [45]. To determine whether 200-nm MAVS fibrils would in principle be visible in STORM images with the localization precision, labeling strategy, and fluorophore properties inherent to our study, we generated an atomic model of 200-nm MAVS CARD fibrils bound to primary and secondary antibodies, and simulated the appearance of the fibrils with the same localization precision as our STORM images at different labeling efficiencies (Fig. 4K,L). Immunolabeling increases the overall diameter of the fibrils from 8.5 to 79 nm, but fibrils were nevertheless clearly visible with labeling efficiencies > 40%. Notably, the diameter of immunolabeled MAVS CARD fibrils is similar to the lowest resolution measured for the STORM images (79 vs. 66 nm, respectively). Therefore, immunolabeled MAVS fibrils shorter than 80 nm in axial length would be expected to appear as globular foci in STORM. Hence, the absence of visible fibrils in our images remains consistent with the presence of helical MAVS fibrils up to 70-80 nm. A MAVS CARD helical assembly of this length would contain 136-156 MAVS molecules based on the 5.13 Å axial rise per protomer [19]. By comparison, purified MAVS fragments form filaments 200-1000 nm in length in solution [19,21].
FIGURE 1 | (A) MAVS and TOM20 in NIH 3T3 cells infected with a GFP-labeled West Nile reporter virus (WN replicon), with immunofluorescently labeled MAVS in green and TOM20 immunofluorescence in red, or with the MAVS staining, TOM20 staining, or the GFP infection marker shown separately in gray. Yellow results from green-red overlap and is indicative of MAVS-TOM20 colocalization. Scale bars: 10 µm in the overview panel, 1 µm in the inset panels. (B) Confocal images of MAVS and TOM20 in wild-type MEFs, with MAVS (green) and TOM20 (red) immunofluorescence, or with TOM20 staining shown separately in gray. Cells were transfected with either poly(I:C) RNA (+poly(I:C)) or an empty plasmid (−poly(I:C)) to control for the effects of transfection. Scale bar: 10 µm. (C) Structured illumination microscopy (SIM) of MAVS immunofluorescence (green) in MAVS KO MEFs cotransfected with MAVS and poly(I:C) RNA (+poly(I:C)), or MAVS and a control plasmid (−poly(I:C)). Scale bars: 10 µm in the overview panel, 2 µm in the inset panel. (D) Confocal images of TOM20 in MAVS KO MEFs transfected with either poly(I:C) RNA (+poly(I:C)) or an empty plasmid (−poly(I:C)) but no plasmid encoding MAVS, with immunolabeling of MAVS (green) and TOM20 (red), or with TOM20 staining shown separately in gray. DAPI nuclear staining is shown in blue (panels B-D). Scale bar: 10 µm. (E) Average distance of TOM20 fluorescence from the nucleus, defined as DAPI fluorescence. (F) Quantification of the fraction of the cytosolic area occupied by TOM20 fluorescence, from bright-field immunofluorescence imaging (100× magnification) of 39 cells [28 −poly(I:C), 11 +poly(I:C)]. (G) Average length of mitochondrial compartments measured as the length of unbranched segments of skeletonized TOM20 fluorescence, from the same images used in (F). Statistical significance in panels (E-G) was calculated using a one-sided t-test and assigned as follows: *P < 0.05; **P < 0.01; ***P < 0.001, n = 32.
MAVS TM is required for mitochondrial remodeling and IFN-β signaling The MAVS CARD fibrils are sufficient to activate IRF3 in cytosolic extracts, but the transmembrane domain (TM) of MAVS is absolutely required for MAVS to activate IRF3 and induce interferon [5,21]. We have shown that MAVS signaling activation causes changes in overall mitochondrial morphology similar to those associated with apoptosis, consistent with the documented proapoptotic activity of MAVS, which is dependent on the TM but not the CARD of MAVS [24]. To determine whether the MAVS TM is required for MAVS-dependent mitochondrial remodeling, we used STED microscopy to image MAVS KO MEFs transfected with a plasmid encoding MAVS with the TM deleted (MAVS-ΔTM). As reported previously [5], we found that MAVS-ΔTM had diffuse cytosolic staining (Fig. 6A). Notably, cells expressing MAVS-ΔTM showed no visible aggregation, and little or no mitochondrial remodeling upon induction with poly(I:C), consistent with a recent report that the TM is required for the formation of high molecular weight MAVS aggregates [46]. Previous studies have measured MAVS signaling activity from cytosolic or mitochondrial cell extracts. We confirmed that MAVS KO MEFs transfected with wild-type MAVS and poly(I:C) following the same protocol used for super-resolution imaging induced IFN-β signaling in the dual-luciferase reporter assay (Fig. 6B). In contrast, cells expressing MAVS-ΔTM failed to activate IFN-β signaling. The signal-to-noise ratio was low in the assay, however, due at least in part to induction of IFN-β signaling by cytosolic DNA-sensing pathways such as cGAS-STING [47] in response to the transfected plasmid DNA. We therefore performed the luciferase reporter assay in STING KO MEFs (Fig. 6B), which are defective for cGAS-dependent DNA sensing [48].
The signal-to-noise ratio was higher with STING KO MEFs than with MAVS KO MEFs despite the presence of endogenous MAVS in the STING KO MEFs. A slight but statistically insignificant increase in signaling was observed in STING KO MEFs transfected with MAVS-ΔTM. This is consistent with previous work showing that purified recombinant MAVS-ΔTM can, in its aggregated form, induce aggregation of endogenous wild-type MAVS and IRF3 activation in cell extracts enriched for mitochondria [21]. MAVS induces cell death in response to cytosolic RNA An early hallmark of apoptosis is the depolarization of the inner mitochondrial membrane [49], which is followed at later stages of cell death by loss of nuclear DNA content due to DNA fragmentation [50]. Overexpression of MAVS in HEK293T cells was shown previously to induce apoptosis [24]. To determine whether MAVS KO MEFs expressing physiological levels of MAVS induced apoptosis in response to activation with cytosolic dsRNA, we conducted cell death assays on cells transfected with MAVS and poly(I:C) RNA following the same protocol as for super-resolution imaging. Inner mitochondrial membrane depolarization and loss of nuclear DNA content were measured with the fluorescent markers 3,3′-dihexyloxacarbocyanine iodide (DiOC6) (Fig. 6C) and propidium iodide (PI) (Fig. 6D), respectively. We found that poly(I:C) treatment induced a loss of mitochondrial membrane potential in one third of cells transfected with wild-type MAVS, and loss of nuclear DNA content indicative of cell death in 21% of cells, at 16 h posttransfection, the same time point used for super-resolution imaging (Fig. 6C,D). In contrast, poly(I:C) treatment of cells transfected with MAVS-ΔTM induced a loss of mitochondrial membrane potential and DNA content in only 13% and 8% of cells, respectively. Implications for signal transduction by MAVS Taken together, our super-resolution light microscopy data suggest that in live cells, MAVS signaling complexes contain significantly smaller fibrils than those formed in vitro by soluble MAVS fragments. The absence of visible fibrils in SIM, STED, and STORM images of cells expressing physiological levels of MAVS activated with various stimuli (synthetic or viral dsRNA), together with a quantitative assessment of the resolution of the STORM images, is consistent with an upper limit in the range of 70-80 nm for the longest dimension of any MAVS fibrils within the MAVS signalosome. Unexpectedly, despite the global change in mitochondrial morphology associated with poly(I:C) treatment, there were no significant differences in the shape and distribution of MAVS foci in cells transfected with poly(I:C) or with a control plasmid. Cluster analysis of STORM images revealed no significant correlation between poly(I:C) treatment and the size of the clusters. Our confocal, SIM, STED, and STORM data are consistent with each other and broadly consistent with previous fluorescence microscopy studies [21,51], although a 3D SIM reconstruction in one previous study showed MAVS forming rod-shaped puncta ranging from 100 to 650 nm in length, with a median length of 350 nm (n = 74) [27]. However, 350 nm is similar to the axial resolution of SIM and hence could be an overestimate of the MAVS cluster length.
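The protomer counts quoted in this discussion (136-156 molecules for a 70-80 nm fibril) follow directly from dividing the fibril length by the 5.13 Å axial rise per protomer reported for the MAVS CARD helix [19]; as a quick arithmetic check:

```latex
N = \frac{L}{\Delta z}, \qquad
N_{70\,\mathrm{nm}} = \frac{700\ \text{\AA}}{5.13\ \text{\AA}} \approx 136, \qquad
N_{80\,\mathrm{nm}} = \frac{800\ \text{\AA}}{5.13\ \text{\AA}} \approx 156 .
```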
An 80-nm-long MAVS CARD helical assembly would still contain 156 MAVS molecules, a sufficiently large number to retain potential for polymerization-dependent signal amplification and to function as a mitochondrial signaling platform capable of recruiting the necessary number and diversity of downstream signaling proteins to elicit a robust IFN-β response [19,20]. Given the smaller than expected size of MAVS signaling assemblies, a maximal MAVS-dependent signaling response may require a larger number of fibril nucleation events than previously thought. However, these nucleation events need not be independent, and MAVS signaling complexes may nevertheless assemble cooperatively, with assembled complexes promoting the nucleation of additional complexes without MAVS CARD filament extension beyond approximately 80 nm, but potentially forming two- or three-dimensional networks of microfibrils. Such a trade-off of reduced filament length in favor of increased nucleation was recently shown to occur elsewhere in the dsRNA-sensing pathway. Indeed, LGP2 increases the initial rate of MDA5-dsRNA binding and limits MDA5 filament assembly, resulting in the formation of more numerous, shorter MDA5 filaments that generate greater signaling activity [14]. Imaging at higher resolution, for example by electron microscopy, is required to elucidate the structural organization of MAVS signaling complexes. We have shown that the MAVS TM is required for MAVS-associated mitochondrial remodeling, and confirmed that the TM is required for MAVS-dependent IFN-β signaling and apoptosis in live cells under the same experimental conditions used for the super-resolution imaging experiments in this study. The diffuse cytosolic localization of MAVS-ΔTM suggests that MAVS oligomerization is regulated by elements between the CARD and TM, and that the TM is required to overcome this regulation. It is tempting to speculate that the physical constraints imposed by MAVS aggregation and association with RLR-RNA complexes exert a physical force on the outer mitochondrial membrane capable of inducing distortions that could conceivably contribute to mitochondrial remodeling or leakage associated with apoptotic cell death. West Nile Virus infection West Nile reporter virus particles were generated in HEK 293T cells by cotransfecting the cells with plasmids encoding the structural proteins (C, prM, E) and a subgenomic replicon containing GFP (pWNVII-Rep-G-Z), as described previously [52,53]. The plasmids were kind gifts from T. Pierson (NIH). Supernatant collected 48 h after transfection was filtered through 0.2-µm membranes and used to infect 3T3 cells. Twenty-four hours postinfection, the cells were fixed with 4% formaldehyde solution and labeled for immunofluorescence.
FIGURE 5 | (A) FRC curves for the STORM images shown in Fig. 4 (see Materials and methods). An FRC value of 0.143, indicated by a horizontal purple line, was used as the threshold to measure resolution. (B) Effective resolution of each STORM image calculated from the FRC curves (FRC = 0.143). (C-E) Cluster analysis of MAVS immunofluorescence in the STORM images. (C) Each image was divided into tiled 3 × 3 µm windows covering the field of view. Windows with a mean fluorescence intensity greater than half the mean intensity of the whole image were selected for analysis (shaded in blue). (D) A nested two-cluster model (red curve) fitted to the pair correlation function (PCF, blue curve), shown for representative individual cluster analysis windows from images of poly(I:C)-treated and control samples. (E) Box plots of the four parameters fitted in the cluster analysis, µ1, σ1, µ2, σ2, as defined in the Materials and methods. The cluster density parameters, µ1 and µ2, were normalized against the mean intensity of the cluster analysis window, I_win. Statistical significance in panels was calculated using a one-sided t-test.
Mitochondrial remodeling analysis Widefield images were acquired at 100× magnification with an inverted Nikon TE2000 microscope equipped with a Niji LED light source (Bluebox Optics, Huntingdon, UK), a 100×/1.49 NA oil objective (Nikon, Tokyo, Japan), and a NEO scientific CMOS camera (Andor, Belfast, UK). A custom IMAGEJ macro script (Mitochondria_remodelling_analysis.ijm, Data S1) was written to analyze mitochondrial distance, area, and length parameters with IMAGEJ [54]. More specifically, the distance from the nucleus was measured as the shortest distance from each TOM20 fluorescence pixel to a DAPI fluorescence pixel. Second, the fraction of the cytosolic area occupied by mitochondria was measured in each cell. Third, the length of mitochondrial compartments was measured by skeletonizing the mitochondrial fluorescence signal, segmenting the skeleton at branchpoints, and measuring the length of each resulting segment. These measurements were performed on 28 cells without poly(I:C) and 11 cells with poly(I:C) treatment. MAVS-MDA5 interaction analysis The average distances of the collective points in the MDA5 and MAVS fluorescence signals after treatment with poly(I:C) or a control plasmid were calculated with the Interaction Analysis plugin for IMAGEJ from MosaicSuite (MOSAIC Group, Dresden, Germany) [35]. Colocalization was measured by calculating the Pearson correlation coefficient with the Coloc 2 plugin in IMAGEJ. Structured illumination microscopy Cells were plated as a monolayer on high-performance Zeiss cover glasses (18 × 18 mm, 0.17 mm thickness, type 1.5H, ISO 8255-1 with restricted thickness-related tolerance of ±0.005 mm, refractive index = 1.5255 ± 0.0015) in six-well CytoOne TC-treated tissue culture plates (STARLAB, Milton Keynes, UK). Cells were stained and mounted as described above. Images were acquired on a Zeiss ELYRA S.1 system with a pco.edge 5.5 scientific CMOS camera using a 63×/1.4 NA oil immersion objective lens. For optimal lateral spatial sampling a 1.6× intermediate magnification lens was used, resulting in a pixel size of 64 nm. Excitation was achieved with a 488-nm laser and emission was filtered with a 495-550-nm band-pass filter. Modulation of the illumination light was achieved using a physical grating with 28-µm spacing. This patterned illumination was shifted through five phases at each of three rotational angles per image. Raw data were processed using ZEN software (Zeiss, Jena, Germany). Stimulated emission depletion microscopy Cells were plated as a monolayer on high-performance Zeiss cover glasses (as above) in six-well CytoOne TC-treated tissue culture plates (STARLAB). Cells were stained and mounted as described above. Images were acquired on a Leica TCS SP8 STED confocal system with time-gated HyD GaAsP detectors using a 100×/1.4 NA oil immersion objective lens. Excitation was achieved with an NKT SuperK pulsed white light (470-670 nm) laser tuned to 488 nm, and STED was induced with a 592-nm laser (~40 MW·cm⁻²). A time gate window of 2-6 ns was used to maximize STED resolution. Pixels of approximately 20 nm were used to ensure adequate spatial sampling to support the maximal expected resolution.
Images were deconvolved with Huygens Professional. Stochastic optical reconstruction microscopy In STORM, fluorophores are induced to switch, or 'blink', between fluorescent and dark states. With a sufficiently small fraction of fluorophores in the fluorescent state at any given time, spatial overlap between adjacent fluorophores is avoided and the positions of individual fluorophores can be determined with high precision using the point-spread function of each fluorophore [38]. STORM images are generated by plotting the positions of each fluorophore blinking event as points in the image plane and applying to each point a blurring factor corresponding to the localization precision for that point. The appearance of the images also depends on the efficiency of fluorescent labeling and the extent of fluorophore bleaching during the experiment. Cells were plated as a monolayer on high-performance CellPath (Newtown, UK) HiQA coverslips (No. 1.5H, 24 mm Ø) in six-well CytoOne TC-treated tissue culture plates (STARLAB). Cells were stained as described above. Alexa Fluor 647-labeled samples were imaged in the following buffer: 10% w/v glucose, 0.5 mg·mL⁻¹ glucose oxidase (Sigma-Aldrich, St. Louis, MO, USA), 40 µg·mL⁻¹ catalase (Sigma-Aldrich), and 0.1 M β-mercaptoethylamine (MEA) (Sigma-Aldrich) in PBS, pH adjusted to 7.4 with concentrated HCl. To minimize air oxidation of the fluorophore, the imaging well was filled to full capacity with the imaging buffer and sealed with a cover glass. Imaging was performed with an N-STORM microscope (Nikon) equipped with an Apochromat TIRF 100×/1.49 NA oil immersion objective lens, a single-photon detection iXon Ultra DU897 EMCCD camera (Andor), and 405 nm (30 mW) and 647 nm (170 mW) laser lines. To acquire single-molecule localizations, samples were constantly illuminated with the 647-nm laser at 2-3 kW·cm⁻². To maintain adequate localization density, the 405-nm laser was used to reactivate the fluorophore. During imaging, the Perfect Focus System was used to maintain the axial focal plane. 7-10 × 10⁴ frames were collected for each field of view at a frame rate of 50-70 Hz. STORM images were reconstructed with NIS Elements Advanced Research (Nikon): lateral drift correction was performed using automated cross-correlation between frame sets, and each localization point in the reconstructed image was shown as a normalized Gaussian, the width of which corresponded to the localization uncertainty calculated using the Thompson-Larson-Webb equation [55]. Each STORM image contained 1-2 million molecules within a single cell. Fourier ring correlation The effective optical resolution of the STORM images was assessed using the FRC method proposed by Nieuwenhuizen et al. [39] using the MATLAB code provided (frc_analysis.m, Data S2). An FRC threshold of 1/7 (0.143) was used to determine the resolution of each rendered STORM image. Cluster analysis of MAVS immunofluorescence in STORM images Cluster analysis was performed to assess the size of fluorescent foci from immunolabeled MAVS in the STORM images. Using a custom C++ script (nestedclusteranalysis.cpp, Data S3), each image was divided into 3 × 3 µm windows covering the field of view, and windows with a mean fluorescence intensity greater than an arbitrary threshold (half of the mean fluorescence intensity of the whole image) were selected for analysis (Fig. 5C-E).
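The pair correlation function used in the fitting step described next was computed by the authors' C++ script (Data S3), which is not reproduced in the text. For orientation only, a minimal Python sketch of a naive PCF estimate from the localization coordinates of one analysis window is given below; it ignores the border correction used in the published analysis, and all names are our own assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def pair_correlation(points, r_edges, window_area):
    """Naive pair correlation function g(r) for 2D localization points.

    points : (N, 2) array of coordinates within one analysis window;
    r_edges : sorted bin edges for the ring radii;
    window_area : area of the window.
    Edge effects at the window border are ignored for brevity.
    """
    n = len(points)
    density = n / window_area
    tree = cKDTree(points)
    # cumulative ordered-pair counts within each radius (self-pairs removed)
    counts = tree.count_neighbors(tree, r_edges) - n
    ring_counts = np.diff(counts)                  # pairs falling in each ring
    ring_areas = np.pi * np.diff(r_edges ** 2)
    # expected ordered pairs per ring for a random (Poisson) pattern
    expected = n * density * ring_areas
    return ring_counts / expected                  # g(r) per ring
```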
Cluster analysis was performed by fitting a hierarchical nested cluster model, or clusters-of-clusters model, to the pair correlation function (or radial distribution function), defined as the average number of points located in a ring of radius r centered around each point, normalized by the expected intensity taking into account the border of the window of analysis [44,56]. The nested cluster model was fitted to the pair correlation curve using a least-squares approach. The selected model comprised two clusters (smaller 'inner' clusters that cluster into larger 'outer' clusters) defined by four parameters (µ1, σ1, µ2, σ2), with σ1 and σ2 corresponding to the outer and inner cluster sizes, respectively. Simulated rendering of immunolabeled MAVS fibrils The atomic coordinates of a mouse antibody molecule (PDB code 1IGY) [57] were manually docked with UCSF CHIMERA [58] to an arbitrary epitope on the MAVS CARD coordinates (PDB code 3J6J) to simulate a primary antibody-MAVS complex. Two additional antibody molecules were then docked to two different arbitrary epitopes in the constant region (Fc domain) of the primary antibody. The helical symmetry of the MAVS CARD filament determined by cryo-electron microscopy (cryoEM) image reconstruction [19] was then applied consecutively to the atomic coordinates of the complex to generate a 200-nm MAVS CARD filament with bound primary and secondary antibodies. The simulated epitopes and orientations of the antibodies were selected to minimize steric clashes after the helical symmetry of the MAVS CARD filament was applied. A custom C++ code was used to render the atoms of the lysine residues, the sites of fluorophore conjugation, of each secondary antibody in the filament. Each atom was rendered on a two-dimensional image at a 5-nm scale with a given observation probability ranging from 20% to 100%, in order to emulate different antibody labeling efficiencies. The resulting image was then blurred to take into account the localization uncertainty of the STORM single-molecule localization events, approximately 10 nm. Dual-luciferase reporter cell signaling assay MAVS KO MEFs or STING KO MEFs were transfected with wild-type MAVS or MAVS-ΔTM and poly(I:C) with the Neon transfection system as described above, using the same conditions as for light microscopy, except that plasmids encoding firefly luciferase under an interferon-β (IFN-β) promoter and Renilla luciferase under a constitutive promoter (Promega, Madison, WI, USA) were also included in the transfection. Cells were transfected and seeded in quadruplicate sets for each condition in 96-well clear-bottom black polystyrene microplates (Corning, Corning, NY, USA) at 5-6 × 10³ cells per well. Cells were lysed and the lysates transferred to all-white, 96-well flat solid-bottom plates suitable for tissue culture and luminescence measurements (Greiner Bio-One, Kremsmünster, Austria). Sample preparation from cell lysates and luciferase luminescence measurements were performed according to the manufacturer's protocol for the Dual-Luciferase Reporter Assay System (Promega). IFN-β-dependent induction of firefly luciferase was measured in cell lysates 16 h postinduction with a PHERAstar FSX microplate reader (Ortenberg, Germany) with dual sample injection. IFN-β signaling activity was measured as the ratio of firefly luciferase luminescence to Renilla luciferase luminescence.
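The ratio normalization described above is simple enough to state in a few lines. The following sketch uses hypothetical function and variable names of our own, with Welch's t-test chosen to match the unpaired, unequal-variance test described in the Statistical analysis section:

```python
import numpy as np
from scipy import stats

def ifnb_fold_induction(firefly, renilla, firefly_ctrl, renilla_ctrl):
    """Firefly/Renilla normalization for the IFN-beta reporter assay.

    Each argument is an array of quadruplicate luminescence readings.
    Returns the fold induction over control and a Welch's t-test P value.
    """
    ratio = np.asarray(firefly) / np.asarray(renilla)            # treated
    ratio_ctrl = np.asarray(firefly_ctrl) / np.asarray(renilla_ctrl)
    fold = ratio.mean() / ratio_ctrl.mean()
    # unpaired t-test without assuming equal standard deviations
    _, p = stats.ttest_ind(ratio, ratio_ctrl, equal_var=False)
    return fold, p
```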
Quantification of cell death Depolarization of the inner mitochondrial membrane was quantified by staining cells with 40 nM 3,3′-dihexyloxacarbocyanine iodide (DiOC6; Sigma-Aldrich) or 6 mg·mL⁻¹ propidium iodide (PI) for 1.5 h in PBS. DiOC6 and PI fluorescence was quantified by flow cytometry. Acquisition was set at 5000 cells per sample, and the FL1 (530 nm) and FL3 (670 nm) channels were used to acquire the DiOC6 and PI signals, respectively. Data were acquired on an LSR II flow cytometer (BD Biosciences, San Jose, CA, USA) and analyzed with FLOWJO (TreeStar, Ashland, OR, USA). Statistical analysis No statistical methods were used to predetermine sample size, experiments were not randomized, and the investigators were not blinded to experimental outcomes. Unless otherwise noted, errors are presented as the standard deviation of the mean of four replicates conducted in a single independent experiment. Statistical significance was calculated with PRISM 8 (GraphPad Software, San Diego, CA, USA) using an unpaired t-test without prior assumptions regarding the standard deviations of each set of quadruplicate measurements. Statistical significance was assigned as follows: *P < 0.05; **P < 0.01; ***P < 0.001, n = 4. Supporting information Additional supporting information may be found online in the Supporting Information section at the end of the article. Data S1. IMAGEJ macro script used to analyze mitochondrial distance, area, and length parameters.
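The ImageJ macro in Data S1 is not reproduced in the text. As a rough illustration of the three remodeling metrics defined in the Methods, a Python sketch using scipy/scikit-image is given below; it assumes binary masks already segmented from the TOM20, DAPI, and whole-cell channels, and it approximates segment length by pixel count, so it is a sketch of the analysis rather than a re-implementation of the macro.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def remodeling_metrics(mito_mask, nucleus_mask, cell_mask, pixel_um):
    """Three mitochondrial remodeling metrics from binary masks.

    mito_mask    : TOM20 (mitochondria) segmentation, boolean 2D array
    nucleus_mask : DAPI segmentation; cell_mask : whole-cell segmentation
    pixel_um     : pixel size in micrometres.
    """
    # (a) mean distance of mitochondrial pixels from the nearest DAPI pixel
    dist_to_nucleus = ndimage.distance_transform_edt(~nucleus_mask) * pixel_um
    mean_distance = dist_to_nucleus[mito_mask].mean()

    # (b) fraction of the cytosolic area occupied by mitochondria (percent)
    cytosol = cell_mask & ~nucleus_mask
    area_fraction = mito_mask[cytosol].mean() * 100

    # (c) mean length of unbranched skeleton segments (approximate,
    #     counting pixels and ignoring diagonal step lengths)
    skel = skeletonize(mito_mask)
    neighbors = ndimage.convolve(skel.astype(int), np.ones((3, 3)),
                                 mode="constant") - 1
    branchpoints = skel & (neighbors >= 3)
    segments, n_seg = ndimage.label(skel & ~branchpoints,
                                    structure=np.ones((3, 3)))
    seg_len = ndimage.sum(skel, segments, range(1, n_seg + 1)) * pixel_um
    return mean_distance, area_fraction, seg_len.mean()
```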
Continuous, label-free, 96-well-based determination of cell migration using confluence measurement

ABSTRACT
Cellular migration is essential in diverse physiological and pathophysiological processes. Here, we present a protocol for quantitative analysis of migration using confluence detection, allowing continuous, non-endpoint measurement with minimal hands-on time under cell incubator conditions. Applicability was tested using substances which enhance (EGF) or inhibit (cytochalasin D, ouabain) migration. Using a gap-closure assay, we demonstrate that automated confluence detection monitors cellular migration in the 96-well microplate format. Quantification by % confluence, % cell-free area or % confluence in the cell-free area against time allows detailed analysis of cellular migration. The study describes a practicable approach for continuous, non-endpoint measurement of migration in 96-well microplates and for detailed data analysis, which allows for medium/high-throughput analysis of cellular migration in vitro.

Introduction
Cellular migration is fundamental in physiological processes such as embryonic development, tissue remodeling, repair, wound healing and immune response. Migration of cells is also highly important in pathological situations, especially in tumor biology, as invasion and migration of tumor cells are prerequisites for the formation of secondary tumors (metastasis) [1-4]. Migration in vitro can be defined as cellular movement in a two-dimensional system such as basal membranes or plastic surfaces. Basic characteristics of migrating cells include massive rearrangement of the cytoskeleton, changes in cellular shape, as well as a defined polarity of the cell with a leading edge at the front and a trailing edge at the back of the cell [3,5,6]. In principle, one can distinguish migration of single cells and so-called sheet migration: in sheet migration, cells retain their cell-cell junctions and move as a collective [3,7]. The formation of secondary tumors is centrally dependent on cellular migration and has high clinical significance, since most cancer-related deaths are caused by metastatic spread [8]. However, tumor cells do not only possess the ability to migrate but also to invade, meaning they are able to degrade the surrounding extracellular matrix and move through three-dimensional tissue [8]. Although migration and invasion are functionally connected processes in this context, the ability to migrate is a necessity for cells to invade: non-invasive cells can still migrate, whereas non-migratory cells are not able to invade [3]. Investigation of (two-dimensional) cellular migration is therefore of high interest, especially in cancer research. There are several commercially available migration assays that measure the migratory potential of cells and/or the effect of substances or treatments on cellular migration. Despite the fact that these assays cannot entirely mimic the complex process of migration in a (patho-)physiological context, they are popular model systems and widely used in diverse experimental setups (see [3,7] for detailed reviews on migration and invasion assays). Wound healing, i.e. sheet migration of cells on a two-dimensional surface to close a cell-free area, is investigated in commonly used assays and protocols and represents a surrogate for the migration potential of cells [9-12].
Applying these assays, a cell-free area is produced either by 'wounding' the cellular layer (scratch assay) or by preventing cellular colonization of a defined area at the time of cell seeding with chemical or physical stoppers (cell exclusion or gap closure assay). The closure of the cell-free area is monitored microscopically to determine the migration behavior of the cells. Although the scratch assay is very popular and easy to set up, it has several disadvantages. The process of scratching can damage and stress cells, especially at the newly formed border, and thereby influence their ability to migrate. Moreover, the methods for wounding the cell layer (scratching) as well as the size and shape of the scratch vary between laboratories and experiments, which limits reproducibility. The cell exclusion assay also aims to measure the ability of cells to migrate into a cell-free area. However, in contrast to the scratch assay, the cell-free area of the cell exclusion assay is defined before cell seeding by a physical stamp or a gel spot on which cells are not able to grow. After defining the pre-migratory time point, the stamp or gel is removed, and migration of the cells can be monitored [3,7,13-15]. Monitoring of cellular migration, e.g. via a microscope, is often performed at defined time points (e.g. after 6, 12, 24, or 48 h) for practical reasons. However, continuous monitoring and evaluation of cellular migration would be advantageous for a detailed characterization of the kinetics of migration and for detailed investigation of the effects of substances or treatments. Therefore, we here describe a modified protocol of the cell exclusion assay that uses the confluence detection function of a multimode reader combined with continuous monitoring of cell migration over a time period of 48 hours under cell culture conditions. As established model substances we used the cardiotonic steroid and Na+/K+-ATPase inhibitor ouabain [16-18] as well as the actin-affecting cytochalasin D [19] as migration inhibitors, and epidermal growth factor (EGF) as a migration enhancer [16,20], on A549 human lung carcinoma cells [16,21,22], to demonstrate the applicability of this protocol.

Results
As described by Liu et al., low concentrations of the cardiotonic steroid and Na+/K+-ATPase inhibitor ouabain inhibit migration of A549 cells with minimal cytotoxic effects [16]. We therefore treated A549 cells with different ouabain concentrations (dilution series range: 2-1,000 nM) for 48 hours to determine the highest possible concentration that displays only minimal cytotoxic effects. As shown in Figure 1, ouabain concentrations ≥0.06 µM resulted in a massive reduction of viable cells, whereas a concentration of 0.031 µM only slightly reduced the amount of viable A549 cells. Although 0.031 µM ouabain displayed small cytotoxic effects, we chose this concentration for our further experiments as we wanted to use the highest possible non-lethal concentration to guarantee inhibitory effects of ouabain on migration. Moreover, 0.031 µM is comparable to migration-inhibitory ouabain concentrations described in the literature [16,23]. Likewise, we performed a dilution series with cytochalasin D (dilution series range: 0.01-5.00 µM) over 48 hours to determine the highest possible concentration displaying only minimal cytotoxic effects (Figure 1).
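To make the viability normalization behind such dilution series concrete, here is a minimal sketch of how a per-well readout can be expressed as % of untreated cells (the resazurin assay and normalization to sfRPMI-only controls are described in the methods section below; the raw readings here are hypothetical).

```python
# Minimal sketch of the viability normalization used for the dilution series:
# readout of each treated well expressed as % of the untreated (sfRPMI-only)
# control. All numbers below are hypothetical.
import numpy as np

untreated = np.array([52000, 49800, 51200, 50400])     # raw fluorescence, control wells
ouabain_31nM = np.array([46900, 45300, 47800, 44800])  # raw fluorescence, treated wells

viability_pct = ouabain_31nM.mean() / untreated.mean() * 100.0
print(f"viability vs. untreated: {viability_pct:.1f} %")
```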
Based on the results of these experiments, we chose 0.3 µM cytochalasin D for subsequent experiments, as this concentration showed only minimal cytotoxicity (mean survival of 87.9% relative to untreated cells) and is comparable to migration-inhibitory concentrations described by others [19]. For EGF, we used 50 ng/mL as described by Liu et al. [16]. To allow continuous and non-endpoint analysis of cell migration, we used the confluence detection mode (imaging) and the environment control functions (CO2 and temperature) of a multimode reader. Confluence was measured every 30 minutes for 48 hours at the center of the well, where the initially cell-free area ('spot') is located. Figure 2 shows an exemplary experimental series for selected time points (0, 6, 12, 18, 24, 30, 36, 42 and 48 h) of untreated, ouabain-, EGF- and cytochalasin D-treated cells, respectively. The confluence detection function was able to reliably detect migrating cells over the course of the experiments (Figure 2); in the case of EGF-treated and cytochalasin D-treated cells, the pro- and anti-migratory effects were obvious by mere visual assessment. Moreover, visual assessment indicates that cells occupy the spot via migration and not due to proliferation effects. Morphologically, we observed no cytotoxic effects caused by the protocol of the assay or by the substances, as indicated by the visual assessment and by the constant confluence in the non-spot area. No effects of the solvent DMSO alone on cellular migration at corresponding concentrations were observed (data not shown). As shown in Figure 3(a), a first approach to quantification is based on assessment of the % confluence as provided by the multimode reader's confluence detection function. However, as the confluence per field of view starts at a rather high value (e.g. for control, starting confluence at 0 h: 85%, confluence at 48 h: 94%), the absolute changes in % confluence were small. Therefore, for subsequent analysis, we changed the calculation approach to relative values considering only the area of the central spot. Due to the settings described in the methods and materials section (i.e. high cell seeding density), the non-confluent area visualized by the confluence detection function is exclusively caused by the cell exclusion of the assay's gel and not by other effects (such as non-confluent growth of the cells around the gap/wound). Thus, the cell-free area at 0 hours is set to 100% (Figure 3(b)) and migration is quantified as the time-dependent reduction of the cell-free/non-confluent area. Here, the positive effect of EGF on migration and the inhibitory effects of cytochalasin D and ouabain were clearly discernible, albeit the effect of ouabain is not as prominent. Specifically, the positive effect of EGF on migration is highest between 5 and 15 hours after incubation, as indicated by the steeper slope of the curve. For improved readability, migration can be directly (proportionally) displayed as the confluence in the initially cell-free area: as shown in Figure 3(c), the spot's area was defined as 0% confluence at the initial time point, i.e. 0 hours. Using this calculation (% confluence in the spot area versus time), the effects of the employed substances on migration can be displayed proportionally: EGF enhances migration of A549 cells, meaning that the confluence in the initially cell-free area increases faster compared to untreated cells.
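The following sketch illustrates the three quantification approaches just described, plus a Michaelis-Menten-type fit of the kind applied in the next paragraph (Figure 3(d)). All numbers are simulated stand-ins for reader output; note that in this parameterization the half-saturation constant directly gives the time at which the spot is half-populated.

```python
# Minimal sketch (hypothetical numbers) of the quantification described above:
# (a) % confluence per field of view as reported by the reader,
# (b) remaining cell-free area with the 0 h spot set to 100%,
# (c) % confluence inside the initially cell-free spot (0% at 0 h),
# plus a Michaelis-Menten-type fit of curve (c).
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 48.5, 0.5)              # h, one reading every 30 min
conf_fov = 85 + 9 * t / (8 + t)          # (a) simulated % confluence per field of view

free_area = 100 - conf_fov               # % of field of view that is cell-free
free_rel = free_area / free_area[0] * 100.0   # (b) cell-free area, 0 h = 100%
spot_conf = 100.0 - free_rel             # (c) % confluence in the spot area

def mm(t, c_max, k):
    """Michaelis-Menten-type gap closure: k is the time at half-maximal closure."""
    return c_max * t / (k + t)

(c_max, k), _ = curve_fit(mm, t, spot_conf, p0=[100.0, 10.0])
print(f"plateau = {c_max:.1f} % confluence, half-closure time t50 = {k:.1f} h")
```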
Ouabain-treated cells showed a slightly reduced migratory capacity, whereas cells treated with cytochalasin D displayed a clear reduction in cellular migration. Non-linear curve fitting of these data was possible with high (control cells, ouabain- and EGF-treated cells) or good (cytochalasin D-treated cells) quality using a Michaelis-Menten fit function (Figure 3(d)).

Discussion
Although 2-dimensional measurement of migration does not accurately mimic the physiological 3-dimensional environment, it is an important and widely used surrogate for the interpretation of cellular migration [24,25]. In the current manuscript we have shown that automated confluence measurement is an efficient tool to measure 2-dimensional cellular migration using the gap closure assay. Due to the functional parameters of the confluence measurement, this protocol is most suitable for the 96-well format. The following points have to be taken into account to obtain reliable results with the described approach. First, for correct measurement it is important that the cell-free area lies completely within the measurement field of view of the confluence detection function, which is feasible using commercially available gap closure plates in the 96-well microplate format. However, probably due to variations in the manufacturing process, we noticed that the cell-free area spots were not always located at the center of the well, making interpretation difficult. In fact, several experimental series had to be excluded from our data analysis due to extreme variations in the spots' positions. We therefore recommend including sufficient technical replicates and, if technically feasible, extending the segment of the well at which confluence is measured to correct for these variations in the spots' positions. Second, to perform the analysis described in the current manuscript, it is mandatory that cells are completely confluent around the cell-free area, which can easily be achieved in the 96-well microplate format by using sufficient cell densities. Importantly, visual evaluation of cell confluence around the cell-free area and of cell health using a microscope is mandatory before starting the experiment to ensure reliable results. Nevertheless, variations between biological replicates are possible. Although the trends within each individual experiment were always observable (e.g. the migration-promoting effect of EGF; the migration-inhibitory effects of ouabain and cytochalasin D), we found that the magnitude of these effects varied between the biological replicates. From a technical point of view, confluence measurement including sufficient biological replicates was able to resolve the pro- and anti-migratory effects of the employed model substances despite the fact that their absolute effects varied between the biological replicates. Lastly, while the described protocol requires no additional hands-on time after starting the experiment, since the measurement is performed continuously within the instrument, the microplate reader must be equipped with the options to control the CO2 concentration and temperature and to prevent evaporation from the plate, to ensure correct environmental conditions. Our protocol offers several advantages compared to standard protocols without continuous confluence measurement [3,7,13]. First and foremost, measurement of confluence in a multimode reader equipped with temperature and gas control allows non-endpoint and continuous monitoring of cellular migration without any hands-on time after starting the assay.
In contrast to conventional endpoint measurements and/or measurements at defined time points, continuous investigation of migration allows a more accurate interpretation of the migratory potential of different cell lines and of the effects of substances or treatments [25]. Moreover, this protocol allows automated and medium/high-throughput monitoring of migration through the described software settings, without the need to move the plate from the cell culture incubator to the multimode reader. In addition, the protocol is flexible in the sense that the timing of the confluence measurements can be adjusted to the individual experimental setup and the wells used. For example, in the current setup we used up to 28 wells per experimental series and measured the confluence in each well every 30 minutes. Depending on the number of wells used per experiment (and the associated amount of time required for confluence detection), one can choose a finer- or coarser-meshed time schedule for confluence measurement. Furthermore, the 96-well format allows high-throughput analysis including appropriate controls and technical replicates and also minimizes the amount of cells and chemicals required compared to other assays such as the scratch assay, which is often performed in tissue culture dishes [26]. Another advantage of our protocol is that the detailed and quantitative investigation of migration data obtained by confluence measurement opens up several ways of data evaluation. As demonstrated, besides analysis of the confluence per well/per field of view itself, one can also quantify the reduction of the (initially) cell-free area and the increase of the confluence in this area (gap) in order to directly quantify cellular migration into the gap. Depending on the results, a high-quality Michaelis-Menten data fit can be performed, as shown for our results. For biological questions, the formula of the data fit can be used to quantify migratory behavior under defined circumstances; e.g. one could calculate the required time at which 50% of the cell-free spots are populated by migrated cells, to provide a single quantitative measure of cell migration under different experimental conditions. This is especially interesting when used as a semi-high-throughput screening approach for substances and/or treatments. Such detailed analysis of the effects of substances or treatments on cellular migration based on continuous confluence measurements therefore allows a more detailed and accurate analysis and consequently additional scientific statements. In conclusion, the current manuscript describes a new and expanded protocol for measurement of cell migration using microplate-based imaging and confluence determination, enabling non-endpoint, medium/high-throughput and continuous data generation. Combined with the described options for data analysis, the protocol represents a versatile tool for medium/high-throughput studies of cellular migration and for evaluation of the pro-/anti-migratory effects of diverse substances and treatments.

Cytochalasin and ouabain cytotoxicity
Cells were seeded in 96-well microplates at a concentration of 2 × 10⁵ cells/mL, corresponding to 2 × 10⁴ cells per 100 µL and well. A dilution series of cytochalasin D (range: 0.01-5.00 µM) or ouabain (range: 2-1,000 nM) was applied for 48 hours. Serum-free medium (sfRPMI) was used to avoid potential interactions of serum components with cytochalasin D/ouabain.
Quantification of viable cells was done using the resazurin assay and a Spark® multimode reader (Tecan). Viability was related to untreated cells (incubated with sfRPMI only).

Gap closure assay
The Radius™ 96-Well Cell Migration Assay (Cell Biolabs, CBA-126) was used according to the manufacturer's instructions. For all experiments, sfRPMI was used to avoid cellular proliferation during measurement [25]. In brief, A549 cells were seeded at high density (6 × 10⁵ cells/mL; 6 × 10⁴ cells per 100 µL and well) overnight in the Radius™ 96-Well Cell Migration Assay plate. After washing, the gel was removed and cytochalasin D (0.3 µM), ouabain (0.031 µM) or EGF (50 ng/mL) was added in sfRPMI. For continuous monitoring, the plate was placed in the Spark multimode reader for 48 hours under standard cell culture conditions (37 °C, 5% CO2) using the temperature and gas control of the reader. To avoid evaporation, the plate was placed in a 'large humidity cassette', which includes excess water in a groove surrounding the microplate (Tecan). Confluence measurement of each well was performed every 30 minutes using the confluence measurement function at the center of the well within a kinetic loop (96 measurements, one every 30 minutes). Before each experiment, the position of the cell-free area as well as the density and health of the cells were verified using a phase-contrast microscope (Motic) and the microplate reader's Live-Viewer function, to guarantee that confluence measurement at the center of the well covers the gel spot and most of the surrounding area.

Statistics
All data points represent mean values of four individual biological replicates ± standard error of the mean (sem) or Gaussian error propagation, where applicable. For confluence measurements, the individual biological replicates included at least three technical replicates. A paired Student's t-test was used for calculation of significant differences between control and ouabain/cytochalasin D viability, respectively. Statistical results were considered significant (*) or highly significant (**) at p < 0.05 and p < 0.01, respectively. Calculations and diagrams were performed with OriginPro 2017 (OriginLab).

Disclosure statement
No potential conflict of interest was reported by the authors.

Funding
This study was supported by funds of the federal government of Salzburg under grant number Wirtschafts- und Forschungsförderung, 20102-P1600889-FPR01-2016.
Oxidized-Multiwalled Carbon Nanotubes as Non-Toxic Nanocarriers for Hydroxytyrosol Delivery in Cells

Carbon nanotubes (CNTs) possess excellent physicochemical and structural properties alongside their nano dimensions, constituting a medical platform for the delivery of different therapeutic molecules and drug systems. Hydroxytyrosol (HT) is a molecule with potent antioxidant properties that, however, is rapidly metabolized in the organism. HT immobilized on functionalized CNTs could improve its oral absorption and protect it against rapid degradation and elimination. This study investigated the cellular effects of oxidized multiwalled carbon nanotubes (oxMWCNTs) as biocompatible carriers of HT. The oxidation of MWCNTs via H2SO4 and HNO3 has a double effect, since it leads to increased hydrophilicity, while the introduced oxygen functionalities can contribute to the delivery of the drug. The in vitro effects of HT, oxMWCNTs, and oxMWCNTs functionalized with HT (oxMWCNTs_HT) were studied against two different cell lines (NIH/3T3 and Tg/Tg). We evaluated toxicity (MTT and clonogenic assay), cell cycle arrest, and reactive oxygen species (ROS) formation. Both cell lines coped with oxMWCNTs even at high doses. oxMWCNTs_HT acted as pro-oxidants in Tg/Tg cells and as antioxidants in NIH/3T3 cells. These findings suggest that oxMWCNTs could evolve into a promising nanocarrier suitable for targeted drug delivery in the future.

Introduction
Since the discovery of carbon nanotubes (CNTs) by Iijima and co-workers in 1991 [1], a vast scientific community from diverse research areas has undertaken efforts to explore this peculiar one-dimensional nanomaterial, which possesses extraordinary properties. CNTs consist of graphitic walls ranging from a single graphene sheet (SWCNTs) to multiple graphene sheets (MWCNTs) [2], wrapped into cylinders of a few nanometers in diameter and several micrometers in length. This unique shape, alongside the π-π conjugated system and the small nanometer size of the CNTs, generates a series of impressive structural, mechanical, optical, and electronic properties [3]. Among them are excellent electrical and thermal conductivity and exceptional tensile strength combined with high elasticity, chemical stability, and an ultrahigh surface area, while the surface can be easily tailored and functionalized with a wide variety of biomolecules for specific biomedical targets [4]. As a consequence, CNTs have found a dominant place in many applications [5], such as transistors in microelectronics [6,7], coatings and fillers in nanocomposites [8], energy storage devices [9], biosensors [10], and medical devices [11]. Recently, there has been a revolution in how carbon nanotubes are utilized as effective vehicles for drug delivery, due to their ability to adsorb or conjugate a wide variety of pharmaceutical drugs, biomolecules, proteins, enzymes, or DNA. Drug nanocarriers are categorized as organic (e.g., liposomes and solid lipid nanoparticles) and inorganic (e.g., gold, silica, carbon nanoparticles). Organic nanocarriers are, in general, non-toxic and have the potential to encapsulate and deliver a wide range of pharmaceutical drugs [12]. On the other hand, inorganic nanocarriers have raised issues regarding their potential toxicity [13]. Amongst inorganic nanocarriers, CNTs have attracted significant attention for drug delivery [14]. CNTs have been examined as potential carriers for anticancer drugs, DNA/RNA, enzymes, antibodies, and antibiotics [15-17].
The greatest obstacle to the usefulness of such materials, and consequently to their targeted bio-applications, is that CNTs are highly hydrophobic, which is a limiting factor for drug delivery systems. In order for CNTs to be used as a vehicle for drugs and reach cells, they should first be functionalized, for two main reasons: (a) to create the appropriate functionalities for the drugs to adsorb onto and (b) to increase the solubility in polar/aqueous media. An efficient way to achieve this is via the oxidation of MWCNTs by various reagents [18]. Nanotechnology allows scientists to modify the physical, chemical, and biological characteristics of particles at the nanoscale and to develop nanocarriers for drug delivery and diagnostic applications. Nano-enhanced drugs are designed to maximize the delivery of non-water-soluble drugs, act directly on specific cells or tissues, and overcome problems such as drug transport across the blood-brain barrier (BBB) [19]. Hydroxytyrosol (HT) is a phenolic compound found in olive oil, olive leaves, and wine and exerts a broad range of potential pharmacological activities, such as antioxidant, anti-inflammatory, neuroprotective, antimicrobial, cardioprotective, and anticancer effects [20-23]. Due to its medicinal benefits, various methodologies have been proposed for the extraction of HT from olive oil or olive leaves and for its production by chemical synthesis or by microorganisms through the application of biotechnological approaches [24,25]. The absorption of HT is a dose-dependent procedure that takes place in the intestine, after which HT undergoes intestinal and hepatic metabolism [26]. The absorption efficiency of HT ranges between 75 and 100%. HT and its metabolites are distributed extensively to the body tissues and can even cross the BBB. This property could explain the advantageous health effects of HT [27,28]. However, HT absorption is a rapid procedure leading to maximum plasma levels a few minutes after intake, while the complete elimination of HT and its metabolites from the body occurs within 6 h in humans [29]. Due to its rapid metabolism, HT exhibits a plasma half-life of 1-2 min [28]. Bioavailability is a prerequisite for any compound to exhibit a biological effect. Thus, a biocompatible nanocarrier that protects the drug and increases its half-life could act multiplicatively and enhance its biological activity. The toxic effects of CNTs have been studied for the past several years, and it was concluded that CNT toxicity is related to the type of nanotubes, the type of functionalization, and the purity of the final product [14]. HT is a molecule that is rapidly metabolized in the body, so it is possible that effective therapeutic blood and tissue concentrations are not achieved. This study aimed to utilize newly synthesized oxMWCNTs as nanocarriers for HT, assess their physicochemical characteristics, and determine how they interact with cells.

Oxidation of Multi-Walled Carbon Nanotubes-oxMWCNTs
First, 100 mg of pristine MWCNTs was added to a mixture of 30 mL H2SO4 (95-97%) and 10 mL HNO3 (65%) (3:1 v/v%). The dispersion flask was placed in a sonicator bath for 3 h. The resulting oxidized MWCNTs (oxMWCNTs) were separated from the mixture by centrifugation, which was followed by four washings with distilled water. The obtained oxidized material was dried in ambient conditions on a glass plate (Figure 1).
Decoration of oxMWCNTs with Hydroxytyrosol-oxMWCNTs_HT
The ability of the oxidized carbon nanotubes to be an effective carrier in drug delivery systems is examined in this study. For this purpose, 50 mg of HT and 50 mg of oxMWCNTs were added, each one in 50 mL of distilled water. The aqueous dispersions were mixed and stirred for 24 h. The precipitate was collected after centrifugation and several washings with distilled water. The final product was air-dried on a glass plate (Figure 2).
Characterization Techniques for oxMWCNTs and oxMWCNTs_HT
Fourier transform infrared (FT-IR) spectra over the spectral range 400-4000 cm⁻¹ were collected with a Perkin-Elmer Spectrum GX infrared spectrometer featuring a deuterated triglycine sulphate (DTGS) detector. Every spectrum was the average of 64 scans taken with 2 cm⁻¹ resolution. Samples were prepared as KBr pellets with ca. 2 wt% of sample. Raman spectra were recorded with a Micro-Raman system, RM 1000 RENISHAW, using a laser excitation line at 532 nm (Nd-YAG), in the range of 1000-3500 cm⁻¹. A power level of 1 mW was utilized with a 1 µm focusing spot so as to avoid photodecomposition of the samples. Thermogravimetric measurements were carried out with a Perkin Elmer Pyris Diamond TG/DTA. Samples of approximately 5 mg were heated in air from 25 °C to 900 °C at a rate of 5 °C/min. XPS measurements were performed under ultrahigh vacuum with a base pressure of 4 × 10⁻⁹ mbar, using a SPECS GmbH instrument equipped with a monochromatic Mg Kα source (hν = 1253.6 eV) and a Phoibos-100 hemispherical analyzer. The energy resolution was set to 1.18 eV. Spectra were recorded with an energy step of 0.05 eV and a dwell time of 1 s. All binding energies were referenced to the C1s core level centered at 284.6 eV. Spectral analysis included Shirley or linear background subtraction, and peak deconvolution involving mixed Gaussian-Lorentzian functions was conducted with a least squares curve-fitting program (WinSpec, University of Namur, Belgium). Atomic force microscopy (AFM) images were recorded on silicon wafer substrates using tapping mode with a Bruker Multimode 3D Nanoscope (Ted Pella Inc., Redding, CA, USA).

Determination of Cellular Viability
For the estimation of cell viability, 96-well plates were used. In each plate, 5000 cells (Tg/Tg or NIH/3T3 cells) were plated with 100 µL of the medium. After 24 h, oxMWCNTs, HT, and oxMWCNTs_HT were added at various concentrations. The plates were then filled with medium to a final volume of 200 µL. Cells were incubated with the substances for 24 or 48 h. At the end of this time, 40 µL of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT tetrazolium salt, stock solution of 3 mg/mL) was added. The cells were incubated again under the same conditions (37 °C and 5% CO2) for three hours. The supernatant was then removed, and 100 µL of DMSO was added. Absorbance at 540 nm and 690 nm was measured with the aid of a spectrophotometer (Multiskan Spectrum, Thermo Fisher Scientific, Waltham, MA, USA). All experiments were performed in triplicate.

Clonogenic Assay
Cells (500 cells/mL) were seeded in 6-well plates, to a final volume of 2 mL per well. After 24 h of incubation, cells were treated with oxMWCNTs (10 and 50 µg/mL), HT (1, 10, 20, and 50 µg/mL), and oxMWCNTs_HT (1 and 20 µg/mL) for an additional 24 h. The supernatant was then removed, and new medium (2 mL) was added. The cells were incubated for 8 days with an intermediate medium renewal on day 4. On the 8th day, the medium was removed, and cells were rinsed carefully with PBS and stained with a mixture of glutaraldehyde (6.0%) (Thermo Scientific A17876AE, Waltham, MA, USA) and crystal violet (0.5%) (Sigma-Aldrich C3886, Burlington, MA, USA) for 30 min. Plates were then carefully washed and left to dry in air at room temperature [31]. Visible colonies were counted using the OpenCFU open-source software [32]. All experiments were performed in triplicate.
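As an illustration of the viability calculation implied by the dual-wavelength MTT readout above, the following sketch subtracts the 690 nm reading from the 540 nm signal and normalizes to the untreated control; treating 690 nm as a background reference wavelength is an assumption, and all readings are hypothetical.

```python
# Minimal sketch of the MTT viability calculation implied by the protocol:
# background-corrected absorbance (A540 - A690) of treated wells expressed as
# % of the untreated control. All readings below are hypothetical.
import numpy as np

a540_ctrl, a690_ctrl = np.array([0.92, 0.88, 0.95]), np.array([0.06, 0.05, 0.06])
a540_trt,  a690_trt  = np.array([0.55, 0.51, 0.58]), np.array([0.05, 0.06, 0.05])

signal_ctrl = (a540_ctrl - a690_ctrl).mean()
signal_trt = (a540_trt - a690_trt).mean()
print(f"viability: {signal_trt / signal_ctrl * 100:.1f} % of untreated control")
```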
Cell Cycle
Cells were seeded in six-well plates at a density of 75 × 10³ cells/mL before the addition of 20 µg/mL oxMWCNTs, HT, or oxMWCNTs_HT. Twenty-four hours later, the supernatant was discarded and cells were washed twice with PBS, detached with trypsin, and collected with PBS. The cell suspensions were centrifuged for 5 min at 3000 rpm. The supernatants were removed and the cells were resuspended in ice-cold PBS. Centrifugation was repeated and the pellet was resuspended in 0.5 mL ice-cold PBS. Following this, 0.5 mL of absolute ethanol was added using the drop-by-drop technique. The samples were then stored at −20 °C for one week. On day 8, the samples were centrifuged and the pellets were resuspended in 1 mL of ice-cold PBS. PI and RNase A (25 µg/mL each) were added to the samples, followed by incubation at 37 °C for 30 min in the dark. Analysis was performed on a fluorescence-activated flow cytometer (Partec ML, Partec GmbH, Germany). For each sample, 10,000 events were counted. All experiments were performed in triplicate.

Statistical Analysis
Data are expressed as means ± standard deviation. The statistical significance of differences between data means was determined by Student's t-test; p < 0.05 indicated a statistically significant difference (SPSS version 20.0, Statistical Package for the Social Sciences software, SPSS, Chicago, IL, USA). The GraphPad Prism 8 software was used for creating the figures.

Structural and Morphological Characterization of oxMWCNTs and oxMWCNTs_HT
The FT-IR spectra of the oxidized MWCNTs (oxMWCNTs) and of those modified with hydroxytyrosol (oxMWCNTs_HT) are shown in Figure 3. For comparison, the spectra of pristine MWCNTs and hydroxytyrosol are presented as well. In the spectrum of the pristine MWCNTs, no intense vibrations were recorded, in contrast to the spectrum of the oxidized ones. More specifically, the bands at 613 cm⁻¹ and 1107 cm⁻¹ are attributed to the wagging vibrations of the hydroxyl groups and the stretching vibrations of the epoxide groups (C-O-C), respectively. Stretching vibrations of C-OH groups are observed at 1413 cm⁻¹, while the band at 1618 cm⁻¹ is associated with the bending vibrations of water molecules. Additionally, the band at 3436 cm⁻¹ corresponds to vibrations of hydroxyl groups [33]. The presence of the aforementioned bands confirms the creation of oxygen functional groups on the surface of the nanotubes and, therefore, the successful oxidation of the material. In the case of oxMWCNTs_HT, the appearance of the two bands in the range of 2800-3000 cm⁻¹ is related to the stretching vibration of the alkyl groups of hydroxytyrosol molecules. The same bands are detected in the spectrum of HT, which leads to the conclusion that the phenolic compound was successfully attached to the surface of the graphitic walls of the nanotubes. The vibration bands derived from HT attached to the nanotubes were reduced in intensity, and the remaining ones were shifted slightly compared to the spectrum of HT. This is due to the fact that the freedom of some bonds to vibrate is reduced significantly as the functional groups take part in the interactions between the oxMWCNTs and HT.
The Raman spectra of MWCNTs, oxMWCNTs, and oxMWCNTs_HT (Figure 4) depict the intense D, G, and G' (2D) bands characteristic of pristine and oxidized MWCNTs [34,35]. The Raman spectrum of MWCNTs is similar to those of paracrystalline carbon black and amorphous biochar, exhibiting an intense and broad D band [36]. In contrast to SWCNTs, sp2-to-sp3 hybridization associated with the oxidation or functionalization of MWCNTs does not generally result in drastic changes to their intrinsically intense D bands [36]. This renders the D-to-G band intensity ratio (I_D/I_G) an unreliable parameter for the quantification of the extent of oxidation or functionalization of MWCNTs [37,38]. Instead, the disorder associated with the oxidation or functionalization of pristine MWCNTs is characterized by the intensifying of another weak defect-activated Raman band, the D' band, appearing at ~1600-1620 cm⁻¹ as a shoulder of the G band, hence giving the G bands of pristine, oxidized, and functionalized MWCNTs their characteristic weakly asymmetric profiles [39,40].
The D' bands of MWCNTs, oxMWCNTs, and oxMWCNTs_HT were unveiled upon deconvolution of their corresponding G bands, as can be seen in Figure 4. To achieve this, the Raman spectra were fit to five symmetric Lorentzian peaks corresponding to the D, G, D', G' (2D), and D + G bands. The corresponding Lorentzian curve-fitting parameters are presented in Table 1. The D'-to-G band intensity ratio (I_D'/I_G), which is directly proportional to the defect concentration [41], was used in place of I_D/I_G to probe the structural transformations associated with the oxidation of MWCNTs and the adsorption of HT onto oxMWCNTs. As can be seen in Table 1, I_D'/I_G increases from 0.16 for MWCNTs to 0.43 for oxMWCNTs. Furthermore, all the bands of oxMWCNTs and oxMWCNTs_HT reveal a significant increase in the full width at half maximum (FWHM) relative to those of MWCNTs. These observations confirm the oxidation of MWCNTs, which is also corroborated by the intensifying of the D + G bands in the Raman spectra of oxMWCNTs and oxMWCNTs_HT. The peculiar decrease in I_D'/I_G to 0.29 for oxMWCNTs_HT can be attributed to the vibrational contribution of the phenolic C=C bonds in HT at ~1610-1620 cm⁻¹ [42,43]. For this reason, the mode of adsorption of HT onto oxMWCNTs, whether it be physisorption or chemisorption, can only be identified by the G' (2D)-to-G band intensity ratio (I_G'/I_G), which is inversely proportional to the defect concentration [41]. As can be seen in Table 1, I_G'/I_G decreases from 0.92 for MWCNTs to 0.44 for oxMWCNTs, which further confirms the oxidation of MWCNTs, but it remains at 0.44 in the case of oxMWCNTs_HT, thus confirming that HT is physisorbed.

The thermogravimetric analyses of pristine MWCNTs, oxMWCNTs, and oxMWCNTs_HT are presented in Figure 5. In the case of MWCNTs, one main combustion step is observed, starting at approximately 480 °C and accompanied by ~100% mass loss of the material. The analysis of oxMWCNTs indicates three mass losses. The initial weight loss (up to 100 °C), on the order of 9 wt%, corresponds to the removal of naturally adsorbed water molecules, which is indicative of the material's hydrophilicity. The next mass loss, up to 430 °C (~11%), corresponds to the removal of the oxygen groups of the carbon nanotubes, while the third weight loss is caused by the decomposition of the carbon network (~80%). The curve of oxMWCNTs_HT exhibits a weight loss of ~18.5%, which occurs through the decomposition of both the functional groups and the organic compound. The combustion of the graphitic lattice takes place at approximately 480 °C and is followed by ~69.5% mass loss. Up to 100 °C, a weight loss of 12% is observed, which is attributed to the removal of water molecules. This value is higher than that of the oxidized nanotubes because of the significantly hydrophilic nature of HT.
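To make the band deconvolution concrete, here is a minimal sketch of fitting five symmetric Lorentzian peaks (D, G, D', G' (2D), D + G) to a Raman spectrum and extracting the intensity ratios discussed above. The band centers, widths, and spectrum are assumed illustration values, not the fitted parameters of Table 1.

```python
# Minimal sketch of the five-band Lorentzian deconvolution described above:
# D, G, D', G'(2D) and D+G peaks fit to a Raman spectrum, then the I_D'/I_G
# and I_G'/I_G intensity ratios taken from the fitted amplitudes.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, cen, width):
    return amp * width**2 / ((x - cen)**2 + width**2)

def five_bands(x, *p):
    # p holds (amp, center, width) for D, G, D', G'(2D), D+G in that order
    return sum(lorentzian(x, *p[3*i:3*i+3]) for i in range(5))

# Hypothetical spectrum standing in for a measured 1000-3500 cm^-1 scan
shift = np.linspace(1000, 3500, 1200)
true = [1.0,1350,60, 0.9,1580,40, 0.4,1615,30, 0.5,2700,70, 0.3,2930,80]
intensity = five_bands(shift, *true) + np.random.normal(0, 0.01, shift.size)

p0 = [1,1350,50, 1,1580,40, 0.3,1610,30, 0.5,2700,60, 0.2,2930,70]
popt, _ = curve_fit(five_bands, shift, intensity, p0=p0)
amps = popt[0::3]  # fitted amplitudes of D, G, D', G'(2D), D+G
print(f"I_D'/I_G = {amps[2]/amps[1]:.2f}, I_G'/I_G = {amps[3]/amps[1]:.2f}")
```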
The surface chemical composition, as well as the valence states of oxMWCNTs and oxMWCNTs_HT, was probed by X-ray photoelectron spectroscopy (XPS) measurements. The Si 2p and 2s (Si2p and Si2s) photoelectron peaks (centered at 100 eV and 152 eV, respectively) are due to the silica substrate onto which the samples were drop-casted and dried from the solution phase. Figure 6a,b display representative XPS survey scan spectra, which indicate the coexistence of carbon and oxygen atoms. The C1s high-resolution photoelectron spectrum is presented in Figure 6 for both oxMWCNTs and oxMWCNTs_HT. The C1s spectrum of oxMWCNTs (Figure 6c) is deconvoluted into five peaks and demonstrates the successful oxidation of the carbon walls. More specifically, at 284.6 eV we observe the contribution of the C=C and C-C bonds of the graphitic walls, representing 32.7% of the whole carbon peak, while the remaining fitted peaks correspond to oxygen functionalities created by the oxidation: C-O (24.4%), C-O-C (31.5%), C=O (7.9%), and C(O)O (3.5%). After the attachment of hydroxytyrosol, the C1s spectrum differs from that of oxMWCNTs, as displayed in Figure 6d. The most obvious difference derives from the C=C peak at 284.6 eV, whose percentage in the case of oxMWCNTs_HT increases significantly to 52.3%. This is due to the fact that the drug, which contains a high amount of C-C bonds, is attached to the surface of the tubes. The C/O ratio is estimated for both cases and is found to be C/O = 2.0 for oxMWCNTs and C/O = 3.1 for oxMWCNTs_HT. The oxygen functionalities remain after the drug interaction, but all of them are shifted by 0.2 to 0.4 eV; this may be due to the weak interaction of these functionalities with the oxygen species of HT. Finally, a new peak observed at 290.3 eV is due to the pi-pi* interaction between the aromatic groups of the drug and the graphitic walls of the nanotubes [44]. We can conclude that the drug interacts with the nanotubes via pi-pi* interactions and weak van der Waals forces.
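Since the XPS analysis relies on Shirley background subtraction before peak deconvolution, a minimal sketch of the standard iterative Shirley algorithm is given below; this is a generic textbook implementation, not the WinSpec routine, and the example spectrum is hypothetical.

```python
# Minimal sketch of an iterative Shirley background, in which the background at
# each point is proportional to the peak area accumulated between that point
# and the low-intensity endpoint of the binding-energy window.
import numpy as np

def shirley_background(y, n_iter=30):
    """Iterative Shirley background for a 1-D spectrum y (endpoints set the limits)."""
    y0, y1 = y[0], y[-1]
    bg = np.full_like(y, y1, dtype=float)
    for _ in range(n_iter):
        signal = y - bg
        cum = np.cumsum(signal[::-1])[::-1]   # area between each point and the end
        k = (y0 - y1) / cum[0]
        bg = y1 + k * cum
    return bg

# Hypothetical C1s-like peak sitting on a step background
energy = np.linspace(280, 295, 300)
peak = np.exp(-(energy - 284.6)**2 / 1.5)
step = 0.3 / (1 + np.exp(energy - 284.6)) + 0.1
y = peak + step

corrected = y - shirley_background(y)
area = corrected.sum() * (energy[1] - energy[0])
print(f"peak area after background removal: {area:.3f}")
```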
Characteristic AFM images of the synthesized carbon nanostructures are depicted in Figure 7. The typical thickness of the oxMWCNTs is 10-12.5 nm, as derived from the topographic height profile (section analysis) (Figure 7a). Moreover, the AFM images show the modified structures of the oxidized multi-walled carbon nanotubes with HT, oxMWCNTs_HT (Figure 7b). The average thickness of the oxMWCNTs_HT varies from 19 to 22 nm, which is indicative of the decoration of the oxMWCNTs with HT. From the topographic height profile (section analysis), the AFM images can also be used to calculate the thickness of the HT layer, which ranges between 8 and 12 nm.

Cell Viability
oxMWCNTs had neither dose- nor time-dependent toxic effects against Tg/Tg and NIH/3T3 cells, whose viability remained over 80% after 48 h of incubation (Figure 8). Moreover, oxMWCNTs_HT did not affect cell proliferation in NIH/3T3 cells (Figure 8c,d) but significantly reduced the Tg/Tg cell population, especially at doses higher than 10 µg/mL and exposure for 48 h (Figure 8b). HT exerted a similar dose-dependent effect in both cell lines, with less than 20% of cells surviving after treatment with 100 µg/mL for 24 h (Figure 8). Comparing the effect of free HT with that of HT bound to oxMWCNTs (oxMWCNTs_HT), we observed a more potent cytotoxic effect for oxMWCNTs_HT in Tg/Tg cells, which was enhanced over time (Figure 9a,b), and a similar effect on cell viability in NIH/3T3 cells (Figure 9c,d).

Ability of Cells to Form Colonies
Treatment of both cell lines with HT led to reduced formation of colonies compared to the control (p < 0.05). In Tg/Tg cells, the survival fraction was lower than in NIH/3T3 cells, indicating a more potent cytotoxic effect of HT (Figure 10a).
Incubation of cells with 50 µg/mL oxMWCNTs had no effect on the ability of NIH/3T3 cells to form colonies, but a mild reduction (20%) was recorded for Tg/Tg cells (Figure 10b). The damage provoked by oxMWCNTs_HT also had a higher impact on the colony-forming ability of Tg/Tg cells than of NIH/3T3 cells (Figure 10c). oxMWCNTs_HT (20 µg/mL) carrying 2 µg/mL HT allowed 75% of NIH/3T3 cells and 65% of Tg/Tg cells to retain their reproductive integrity (Figure 10c).

Discussion
Hydroxytyrosol is characterized by pharmaceutical companies as a molecule of high interest, and extensive research is ongoing into its biological activities and into methods to enhance its beneficial effects. Thus, the conjugation of HT to a soluble and biocompatible nanocarrier that effectively carries and releases HT into the bloodstream in a controlled manner is of importance. We demonstrated that the short-term toxicity of oxMWCNTs was extremely low against the two normal cell lines that we used. However, oxMWCNTs functionalized with HT decreased Tg/Tg viability after 48 h of exposure, indicative of time-dependent cytotoxic activity.
Long-term toxicity, estimated by the clonogenic assay (mimicking physiological conditions), revealed that Tg/Tg cells were more susceptible than NIH/3T3 cells to the cytotoxic effects of oxMWCNTs_HT, which are potentially provoked by the production of ROS. When the amount of HT carried by the oxMWCNTs was matched with free HT (0.5-10 µg/mL), similar cytotoxicity was observed in NIH/3T3 cells, with a higher reduction in the viability of Tg/Tg cells. Several nanotechnological approaches have been employed to increase the delivery and efficiency of HT. Recently, Jadid et al. nano-encapsulated HT and curcumin into poly lactide-co-glycolide-co-polyacrylic acid (PLGA-co-PAA) to enhance their anticancer activity against pancreatic cancer cells and observed a significant increase in apoptotic rates [45]. The same nanocarrier was employed successfully by Ahmadi et al. to enhance the anticancer potential of doxorubicin and HT against HT-29 colon cancer cells [46]. CNTs have been used as carriers for several anticancer molecules, anti-inflammatory agents, steroids, etc. [47]. To the best of our knowledge, this is the first time that oxMWCNTs have been functionalized with HT. The precursor molecule of HT, oleuropein, has successfully been adsorbed at two positions (inside and outside) of different types of SWCNTs, with the inner position providing a stronger interaction with oleuropein [48]; however, no biological assays were performed. In 2015, a theoretical study was published on the feasibility of using SWCNTs for the delivery of quercetin, a plant flavonoid [49], and in 2022, SW- and MWCNTs were examined in vitro as nanocarriers for the delivery of 7-hydroxyflavone (7-HF), which belongs to the flavone subgroup of flavonoids and is a potent anti-inflammatory agent. In the latter study, the authors measured the cytotoxicity of pristine SW- and MWCNTs and of -COOH-functionalized SWCNTs against several normal and cancer cell lines, but not of CNTs functionalized with 7-HF [50]. Their results corroborate our finding that the cytotoxicity of CNTs differs amongst cell lines and is potentially related to the uptake mechanism of each cell type [47]. Hsp-70 is a molecular chaperone that acts in a large variety of protein folding and remodeling processes in the cell. Hsp-70 overexpression can protect cells from stress-induced apoptosis [51,52]. It is known that phenols and antioxidant molecules can inhibit the expression of chaperones such as Hsp-70, resulting in the loss of their protective activity in cells [52]. The effect of oxMWCNTs or oxMWCNTs_HT on cells overexpressing Hsp-70 is reported here for the first time. The reduction of Tg/Tg cell viability by oxMWCNTs_HT could be an indication that the oxMWCNTs successfully delivered the antioxidant molecule HT inside the cells, which in turn negatively affected the role of Hsp-70; however, this hypothesis needs two-step verification: (a) the intracellular delivery of HT by oxMWCNTs and (b) the exact effect of HT on Hsp-70. Further research is warranted to identify the uptake mechanisms and cellular pathways initiated by oxMWCNTs and oxMWCNTs_HT in this specific cell type.

Conclusions
In summary, we have efficiently oxidized MWCNTs, creating various oxygen functionalities on the external walls of the tubes. The oxMWCNTs interact via physisorption with the HT drug, creating an efficient platform for the delivery of HT. The oxMWCNTs were non-toxic to the cells.
However, when acting as a carrier for HT delivery, oxMWCNTs_HT exerted differential effects on the two cell lines, either producing ROS (Tg/Tg cells) or scavenging ROS (NIH/3T3 cells). Complementary data regarding the HT release rates from oxMWCNTs, the endocytosis mechanisms, and the activation of signal transduction pathways are required to fully assess the safety and efficiency of these promising nanocarriers. Conflicts of Interest: The authors declare no conflict of interest.
Event Reconstruction in the Tracking System of the CBM Experiment

The Compressed Baryonic Matter experiment (CBM) will investigate strongly interacting matter at high net-baryon densities by measuring nucleus-nucleus collisions at the FAIR research centre in Darmstadt, Germany. Its ambitious aim is to measure at very high interaction rates, unprecedented in the field of experimental heavy-ion physics so far. This goal will be reached with fast and radiation-hard detectors, self-triggered read-out electronics and streaming data acquisition without any hardware trigger. Collision events will be reconstructed and selected in real time exclusively in software. This places severe requirements on the algorithms for event reconstruction and their implementation. We will discuss some facets of our approaches to event reconstruction in the main tracking device of CBM, the Silicon Tracking System, covering local reconstruction (cluster and hit finding) as well as track finding and event definition.

The high interaction rates pose challenges not only to the detectors in terms of speed and radiation hardness, but also to the data acquisition system and data processing [5,6]. The readout concept of CBM foresees no hardware trigger at all; instead, autonomous front-end electronics deliver timestamped hit messages on activation by a charged particle crossing the respective detector element. The full detector information is aggregated by the DAQ and delivered to an online compute cluster. Here, the raw data will be inspected in real time, and event data containing signatures of rare observables will be selected for storage. The challenge is to reduce the raw data rate of about 1 TB/s to an archival rate of several GB/s, i.e., by a factor of 300 or more. The required selectivity and the complex nature of the trigger signatures necessitate almost complete reconstruction of tracks and events online.

Reconstruction in the CBM experiment

Reconstruction in CBM, as in all high-energy physics experiments, means establishing the particle trajectories from the measurements in the detectors and determining the event vertex. These entities are then subjected to higher-level physics analysis, either offline or in real time during the operation of the experiment. An "event" here means a collection of data presumably belonging to one collision. A reconstructed event is a collection of tracks (trajectories) originating from a common vertex. The number of charged particles created in a typical heavy-ion collision at CBM energies is about 1,000, of which up to 500 are covered by the detector acceptance. Reconstruction thus has to deal with very complex event topologies. Owing to the free-streaming design of the CBM readout and data acquisition, the input to reconstruction is a continuous stream of detector raw data, the atomic information units being messages from a single electronic channel, typically containing address, time and charge. Unlike in triggered readout systems, where the hardware trigger defines an event data set on which the reconstruction routines can operate, the raw data of CBM will not be partitioned into events. Events as the subject of physics analysis thus have to be established by the reconstruction procedure itself, making use of the time information of all measurements. Reconstruction thus becomes four-dimensional in space and time.
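To make the free-streaming input concrete, the sketch below (Python) models a digi message with address, time and charge, and estimates how often consecutive collisions overlap in time when the mean event spacing is comparable to the event duration. All numbers are illustrative assumptions, not CBM specifications.

```python
import random
from dataclasses import dataclass

@dataclass
class Digi:
    address: int    # electronic channel that fired
    time: float     # timestamp in ns
    charge: float   # measured charge, arbitrary units

random.seed(1)
rate_hz = 1.0e7           # assumed 10 MHz interaction rate -> 100 ns mean spacing
event_duration_ns = 50.0  # assumed time extension of a single event

t, starts = 0.0, []
for _ in range(100_000):
    t += random.expovariate(rate_hz) * 1e9   # exponential inter-event gaps in ns
    starts.append(t)

overlaps = sum(1 for a, b in zip(starts, starts[1:]) if b - a < event_duration_ns)
print(f"{overlaps / len(starts):.0%} of events overlap their successor in time")
```

With these assumed numbers, roughly 40% of events overlap their successor, which is why a trivial time-based decomposition of the stream fails, as discussed next.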
It should be noted that a trivial decomposition of the data stream into events based on the time information of the raw data themselves is hardly possible at very high interaction rates, since the average time interval between two subsequent events is of the same order of magnitude as the time extension of a typical event; events overlapping in time at the raw-data level thus occur rather frequently. Low-level reconstruction (cluster, hit and track finding) must therefore operate on the continuous data stream.

Figure 2. Schematic process graph for reconstruction in CBM. Local reconstruction provides hits (space points) in all detector systems (rings in the case of the RICH detector). Tracks are first reconstructed in the STS and MVD inside the magnetic field; hits of the other detector systems are then attributed to these tracks. Tracks originating from a common vertex are grouped into events, which are then characterised by the PSD measurement in terms of centrality and event-plane angle. The procedures in the highlighted boxes are discussed in this article.

The reconstruction procedure for CBM consists of several steps; a simplified and schematic process graph is depicted in Fig. 2. The first step is local reconstruction in the various detector systems; here, "local" means that reconstruction is independent of other detector elements. Charged particles typically activate not a single readout channel but several, representing neighbouring readout elements (pixels, strips, pads). The cluster finding procedure thus groups simultaneously activated neighbouring channels into a cluster. For CBM, "simultaneous" means that the time difference of the measurements in a cluster is smaller than a pre-defined threshold value, which corresponds to the time resolution of the respective detector. From a found cluster, a "hit" can be reconstructed by assigning a cluster centre position; a typical method for this is a charge-weighted mean. Hits are four-dimensional points in space-time. They are the reconstructed intersection points of the particle trajectories with active detector elements and are the input to track finding. Track finding is first performed on the measurements in the STS and MVD. From these local tracks, global tracks are formed by associating the measurements in the downstream detectors. On the basis of the reconstructed tracks, events can be defined as groups of tracks originating from a common vertex in configuration space and time.

Common to all software used in real-time reconstruction is the requirement to be extremely fast. CBM targets an average event reconstruction time of 20 ms, which would correspond to a compute capacity of the online cluster of about 1 M HepSpec06 (about 65,000 current CPU cores). Timing performance is thus, besides efficiency and accuracy, a decisive design argument for all CBM reconstruction software. In the following sections, we describe some selected parts of the reconstruction graph, namely local reconstruction (cluster and hit finding) in the STS, track finding in the STS, and event finding. These parts were chosen because they are among the best established ones and because they represent the computationally most demanding algorithms: the track model in the magnetic dipole field is non-trivial, and track finding in the STS is a huge combinatorial problem. The STS detector [7] is an arrangement of eight stations positioned close to the target (z = 30-100 cm) in the aperture of the dipole magnet (Fig. 1, right).
It is constructed of about 900 double-sided silicon micro-strip sensors with a strip pitch of 58 µm.

From digis to clusters: Cluster finding in the STS

A charged particle traversing a sensor of the STS generates electron-hole pairs along its trajectory in the active material, which drift to the readout surfaces owing to the applied bias voltage (Fig. 3, left). The charges of both electrons and holes are collected on the strips into which the readout surfaces are segmented. From the figure it is obvious that, depending on the inclination of the particle trajectory with respect to the sensor surface, one or several strips will be illuminated. Even for perpendicularly impinging particles, effects like thermal diffusion and capacitive coupling of strips can cause more than one strip to be activated. The task of the cluster finder is thus to find the group of measurements corresponding to one particle trajectory. This group consists of neighbouring strips which are activated close in time. Were all measurements grouped into events, this task would be straightforward: all measurements could be considered simultaneous, and the problem would be one-dimensional, solvable by a simple loop over the sensor channels (strips). In the real situation, however, it is not a priori known which measurement belongs to which event, and the time coordinate thus becomes essential. Clusters are now two-dimensional, in the discrete address space and in the continuous time axis, as illustrated in Fig. 3, right. A straightforward approach to a 2d cluster finder, discretizing the time axis into bins and performing double loops over channel and time bins, quickly runs into combinatorial problems for large data sets typically containing thousands of events. Thus, a conceptually new method was developed which accommodates the continuously streaming nature of the input data. The cluster finder is described in detail in [8]; we outline here the basic idea, sketched in code below. The algorithm keeps track of the current state of the sensor in the sense that for each channel, the time of the last measurement and a link to the measurement itself are buffered. All incoming digis are processed one after the other. If a buffered digi is found in the same or a neighbouring channel and the time difference between the old and the new measurement is above the threshold value, the corresponding cluster is closed and created, the channels contributing to it are cleared from the buffer, and the new digi is added to the buffer. If, on the other hand, a neighbouring channel is active and the time difference is below the threshold, the new digi is added to the buffer, forming a cluster with the previous measurement. At the end of processing a data portion (time-slice), clusters are formed from the remaining digis in the buffer. This algorithm does not involve any loop over detector channels and is thus very fast: about 3 ms per average event, including the determination of the cluster centre (see section 4), which takes about 50% of the processing time. Moreover, the execution time per event is independent of the input data size, as shown in Fig. 4, whereas the previous approach with 2d cluster finding shows the steep growth of execution time with data size typical of combinatorial problems.
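The following sketch (Python) shows the buffer logic described above. Channel numbering, time units and the merging of two already open clusters are simplified here, so it is an illustration of the idea rather than the CBM implementation.

```python
def find_clusters(digis, dt_max):
    """Streaming 1d+time cluster finder sketch.
    digis: time-ordered iterable of (channel, time, charge) tuples.
    dt_max: maximum time difference within one cluster."""
    last = {}        # channel -> (time of last digi, open cluster it belongs to)
    finished, emitted = [], set()

    def close(cluster):
        if id(cluster) not in emitted:       # emit each cluster only once
            emitted.add(id(cluster))
            finished.append(cluster)

    for ch, t, q in digis:
        cluster = None
        for c in (ch - 1, ch, ch + 1):       # the channel and its neighbours
            if c not in last:
                continue
            t_old, open_cluster = last[c]
            if t - t_old > dt_max:           # buffered digi too old: cluster done
                close(open_cluster)
                del last[c]
            elif cluster is None:
                cluster = open_cluster       # new digi continues this cluster
        if cluster is None:
            cluster = []                     # start a new cluster
        cluster.append((ch, t, q))
        last[ch] = (t, cluster)

    for _, cluster in last.values():         # flush buffer at end of time-slice
        close(cluster)
    return finished
```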
Determining the cluster position

Having found a cluster consisting of a number of strips, each with a charge measurement, the question arises which coordinate across the strips to assign to the cluster. This cluster centre position and its error enter the coordinates of the hit (see section 5) and are of importance for the track finding procedure, which relies on accurate estimates of the hit positions and errors. The usual answer to this question is: if the cluster consists of one strip only, assign the strip centre as coordinate, with the strip pitch divided by √12 as error; if the cluster has more than one strip, take the charge-weighted mean of the strip centre coordinates, with an error somewhat smaller than the strip pitch divided by √12. We did not find this answer satisfactory and thus investigated the problem in more detail [9]. It can be formulated in a rather generic way: given a continuous distribution q(x) which is integrated in discrete bins (histogrammed) into a set of values {q_i}, is there a prescription x_c = f({q_i}) which reproduces on average the first moment of q(x), and what is its error (second moment)? There is no such prescription for an arbitrary distribution q(x). However, some facts are easily shown:

• If the shape h(x) of q(x) is known, i.e., q(x) = Q h(x − x_0), then x_0 can be reconstructed with arbitrary accuracy.
• Any prescription x_c will have a deterministic residual δ_x = x_c − x_0, which is a function only of the position of x_0 relative to the centre of the bin x_0 falls into: r = x_0 − x_i.
• We can define an error σ_x if we assume r to be randomly distributed. It is then the second moment of the distribution of δ_x. Note that in general, δ_x will not be Gaussian-distributed, since the conditions for the Central Limit Theorem are not fulfilled.
• The prescription x_c is unbiased, i.e., the first moment of δ_x vanishes, if and only if δ_x(r) = −δ_x(−r). This is in general not the case for the weighted mean.

For our case, we can approximate q(x) by a constant distribution: q(x) = Q Θ(Δ − |x − x_0|), with the half-width Δ given by the track inclination and the sensor thickness (see Fig. 3, right). This neglects the effects of non-uniform ionisation along the trajectory, charge diffusion and cross-talk. We can further assume r to be flatly distributed within the central strip, since the track density does not vary over the strip extension of only 58 µm. In addition, the distribution of Δ can be assumed to be flat, which is substantiated by detector simulations. Under these conditions, we arrive at the following definitions of cluster centres and their errors:

• For one-strip clusters, the cluster centre is the centre of the strip: x_c = x_i. The error is σ_x = p/√24, with p being the strip pitch. Note that this error is a factor of √2 smaller than the usually quoted one.
• For two-strip clusters with strip centres x_1, x_2 and charges q_1, q_2, the model above yields the unbiased estimate x_c = (x_1 + x_2)/2 + (p/4)(q_2 − q_1)/max(q_1, q_2); the corresponding error follows from the second moment of the residual distribution and is derived in [9].
• For n-strip clusters (n ≥ 3), the central strips are fully covered and all carry the same charge q, so the edge charges determine the cluster boundaries exactly: x_c = (x_1 + x_n)/2 + p (q_n − q_1)/(2q), with x_1, x_n the centres and q_1, q_n the charges of the two edge strips. This prescription is accurate; the position error is zero.

In all cases, the time coordinate of the cluster is calculated as the average of the times of the contributing digis, independent of their charge.
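A compact version of these prescriptions in code (Python). The two-strip expression is derived here from the uniform-charge model and flat priors stated above; ref. [9] gives the authoritative formulas, including the two-strip error, for which a placeholder value is used below.

```python
import math

def cluster_centre(strips, pitch):
    """Cluster position and error estimate.
    strips: list of (strip centre x, charge q), ordered by position.
    Returns (x_c, sigma_x)."""
    n = len(strips)
    if n == 1:                                   # one-strip cluster
        x, _ = strips[0]
        return x, pitch / math.sqrt(24.0)
    if n == 2:                                   # two-strip cluster
        (x1, q1), (x2, q2) = strips
        xc = 0.5 * (x1 + x2) + 0.25 * pitch * (q2 - q1) / max(q1, q2)
        # placeholder error of the one-strip order; the exact second moment
        # of the residual distribution is derived in ref. [9]
        return xc, pitch / math.sqrt(24.0)
    # n >= 3: central strips are fully covered, edge charges fix the boundaries
    (x1, q1), (xn, qn) = strips[0], strips[-1]
    q = sum(qq for _, qq in strips[1:-1]) / (n - 2)   # mean central-strip charge
    xc = 0.5 * (x1 + xn) + 0.5 * pitch * (qn - q1) / q
    return xc, 0.0                               # exact within the model
```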
From clusters to hits: Hit finding in the STS

A hit as a space point is derived from the combination of a cluster on the front side of the sensor with a cluster from its back side. The strips on the two sides are inclined at a relative (stereo) angle, which allows two-dimensional coordinates to be retrieved from the two one-dimensional measurements, as illustrated in Fig. 5, left. The problem is deterministic and purely trigonometrical. The errors of the cluster positions are propagated to the errors of the hit coordinates. The z coordinate of the sensor itself provides the third component of the three-dimensional space point. The time associated with the hit is the average of the two cluster times. It should be noted that the construction of hits from the projective strip geometry leads to ambiguities when several clusters are present in a sensor at the same time (see Fig. 5, right). Depending on the track density, these so-called fake hits can outnumber the real hits; in a typical CBM situation, their number is approximately equal to that of the true hits. The fake hits are the price to pay for having a strip readout instead of a pixel readout and constitute an additional challenge for the track finding procedure.

From hits to tracks: Track finding in the STS

Track finding is usually the most involved problem in the reconstruction of high-energy physics experiments. This holds in particular for heavy-ion experiments, where the large number of particles produced in each collision results in highly occupied detectors. In fixed-target experiments like CBM, the kinematic forward focusing due to the Lorentz boost creates a highly inhomogeneous hit density, with the highest values in the inner parts of the detectors close to the beam pipe. The situation is illustrated in Fig. 6 (centre), showing the distribution of hits in the STS for a typical collision. Several approaches to the track finding problem in CBM were investigated in the past, such as Hough Transform, Conformal Mapping, and Track Following methods. The algorithm currently considered best suited for CBM purposes is based on the Cellular Automaton method. In its event-by-event version, it has been reported on before [10]. In particular, its speed was demonstrated to be 8.5 ms for a typical CBM event, with good scalability on many-core systems [11]. In the recent past, the algorithm was further developed to operate on free-streaming input data instead of data sorted into events [12]. The key elements of this transition are the expansion of the track model by the time coordinate, such that the state vector becomes (x, y, t_x, t_y, q/p, t), and a pre-arrangement of the input hits in suitable three-dimensional grids (one per detector station) for fast look-up. These extensions of the original algorithm do not affect the reconstruction efficiency even at the highest interaction rates; it is about 98% for primary tracks with p > 1 GeV.

From tracks to events: Event finding

With reconstructed tracks, it is finally possible to define events. The reasons for this are twofold: first, the track time is better defined than the time of a single measurement, because through the chain clusters → hits → tracks, a track consists of a number of digis, at least eight. Second, track finding effectively filters out hits from late spiralling electrons created by the interaction of particles with the target or detector materials. A simple peak-finding procedure in the time distribution of reconstructed tracks allows them to be grouped into events, as shown in Fig. 6, right; a minimal version of such a grouping is sketched below. With this method, 99% of all events can be resolved at 1 MHz interaction rate, and 80% at 10 MHz. It is currently under investigation how the remaining "pile-up" events can be resolved, or at least tagged. Possible handles for this are the time structure of tracks within the event, or the detection of multiple primary vertices within the pile-up.
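A minimal, gap-based stand-in for the peak-finding step (Python). The threshold and the toy input are assumptions; the real procedure operates on the track-time distribution rather than on a sorted list.

```python
def build_events(track_times_ns, max_gap_ns):
    """Group time-ordered track times into events wherever the gap to the
    previous track exceeds max_gap_ns. A toy stand-in for peak finding."""
    events, current = [], []
    for t in sorted(track_times_ns):
        if current and t - current[-1] > max_gap_ns:
            events.append(current)
            current = []
        current.append(t)
    if current:
        events.append(current)
    return events

# toy input: three collisions, each with a few tracks smeared by ~5 ns
tracks = [100, 103, 108, 1100, 1102, 1109, 2500, 2504]
print(len(build_events(tracks, max_gap_ns=50)))   # -> 3 events
```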
Summary

We have briefly described some aspects of the reconstruction of tracks and events in the Silicon Tracking System of the CBM experiment. From a stream of detector raw data, we have arrived at a set of events and associated tracks, which can be subjected to higher-level physics analysis. This is, of course, not the end of the story. Many aspects of the full event reconstruction, in particular the association of measurements from the downstream detectors, decisive for particle identification, have not been touched upon in this report. These are indispensable for the realisation of the physics programme of CBM, and work is continuing to bring them to a level comparable to the algorithms described here. However, since the computational problems discussed here are the most challenging ones in the reconstruction chain, we hope to have demonstrated that the goal of the CBM experiment, the reconstruction and selection of complex events in real time with very fast algorithms, is within reach.
Printable Poly(N-acryloyl glycinamide) Nanocomposite Hydrogel Formulations

Printable synthetic polymer formulations leading to hydrogels with high strength, swelling resistance, and bioactivity are required to control the mechanical and functional characteristics of biological scaffolds. Here, we present nanocomposite hydrogels prepared with the upper critical solution temperature (UCST)-type polymer ink poly(N-acryloyl glycinamide) (PNAGA) and different concentrations of carbon nanotubes (CNTs). Nanofiller CNTs are recommended for increasing the bioactivity of hydrogel scaffolds. Preparation methods were established in which the CNTs were included either before or after the fabrication of the ink. The methods were compared to each other, and the temperature-dependent and shear-thinning properties of the inks were determined by rheology. A self-thickening method was utilized for 3D printing of nanocomposite constructs, and the printability varied with the CNT content and preparation method. After photopolymerization of the printed constructs, the nanocomposite hydrogel exhibited a slightly higher mechanical strength (storage modulus ~15,500 Pa; E_mod = 0.697 ± 0.222 MPa), great elasticity (elongation ~500%) and an electrical conductivity (5.2·10−4 ± 1.5·10−4 S·m−1) comparable to that of the neat PNAGA hydrogel. Since high-strength constructs can be 3D printed with good resolution and low cytotoxicity, these nanocomposite hydrogel scaffolds could be used in biological and tissue engineering applications. 3D-printable hydrogel inks were prepared by incorporating carbon nanotubes (CNTs) into a low-concentration, thermoreversible poly(N-acryloyl glycinamide) hydrogel. After adding the photoinitiator and N-acryloyl glycinamide monomer, the printed inks were further crosslinked into strong, elastic hydrogel networks. Based on preliminary cell viability experiments, the hydrogels are considered for bioscaffold applications.
Introduction

Hydrogels are three-dimensional crosslinked polymer networks that swell in aqueous media [1,2]. As they absorb and hold large quantities of water, they are interesting materials for various applications. For example, hydrogels are used in combination with hydrochars for water retention and nutrient release in agriculture [3]. In other applications, hydrogels are functionalized and used for wastewater treatment or in the pharmaceutical and food industries [4-7]. In biomedical applications, they mimic the extracellular matrix (ECM) and can be used as scaffolds due to their softness and elasticity [2,8,9]. 3D bioprinting is a layer-by-layer assembly method that prints hierarchical scaffolds with high resolution from tunable hydrogel-based bioinks. Ongoing research aims to satisfy the need for bioinks with good printability, biocompatibility, and adequate mechanical stability [10,11]. Other examples of 3D-printed hydrogel materials include stimuli-responsive actuators and solar evaporator gels used in applications such as regenerative biomedicine or desalination, respectively [12,13]. Stimuli-responsive hydrogels change their volume in response to external stimuli such as pH, ionic strength, or temperature [1,14]. Among the known upper critical solution temperature (UCST)-type thermosensitive hydrogels, poly(N-acryloyl glycinamide) (PNAGA), which forms a physically crosslinked polymer network, has received increasing attention in research on biological scaffolds. In addition to their thermosensitivity, PNAGA hydrogels exhibit high mechanical strength, excellent elasticity, and anti-swelling properties stemming from hydrogen bonding of the dual-amide moieties [15,16]. At low concentrations, PNAGA forms a soft, thermoreversible hydrogel with a sol-gel phase transition, while at high concentrations, high-strength, anti-swelling hydrogels are produced [16]. 3D printing of pure monomer inks is imprecise due to their low viscosities; therefore, filler materials or other viscosity-increasing methods are used with the inks. Xu et al. utilized the concentration-dependent strengthening of hydrogen bonding in PNAGA to prepare a self-thickening PNAGA hydrogel as a meniscus substitute [17]. NAGA monomers and a photoinitiator were loaded into the highly viscous PNAGA hydrogels to create inks, which could be thermally extruded from a nozzle owing to the sol-gel transition. The printed gels were crosslinked under UV light to give high-strength constructs via the formation of additional hydrogen bonds. In addition to the self-thickening strategy, tackifiers such as clay have been used to prepare viscous PNAGA nanocomposite inks for 3D printing in bone regeneration. The clay increased the mechanical strength of the PNAGA hydrogels and at the same time enhanced cell interactions of the otherwise non-bioactive PNAGA [18]. Carbon nanotubes (CNTs) are widely used as nanofillers in nanocomposite hydrogels. They enhance the mechanical properties and add thermal and electrical conductivity to a hydrogel network [19]. Short multiwalled carbon nanotubes (MWCNTs) can be incorporated into a physically crosslinked hydrogel and are therefore suitable for use with PNAGA inks and hydrogels. Nanocomposite hydrogels based on CNTs have shown increased stiffness and electrical conductivity, which is favorable for the adhesion and proliferation of cells [20-22].
In this work, 3D-extrusion-printable PNAGA CNT nanocomposite inks were prepared. Two preparation methods were used, in which the MWCNTs were added before or after the formation of the ink. The self-thickening inks were loaded into a 3D printer, and high-strength hydrogels were obtained after additional photopolymerization. The MWCNTs acted as a nanofiller to mechanically reinforce the physically crosslinked network and to add bioactivity to the PNAGA hydrogel, as cells tend to adhere to rigid surfaces. Owing to the self-thickening preparation method and the bioactivity-inducing CNTs, constructs made of the PNAGA CNT nanocomposite hydrogels could be used to stimulate cell growth in bioapplications or to create scaffolds for tissue engineering via 3D printing.

Analytical measurements

The equilibrium swelling ratios (ESRs) were determined by placing the washed PNAGA CNT nanocomposite hydrogels in polystyrene Petri dishes and swelling them in pure water at room temperature for 24 h. After removal of the swelling medium, the hydrogels were blotted with filter paper and weighed (W_t). The gels were reswelled in pure water and weighed again after five days; the weighing was repeated after another two days and again after three days. The hydrogels were then vacuum-dried (BINDER GmbH) at 40 °C for 24 h and weighed (W_d). The equilibrium swelling ratios of the gels were calculated as W_t/W_d. The mechanical strengths of the hydrogels were studied by viscoelastic rheological measurements on an Anton Paar MCR 203 rheometer using a PP25 measuring system and a constant force of 0.5 N. Round gels were cut out with a 25 mm punching tool. Before the measurement, the linear viscoelastic (LVE) range of the hydrogel was determined. Frequency sweep measurements were performed over the range 0.1-100 rad s−1. The temperature-dependent viscosity changes of the PNAGA CNT inks were determined by rheology in the temperature range 30-90 °C; a cooling/heating cycle was applied with a rate of 5 °C min−1, a frequency of 1 Hz, and a strain of 1%. Shear thinning was studied with a steady-state flow test at 80 °C with shear rates of 1 to 1000 s−1, from which complex viscosity curves were obtained. Uniaxial tensile tests of the PNAGA CNT nanocomposite hydrogels were performed with a BT1-FR 0.5TND14 system (Zwick/Roell) at room temperature. The specimens were prepared by crosslinking a pregelled solution in a mold with the dimensions specified in DIN 53504 S3; the thickness of the hydrogels was 2 mm. The hydrogels were stored at room temperature for 24 h and then measured at a test speed of 50 mm min−1 with a grip-to-grip separation of 20 mm. The elastic modulus was determined from the slope of the linear region of the stress-strain curve, and the tests were repeated at least three times. Thermogravimetric analyses (TGA) of the carbon nanotubes (CNTs) were performed with a TG 209 F1 Libra system (Netzsch). The samples were studied over the range 25-600 °C under nitrogen and synthetic air (O2/N2, 20/80, v/v) with a flow rate of 50 mL min−1; Proteus Analysis version 8.0 was used to analyze the data. The morphologies of the ox-MWCNTs were studied by elastic bright-field transmission electron microscopy (TEM) using a JEOL JEM-2200FS EFTEM (JEOL GmbH, Freising, Germany) electron microscope operated at an acceleration voltage of 200 kV. A drop of an ox-MWCNT dispersion was trickled onto a piece of carbon-coated copper grid, which was air-dried under ambient conditions before being placed into the TEM specimen holder.
Zero-loss filtered images were recorded with a Gatan CMOS (OneView) camera using GMS 3.11. The resistivity was measured with the four-point method. After preparation, the PNAGA CNT nanocomposite hydrogels were swelled in pure water for one week, with the swelling medium changed twice daily to remove ionic impurities. The distance between the electrodes was 1 mm. A Keithley 2401 SourceMeter was used as source and measuring device. A current was applied through the electrodes, and a V-I curve was obtained; the resistance R is the slope of its linear region. The conductivity σ in S·m−1 was calculated as

σ = d / (R · H · W),

where σ is the conductivity in S·m−1, d is the distance between the electrodes, H is the sample thickness, and W is the sample width. The measurements were repeated at least three times, and the average was taken.

Oxidation of the multiwalled carbon nanotubes (ox-MWCNTs)

Oxidized MWCNTs (ox-MWCNTs) were prepared following a previously published procedure [24]. Briefly, 490 mg of multiwalled nanotubes (OD 20-30 nm, length 0.5-2 µm) were dispersed in 40 mL of a 3:1 (v/v) solution of 95% H2SO4 and 65% HNO3 and sonicated for four hours at RT. After stirring for 24 h, the dispersion was purified in a dialysis tube (molecular weight cut-off MWCO = 12-14 k) for seven days. The dispersion was then frozen in liquid nitrogen and freeze-dried to obtain 395 mg of ox-MWCNTs.

Preparation of self-thickening PNAGA CNT inks

Self-thickening PNAGA CNT inks were prepared with a modified literature procedure [17]. Two different formulations were prepared, via Method 1 and Method 2. In Method 1, a thermoreversible hydrogel with 4 wt% PNAGA was obtained by polymerizing a pregelled solution containing 300 mg of NAGA, 1.5 mg of potassium persulfate (0.5 wt% relative to NAGA) and 0.1 wt% CNTs (relative to NAGA) in 7488 µL of H2O for 40 min. The gel was heated to 85 °C to become a sol. NAGA (3173 mg, 30 wt% relative to the gel mass) and IRGACURE-2959 (31.7 mg, 1 wt% relative to the gel mass) were added to the sol, and inks were obtained after cooling to RT. Inks with CNT contents of 0.25 wt% and 0.33 wt% were prepared analogously. As a control, an ink without any added CNTs was prepared likewise. In Method 2, the thermoreversible hydrogel with 4 wt% PNAGA was prepared via photoinitiated polymerization with IRGACURE-2959 as the initiator and without added CNTs. After heating to 85 °C, NAGA (30 wt% relative to the gel mass) and CNTs (0.1 wt% relative to the added NAGA) were dispersed in the sol and stirred for 10 min. After cooling to RT, the inks were obtained. The preparation was repeated analogously with 0.25 wt% and 0.33 wt% CNTs. As a control, an ink without added CNTs was prepared likewise. The inks were labeled Ink_x_y, where x denotes the preparation method and y the CNT content in wt% relative to NAGA.
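The four-point evaluation described under "Analytical measurements" amounts to a linear fit of the V-I data followed by the geometry factor. A sketch follows (Python with NumPy); the readings and dimensions below are invented for illustration.

```python
import numpy as np

def conductivity_s_per_m(I_amps, V_volts, d_m, H_m, W_m):
    """sigma = d / (R * H * W), with R the slope of the linear V-I region."""
    R = np.polyfit(I_amps, V_volts, 1)[0]    # resistance in ohm
    return d_m / (R * H_m * W_m)

I = np.array([1e-6, 2e-6, 3e-6, 4e-6])      # applied currents (A)
V = np.array([0.8, 1.6, 2.4, 3.2])          # hypothetical voltage readings (V)
sigma = conductivity_s_per_m(I, V, d_m=1e-3, H_m=1e-3, W_m=5e-3)
print(f"sigma = {sigma:.1e} S/m")           # -> 2.5e-04 S/m, in the reported range
```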
Preparation of self-thickening PNAGA CNT nanocomposite hydrogels

PNAGA CNT nanocomposite hydrogels were prepared by heating the inks from Method 1 or Method 2 to their sol state at 85 °C; the sols were transferred into round Teflon molds (diameter 2.5 cm, thickness 1 mm) and covered with glass slides. The photocrosslinking reaction was conducted at RT under irradiation by a UVAHAND lamp for 10 min. The gels were stored in pure water at room temperature, and the swelling medium was replaced daily for two days. The hydrogels were labeled Gel_x_y, where x denotes the preparation method and y the CNT content in wt% relative to NAGA (e.g., Gel_2nd_0.33).

3D printing of the self-thickening PNAGA CNT nanocomposite hydrogels

The PNAGA CNT inks were transferred to alumina cartridges (Cellink) and placed in the printhead of a pneumatic bioprinter Inkredible+ (Cellink). The needle gauge was 24 G. The inks were tempered at different printing temperatures for 8 min before printing, and a pressure of 60 kPa was applied. G-code was generated with the software Heartware (Cellink); the feed rate was specific to the printer and included in the generated G-code.

Live/Dead viability assay

The hydrogel discs were preincubated in 24-well plates with complete cell culture medium (HBSS) for 24 h. Then, L929 cells were incubated on the hydrogel discs for 96 h (0.05 × 10^6 cells/well). Afterwards, they were stained with 200 µL of dye/well (Live/Dead dye, Reduced Biohazard Kit, Thermo Fisher Scientific) for one hour. The staining solution was removed, and the discs in the wells were washed four times with 200 µL HBSS/well. The cells were then fixed with 4% glutaraldehyde in HBSS for three hours at RT. After removal of the fixative, the discs were washed once with HBSS and stored at 4 °C in HBSS (250 µL/well). The stained cells were examined with a Leica DMR fluorescence microscope. An excitation filter cube with a wavelength range of 450-490 nm, a dichromatic mirror at 510 nm, and a suppression filter of LP 515 nm distinguished the live cells (green) from the dead cells (red) in the microscope image. The images were visualized with Image Capture and Leica QWin software.

Statistical analyses

The hydrogel equilibrium swelling experiments, tensile tests, and rheological experiments were performed at least three times unless otherwise stated. The means and standard deviations are given.

Preparation of self-thickening PNAGA CNT nanocomposite hydrogels

Before the preparation of the self-thickening poly(N-acryloyl glycinamide) carbon nanotube (PNAGA CNT) nanocomposite hydrogels, the multiwalled carbon nanotubes (MWCNTs) were modified into a more soluble form to ensure homogeneous dispersion of the CNTs in the pregel solution. The solubility of the carbon nanotubes was increased by oxidizing them in a highly acidic environment (ox-MWCNT) (Fig. S1a). The resulting carboxylate functionalities on the CNT surface allowed enhanced dispersion in aqueous solutions. The successful oxidation of the MWCNTs and their characterization data are presented in Fig. S1A-D. Briefly, the CNTs were ultrasonicated in a mixture of H2SO4 and HNO3 for several hours to introduce the hydrophilic carboxyl groups. The modified CNTs showed different thermal degradation and dispersion behaviors in water. In the PNAGA CNT nanocomposite formulations, the CNT concentrations were kept deliberately low to avoid incomplete conversion to the hydrogel, as the CNTs could absorb the UV light needed for the crosslinking reaction.
Self-thickening PNAGA hydrogels were prepared by first photopolymerizing a low-concentration pregel solution with 4 wt% NAGA, forming a soft PNAGA hydrogel with a sol-gel phase transition. The free-radical polymerization of NAGA into the PNAGA hydrogel and the preparation of the self-thickening PNAGA CNT nanocomposite hydrogels are depicted in Scheme 1. As shown, 30 wt% of NAGA monomer was loaded (loading step) into the thermoreversible gel to obtain the printable inks. The concentrations used in preparing the inks were chosen according to a previously established formulation [17]. Depending on whether the CNTs were added to the pregel solution or during the loading step, two different preparation methods, Method 1 and Method 2, respectively, were developed. In Method 1, a low-strength, thermoreversible PNAGA hydrogel was prepared by thermal polymerization of the NAGA monomer using potassium persulfate (KPS) as the initiator in the presence of the CNTs. Prolonged ultrasonication was used to ensure homogeneous dispersion of the CNTs in solution. Since the CNTs absorb light, a photoinitiator was not chosen here, because conversion into the soft hydrogel would be inefficient [25]. Upon heating to 85 °C, the gel became a sol; NAGA and the photoinitiator were added in the next step to obtain an ink. The inks could be reshaped into discs, and since these were thin, sufficient long-wavelength UV crosslinking yielded the PNAGA CNT nanocomposite hydrogels (Scheme 1A-D). With increasing CNT concentration, the discs became darker. In Method 2, the CNTs were added in the loading step after photocrosslinking of the soft PNAGA hydrogels. Since no CNTs were present in the pregel solution, photopolymerization was performed instead of thermal polymerization, which resulted in high conversion rates. While the thermoreversible gel was in the sol state at high temperature, the CNTs were added, and the mixture was stirred and ultrasonicated. However, in contrast to Method 1, the CNTs were not adequately homogenized throughout the sol. After the UV treatment, discs were obtained in which domains with different CNT concentrations were visible (Scheme 1A'-D'). The disc thickness also decreased with increasing CNT concentration, since a higher concentration led to partial absorption of the UV light needed for polymerization; this led to bending or rolling of the hydrogel discs, which were not stable in a planar state. It should be noted that the concentration of CNTs in Method 1 was given relative to the NAGA monomer concentration in the pregel solution (4 wt% NAGA), while in Method 2 it was given relative to the NAGA monomer added during the loading step (30 wt% NAGA). Therefore, the total concentration of CNTs in the Method 1-derived gels was lower than that in the Method 2-derived hydrogels, as the calculation below illustrates. As higher CNT concentrations absorb more of the UV light required for photocrosslinking, the Method 2-type hydrogels exhibited less efficient conversion into fully crosslinked hydrogels than the Method 1-type hydrogels.
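The difference in total CNT content resulting from the two reference bases can be checked in a few lines (Python); the mass-fraction bookkeeping is a simplification that ignores the small initiator masses.

```python
cnt_frac = 0.33 / 100          # 0.33 wt% CNT relative to the respective NAGA basis

naga_pregel = 0.04             # Method 1 basis: 4 wt% NAGA in the pregel solution
naga_loaded = 0.30             # Method 2 basis: 30 wt% NAGA added in the loading step

cnt_m1 = cnt_frac * naga_pregel        # g CNT per g gel, Method 1
cnt_m2 = cnt_frac * naga_loaded        # g CNT per g gel, Method 2
print(f"Method 1: {cnt_m1 * 100:.4f} wt% CNT")          # -> 0.0132 wt%
print(f"Method 2: {cnt_m2 * 100:.3f} wt% CNT")          # -> 0.099 wt%
print(f"Method 2 / Method 1 = {cnt_m2 / cnt_m1:.1f}")   # -> 7.5x more CNT
```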
Characterization of the PNAGA CNT inks

For a proper 3D printing setup with smooth printing, the viscosities of the inks were studied as a function of temperature and shear rate. In this work, CNTs at concentrations from 0 to 0.33 wt% relative to the NAGA monomer were included in the ink mixture. As the inks were intended for extrusion-based 3D printing, the temperature-dependent rheological properties of the unloaded PNAGA hydrogel and of the loaded ink were studied for both preparation methods (Fig. S2). When neither CNTs nor additional NAGA monomer were added, the neat soft 4 wt% PNAGA hydrogel passed into the sol state at approximately 85 °C (Figs. S2A and S3A). When the CNTs were used in the first preparation step of the 4 wt% PNAGA gel, the sol-gel phase transition shifted to higher temperatures for the Method 1-type inks (85-92 °C) (Fig. S2B-D, Step I in Scheme 1). The CNTs provided mechanical reinforcement of the polymer matrix, so a higher temperature was required to break the hydrogen bonds formed in the physically crosslinked hydrogel [26]. On the other hand, the intermolecular interactions were disturbed by adding the NAGA monomer: the ink exhibited a lower phase transition temperature and weaker mechanical strength (Fig. S2E-G). For printability, a lower phase transition temperature is desirable, so the printing parameters were fine-tuned by changing the concentration of the CNTs.

In the Method 2-type inks, the CNTs were added during the NAGA loading step. The viscosity changes of the thermoreversible PNAGA hydrogels at various temperatures were compared before and after loading (Fig. S3B-D). In Method 2, a photoinitiator was used to prepare the gels, while the Method 1-type inks were prepared via redox polymerization with KPS. The sol-gel phase transition temperature of the latter was higher, possibly due to the gentler reaction conditions that preserved the hydrogen bonds formed between the chains. Upon adding the monomer, the phase transition temperature was lowered; as with the Method 1-type inks, the NAGA monomer disturbed the hydrogen bonding of the gel. An effect of the CNTs in the gel was not observed, as a similar drop in phase transition temperature was found regardless of the CNT concentration.

Steady-state flow experiments were performed with the PNAGA CNT inks to study the dynamic viscosities at different shear rates at 80 °C (Fig. S4). PNAGA hydrogels are known to exhibit shear thinning [16,17]. A temperature of 80 °C was chosen because the inks were then near the sol-gel phase transition (Figs. S2 and S3). The viscosities of the studied inks decreased gradually with increasing shear rate. At 1 s−1, the soft 4 wt% PNAGA hydrogel without CNTs or added NAGA had the lowest starting viscosity (Methods 1 and 2; Fig. S4, black), and with increasing shear rate its viscosity dropped to the lowest value among the inks studied here. Adding the NAGA monomer to the 4 wt% PNAGA hydrogel (Methods 1 and 2) increased the viscosity relative to the unloaded PNAGA under continuous shearing. When the CNTs were added together with the NAGA monomer, the PNAGA CNT inks showed slightly weaker shear thinning than the loaded neat PNAGA hydrogel
(Method 2; Fig. S4, red, green, blue, turquoise). Therefore, the CNT content of the ink influenced the shear-thinning properties to some degree. It is assumed that shear thinning would be further reduced if inks with higher CNT contents were subjected to shearing forces. Overall, the CNTs had a positive influence on the temperature-dependent viscosity changes by reinforcing the polymer network structure, while in the steady-state flow experiments, the CNTs only slightly lowered the shear-thinning capacities of the inks.

Mechanical characterization of the PNAGA CNT hydrogels

The mechanical toughness of the PNAGA CNT hydrogels was studied with frequency sweep experiments (Figs. S5 and S6). The storage moduli of the Method 1-type hydrogels differed slightly from that of the non-CNT PNAGA. The hydrogels made with low CNT concentrations exhibited storage moduli of ~7000 Pa, as shown in Fig. S5B, C, which was lower than that of the neat PNAGA hydrogel (G' ~12,500 Pa). However, the highly concentrated CNT hydrogel exhibited a storage modulus of approximately 15,500 Pa (Fig. S5D). As discussed, the CNTs increased the gel stiffness when incorporated into the network [20,27]. A higher concentration of CNTs would strengthen the hydrogel network even more, but would lower the conversion of the UV crosslinking reactions, so a CNT concentration of 0.33 wt% was chosen as the threshold.

In the Method 2-type hydrogels, the CNTs were added together with the monomers to the 4 wt% PNAGA hydrogels during the loading step. Because of the high viscosity, it was challenging to integrate the CNTs into the gel matrix by mixing; as a result, the CNTs were heterogeneously dispersed in the ink [28]. After thickening of the ink, the mechanical stiffness was studied with rheological experiments (Fig. S6). The added CNTs barely changed the mechanical characteristics of the Method 2-type hydrogels: a storage modulus of approximately 4000 Pa at a frequency of 1 s−1 was found. As mentioned before, the difficulty of dispersing the CNTs in the viscous matrix was attributed to the formation of CNT aggregates; in this case, the network did not fully benefit from the mechanical strengthening capability of the CNTs. However, Gel_2nd_0.33, which had the highest CNT concentration in the studied series, showed a marginally higher storage modulus of 4500 Pa. As explained for Method 1, the CNTs reinforced the mechanical stiffness, and in principle a higher CNT concentration would increase the hydrogel rigidity even more.

Physically crosslinked PNAGA has excellent elasticity due to its flexible hydrogen-bonding interactions; consequently, the PNAGA CNT hydrogels should perform similarly when stretched. Figure 1 shows the stress-strain curves for the various PNAGA CNT hydrogels prepared with Methods 1 and 2. Neat PNAGA prepared without any CNT content via Method 1 had an elongation at break and elastic modulus E_mod (442.0 ± 96.7% and 0.277 ± 0.084 MPa) similar to those of the Method 2-type samples (533.8 ± 125.2% and 0.327 ± 0.068 MPa).

Fig. 1: Stress-strain curves of the PNAGA CNT nanocomposite hydrogels.
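Extraction of E_mod as described in the methods, i.e., the slope of the linear region of the stress-strain curve, can be sketched as follows (Python with NumPy); the toy curve and the linear-region cutoff are assumptions.

```python
import numpy as np

def elastic_modulus(strain, stress_mpa, linear_max=0.10):
    """Slope of the stress-strain curve fitted over strain <= linear_max (MPa)."""
    mask = strain <= linear_max
    return np.polyfit(strain[mask], stress_mpa[mask], 1)[0]

strain = np.linspace(0.0, 0.5, 51)
stress = 0.7 * strain - 0.3 * strain**2     # toy curve that softens at large strain
print(f"E_mod ~ {elastic_modulus(strain, stress):.2f} MPa")
```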
In Method 1, the ink was prepared by thermal polymerization initiated with KPS, while in Method 2 the inks were prepared by photoinitiation. A further difference between the two methods was the polymerization time: as the polymerization in Method 1 was catalyzed with the accelerator TEMED, the gel formed after a short polymerization time, whereas the photopolymerization of Method 2 required a much longer exposure to UV light for conversion into the hydrogel, which might have resulted in slightly different mechanical strengths and elasticities. The elongation at break of the Method 1-type hydrogels was approximately 500%, similar to that of the neat PNAGA hydrogel. The CNT concentration affected the elastic modulus of Gel_1st_0.33 by reinforcing the hydrogel network; here, a considerably high E_mod of 0.697 ± 0.222 MPa was found. The change in mechanical strength resulting from reinforcement by the CNTs was evident for the Method 1-type hydrogels. Among the Method 2-type hydrogels, Gel_2nd_0.1 had the shortest elongation at break, 409.6 ± 2.1%. However, the elongations at break of Gel_2nd_0.25 and Gel_2nd_0.33 were noticeably high at 1428.1 ± 453.8% and 1532.8 ± 338.4%, respectively. Since these contained the highest amounts of CNTs added to the ink, the CNTs absorbed a significant amount of the UV light required for complete conversion of the ink into the hydrogel. The hydrogels were thin, and the low crosslinking conversion led to highly elastic hydrogels with low E_mod values (0.031 ± 0.002 MPa and 0.021 ± 0.007 MPa, respectively).

Equilibrium swelling studies

Swelling of the different PNAGA CNT hydrogels was studied gravimetrically over seven days (Fig. 2). The equilibrium swelling ratios (ESRs) of the gels remained similar, at approximately 3.4 to 4, even after the addition of the CNTs. The ESRs of the Method 1-type hydrogels barely differed from each other; the effect of the CNTs on the swelling of the polymer network was negligible due to the low concentration of CNTs dispersed in the hydrogel. In the Method 2-type hydrogels, the equilibrium swelling ratios were slightly higher for the hydrogels with higher CNT contents, such as Gel_2nd_0.25 and Gel_2nd_0.33. As discussed earlier, the high CNT content absorbed part of the UV light needed for complete conversion into the hydrogel; a lower conversion is accompanied by a lower crosslinking degree, which results in greater swelling [29]. The ESRs of the hydrogels remained constant over seven days, demonstrating the anti-swelling properties already reported for self-thickening, non-CNT PNAGA [17]. For 3D printing and tissue engineering purposes, the limited swelling of the gels is particularly important for preparing constructs that remain stable in shape and size.

Conductivity experiments

CNT-enhanced hydrogels have been shown to possess conductivities beneficial for cell compatibility [28,30]. In this work, the conductivities of the PNAGA CNT hydrogels were calculated from resistivity measurements. The conductivities of the Method 1-type hydrogels increased with increasing CNT concentration, as shown in Fig. 3A. The non-CNT PNAGA hydrogel had a conductivity of 1.5·10−4 ± 2.9·10−5 S·m−1, which was attributed to residual acrylate groups or dissolved CO2. Gel_1st_0.33 had the highest CNT concentration and, therefore, the highest conductivity of 4.1·10−4 ± 9.8·10−5 S·m−1.
In the Method 2-type hydrogels, the conductivities increased gradually from the lowest value of 2.2·10−4 ± 1.9·10−5 S·m−1 for neat PNAGA to the highest value of 5.2·10−4 ± 1.5·10−4 S·m−1 for Gel_2nd_0.25 (Fig. 3B). Curiously, the Gel_2nd_0.33 hydrogel, with the higher CNT concentration, had a comparatively low conductivity of 2.3·10−4 ± 5.7·10−5 S·m−1. Since carbon nanotubes form aggregates when dispersed in highly viscous polymer mixtures, as was the case for the Method 2-type inks, the heterogeneously dispersed CNTs generated anisotropic conductivities [28]. Conductivity was thus not expected to increase linearly with the CNT content; instead, the conductivity may drop when the CNTs aggregate in the ink mixture, which is likely at sufficiently high CNT concentrations.

Extrusion-based 3D printing of the PNAGA CNT nanocomposite hydrogels

Due to their thermoreversibility and shear-thinning properties, the PNAGA CNT inks should be viable for 3D printing. The shape fidelity, homogeneity, and strength of a printed construct can be fine-tuned with parameters such as the printing temperature, pressure, nozzle size, and feed rate. In preliminary experiments, 4 wt% PNAGA hydrogels without monomer, CNTs, or initiator were printed with different parameters (Fig. S7). As indicated by the rheological studies, the unloaded hydrogel had a high sol-gel phase transition temperature, because loading with monomeric NAGA weakens the intermolecular hydrogen bonds supporting the gel network; different printing parameters were therefore expected for successful printing with the unloaded and loaded hydrogels. Due to their soft nature and moderate shear-thinning properties, printing was difficult with the unloaded PNAGA gels. From A to D, the printing pressure was gradually reduced from 300 to 150 kPa. At higher pressures, the shape fidelity and homogeneity of the construct were insufficient, and the lines were very thick (Fig. S7A). Upon decreasing the pressure, finer lines were obtained. At these low pressures, higher temperatures of 63 to 65 °C were necessary; otherwise, the sol-gel transition temperature of the gel was not maintained during continuous printing. Overall, with a pressure of 150 kPa and a printhead temperature of 65 °C, a thin construct with moderately good shape fidelity and structural integrity was obtained (Fig. S7D).

After loading the soft 4 wt% PNAGA with NAGA monomer to form the inks, a different printing behavior was observed (Fig. S8): a drastic improvement in the shape fidelity and homogeneity of the printed gel. The added NAGA monomer influences the hydrogen bonding in the gel network, as shown in the rheological experiments: NAGA interferes with the hydrogen bonds formed in the soft hydrogel, weakens them, and lowers the sol-gel transition temperature. The loaded inks were tempered at higher temperatures than the non-loaded inks. At a temperature of 75 °C, a construct was printed with good shape fidelity, thinner lines, and continuous printing without interruption. With a moderate pressure of 200 kPa, favorable conditions for printing the loaded inks were realized (Table S2).

In the final 3D printing experiment, the printing inks made with Methods 1 and 2 were studied (Fig. 4, Table 1).
4, Table 1).The addition of CNTs during or after preparation of the thermoreversible hydrogels (Methods 1 and 2, respectively) altered the printabilities and appearances of the photocrosslinked structures.As discussed above, the CNTs were not homogenously distributed in the Method 2-type hydrogels (Fig. 5C, D).While the gel became a fluid at high temperatures, the CNTs were barely dispersed in the gel matrix, which resulted in an inhomogeneous distribution of the CNTs even after crosslinking.In the Method 1-type inks, the CNTs were more dispersed because they were added prior to the crosslinking reaction of the thermoreversible hydrogel ink.More homogenous printed structures were obtained (Fig. 4A, C). Increased shape fidelity and structural integrity were observed in printing with the CNT-containing inks compared to previous printing attempts without the CNTs.The CNTs reinforced the hydrogen bonds, increased the mechanical strength of the PNAGA hydrogel network and A A) enabled smoother printing.A printing pressure of 150 kPa, a temperature of 70 °C, and a feed rate of 200 enabled the printing of fine lines.However, the printing parameters could be further optimized to increase the homogeneity and enable printing of more complex 3D structures. Live/dead cell viability assay After incorporating CNTs into the hydrogel network, the cytocompatibilities of the PNAGA CNT hydrogels was studied with live/dead cell viability assays.L929 cells were seeded onto the hydrogels for 48 h.The live cells were stained with SYTO 10, a green fluorescent nucleic acid dye, and the dead cells were stained with DEAD Red, a red fluorescent nucleic acid dye.The cells on the Method 1-type hydrogels were viewed under a fluorescence microscope, and the resulting images are shown in Fig. 5. Live or dead cells were barely observed as clusters on the neat PNAGA hydrogel without CNT incorporation, as only single live cells were found on the hydrogel.With increasing CNT concentration, larger clusters of the live cells were observed.In particular, Gel_1st_0.33had several clusters of live cells (Fig. 5D).Since the CNTs increase biocompatibility by reinforcing the mechanical strength and electrical conductivity of a gel, more cells should have grown on the hydrogel.A similar observation was made for the Method 2-type hydrogels (Fig. S9).The non-CNT PNAGA hydrogel showed singular cells, but large clusters were found only in the PNAGA CNT hydrogels.In the Gel_2nd_0.33hydrogels, single dead cells (red) were seen, but green live cell clusters were predominant.Overall, the PNAGA CNT hydrogels showed good cytocompatibilities with different CNT concentrations.Cytometric methods should be deployed to understand the cell behavior in the hydrogels. 
Conclusion

Physically crosslinked nanocomposite hydrogels based on PNAGA and CNTs were successfully prepared via two distinct methods. The self-thickening property of PNAGA was utilized by combining the thermoreversible low-concentration PNAGA with monomeric NAGA and CNTs to form printable inks. Depending on when the CNTs were added during the preparation procedure, the inks showed varying homogeneities of the CNTs inside the gel matrices. The sol-gel phase transition temperature could be fine-tuned with the addition of CNTs, as shown for the inks with the more homogeneous CNT dispersion. The shear-thinning properties changed with the CNT concentration, which allowed the printability of the PNAGA CNT inks to be adjusted. The mechanical properties of the PNAGA CNT hydrogels were studied with tensile tests and rheological methods; the CNTs reinforced the stiffness of the hydrogel network, and the elongation depended on the conversion of the ink into the hydrogel. The 3D printing resolution depended on the homogeneity of the CNT dispersion in the inks, and suitable printability was achieved with moderate pressures and temperatures. The CNT-containing printed hydrogels showed greater shape fidelity and structural integrity than the non-CNT PNAGA hydrogels. Cell viability studies showed that the PNAGA CNT nanocomposite hydrogels supported the growth of larger cell clusters. The cytotoxicity was low, as only single dead cells were found at the higher CNT concentrations. The cell viability of these PNAGA CNT nanocomposite hydrogels should be studied further to determine their suitability for cell growth, and the printing parameters could be fine-tuned to print complex structures for tissue engineering or other bioapplications.

Scheme 1: (A) Free-radical polymerization of NAGA into PNAGA with initiators such as potassium persulfate (KPS) or the photoinitiator IRGACURE-2959; the polymer PNAGA forms hydrogen bonds in aqueous solution. (B) Preparation of PNAGA CNT inks and gels: in Method 1, appropriate amounts of monomeric NAGA, KPS, and CNTs were polymerized into a soft, thermoreversible hydrogel by heating with the accelerator TEMED (4 wt%, Step I); the gel was then heated to 85 °C and loaded with 30 wt% additional NAGA …
Fig. 2: Equilibrium swelling ratios of (A) the Method 1-type hydrogels and (B) the Method 2-type hydrogels.
Table 1: Parameters used for 3D printing the constructs A-D in Fig. 4; the bold parameters were found to be the most suitable for printing.
Chemical strategies to modify amyloidogenic peptides using iridium(III) complexes: coordination and photo-induced oxidation

†Electronic supplementary information (ESI) available: Experimental section and Fig. S1-S19. See DOI: 10.1039/c9sc00931k

Effective chemical strategies, i.e., coordination and coordination-/photo-mediated oxidation, are rationally developed toward the modification of amyloidogenic peptides and the subsequent control of their aggregation and toxicity.

Introduction

A substantial amount of research effort has been dedicated to identifying the association of amyloidogenic peptides with the pathologies of neurodegenerative diseases. Among these amyloidogenic peptides, amyloid-β (Aβ), a proteolytic product of the amyloid precursor protein found in the AD-affected brain with a self-aggregation propensity, has been implicated as a pathological factor in Alzheimer's disease (AD) [1-4]. As the main component of senile plaques, Aβ accumulation is a major pathological feature of AD [1-3,5]. Recent developments in Aβ research (e.g., clinical failures of Aβ-directed therapeutics) have led to a re-evaluation of the amyloid cascade hypothesis [6]. Aβ pathology, however, remains a pertinent facet of the disease, with indications that Aβ oligomers are the toxic species responsible for disrupting neuronal homeostasis [1-3,7]. Furthering our elucidation of Aβ pathology presents an investigative challenge arising from its heterogeneous nature and intrinsically disordered structure [1,2]. To overcome this obstacle and advance our understanding of the Aβ-related contribution to AD, in this study we illustrate chemical approaches to modify Aβ peptides at the molecular level using transition metal complexes. Transition metal complexes have been reported to harness their ability to induce peptide modifications (e.g., hydrolytic cleavage and oxidation), inhibit the activities of enzymes, and image cellular components. In particular, the ability of transition metal complexes to alter peptides stems from properties such as their capacity for peptide coordination [17-29,36,37]. Herein, we report effective chemical strategies for the modification of Aβ peptides using a single Ir(III) complex in a photo-dependent manner (Fig. 1). The Aβ modifications achieved by our rationally engineered Ir(III) complexes comprise two events: (i) complexation with Aβ in the absence of light; (ii) Aβ oxidation upon coordination and photoactivation, which can significantly regulate Aβ aggregation and toxicity. Through the multidisciplinary studies presented in this work, we demonstrate the development of new chemical tactics for the modification of amyloidogenic peptides using transition metal complexes, useful for identifying their properties, such as aggregation, at the molecular level.

Results and discussion

Rational strategies for peptide modification using Ir(III) complexes

To chemically modify Aβ peptides in a photoirradiation-dependent manner (Fig. 1a), four Ir(III) complexes (Ir-Me, Ir-H, Ir-F, and Ir-F2; Fig. 1b) were rationally designed and prepared. Iridium is a third-row transition metal exhibiting strong spin-orbit coupling at the centre of Ir(III) complexes with facile electronic transitions [44,45]. This spin-orbit coupling can be further strengthened by fine-tuning the ancillary ligands of the Ir(III) complexes.
As a result, Ir(III) complexes confer notable photophysical properties upon excitation by relatively low-energy irradiation in the visible range, including their ability to generate reactive oxygen species [ROS; e.g., singlet oxygen (¹O₂) and the superoxide anion radical (O₂•⁻)] via electron or energy transfer. [46][47][48] In addition, Ir(III) complexes with octahedral geometry are relatively stable upon light activation. 48 Incorporation of 2-phenylquinoline derivatives as ligands yielded a high emission quantum yield (Φ) and robust ROS generation. 46 Therefore, the ancillary ligands of the four complexes were constructed based on the 2-phenylquinoline backbone by applying simple structural variations to provide appropriate structural and electronic environments to promote the photochemical activity of the corresponding Ir(III) complexes. 46 Moreover, fluorine atoms were introduced into the ancillary ligand framework, affording Ir-F and Ir-F2, to chemically impart the ability to interact with Aβ through hydrogen bonding, alter the photophysical properties of the complexes, and enhance the molecules' biocompatibility. [49][50][51] Two water (H₂O) molecules were incorporated as ligands to enable covalent coordination to Aβ via replacement with amino acid residues of the peptide, e.g., histidine (His). 20,52,53 The four Ir(III) complexes were synthesized following previously reported procedures with modifications (Scheme 1 and Fig. S1–S3†). 20,54–56 As depicted in Fig. S4 and S5,† these Ir(III) complexes were confirmed to coordinate to His or Aβ in both H₂O and an organic solvent [i.e., dimethyl sulfoxide (DMSO)] under our experimental conditions.

Coordination-dependent photophysical properties and ROS production of Ir(III) complexes

Photophysical properties of the prepared Ir(III) complexes were investigated by UV-vis and fluorescence spectroscopy. As shown in Table 1 and Fig. S6,† in the absence of His or Aβ, low Φ values of the four Ir(III) complexes were observed, along with relatively poor ¹O₂ generation upon photoactivation. Note that a solar simulator (Newport IQE-200) was used to irradiate the samples at a constant intensity (1 sun; 100 mW cm⁻²). Upon addition of His, the Φ values of the four Ir(III) complexes drastically increased (e.g., Φ(Ir-F) = 0.0071 versus Φ(Ir-F+His) = 0.26; Table 1), indicating His coordination of the complexes, which was further confirmed by electrospray ionization mass spectrometry (ESI-MS) (Fig. S5a†). The Φ values and ¹O₂ formation of the four Ir(III) complexes with His binding exhibited trends similar to their binding affinity with His (Ir-F > Ir-H > Ir-Me > Ir-F2). Ir-F, indicating the strongest binding affinity with His (Fig. S5b†) among the four Ir(III) complexes, showed notable binding affinities towards different Aβ species (for monomers, Kd = 1.6 × 10⁻⁴ M; for oligomers, Kd = 2. … photoactivation (Fig. S6 and S8†). Based on these properties, we selected Ir-F as a representative candidate of our Ir(III) complexes and illustrated its ability to modify Aβ peptides in detail (vide infra). (Fig. 2a, bottom). Aβ40 oxidation manifested a conformational change as probed by IM-MS (Fig. 2d). The most dominant arrival time indicated a peak at 9.92 ms. These results suggest that Aβ40 oxidation induced by Ir-F can alter the structural distribution of Aβ40. Similar observations were made with Ir-Me, Ir-H, and Ir-F2, where the complexes were able to oxidize Aβ40 and consequently vary its structural distribution (Fig. S9 and S10†).
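Dissociation constants like the reported Kd of 1.6 × 10⁻⁴ M for Aβ monomers are conventionally extracted by fitting a 1:1 binding isotherm to a titration series. The sketch below only illustrates that fitting step; the concentration grid, signal model, and noise level are invented for the illustration and are not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# 1:1 binding isotherm: signal ~ f_max * [L] / (Kd + [L]).
# All numbers below are synthetic, for illustration only.
def isotherm(conc, kd, f_max):
    return f_max * conc / (kd + conc)

conc = np.array([1e-6, 5e-6, 1e-5, 5e-5, 1e-4, 5e-4, 1e-3])  # M (assumed grid)
rng = np.random.default_rng(0)
signal = isotherm(conc, kd=1.6e-4, f_max=1.0) + rng.normal(0, 0.01, conc.size)

(kd_fit, fmax_fit), _ = curve_fit(isotherm, conc, signal, p0=(1e-4, 1.0))
print(f"fitted Kd = {kd_fit:.2e} M")  # should recover roughly 1.6e-4 M
```

Fits of this kind are only as good as the saturation of the titration; sampling concentrations well above and below Kd, as in the synthetic grid here, keeps both parameters identifiable.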
In order to determine the location of peptide oxidation, the Aβ fragment ions, generated by selectively applying collisional energy to singly oxidized Aβ, were analyzed by ESI-MS² (Fig. 2e). All b fragments smaller than b13 were detected in their non-oxidized forms, while those larger than b34 were only monitored in their oxidized forms. The b fragments between b13 and b34 were indicated in both their oxidized and non-oxidized forms. Such observations, along with previous reports regarding Aβ oxidation, 19,57 suggest His13, His14, and Met35 of Aβ as plausible oxidation sites. Collectively, our studies demonstrate that Aβ peptides can be modified upon treatment with Ir-F [(i) coordination to Aβ by replacing two H₂O molecules with the peptide in the absence of light; (ii) coordination-mediated oxidation of Aβ at three possible amino acid residues (e.g., His13, His14, and Met35) upon photoactivation (Fig. 1a)]. Note that the Aβ samples produced by treatment with photoactivated Ir-F showed high fluorescence intensity and were relatively stable in both H₂O and cell growth media (Fig. S11†).

Effects of peptide modifications triggered by Ir(III) complexes on Aβ aggregation

Based on the photoirradiation-dependent Aβ modifications by Ir(III) complexes, the impact of such variations on the aggregation of Aβ was determined employing Aβ40 and Aβ42, the two main Aβ isoforms found in the AD-affected brain. [2][3][4][58][59][60][61][62] For these experiments, freshly prepared Aβ solutions were treated with Ir(III) complexes with and without light under both aerobic and anaerobic conditions. The molecular weight (MW) distribution and the morphology of the resultant Aβ species were analyzed by gel electrophoresis with Western blotting (gel/Western blot) using an anti-Aβ antibody (6E10) and transmission electron microscopy (TEM), respectively (Fig. 3a). Under aerobic conditions (Fig. 3b, left), the aggregation of Aβ40 was affected by treatment with Ir-F, prompting a shift in the MW distributions in the absence of light. Photoactivation of the Ir-F-treated Aβ40 sample resulted in a more diverse MW distribution compared to that of the corresponding sample without light (light, MW ≤ 100 kDa; no light, MW < 15 kDa). The distinct modulation of Aβ40 aggregation upon addition of Ir-F with photoirradiation is likely a consequence of the complex's ability to generate ¹O₂ and oxidize Aβ through photoactivation, as observed in our spectrometric studies (vide supra; Fig. 2). Therefore, the same experiments were performed under anaerobic conditions to directly monitor the role of O₂ in Ir-F's modulative reactivity against Aβ40 aggregation. In the absence of O₂ (Fig. 3b, right), Aβ40 aggregation was also altered by Ir-F regardless of light treatment. Our results suggest that both light and O₂ are important in the regulation of Aβ40 aggregation through coordination-/photo-mediated peptide oxidation triggered by Ir-F. In addition, in the absence of light and O₂, Aβ40 aggregation is directed by the covalent interactions between Ir-F and the peptide. Similar modulation of Aβ42 aggregation was observed upon incubation with Ir-F, exhibiting different MW distributions compared to the Aβ42 samples without Ir-F in the absence and presence of light and O₂ (Fig. 3c). Moreover, smaller amorphous aggregates of both Aβ40 and Aβ42, reported to be less toxic, 63,64 were visualized by TEM from the samples containing Ir-F regardless of irradiation (Fig. 3d and S12c†).
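As an aside, the b-fragment reasoning used above to localize the oxidation sites is simple interval logic: for a singly oxidized peptide, a b-ion observed only in its non-oxidized form pushes every oxidation site past its index, while a b-ion observed only oxidized caps the sites at its index. A minimal sketch of that deduction (the observation table paraphrases the text; the helper function is my own illustration, not software used in the study):

```python
# Deduce the residue window containing the oxidation site(s) from the
# oxidation states in which each b-fragment ion was observed.
def oxidation_window(observations):
    """observations: {b_index: set of observed forms, 'ox' / 'nonox'}."""
    only_nonox = [n for n, forms in observations.items() if forms == {"nonox"}]
    only_ox = [n for n, forms in observations.items() if forms == {"ox"}]
    lower = max(only_nonox) + 1 if only_nonox else 1        # sites lie above all clean b-ions
    upper = min(only_ox) if only_ox else max(observations)  # sites lie at/below oxidized-only b-ions
    return lower, upper

obs = {12: {"nonox"}, 20: {"nonox", "ox"}, 30: {"nonox", "ox"}, 35: {"ox"}}
print(oxidation_window(obs))  # (13, 35) - consistent with His13, His14, Met35
```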
Furthermore, preformed Aβ aggregates, generated at various preincubation time points (i.e., 2, 4, and 24 h), were disassembled and their aggregation pathways were altered when Ir-F was introduced (Fig. S13†). Such Ir-F-induced effects on preformed Aβ aggregates were observed to be dependent on photoirradiation. Moreover, the aggregation of both Aβ40 and Aβ42 was also changed with addition of the other Ir(III) complexes (i.e., Ir-Me, Ir-H, and Ir-F2) with and without light (Fig. 3e, S12 and S13†). In addition to Aβ, Ir-F was able to interact with and modify other amyloidogenic peptides [i.e., α-synuclein (α-Syn) and human islet amyloid polypeptide (hIAPP)], affecting their aggregation pathways (Fig. S14†).

Cytotoxicity of Aβ species generated upon incubation with Ir(III) complexes

Prior to cytotoxicity measurements, the resultant species upon 24 h treatment of Aβ40 with Ir-F with light exposure were incubated with murine Neuro-2a (N2a) neuroblastoma cells in order to determine their cellular uptake. As depicted in Fig. S15,† the lysates of the cells treated with the resultant species for 24 h, analyzed by inductively coupled plasma mass spectrometry (ICP-MS), indicated an Ir concentration of 39 μg L⁻¹, demonstrating the cellular uptake of the species containing Ir(III). Note that an Ir concentration of 0.17 or 34 μg L⁻¹ was measured from the lysates of the cells treated only with either Aβ40 or Ir-F, respectively. Moving forward, the toxicity of Aβ species produced by treatment with our Ir(III) complexes was monitored by the MTT assay [MTT = 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide] (Fig. 4). The cytotoxicity of Aβ40 species incubated with our Ir(III) complexes was noticeably reduced in a photoirradiation-dependent manner. In the absence of light, the Aβ40 samples incubated with our Ir(III) complexes exhibited a decrease in cytotoxicity (ca. 20%) compared to the sample of the complex-free Aβ40. As for the photoirradiated samples, Aβ40-induced toxicity was lowered by ca. 35% by treatment with our Ir(III) complexes. This result suggests that modification of Aβ, such as oxidation, could attenuate Aβ-triggered toxicity in living cells. 65 Furthermore, the cytotoxicity of Aβ42 species formed with Ir(III) complexes was also diminished by ca. 20% regardless of photoactivation. Note that the survival (≥80%) of cells treated with our Ir(III) complexes at the concentration used for cell studies with Aβ peptides was observed with and without light exposure (Fig. S16†).

Ternary complexation with Aβ and intramolecular and intermolecular Aβ oxidation

Premised on Ir-F's covalent bond formation with Aβ and oxidation of Aβ (vide supra), additional studies regarding ternary complexation and promotion of intermolecular oxidation of Aβ were carried out employing Ir-F (Fig. 5). Aβ28, a fragment of Aβ equipped with the metal-binding and self-recognition sites of the peptide and with a lower propensity to aggregate than the full-length peptides Aβ40 and Aβ42, 1,66–68 was used to form a complex with Ir-F′ (Fig. 2b), as evidenced by ESI-MS (1301 m/z; Fig. 5b) and increased fluorescence (Fig. 5b, inset). As shown in Fig. 5a, following incubation, the sample of the Aβ28-Ir-F′ complex was treated with freshly prepared Aβ42 to monitor its effect on Aβ42 aggregation. Based on the gel/Western blot and TEM analyses, the aggregation of Aβ42 was modulated by the Aβ28-Ir-F′ complex (Fig. 5c and d).
Such modulative reactivity of the Aβ28-Ir-F′ complex was also observed against Aβ40 aggregation (Fig. S17†). Our mass spectrometric studies confirmed that such control of Aβ42 aggregation by the Aβ28-Ir-F′ complex was a result of (i) ternary complex formation with Aβ42, i.e., (Aβ28-Ir-F′)-Aβ42, and (ii) oxidation of Aβ, both intramolecular and intermolecular, upon photoactivation (Fig. 6). Based on previous reports detailing intermolecular interactions between Aβ peptides, hydrophobic interactions between the self-recognition sites (LVFFA; Fig. 3a and 5a) of Aβ are likely responsible for ternary complexation, 1,2,69 consequentially altering the aggregation pathways of Aβ in the absence of photoirradiation. Furthermore, these studies indicate that intermolecular oxidation of Aβ can be promoted by Ir-F upon photoactivation (Fig. 6, S18, and S19†). This observation may explain the distinct difference between the modulation of Aβ aggregation with and without light, as the intermolecular oxidation of Aβ by Ir(III) complexes could modify Aβ at sub-stoichiometric levels.

Conclusions

Effective chemical strategies (i.e., coordination to Aβ and coordination-/photo-mediated oxidation of Aβ) for modification of Aβ peptides using a single Ir(III) complex were rationally developed. Such dual mechanisms (i.e., coordination and oxidation) exhibiting photo-dependency for altering Aβ peptides are novel and effective in controlling peptide aggregation and cytotoxicity. Our Ir(III) complexes can covalently bind to Aβ by replacing the two H₂O molecules bound to the Ir(III) center with the peptide, regardless of light and O₂ [coordination to Aβ; Fig. 1a(i)]. In the presence of light and O₂, Ir(III) complexes bound to Aβ are capable of inducing the intramolecular and intermolecular oxidation of Aβ at His13, His14, and/or Met35 [oxidation of Aβ; Fig. 1a(ii)]. Taken together, our multidisciplinary studies demonstrate the feasibility of establishing new chemical approaches towards modification of amyloidogenic peptides (e.g., Aβ) using transition metal complexes designed based on their coordination and photophysical properties. In general, chemical modifications in peptides of interest can assist in furthering our understanding of the principles of their properties, such as peptide assembly. Furthermore, peptide aggregation and cytotoxicity can be affected by biomolecules, including lipid membranes; 70–73 thus, the regulatory reactivity of Ir(III) complexes towards amyloidogenic peptides in the presence of lipid membranes will be investigated in the future.

Conflicts of interest

There are no conflicts to declare.
3,494.4
2019-06-05T00:00:00.000
[ "Chemistry", "Biology" ]
Simion and Kelp on trustworthy AI

Simion and Kelp offer a prima facie very promising account of trustworthy AI. One benefit of the account is that it elegantly explains trustworthiness in the case of cancer diagnostic AIs, which involve the acquisition by the AI of a representational etiological function. In this brief note, I offer some reasons to think that their account cannot be extended - at least not straightforwardly - beyond such cases (i.e., to cases of AIs with non-representational etiological functions) without incurring the unwanted cost of overpredicting untrustworthiness.

Introduction

Increasingly, the question of whether - and if so under what conditions - artificial intelligence (AI) can be 'trustworthy' (as opposed to merely reliable or unreliable) is being debated by researchers across various disciplines with a stake in the matter, from computer science and medicine to psychology and politics. 1 Given that the nature and norms of trustworthiness itself have been of longstanding interest in philosophy, 2 philosophers of trust are well situated to help make progress on this question. In their paper "Trustworthy Artificial Intelligence" (2023), Simion and Kelp (hereafter, S&K) aim to do just this. I think they largely succeed. That said, in this short note I am going to quibble with a few details. In short, I worry that their reliance on function-generated obligations in their account of trustworthy AI helps their proposal get exactly the right result in certain central AI cases, such as cancer diagnostic AIs, but at the potential cost of overpredicting untrustworthiness across a range of other AIs. Here's the plan for the paper. In Sect. 2 I'll provide a brief overview of S&K's account of trustworthy AI, emphasising the core desiderata they take themselves to have met. In Sect. 3, I'll then raise some potential worries, and discuss and critique some lines of reply.

S&K's line of argument

A natural strategy for giving an account of trustworthy AI will be a kind of 'application' strategy: (i) give a compelling account of trustworthiness simpliciter and then (ii) apply it to AI, and make explicit what follows, illuminating trustworthy AI in the process. But, as S&K note, there is a problem that faces many extant accounts of trustworthiness that might try to opt for that strategy. The problem is this: many accounts of trustworthiness are such that the psychological assumptions underlying them (e.g., that being trustworthy involves something like a good will or virtue) are simply too anthropocentric. As S&K ask:

Do AIs have something that is recognizable as goodwill? Can AIs host character virtues? Or, to put it more precisely, is it correct to think that AI capacity for trustworthiness co-varies with their capacity for hosting a will or character virtues? (p. 4).

The situation seems to be this: an account of trustworthiness with strongly anthropocentric psychological features 'baked in' will either not be generalisable to AI (if AI lacks good will, virtue, etc.), or it will be generalisable only by those willing to embrace further strong positions about AI. Ceteris paribus, a more 'generalisable' account of trustworthiness, when it comes to an application to AI specifically, will be a less anthropocentric one that could sidestep the above problem. 3 One candidate such account they identify is Hawley's (2019) negative account of trustworthiness, on which trustworthiness is a matter of avoiding unfulfilled commitments.
4 S&K have argued elsewhere 5 at length for a different - and similarly not overtly anthropocentric - account of trustworthiness, which they take to have advantages (I won't summarise these here) over Hawley's: on S&K's preferred account, trustworthiness is understood as a disposition to fulfil one's obligations. What is prima facie attractive about an obligation-centric account of trustworthiness, for the purpose of generalising that account to trustworthy AI, is that (i) artifacts can have functions; and (ii) functions can generate obligations.

Let's look at the first point first. S&K distinguish between design functions (d-functions), sourced in the designer's intentions, and etiological functions (e-functions), sourced in a history of success, noting that artefacts can acquire both kinds of functions. S&K use the example of a knife to capture this point:

My knife, for instance, has the design function to cut because that was, plausibly, the intention of its designer. At the same time, my knife also has an etiological function to cut: that is because tokens of its type have cut in the past, which was beneficial to my ancestors, and which contributes to the explanation of the continuous existence of knives. When artefacts acquire etiological functions on top of their design functions, they thereby acquire a new set of norms governing their functioning, sourced in their etiological functions. Design-wise, my knife is properly functioning (henceforth properly d-functioning) insofar as it's working in the way in which its designer intended it to work. Etiologically, my knife is properly functioning (henceforth properly e-functioning) insofar as it works in a way that reliably leads to cutting in normal conditions (p. 9).

While d-functions and e-functions (i.e., proper functioning) will often line up, these functions can come apart (e.g., when artifacts are designed to work in non-e-function-filling ways). When they don't line up, S&K maintain that e-functions generally override. As they put it:

what we usually see in cases of divergence is that norms governing proper functioning tend to be incorporated in design plans of future generations of tokens of the type: if we discover that there are more reliable ways for the artefact in question to fulfil its function, design will follow suit (Ibid., p. 9).

So we have in view now S&K's thinking behind the idea that artifacts (of which AI is an instance) can acquire functions. What about the next component of the view: that functions can generate obligations? The crux of the idea is that a species of obligation, function-generated obligation, is implicated by facts about what it is for something to fulfil its e-function. The heart has a purely e-function-generated obligation to pump blood in normal conditions (the conditions under which pumping blood contributed to the explanation of its continued existence). In maintaining this, on S&K's line, we aren't doing anything objectionably anthropocentric, any more than when we say a heart should (qua heart) pump blood. We can easily extend this kind of obligation talk over to artifacts, then: just as a heart is malfunctioning (and so not meeting its e-functionally sourced obligations) if it stops pumping blood, a diagnostic AI is malfunctioning (and not meeting its e-functionally sourced obligations) if it stops recognising simple tumours by their appearance, and miscategorises them.
Against this background, then, S&K define an AI's being maximally trustworthy at phi-ing as a matter of having a "maximally strong disposition to meet its functional norms-sourced obligations to phi." The conditions for outright AI trustworthiness attributions can then be characterised in terms of maximal AI trustworthiness in the following way: an outright attribution of trustworthiness to an AI is true in a context c iff that AI approximates "maximal trustworthiness to phi" 6 closely enough to surpass a threshold on degrees of trustworthiness determined by c, where the closer x approximates maximal trustworthiness to phi, the higher x's degree of trustworthiness to phi.

Critical discussion

I suspect that a typical place one might begin to poke to look for a hole in the above account would be the very idea that a machine could have an obligation in the first place. Imagine this line of reply: "But S&K have complained that extant accounts of trustworthiness that rely on 'virtue' and 'good will' as psychologically demanding prerequisites for being trustworthy are too anthropocentric to be generalisable to AI. But isn't being a candidate for an 'obligation' equally psychologically demanding and thereby anthropocentric? If so, haven't they failed their own generalisability desideratum by their own lights?"

The above might look superficially like the right way to press S&K, but I think such a line would be uncharitable, so much so that it's not worth pursuing. First, we humans often have our own obligations to others sourced in facts about ourselves (substantive moral agreements we make, etc.) that are themselves predicated on our having a kind of psychology that we're not yet ready to attribute to even our most impressive AI. But S&K's argument is compatible with all of this - viz., with granting that obligations for creatures like us oftentimes arise out of features AI lack. What matters for their argument is just that AI are candidates for e-function-generated obligations, and it looks like this is something we can deny only on pain of denying either that AI can have e-functions, or that e-functions can generate norms. 7 I think we should simply grant both of these - rather than incur what looks like an explanatory burden to deny either.

The right place to press them, I think, is on the scope of the generalisability of their account. Here it will be helpful to consider again the case of a cancer-diagnostic AI which they use for illustrative purposes. The etiological function that such cancer diagnostic AIs acquire (which aligns with their d-function) is going to be a purely representational function. Cancer diagnostic algorithms are updated during the AI's supervised learning process (i.e., as is standard in deep learning) against the metric of representational accuracy; the aim here is reliably and accurately identifying (and not misidentifying), e.g., tumours from images, and thus maximising representational accuracy via sensitivity and specificity in its classifications. The AI becomes valuable to the designer when and only when, and to the extent that, this is achieved. To use but one example, take the case of bladder cancer diagnosis. It is difficult using standard human tools to reliably predict the metastatic potential of disease from the appearance of tumours. Digital pathology via deep learning AI is now more reliable than humans at this task, and so can predict disease with greater accuracy than through use of human tools alone (see Harmon et al., 2020).
This predictive accuracy explains the continued use (and further accuracy-aimed calibration by the designers) of such diagnostic AIs. There are other non-diagnostic AIs with representational functions as their e-functions. An example is FaceNet, which is optimised for accuracy in identifying faces from images (Schroff et al., 2015; William et al., 2019). AIs with purely representational e-functions, however, are - perhaps not surprisingly - the exception in AI more broadly.

Let's begin here by considering just a few examples of the latest deep learning AI from Google's DeepMind. AlphaCode, for instance, is optimised not for representational accuracy but for practically useful coding. Supervised training, in this case, is not done against a representational (mind-to-world) metric, but against a kind of usefulness (world-to-mind) metric. In competitive coding competitions, for instance, AlphaCode's success (and what explains its continued existence) lies in developing coding solutions to practical coding problems and puzzles. Perhaps even more ambitiously, the research team at DeepMind is developing an AI optimised to 'interact' in human-like ways with three-dimensional space in a simulated 3-D world (Abramson et al., 2022). This AI is optimised in such a way that it will (given this aim) acquire an e-function that is at most only partly representational (e.g., reliably identifying certain kinds of behaviour cues), while also partly practical (moving objects in the 3-D world). 8

Next, and perhaps most notably, consider - in this case due to the OpenAI research team - ChatGPT, a chatbot built on OpenAI's GPT-3 language models, which provides 'human-like' responses to a wide range of queries. Although ChatGPT is often used for purposes of 'fact finding' (e.g., you can ask ChatGPT to explain complex phenomena to you), it is not right to say that this AI has a representational e-function. On the contrary, ChatGPT is optimised for conversational fluency; to the extent that accuracy misaligns with conversational fluency, ChatGPT is optimised to favour the fluency metric.

Finally, consider a familiar AI - YouTube's recommender system - which is optimised against the metric of (in short) 'keeping people watching', and thus generating advertising revenue (Alfano et al., 2020). When the accuracy of a recommendation choice (with respect to clustering towards videos of a similar content-type which the user has watched) misaligns with a choice more likely to keep the user watching more content, the algorithm is optimised to recommend the latter. This feature of YouTube's recommender system has been identified as playing a role in the disproportional recommendation of conspiratorial content on YouTube relative to viewers' ex ante search queries. 9

With the above short survey in mind, let's now return to the matter of the scope of the generalisability of S&K's account of trustworthy AI. As I see it, at least, S&K's account can explain trustworthy AI in cases where AI acquires representational e-functions, such as the diagnostic AI example, and other AIs with representational functions, like FaceNet. But - and here is where I am less confident about their account - we've just seen that many of the most touted and promising recent AIs either lack a representational e-function altogether (e.g., AlphaCode, ChatGPT, etc.) or have such a function only alongside other practical e-functions (e.g., DeepMind's virtual world AI). S&K seem to face a dilemma here.
On the one hand, if e-function-generated obligations of the sort whose fulfilment disposition matters for AI trustworthiness are not limited to those obligations generated by representational e-functions (but also include obligations generated by non-representational e-functions), then it looks like the view - problematically - predicts that YouTube's recommender system, a known source of conspiratorial content, is maximally trustworthy so long as it is maximally fulfilling all the obligations generated by the e-function it has to 'keep viewers watching' (in turn, maximising ad revenue). I take it that this result is a non-starter; insofar as S&K are aiming to distinguish trustworthy from untrustworthy AIs, YouTube's recommender system has features that will line it up as a paradigmatic case of the latter. 10

This brings us to the more plausible and restrictive option: for a proponent of S&K's view of trustworthy AI to hold that e-function-generated obligations of the sort whose fulfilment disposition matters for AI trustworthiness are limited to those obligations generated by representational e-functions - such as those of, e.g., cancer diagnostic AIs, FaceNet, etc. Let's assume this latter, more restrictive route is taken. On this assumption, we seem to get the result that, on S&K's view, all but the minority of AIs being developed (those like cancer diagnostic AIs, FaceNet, etc.) fail to meet the conditions for trustworthy AI.

So does this result overpredict untrustworthiness in AI? Here is one reason for thinking that perhaps it does. Even if we grant that, e.g., YouTube's recommender system (in virtue of its documented propensity to recommend conspiratorial content, a propensity that aligns with its fulfilling its practical e-function) is an example of an 'untrustworthy AI' (and agree that S&K's view predicts untrustworthiness correctly here), it's less clear that, e.g., AlphaCode should get classed together with YouTube's recommender system. At least, it's not clear to me what resources S&K's proposal has for distinguishing them, given that neither has been optimised to acquire a representational e-function. Without some additional story here, then, the concern is that S&K might overpredict untrustworthy AI even granting that the view diagnoses some cases of untrustworthy AI (e.g., YouTube's recommender system) as it should.

Concluding remarks

Giving a plausible account of trustworthy AI is no easy task; it is no surprise that, at least in 2023, the themes of trustworthy and responsible AI are among the most widely funded. 11 S&K's account offers a welcome intervention in this debate because it clarifies the kind of anthropocentric barrier to getting a plausible account up and running from the very beginning, and it offers an example of how an account that avoids this problem might go. My quibbles with the scope of the account in Sect. 3 remain, but they should be understood as just that: quibbles that invite further development of an account that is, on the whole, a promising one. 12

Funding: For supporting this research I am grateful to the AHRC Digital Knowledge (AH/W008424/1) project as well as to the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 948356, KnowledgeLab: Knowledge-First Social Epistemology).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
3,978.8
2023-05-02T00:00:00.000
[ "Philosophy", "Computer Science" ]
Efficacy of Msplit Estimation in Displacement Analysis

Sets of geodetic observations often contain groups of observations that differ from each other in the functional model (or at least in the values of its parameters). Sets of observations obtained at various measurement epochs are a practical example in such a context. From the conventional point of view, for example, in least squares estimation, the subsets in question should be separated before the parameter estimation. Another option is the application of Msplit estimation, which is based on the fundamental assumption that each observation is related to several competitive functional models. The optimal assignment of every observation to the respective functional model is automatic during the estimation process. Considering deformation analysis, each observation is assigned to several functional models, each of which is related to one measurement epoch. This paper focuses on the efficacy of the method in detecting point displacements. The research is based on example observation sets and the application of Monte Carlo simulations. The results were compared with the classical deformation analysis, which shows that Msplit estimation seems to be an interesting alternative to conventional methods. The most promising results were obtained for disordered observation sets, where Msplit estimation reveals its natural advantage over the conventional approach.

Introduction and Motivation

Consider the classical functional model of geodetic observations, which is given for l = 1, ..., q different measurement epochs, namely

y_l = A_l X_l + v_l  (1)

where y_l = [y_1,l, ..., y_n_l,l]^T are the observation vectors whose elements belong to the respective sets Φ_l = {y_1,l, ..., y_n_l,l}; X_l = [X_1,l, ..., X_r,l]^T are the parameter vectors; v_l = [v_1,l, ..., v_n_l,l]^T are the vectors of random errors; and A_l ∈ R^(n_l×r) are known coefficient matrices. Such models are the basis of deformation analysis, namely for determining the shifts ΔX_(k,l) = X_l − X_k between the epochs k and l (for example, the changes of the point coordinates between such epochs). The vectors ΔX_(k,l) can be estimated by applying different methods or strategies (e.g., [1][2][3]). The least squares method (LS-method) is still the most popular approach in such an analysis; note that LS-estimates are often supplemented with respective statistical tests (e.g., [4][5][6]). However, some unconventional methods are also in use, for example, robust M-estimation [7,8] or R-estimation [9][10][11][12][13][14]. In the case of relative networks, one can also apply methods of free adjustment (e.g., [15][16][17][18]). Some methods as well as their properties are well known; other methods are still being researched. Msplit estimation surely belongs in the latter group. Msplit estimation was proposed by Wiśniewski [19,20] and has been applied to some practical problems in which each observation could be assigned to several different functional models. For example, it was used in remote sensing (terrestrial laser scanning or ALS data) for data modeling [21] and in some geodetic problems, for example, in deformation analysis [22][23][24][25] and robust estimation (e.g., [26]). Automatic assignment of each observation to the best-fitted model is one of the most important features of Msplit estimation.
It is also very useful in deformation analysis, when the observation set might include observations from all measurement epochs (the set is an unrecognized mixture of such observations). Note that there is usually no problem with separating observations from different epochs and hence with separate analyses. However, there are some cases when the application of Msplit estimation is advisable. For example, when a point is displaced during an observation session, one should consider two pseudo-epochs, and Msplit estimation allows us to estimate the parameters of the functional models for such pseudo-epochs. Such models can also be applied when an observation set is disturbed by outliers [23,24]. Note that the method under investigation can be applied to all observation sets that are an unrecognized (and/or unordered) mixture of observation aggregations. Such data can result from different sources or instrumentations. In fact, the source of the data does not matter here. It can be, for example, geodetic instruments (total stations, GNSS receivers, etc.) or, in remote sensing, terrestrial or airborne laser scanners. The main properties of Msplit estimation are discussed in the papers cited above; the present study focuses on the efficacy of the method in estimating the parameters of the competitive functional models, hence also in estimating point displacements. The analyses were based on simulations of the crude Monte Carlo method and the application of elementary functional models or models of a leveling network. The results were compared with the results of the LS-method.

Theoretical Foundations

Without loss of generality, we can assume two measurement epochs; thus, in the model of Equation (1), we have q = 2. Then, the optimization criterion of the LS-method and its solution can be written in the following way (l = 1, 2)

min_(X_l) Σ_(i=1..n_l) p_i,l v_i,l² ⟹ X̂_LS,l = D_LS,l y_l  (2)

where v_i,l = y_i,l − a_i,l X_l, D_LS,l = (A_l^T P_l A_l)⁻¹ A_l^T P_l, P_l are the respective weight matrices, and a_i,l is the ith row of the matrix A_l. The difference ΔX̂_LS(1,2) = X̂_LS,2 − X̂_LS,1 is the LS-estimate of the shift ΔX_(1,2).

In the case of Msplit estimation, we assume that each observation belongs to either of two sets Φ_1 or Φ_2; however, there is one observation set Φ = Φ_1 ∪ Φ_2 and one observation vector y = [y_1, ..., y_n]^T, n = n_1 + n_2. There are two competitive functional models with two competitive versions of the parameter X, namely X_(1) and X_(2):

y = A X_(1) + v_(1) = A X_(2) + v_(2)  (3)

(A ∈ R^(n×r), rank(A) = r). The vectors v_(1), v_(2) ∈ R^n are two competitive versions of the observation errors related to all elements of the vector y. The theoretical basis of Msplit estimation is the assumption that every observation y_i can be assigned to either of two density functions f(y_i; X_(1)) or f(y_i; X_(2)). If y_i occurs, it brings the f-information I_f(y_i; X_(1)) = −ln f(y_i; X_(1)) or the f-information I_f(y_i; X_(2)) = −ln f(y_i; X_(2)), which are competitive with each other. Msplit estimates of the parameters X_(1) and X_(2), namely X̂_(1) and X̂_(2), minimize the following global information that is brought by all elements of the vector y [19]

I_f(y; X_(1), X_(2)) = Σ_(i=1..n) [−ln f(y_i; X_(1))][−ln f(y_i; X_(2))]  (4)

In other words, the estimators in question are the solutions of the following optimization problem:

min_(X_(1),X_(2)) I_f(y; X_(1), X_(2)) = I_f(y; X̂_(1), X̂_(2))  (5)

For such solutions, the occurrence of the particular observation vector is the most probable.
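To see how the criterion (4) behaves, the sketch below evaluates the global f-information for Gaussian densities on an invented one-dimensional sample. It is only meant to show that the cross-product is small when each observation is well explained by at least one of the two competing parameters:

```python
import numpy as np

# Global f-information (4) for Gaussian f with common sigma: each observation
# contributes the product of its negative log-densities under X(1) and X(2).
def global_f_information(y, x1, x2, sigma=1.0):
    neg_log_f = lambda x: 0.5 * np.log(2 * np.pi * sigma**2) \
                          + (y - x)**2 / (2 * sigma**2)
    return float(np.sum(neg_log_f(x1) * neg_log_f(x2)))

y = np.array([1.0, 1.1, 1.2, 0.9, 1.3, 2.0, 2.1])   # invented mixed sample
print(global_f_information(y, x1=1.1, x2=2.05))     # relatively small
print(global_f_information(y, x1=0.0, x2=5.0))      # much larger: neither fits
```

Because each summand is a product, an observation only needs one small factor, i.e., one well-fitting model, to contribute little; this is the mechanism behind the automatic assignment of observations to models.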
If X_(1) = X_(2) = X, then I_f(y; X_(1), X_(2)) becomes a function of Σ_(i=1..n) [−ln f(y_i; X)], i.e., of the objective function of the maximum likelihood method (ML-method). In such a context, Msplit estimation is a special development of the ML-method. Huber [27,28] generalized the ML-method to M-estimation by introducing φ_M(X) = Σ_(i=1..n) ρ(y_i; X), where ρ(y_i; X) is an arbitrary function for which the estimators obtain the desired properties (for example, they are robust against outliers). A similar generalization was also proposed for Msplit estimation [19,20]. The objective function of Equation (4) is replaced by the following function

φ_ρ(X_(1), X_(2)) = Σ_(i=1..n) ρ_(1)(y_i; X_(1)) ρ_(2)(y_i; X_(2))  (6)

Of course, Msplit estimation is also a development of classical M-estimation: if X_(1) = X_(2) = X, then φ_ρ(X_(1), X_(2)) becomes a function of φ_M(X). There are several variants of Msplit estimation that differ from one another in the objective function or the assumed parameters [19,22,29]. So far, the most popular is the squared Msplit estimation, for which ρ_(1)(y_i; X_(1)) = p_i v²_i(1) and ρ_(2)(y_i; X_(2)) = p_i v²_i(2). Hence, one can write the following optimization problem of such a method [19,30]

min_(X_(1),X_(2)) φ_sq(X_(1), X_(2)) = Σ_(i=1..n) p_i v²_i(1) v²_i(2)  (7)

where P = Diag(p_1, ..., p_n) is a diagonal weight matrix of the observations y (the Hadamard product, denoted *, is used when Equation (7) is written in matrix form). It is obvious that if X_(1) = X_(2) = X, then φ_sq(X_(1), X_(2)) becomes a function of the LS objective Σ_(i=1..n) p_i v_i², which means that the squared Msplit estimation is a development of the LS-method. Considering such a relationship and the range of the practical applications of Msplit estimation, we will only discuss the squared Msplit estimation.

To compute Msplit estimates, one can use the sufficient conditions for the minimum of the objective function. Considering the optimization problem (7), one can write the following equations

g_(1)(X_(1), X_(2)) = [∂φ_sq(X_(1), X_(2))/∂X_(1)]^T |_(X_(1)=X̂_(1), X_(2)=X̂_(2)) = 0
g_(2)(X_(1), X_(2)) = [∂φ_sq(X_(1), X_(2))/∂X_(2)]^T |_(X_(1)=X̂_(1), X_(2)=X̂_(2)) = 0  (8)

where g_(1)(X_(1), X_(2)) and g_(2)(X_(1), X_(2)) are the gradients of the function φ_sq(X_(1), X_(2)). The following diagonal weight matrices are based on the cross-weighting functions [20,31]

w_(1)(v_i(2)) = p_i v²_i(2),  w_(2)(v_i(1)) = p_i v²_i(1)  (9), (10)

namely P_(1)(v_(2)) = Diag(w_(1)(v_1(2)), ..., w_(1)(v_n(2))) and, analogously, P_(2)(v_(1)). The solutions of Equation (8) are the following Msplit estimators

X̂_(1) = D_(1)(v_(2)) y,  X̂_(2) = D_(2)(v_(1)) y  (11)

where

D_(1)(v_(2)) = [A^T P_(1)(v_(2)) A]⁻¹ A^T P_(1)(v_(2)),  D_(2)(v_(1)) = [A^T P_(2)(v_(1)) A]⁻¹ A^T P_(2)(v_(1))  (12)

Thus, X̂_(1) is a function of v̂_(2) = y − AX̂_(2), whereas X̂_(2) is a function of v̂_(1) = y − AX̂_(1). For this reason, this solution has an asymptotic character. The following iterative procedure can be applied to compute the sought estimates (j = 1, ..., m), for a given starting point, for example, v⁰_(2) = y − AX̂_LS:

X^j_(1) = D_(1)(v^(j−1)_(2)) y,  v^j_(1) = y − AX^j_(1),  X^j_(2) = D_(2)(v^j_(1)) y,  v^j_(2) = y − AX^j_(2)  (13)

The process stops when, for each l = 1, 2, it holds that g_(l)(X̂_(1), X̂_(2)) = 0, and hence X̂_(l) = X^m_(l) = X^(m−1)_(l). Note that other iterative processes that use both the gradients and the Hessians of φ(X_(1), X_(2)), namely Newton's method, can be found in [19,20,29].

Now, an elementary property of Msplit estimates is shown. Here, we consider a basic example that precedes the more detailed analysis presented in the next section. Let us assume the functional model y_i = X + v_i, i = 1, ..., 7, and the observation set Φ given as the vector presented in Figure 1a. Then, X̂_LS = 1.49. For the sake of comparison, let the robust M-estimate be computed. By applying the Huber method [27,28], where the weight function is w(v) = min{1, k/|v|} and k = 3, one obtains X̂_M = 1.27 (Figure 1b). Both estimates in question are not satisfactory and do not reflect the nature of the observation set. The robust estimate X̂_M lies closer to the "bigger" aggregation of observations.
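Before continuing with this example, here is a minimal numerical sketch of the iterative procedure (13) for the squared Msplit estimation, applied to the univariate model above. The seven observation values are invented, since the vector behind Figure 1 is not reproduced in the text:

```python
import numpy as np

def msplit_squared(y, A, P=None, n_iter=100):
    """Squared Msplit estimates via the cross-weighted iteration (13)."""
    y = np.asarray(y, float).reshape(-1, 1)
    n = len(y)
    P = np.eye(n) if P is None else P
    x1 = x2 = np.linalg.solve(A.T @ P @ A, A.T @ P @ y)   # LS starting point
    for _ in range(n_iter):
        w1 = P @ np.diagflat((y - A @ x2) ** 2)   # cross-weights p_i * v_i(2)^2
        x1 = np.linalg.solve(A.T @ w1 @ A, A.T @ w1 @ y)
        w2 = P @ np.diagflat((y - A @ x1) ** 2)   # cross-weights p_i * v_i(1)^2
        x2 = np.linalg.solve(A.T @ w2 @ A, A.T @ w2 @ y)
    return x1.ravel(), x2.ravel()

y = [1.0, 1.1, 1.2, 0.9, 1.3, 2.0, 2.1]   # invented two-aggregation sample
A = np.ones((7, 1))
x1, x2 = msplit_squared(y, A)
print(x1, x2)   # the two estimates split towards the two aggregations
```

With such data, the two estimates settle near the means of the two aggregations (about 1.1 and 2.05 here), mirroring the behaviour the text reports for Figure 1.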
Next, the question arises of how to treat the observations that are furthest from that estimate. In the classical approach, such observations are regarded as outliers (for example, affected by gross errors), and we are no longer interested in such observations. Different conclusions follow from Msplit estimation, where X̂_(1) = 1.10 and X̂_(2) = 2.00 (Figure 1c). The Msplit estimates show that the set Φ consists of two subsets Φ_1 and Φ_2 (Figure 1d), whose elements can be regarded as realizations of two different random variables that differ from each other in the location parameters X_1 and X_2, respectively. Similar assumptions can also be found in other estimation problems, for example, cluster analysis (e.g., [32,33]), or in mixed model estimation applied in geosciences (e.g., [34,35]). Such approaches can be regarded as alternatives; however, we should have some understanding that they differ significantly in their general ideas. Assigning each observation to the model that is the most suitable for it is a natural process in Msplit estimation. This property can be applied in the analysis of network deformation, where there are two functional models: y_1 = A_1 X_1 + v_1 and y_2 = A_2 X_2 + v_2 for the two measurement epochs, respectively. Thus, one can create one common observation vector y = [y_1^T, y_2^T]^T, the common weight matrix P = Diag(P_(1), P_(2)), and the coefficient matrix A = [A_1^T, A_2^T]^T. It is noteworthy that the order of the observations within the vector y can be arbitrary. The actual order of the observations must only coincide with the order of the rows within the matrix A and the order of the weights in the weight matrix P. Here, the shift ΔX_(1,2) can be estimated by ΔX̂_(1,2) = X̂_(2) − X̂_(1). It is worth noting that ΔX_(1,2) can also be estimated directly by applying the Shift-Msplit estimation proposed by Duchnowski and Wiśniewski [22].

Elementary Tests

The elementary analysis was based on univariate models and simulations of observations related to such models. Thus,

y_l = 1_(n_l) X_l + v_l, l = 1, 2  (14)

where 1_(n_l) = [1_1, ..., 1_(n_l)]^T; X_1 and X_2 are parameters that differ from each other by the shift ΔX_(1,2) = X_2 − X_1. The measurements, namely the elements of the vectors y_1 and y_2, were simulated by using the Gaussian generator randn(n, 1) of MATLAB. We assumed that σ = 1 and the following theoretical values of the parameters: X^t_1 = 0 and hence X^t_2 = X^t_1 + ΔX_(1,2) = ΔX_(1,2). Considering the LS-estimation of X_1 and X_2, we can apply the model of Equation (14) or Equation (1), where A_1 = 1_(n_1) and A_2 = 1_(n_2). In the case of Msplit estimation, we assumed the model of Equation (3) with y = [y_1^T, y_2^T]^T ∈ R^n, n = n_1 + n_2, and A = [1^T_(n_1), 1^T_(n_2)]^T = 1_n. We also applied the iterative procedure of Equation (13), taking the LS-estimates as the starting point (note that the starting point can usually be arbitrary). Let us now consider an example of observation simulation for which ΔX_(1,2) = 5σ = 5 and n_1 = 50, n_2 = 10. The parameter estimates, together with the respective residuals, are presented in Figure 2.
Now, let us consider more simulated observation sets. By applying the crude Monte Carlo method (MC) for N simulations, one can compute the MC estimates by applying the formula

θ̂_MC = (1/N) Σ_(i=1..N) θ̂_i  (15)

where θ̂_i are the estimates obtained for the ith simulation. The locations of the MC estimates for N = 5000 and ΔX_(1,2) = 5 or ΔX_(1,2) = 20 are presented in Figure 3. This shows that the MC estimates obtained for both estimation methods were close to the respective theoretical values (considering the simulated standard deviation). Generally, the LS-estimates seemed more satisfactory. Please note that the results obtained for different values of the shift ΔX_(1,2) indicate that Msplit estimation is more satisfactory for bigger shifts than for smaller ones. Thus, let us examine how efficient Msplit estimation is for different shifts. Let the measure of efficacy be defined in relation to the LS-estimates, thus

λ_(l)(X̂_(l), X̂_LS,l) = |X̂_(l) − X^t_l| − |X̂_LS,l − X^t_l|  (16)

Note that when λ_(l)(X̂_(l), X̂_LS,l) < 0, the Msplit estimate is closer to the theoretical value than the LS-estimate. Now, we can define the following function of an elementary success of Msplit estimation

s_(l)(X̂_(l), X̂_LS,l) = 1 if λ_(l)(X̂_(l), X̂_LS,l) < 0, and 0 otherwise  (17)

The application of MC simulations allows us to present the success rate (SR), which can be computed for different values of the shift ΔX_(1,2)

γ_(l)(X̂_(l), X̂_LS,l; ΔX_(1,2)) = (1/N) Σ_(i=1..N) s^i_(l)(X̂_(l), X̂_LS,l)  (18)

where s^i_(l)(X̂_(l), X̂_LS,l) is the value of Equation (17) at the ith simulation. Note that such a SR is defined in a very similar way to the mean success rate (MSR) given by Hekimoglu and Koch [36]. SRs for different ΔX_(1,2) and for N = 5000 simulations are presented in Figure 4, which plots the SRs of X̂_(1) and X̂_(2) for growing values of ΔX_(1,2).
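The success-rate experiment itself is straightforward to sketch. The fragment below reuses the msplit_squared() helper defined in the earlier sketch; the epoch sizes, shift, and number of simulations are illustrative, and this is a paraphrase of the procedure rather than the authors' code:

```python
import numpy as np

# Monte Carlo success rate: how often is the Msplit estimate of the shifted
# epoch closer to the truth than the per-epoch LS estimate (lambda < 0, Eq. 16)?
rng = np.random.default_rng(1)
n1, n2, shift, N = 50, 10, 5.0, 1000
A = np.ones((n1 + n2, 1))
successes = 0
for _ in range(N):
    y1 = rng.normal(0.0, 1.0, n1)       # epoch 1: X_t = 0
    y2 = rng.normal(shift, 1.0, n2)     # epoch 2: X_t = shift
    x_ls2 = y2.mean()                   # univariate LS estimate = sample mean
    xa, xb = msplit_squared(np.concatenate([y1, y2]), A)
    x_ms2 = max(xa[0], xb[0])           # version matching the (positive) shift
    successes += abs(x_ms2 - shift) < abs(x_ls2 - shift)
print(f"SR for the shifted epoch: {successes / N:.2f}")
```

Increasing the shift in this sketch should reproduce the qualitative trend of Figure 4: the SR of Msplit estimation grows as the two aggregations separate.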
Vertical Displacement Analysis

Let us now consider the efficacy of Msplit estimates in a more practical example, namely the analysis of vertical displacements within the leveling network presented in Figure 5. Such a network has already been under investigation in previous papers [24,25]. The network consists of four reference points R_1, ..., R_4 with the known heights H_R1 = ... = H_R4 = 0 m and five object points P_1, ..., P_5. We assumed that each of the height differences h_1, ..., h_16 was measured twice at each of the two measurement epochs and that σ = 2 mm was the known standard deviation of all measurements. We also assumed that at the first epoch X^t_1 = [H_1,1 = 0, ..., H_5,1 = 0]^T = 0, where H_i,1 is the height of the ith object point at the first epoch. The shift of the object points between the measurement epochs is given by ΔX_(1,2) = [ΔH_1(1,2), ..., ΔH_5(1,2)]^T, where ΔH_i(1,2) = H_i,2 − H_i,1. In the classical approach to the estimation of the point displacements, we used the functional model of Equation (1). Since all height differences were measured twice at the two measurement epochs, namely, we had two series of measurements at each epoch, we should assume that y_l ∈ R^32, X_l = [H_1,l, ..., H_5,l]^T, and A_⊗ = A ⊗ 1_2 ∈ R^(32×5), where A ∈ R^(16×5) is the known coefficient matrix related to one series of measurements, 1_2 = [1, 1]^T, and ⊗ is the Kronecker product. On the other hand, in the case of Msplit estimation, we should apply the functional model of Equation (3), where X_(1), X_(2) ∈ R^5 are the competitive versions of the parameter vector, and hence v_(1), v_(2) ∈ R^64 are the respective competitive versions of the measurement errors. When analyzing the efficacy of Msplit estimation, we can use two measures, namely the local measure of the distance between the LS and Msplit estimates as well as the global one

λ_(l)j([X̂_(l)]_j, [X̂_LS,l]_j) = |[X̂_(l)]_j − [X^t_l]_j| − |[X̂_LS,l]_j − [X^t_l]_j|  (19)
λ_(l)(X̂_(l), X̂_LS,l) = ‖X̂_(l) − X^t_l‖ − ‖X̂_LS,l − X^t_l‖  (20)

where [•]_j is the jth element of the vector and ‖•‖ is the Euclidean norm. The local distance, which is just another form of Equation (16), is related to a particular parameter, for example, the height of a displacing point. The global distance describes the whole parameter vector. Thus, we can define the local and global success rates in the following way

γ_(l)j = (1/N) Σ_(i=1..N) s^i_(l)j([X̂_(l)]_j, [X̂_LS,l]_j),  γ_(l) = (1/N) Σ_(i=1..N) s^i_(l)(X̂_(l), X̂_LS,l)  (21), (22)

where s^i_(l)j([X̂_(l)]_j, [X̂_LS,l]_j) and s^i_(l)(X̂_(l), X̂_LS,l) are functions of an elementary success from Equation (17), indexed with the respective arguments. The empirical analysis, which was based on the MC method for N = 5000 simulations, was carried out for several variants of the point displacements. First, we assumed that only point P_5 was displaced. The respective MC estimates obtained for the LS and Msplit estimations and ΔH_5(1,2) = −50, ΔH_5(1,2) = −100, or ΔH_5(1,2) = −200 mm are presented in Table 1, which also presents the local and global SRs. The MC estimates were similar for both estimation methods and the stable points. The SRs indicate that the LS-estimates were closer to the theoretical values in the vast majority of the simulations. Note that the local SRs obtained for point P_5 were much higher than the global ones. All estimates of the point heights obtained in the MC simulations (for the variant ΔH_5(1,2) = −50 mm) are presented in Figure 6.
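The local and global comparison measures (19)-(22) reduce to a few array operations. A compact sketch, given stacked estimates from N simulated adjustments (all arrays below are invented placeholders, not the study's results):

```python
import numpy as np

# Local (19) and global (20) efficacy measures and the resulting success
# rates (21)-(22). x_ms, x_ls: (N, r) arrays of Msplit and LS estimates from
# N Monte Carlo runs; x_true: length-r vector of theoretical heights.
def success_rates(x_ms, x_ls, x_true):
    lam_local = np.abs(x_ms - x_true) - np.abs(x_ls - x_true)          # (N, r)
    lam_global = (np.linalg.norm(x_ms - x_true, axis=1)
                  - np.linalg.norm(x_ls - x_true, axis=1))             # (N,)
    sr_local = (lam_local < 0).mean(axis=0)    # per-parameter SR, Eq. (21)
    sr_global = (lam_global < 0).mean()        # whole-vector SR, Eq. (22)
    return sr_local, sr_global

rng = np.random.default_rng(2)
x_true = np.array([0.0, 0.0, 0.0, 0.0, -50.0])     # only P5 displaced, in mm
x_ls = x_true + rng.normal(0, 1.0, (5000, 5))      # synthetic LS spread
x_ms = x_true + rng.normal(0, 1.5, (5000, 5))      # synthetic, wider spread
sr_local, sr_global = success_rates(x_ms, x_ls, x_true)
print(sr_local, sr_global)
```

With the wider synthetic spread for the Msplit column, the sketch reproduces the qualitative pattern of Table 1: local and global SRs below 0.5 when LS is the more accurate method.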
In the second variant, we assumed that there were two unstable points, namely P_4 and P_5. The results, which were obtained for the different point shifts, are presented in Table 2. Here, the MC estimates obtained for both methods were also similar. Figure 7 presents the LS and Msplit estimates that were obtained for all of the MC simulations. Generally, this confirmed the correctness of both estimation methods; however, differences between these two estimation methods were also apparent. The main difference was the dispersion, which was larger in the case of the Msplit estimation, especially for the stable points, which suggests that the accuracy of the Msplit estimation was worse than that of the LS estimation. It is also worth noting that the SRs of the Msplit estimation achieved bigger values in this variant. In the case of point P_5, the results of the Msplit estimation were better than the results of the classical approach in almost one third of the simulations.

The results presented here show that both methods, namely LS and Msplit estimation, yielded satisfactory solutions. However, such a conclusion is valid only for ordered observation sets, namely when each observation is properly assigned to its measurement epoch. If such a condition is not met, then an observation from another epoch will usually be regarded as an outlier. Since neither LS estimation nor Msplit estimation is robust against outliers, they both break down (please note that Msplit estimation is generally not robust unless we introduce an additional virtual model for outliers).
Note that in the context addressed here, the outliers result from the assignment of an observation to the wrong measurement epoch, not from gross errors. The natural feature of Msplit estimation is the automatic assignment of each observation to the proper epoch. Thus, we can suppose that this estimation method will not break down if such outliers occur. To illustrate this feature of Msplit estimation, we simulated that point P_5 was displaced and that ΔH_5(1,2) = −50 mm. Now, let us consider the following variants of the observation sets (Table 3). In the case of variant A, the results were very close to the respective results presented in Table 1. If the observation sets are not ordered correctly, then the local SRs at the second epoch are close to 1, which means that almost always the height of point P_5 at the second measurement epoch is better assessed by Msplit estimation than by LS estimation. Additionally, the global SRs were very high at the second epoch; hence, one can say that the heights of all network points were better estimated by the application of Msplit estimation.

Conclusions

The paper showed that Msplit estimation can be successfully applied in deformation analysis. The results were generally similar to the results of the more conventional LS estimation; however, the latter method usually yielded slightly better outcomes. The elementary tests showed that the efficacy of Msplit estimation grew with an increasing shift between the observation sets. In the case of geodetic networks, where a parameter vector usually consists of several point coordinates, the shift of one or two such coordinates between measurement epochs does not influence the efficacy of Msplit estimation in a significant way. The real advantage of Msplit estimation was revealed for disordered observation sets, for example, when the observations from at least two measurement epochs were mixed for some reason. Note that the LS-estimates break down in such cases, in contrast with Msplit estimation, for which the ordering of all observations within the combined observation set can be arbitrary and does not influence the final results of the method or its iterative process. Such a feature results directly from the theoretical foundations of the method, which are based on the concept of the split potential. In short, each observation "chooses" the functional model that fits it best. In this context, Msplit estimates are robust against some kind of "outliers", namely observations that come from other observation sets. Referring to the presented example, there were four height differences regarding the height of network point P_5. If one of them does not fit the others, then the method tries to fit such an "outlying" observation into another epoch. If it works, then the whole estimation process succeeds. However, if such an observation is in fact affected by a gross error, then it does not fit any epoch, and the estimation must break down. The introduction of a virtual epoch, which is not related to any real measurements, is one solution to this problem. One can say that such an epoch can collect all "loners" that do not fit any real measurement epoch. Generally speaking, one can say that Msplit estimation is not robust against outliers that result from the occurrence of gross errors.
However, if one assumes an additional competitive functional model (dedicated to outliers), then Msplit estimation can estimate the location parameters for the "good" observation aggregations as well as for the outlier(s). Increasing the number of competitive functional models protects the estimation of the location parameters of good observations from the bad influence of outliers. Note that in this context, outliers are no longer "outlying" and become regular observations of the third (or, more generally, the next) aggregation. This concept, which is beyond the scope of this paper, was discussed in [23,24,30].
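To make the split-potential idea concrete, the following minimal Python sketch illustrates the squared variant of Msplit estimation for the simplest possible case: a single height observed at two epochs, with the two observation sets deliberately mixed. Only the cross-multiplied objective, the sum over observations of v_i(1)^2 * v_i(2)^2, is the defining element of the method; the simulated heights, the 2 mm noise level, the Nelder-Mead optimizer, and the starting point are illustrative assumptions, not the networks or algorithms used in the paper.

```python
# A minimal sketch of squared Msplit estimation, assuming the cross-objective
# phi(H1, H2) = sum_i v_i(1)^2 * v_i(2)^2 with v_i(j) = y_i - H_j.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
h1_true, h2_true = 100.000, 99.950            # epoch heights [m], shift = -50 mm
y = np.concatenate([h1_true + rng.normal(0, 0.002, 4),
                    h2_true + rng.normal(0, 0.002, 4)])
y = rng.permutation(y)                        # deliberately disorder the epochs

def split_objective(x, obs):
    # squared Msplit objective: each term is small if at least one residual is small
    v1, v2 = obs - x[0], obs - x[1]
    return np.sum(v1**2 * v2**2)

# a perturbed pooled mean breaks the symmetry between the two competing models
x0 = np.array([y.mean() + 0.01, y.mean() - 0.01])
res = minimize(split_objective, x0, args=(y,), method="Nelder-Mead")
h1_hat, h2_hat = sorted(res.x, reverse=True)
print(f"H(1) = {h1_hat:.4f} m, H(2) = {h2_hat:.4f} m")

# assign each observation to the epoch whose residual is smaller
epochs = np.where(np.abs(y - h1_hat) < np.abs(y - h2_hat), 1, 2)
print("automatic epoch assignment:", epochs)
```

Because each term of the objective vanishes whenever at least one of the two residuals is small, every observation effectively "chooses" its own epoch, which is why shuffling y does not change the estimates.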
7,343
2019-11-01T00:00:00.000
[ "Mathematics" ]
Beilstein Journal of Organic Chemistry. Single and Double Stereoselective Fluorination of (E)-Allylsilanes. Acyclic allylic monofluorides were prepared by electrophilic fluorination of branched (E)-allylsilanes with Selectfluor. These reactions proceeded with efficient transfer of chirality from the silylated to the fluorinated stereocentre. Upon double fluorination, an unsymmetrical ethyl syn-2,5-difluoroalk-3-enoate was prepared, the silyl group acting as an anti stereodirecting group for the two C-F bond-forming events. Findings Asymmetric C-F bond formation continues to challenge chemists, inspiring the development of increasingly effective protocols for stereocontrolled fluorination. [1][2][3][4][5][6][7][8][9][10][11] Studies from our laboratory illustrated that allylsilanes undergo electrophilic fluorination to afford allylic fluorides with clean transposition of the double bond. Using chiral cyclic allylsilanes, these experiments have culminated in efficient methods for the asymmetric synthesis of monofluorinated cyclitols or vitamin D3 analogues. [12][13][14][15] The key step of these syntheses is a highly efficient diastereoselective fluorodesilylation. We encountered more difficulties with the fluorination of acyclic allylsilanes A constructed by metathetic coupling of allyltrimethylsilane with chiral olefinic partners (eq. 1, Scheme 1).
Although high yielding, the electrophilic fluorination of these substrates suffered from a poor level of diastereocontrol, thereby limiting the synthetic value of these reactions. [16,17] The absence of a silylated stereogenic centre is likely to be responsible for the poor stereocontrol observed upon fluorination of these substrates. We envisaged that the fluorination of (E)-allylsilanes B, featuring a silylated stereogenic centre, might be a superior transformation to control the configuration of the emerging fluorine-bearing centre (eq. 2, Scheme 1). This working hypothesis is supported by the well-accepted model, which accounts for the observed transfer of chirality when reacting allylsilanes B with electrophiles other than fluorine. [18][19][20][21] Chiral allylsilanes B are known to act as useful carbon nucleophile equivalents in highly stereoselective condensation reactions with a large range of electrophiles, leading to the construction of C-C, C-O, C-N or C-S bonds. [22][23][24][25][26][27] With the nitrogen-based electrophile NO2BF4, this methodology delivers acyclic (E)-olefin dipeptide isosteres featuring two allylic stereocentres. [28,29] Scheme 1: Fluorination of branched allylsilanes A and B. Herein, we report our investigation into the fluorination of (E)-allylsilanes of general structure B. A highly efficient and stereoselective synthesis of alkenes featuring bisallylic stereocentres, one of them being fluorinated, emerged from this study. Significantly, alkenes flanked by two allylic fluorinated stereogenic centres are also accessible upon double electrophilic fluorination of (E)-allylsilanes substituted with an ester group. The synthesis of the allylsilanes (±)-1a-i featuring an ester or alcohol group was carried out according to the procedure reported by Panek and co-workers. [30] See Additional File 1 for full experimental data. The fluorinations were carried out at room temperature in CH3CN in the presence of 1.0 eq. of NaHCO3 and 1.5 eq. of Selectfluor [1-chloromethyl-4-fluoro-1,2-diazoniabicyclo[2.2.2]octane bis(tetrafluoroborate)]. The reactivity of the (E)-allylsilanes 1a-d possessing a single stereogenic centre was surveyed first to probe how structural variations on these substrates influence the E/Z selectivity of the resulting allylic fluorides (Table 1). For the (E)-allylsilane 1a, the allylic fluoride 2a was obtained in 81% yield as a roughly 1:1 mixture of E/Z geometrical isomers (entry 1). The structurally related (E)-allylsilane 1b possessing a primary alcohol group underwent fluorination in a lower yield of 64%, delivering the E-isomer preferentially but with poor selectivity (entry 2). The fluorination of allylsilanes featuring the primary alcohol gave, in addition to the desired product, various amounts of O-trimethylsilylated 5-fluoroalk-3-enols. The presence of the gem-dimethyl group on the starting silanes 1c and 1d drastically improved the stereochemical outcome of the fluorination. Compounds E-2c and E-2d were formed in 95% and 70% yield, respectively, with no trace of the Z-isomer detectable in the crude reaction mixture (entries 3 and 4). When a second stereogenic centre is present on the starting (E)-allylsilane, up to four stereoisomers may be formed upon fluorination. This is illustrated in Scheme 1 with the fluorodesilylation of anti (E)-1e.
For substitution reactions (SE2') of allylsilanes such as anti (E)-1e with electrophiles other than "F+", an anti approach with respect to the silyl group prevails, with preferential formation of the syn (E) isomer. [18][19][20][21] This stereochemical outcome suggests that the major reaction pathway involves the reactive conformer I, leading, after addition of the electrophile, to a carbocationic intermediate which undergoes rapid elimination prior to bond rotation (Scheme 2). Subsequent experiments focused on the fluorination of the anti and syn (E)-allylsilanes 1e-i to study the effect of silane configuration on diastereoselection (Table 2). The allylic fluoride 2e was formed in 95% yield as a diastereomeric mixture of the syn-2e and anti-2e isomers. The high d.r. [19:1] suggested that the transfer of chirality (anti approach of Selectfluor) from the silylated to the fluorinated stereocentre was very efficient. A third allylic fluoride was detected in the crude mixture, and its structure was tentatively assigned as syn (Z)-2e (entry 1). The benzyl-substituted allylsilane anti-1f was fluorinated in 90% yield with a similar sense and level of stereocontrol (entry 2). Excellent transfer of chirality was also observed for the fluorination of anti (E)-1g featuring the primary alcohol group, but the yield was significantly lower (entry 3). Syn-1h, which was used contaminated with anti-1h [d.r. = 9:1], was fluorinated in an overall yield of 96%, delivering a mixture of four stereoisomeric allylic fluorides (entry 4). For this reaction, erosion of stereointegrity resulting from an alternative reactive conformation, a syn approach of the fluorinating reagent with respect to the silyl group, or bond rotation prior to elimination was detectable but minimal. A similar stereochemical trend was observed for the alcohol syn (E)-1i (entry 5). This chemistry offers the unique opportunity to access alkenes flanked by two allylic and stereogenic fluorinated centres upon double electrophilic fluorination of (E)-allylsilanes featuring an ester group. Although undoubtedly versatile for further functional manipulation, this structural motif is extremely rare, with only two symmetrical variants reported in the literature. [31,32] The prospect of validating a more general strategy for the preparation of both symmetrical and unsymmetrical alkenes doubly flanked by fluorinated allylic stereocentres prompted us to challenge our methodology with the preparation of the unsymmetrical difluorinated alkenoic ester 3. This compound was subsequently converted into a known symmetrical difluorinated alkene whose relative stereochemistry had been unambiguously identified by X-ray analysis. [31] This line of conjecture allowed us to verify our stereochemical assignments. We investigated the feasibility of the double fluorination with the (E)-allylsilane 4 prepared from the (E)-vinylsilane 5 [33] via a [3,3] sigmatropic rearrangement. As anticipated and much to our delight, the doubly fluorinated alkene 3 was obtained through a succession of two electrophilic fluorinations. The electrophilic α-fluorination of the ester 4 was performed by treatment with LDA at -78°C followed by the addition of N-fluorobenzenesulfonimide (NFSI). [34] The d.r. for this first fluorination was excellent (>20:1). The subsequent electrophilic fluorodesilylation of the resulting fluorinated silane 6 delivered 3 in excellent yield with no trace of side-products. In comparison with the allylsilanes 1a-i, the fluorodesilylation of 6 was more demanding and required a higher temperature to reach completion.
Under these conditions, the level of stereocontrol of the second fluorination was moderate (Scheme 3). To unambiguously confirm the stereochemistry of syn (E)-3 [the major diastereomer], this compound was converted into the known symmetrical difluorinated alkene 7 (Scheme 4). The key steps necessary to perform this conversion were a dihydroxylation, the reduction of the ester group, and the benzylation of the resulting primary alcohol. Preliminary work revealed that the order of steps was important and that protecting-group manipulations were required for a clean product outcome. The cis-dihydroxylation of 3 was performed employing NMO and catalytic OsO4 in DCM. [35] In the event, the diastereoselectivity was controlled by the two fluorine substituents. Four successive operations separated the newly formed unsymmetrical diol from 7, namely the protection of the diol as an acetonide, the reduction of the ester, the benzylation of the resulting primary alcohol, and a final deprotection step. The spectroscopic data of compound 7 were identical to those of a sample prepared independently according to the procedure reported by O'Hagan. [31] This observation establishes the relative configuration as drawn in Schemes 2 and 3 and supports our hypothesis that the sense of stereocontrol for the fluorinations of 1e-i is in line with the related nitrations reported by Panek. [28] In conclusion, the stereoselective fluorination of (E)-allylsilanes featuring a silylated stereogenic centre was found to be a useful reaction for the preparation of allylic fluorides, the silyl group acting as an efficient stereodirecting group. Notably, this methodology enables the preparation of unsymmetrical alkenes doubly flanked with fluorinated stereogenic centres. This result is significant as only symmetrical derivatives were accessible with the methods reported to date. [31,32]
2,401
0001-01-01T00:00:00.000
[ "Chemistry" ]
Directly converted patient-specific induced neurons mirror the neuropathology of FUS with disrupted nuclear localization in amyotrophic lateral sclerosis Background Mutations in the fused in sarcoma (FUS) gene have been linked to amyotrophic lateral sclerosis (ALS). ALS patients with FUS mutations exhibit neuronal cytoplasmic mislocalization of the mutant FUS protein. ALS patients' fibroblasts or induced pluripotent stem cell (iPSC)-derived neurons have been developed as models for understanding ALS-associated FUS (ALS-FUS) pathology; however, pathological neuronal signatures are not sufficiently present in the fibroblasts of patients, whereas the generation of iPSC-derived neurons from ALS patients requires relatively intricate procedures. Results Here, we report the generation of disease-specific induced neurons (iNeurons) from the fibroblasts of patients who carry three different FUS mutations that were recently identified by direct sequencing and multi-gene panel analysis. The mutations are located in the C-terminal nuclear localization signal (NLS) region of the protein (p.G504Wfs*12, p.R495*, p.Q519E): two de novo mutations in sporadic ALS cases and one in a familial ALS case. Aberrant cytoplasmic mislocalization with nuclear clearance was detected in all patient-derived iNeurons, and oxidative stress further induced the accumulation of cytoplasmic FUS in cytoplasmic granules, thereby recapitulating the neuronal pathological features identified in an autopsied ALS patient carrying mutant FUS (p.G504Wfs*12). Importantly, the FUS pathological hallmarks of the patient with the p.Q519E mutation were detected only in patient-derived iNeurons, in contrast to the predominantly nuclear FUS (p.Q519E) observed in both transfected cells and patient-derived fibroblasts. Conclusions Thus, iNeurons may provide a more reliable model of FUS mutations with a disrupted NLS for understanding FUS-associated proteinopathies in ALS. Electronic supplementary material The online version of this article (doi:10.1186/s13024-016-0075-6) contains supplementary material, which is available to authorized users. Background Fused in sarcoma (FUS) is a multifunctional DNA/RNA-binding protein involved in various aspects of cellular RNA metabolism that executes its main functions predominantly in the cell nucleus. Initially discovered as a fusion oncogene, mutations in the FUS gene resulting in FUS proteinopathies were recently linked to amyotrophic lateral sclerosis (ALS) and are responsible for ~4% of familial and ~1% of sporadic ALS cases [1][2][3]. FUS mutations cluster either in the glycine-rich region of the protein or in the RGG-rich C-terminal domain, where they disrupt the nuclear localization signal (NLS) and result in altered subcellular localization of the FUS protein. ALS-associated FUS (ALS-FUS) mutations have been reported to cause cytoplasmic mislocalization of the protein in the brain and spinal cord of ALS patients [4,5]. Moreover, cytoplasmic FUS tends to aggregate to form inclusions in degenerating motor neurons of ALS patients [6][7][8]. As a consequence, both a toxic gain-of-function in the cytoplasm and a loss-of-function in the nucleus are proposed to be causative events in ALS development [9,10]. Key pathological features have been documented based on immunocytochemical studies of cultured fibroblasts from ALS patients or immunohistological analysis of autopsy samples [11,12]. These studies revealed abnormal cytoplasmic mislocalization of the FUS protein in ALS patients with FUS mutations in its NLS.
When modeled in fibroblasts, however, mutant FUS proteins were predominantly detected in the nucleus, with minimal association with the pathological signatures detected for those mutations in vivo [11,13,14]. Patient-derived induced pluripotent stem cells (iPSCs), with their ability to differentiate into neural cells, were found to be suitable for studying ALS-FUS pathology [15], but neuronal induction and differentiation processes using iPSCs require tedious and labor-intensive procedures. Hence, it would be advantageous to develop rapid and simple FUS-associated ALS patient-derived cell models to study ALS-related neuronal pathology. To overcome the limitations associated with the current cell modeling systems, we examined FUS pathology in a more disease-relevant cell model. We used our previously described method of repressing a polypyrimidine-tract-binding (PTB) protein to directly convert patient fibroblasts carrying FUS mutations, and those from age-matched healthy controls, into functional neurons (iNeurons) [16]. We have recently identified FUS mutations (p.G504Wfs*12, p.R495*, and p.Q519E) by direct sequencing and multi-gene panel testing [17][18][19]. In this study, we examined the pathophysiological and biochemical properties of the three different FUS mutations at the NLS region. Analysis of brain and spinal cord autopsy samples from the FUS (p.G504Wfs*12) patient demonstrated the expected pathologic features, including nuclear clearance and cytoplasmic accumulation of FUS in neurons. To generate a cell model that recapitulates the key pathological features found at autopsy, we compared the cellular localization and aggregation-prone properties of endogenous FUS in fibroblasts, HEK-293 cells, rat primary cortical neurons, and directly converted iNeurons in the presence or absence of stress. Directly converted iNeurons from patient fibroblasts were the only model that recapitulated the mutant FUS-associated neurological pathology observed in the autopsied brain and spinal cord. Moreover, we showed that the FUS neuropathology of the familial ALS patient with the p.Q519E mutation could be demonstrated in directly converted iNeurons but not in transfected cells or patient-derived fibroblasts. These findings suggest that directly converted iNeurons have the potential to become reliable disease-relevant models for dissecting the pathophysiologies of FUS-related proteinopathies in ALS. Results and discussion Clinical and genetic characteristics of three ALS patients harboring FUS mutations in the NLS region Among ten diverse, recently identified FUS mutants or variants [17,19], two de novo FUS mutants (p.G504Wfs*12, p.R495*) confirmed by a trio study in sporadic ALS [18] and one FUS variant (p.Q519E) in familial ALS [19] were included in this study. The residues of the three mutants are located in the C-terminal region containing the nuclear localization signal (NLS). As diagrammed in Fig. 1, the Q519E mutation is a missense mutation in the C-terminal NLS region; the p.G504Wfs*12 mutation causes a frame shift in exon 15, leading to a truncated FUS; and the p.R495* mutation creates a premature stop codon that eliminates the NLS. The p.R495* mutation is associated with an aggressive clinical phenotype of ALS [20][21][22], and p.G504Wfs*12 is a pathogenic truncation mutant associated with sporadic ALS [18,23].
In order to investigate whether the FUS (p.Q519E) variant is significant in disease pathogenesis, we performed a structural analysis of the mutation with Transportin-1 (Protein Data Bank, PDB ID: 4FDD) (Additional file 1: Figure S1). FUS is a nuclear protein whose nuclear import is mediated by the interaction between Transportin-1 and the C-terminal NLS region of FUS [24,25]. Hence, we analyzed the hydrogen bonding pattern of the FUS-Transportin-1 complex and observed that one hydrogen bond involves the FUS Q519 residue. The distance between the acceptor atom (oxygen; atom type OE1) of E509 from Transportin-1 and the donor atom (nitrogen; atom type NE2) of Q519 from FUS was measured as 3.21 Å. Since the experimental structure has no hydrogen atoms, the angle of the hydrogen bond was measured between the acceptor (OE1 of E509 from Transportin-1), the donor (NE2 of Q519 from FUS), and the prior atom connected to the donor (CD of Q519 from FUS), which comes to 134.7°. This indicates a possible hydrogen bond within the FUS-Transportin-1 complex. If Q519 on FUS is mutated to E519, the length of the side chain is decreased by one carbon (from 4 to 3) and its polar property changes from neutral to negative. Consequently, the Q519 (FUS)-E509 (Transportin-1) hydrogen bond found in the wild type will disappear in the Q519E mutant. In addition, the negative-negative repulsion (E519-E509) in the Q519E mutant may impair FUS-Transportin-1 binding, thus supporting the significance of the FUS (p.Q519E) variant in disease pathogenesis. Detailed clinical and epidemiological characteristics of the three ALS patients with different FUS mutations, one sporadic ALS patient, and four healthy controls enrolled in this study are summarized in Table 1.
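The implicit hydrogen-bond test used here reduces to two geometric checks on heavy-atom coordinates, and a short sketch can make the 3.21 Å / 134.7° numbers concrete. The following Python fragment is illustrative only: the coordinates are hypothetical stand-ins chosen to reproduce the reported values (the real ones would come from PDB 4FDD), and the thresholds follow the loose criteria stated in the Methods (donor-acceptor distance below 5 Å, angle above 90°).

```python
# A minimal numpy sketch of the implicit hydrogen-bond check (no hydrogens):
# distance and angle criteria as stated in the Methods.
import numpy as np

def hbond_check(acceptor, donor, donor_neighbor, d_max=5.0, angle_min=90.0):
    """Distance acceptor-donor and angle acceptor-donor-(bonded heavy atom)."""
    d = np.linalg.norm(acceptor - donor)
    v1 = acceptor - donor                 # donor -> acceptor
    v2 = donor_neighbor - donor           # donor -> bonded heavy atom (e.g. CD)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return d, angle, (d < d_max and angle > angle_min)

# Hypothetical coordinates standing in for OE1 (E509, Transportin-1),
# NE2 (Q519, FUS) and CD (Q519, FUS), chosen to reproduce the reported
# 3.21 A distance and ~135 deg angle.
oe1 = np.array([0.00, 0.00, 0.00])
ne2 = np.array([3.21, 0.00, 0.00])
cd  = np.array([4.25, 1.04, 0.00])

d, angle, ok = hbond_check(oe1, ne2, cd)
print(f"distance = {d:.2f} A, angle = {angle:.1f} deg, H-bond: {ok}")
```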
FUS pathology in ALS-FUS patient brain and spinal motor neurons Human autopsy samples were used to reveal the distribution of FUS in the brain and spinal cord of the FUS (p.G504Wfs*12) patient compared to a normal control (CTL 4) and a sporadic ALS patient without any known mutation. Immunohistochemical profiles demonstrated that wild-type FUS was confined predominantly to the nucleus in the majority of neurons in the control brain. A similar distribution of FUS immunoreactivity was also seen in the sporadic ALS patient. In contrast, prominent cytoplasmic staining or decreased nuclear staining of FUS, with ring-like perinuclear inclusions, was observed in the FUS (p.G504Wfs*12) case (Fig. 2a). To confirm the cytoplasmic accumulation of FUS in neuronal cells from the FUS (p.G504Wfs*12) patient, we performed double-label immunohistochemistry for NeuN (a neuronal nuclei marker) and FUS. This demonstrated co-labeling of NeuN in the nucleus and the mutant FUS in the cytoplasm of the same neurons from the ALS-FUS patient, in contrast to the localization of the FUS protein in the nucleus of neurons from the healthy control and the sporadic ALS patient (Fig. 2b, Additional file 2: Figure S2). We also determined the pathogenic features of the mutant FUS in spinal cord motor neurons. Consistent with the findings in the precentral motor cortex, immunohistochemistry for FUS in NeuN-positive cells revealed the same pathological feature in the ventral horn of the spinal cord (Fig. 2c, d). Interestingly, FUS was predominantly nuclear in the postcentral gyrus and dorsal horn neurons of the FUS (p.G504Wfs*12) patient, indicating that FUS abnormalities are observed in the motor system to a greater extent than in the patient's sensory neurons (Additional file 3: Figure S3). Fig. 2 Cytoplasmic incorporation of FUS is present in ALS-FUS patient brain and spinal cords. a DAB staining depicts cytoplasmic neuronal inclusions of FUS (as indicated by their morphology) in the precentral gyrus of the FUS (p.G504Wfs*12) patient (bottom) compared to the nuclear staining of FUS in a normal control (CTL 4, top) and a sporadic ALS patient (middle). Prominent cytoplasmic or decreased nuclear staining of FUS with ring-like perinuclear inclusions was observed in the motor neurons of the ALS patient. The enlarged images are shown in the right panels. Scale bars = 10 μm. b FUS pathology was confirmed by double-label immunofluorescence for FUS (green) and NeuN (red) in a normal control (top), the sporadic ALS patient (middle), and the FUS (p.G504Wfs*12) patient (bottom). The boxed region in the left panel is enlarged in the right panels. Note that the cells expressing cytoplasmic FUS in the normal control are microglia (Additional file 2: Figure S2). Cells were counterstained with the nuclear marker DAPI (blue). Scale bars = 50 μm for the merged left panels and 7.5 μm for the right panels. c The ventral horn of the cervical spinal cord sections from the normal control (top), the sporadic ALS patient (middle), and the FUS (p.G504Wfs*12) patient (bottom) were compared. The same pathological features were observed by DAB staining in the spinal cord of the FUS (p.G504Wfs*12) patient. Scale bars = 10 μm. d The corresponding sections were processed for double-label immunofluorescence. FUS pathology was confirmed by FUS (green) and NeuN (red) staining. Cells were counterstained with the nuclear marker DAPI (blue). Scale bars = 10 μm. This is the first report of FUS (p.G504Wfs*12) pathology in autopsied ALS samples. Endogenous mutant FUS pathology in primary patient fibroblasts The residues of the three mutants (p.G504Wfs*12, p.R495*, p.Q519E) are all located in the C-terminal NLS-containing domain of FUS. To examine the presence of ALS-FUS pathology in ALS patient fibroblasts, punch skin biopsies were obtained from normal controls and ALS patients to isolate their fibroblasts. Primary fibroblasts from healthy individuals (CTL 1, 2, and 3) showed endogenous FUS entirely in the nucleus (Fig. 3a, left panels). Contrary to the report showing endogenous neuronal FUS harboring the G504Wfs*12 or R495* mutation in the cytoplasm with decreased staining in the nucleus [26], we observed more abundant nuclear immunoreactivity of FUS, with somewhat diffuse cytoplasmic immunoreactivity, in patient-derived fibroblasts that harbor either the G504Wfs*12 or the R495* mutation. Surprisingly, FUS (p.Q519E) did not show any cytosolic mislocalization at all. These results suggested either that FUS (p.Q519E) does not contribute to the pathogenic potential of ALS or that its mislocalization failed to be captured in the fibroblast model. Stress agents are known to induce cytoplasmic granules, and various ALS-causing FUS mutations have previously been reported to be recruited to those stress granules under stress conditions [21]. Sodium arsenite (referred to as arsenite) is widely used to induce oxidative stress in cells. To determine whether the cytoplasmic FUS protein in the patient fibroblasts could be recruited into stress granules, we stressed cells with arsenite and observed the shift of dispersed FUS G504Wfs*12 or R495* proteins to cytoplasmic stress granules, similar to the response of the eukaryotic translation initiation factor 4G (eIF4G) (Fig. 3a, right panels, and Fig. 3b, c).
Again, the Q519E mutant remained in the nucleus under such stress conditions. In addition to oxidative stress induced by sodium arsenite, we tested hyperosmotic stress induced by 0.4 M sorbitol for 1 hr [7]. In response to sorbitol stress, the amount of FUS in the cytoplasm increased with a corresponding decrease in the nucleus. Importantly, the accumulation of cytoplasmic FUS granules in mutant fibroblasts was clearly much greater than that in healthy controls (Additional file 4: Figure S4). Subcellular fractionation of fibroblasts was performed to further investigate the localization of endogenous FUS. In agreement with the immunofluorescence results, the shorter G504Wfs*12 and R495* mutants could be distinguished from the longer wild-type FUS by Western blotting, showing that the mutants were more detectable in the cytosol while the wild type was detected exclusively in the nucleus. In contrast, the Q519E mutant was detected only in the nucleus (Fig. 3d). These data suggest that patient-derived fibroblasts may not fully reflect ALS pathology associated with disease-causing mutations in FUS. Mutant FUS pathology in transfected HEK-293 cells and primary neurons We next examined whether the mutant FUS behaviour of patient fibroblasts carrying the Q519E mutation, i.e., predominantly nuclear FUS staining, was also observed in transfected cells. We overexpressed cDNA encoding an N-terminally green fluorescent protein (GFP)-tagged wild-type or mutant FUS in HEK-293 cells. The transiently transfected G504Wfs*12 and R495* mutants showed both nuclear and cytosolic distribution, whereas the Q519E mutant, like the wild-type FUS, resided predominantly in the nucleus (Fig. 3e, left panels). To determine whether the cytoplasmic mutant FUS could be incorporated into stress granules under oxidative stress conditions, we exposed the cells to arsenite. Both the G504Wfs*12 and R495* mutants showed incorporation of their cytoplasmic FUS into eIF4G-containing granules, but the Q519E mutant still behaved like the wild-type FUS (Fig. 3e, right panels). The neuropathology of ALS is characterized by degenerating neurons in the brain and spinal cord, coincident with neuronal cytoplasmic inclusions of ALS-associated FUS proteins [27]. To determine the distribution of the wild-type or mutant FUS constructs in neurons, we cultured cortical neurons from embryonic day 18 rats and transfected them with the GFP-tagged FUS constructs. The neurons were first cultured for 21 days and then transfected for 48 hrs before fixation. As in HEK-293 cells, both the G504Wfs*12 and R495* mutants resided largely in the cytosol, which is contrary to the patterns observed in patient fibroblasts (Fig. 3f, left panels). When the rat cortical neurons were exposed to oxidative stress, the cytosolic FUS (p.G504Wfs*12 and p.R495*) was further incorporated into eIF4G-positive stress granules (Fig. 3f, right panels). Interestingly, both the Q519E mutant and the wild-type FUS continued to reside in the nucleus before and after stress induction. These findings suggest that neurons from murine models may fail to reflect certain neuronal pathologies observed in human ALS-FUS brain or spinal cord samples. Moreover, overexpressed FUS may also cause deleterious effects that are unrelated to ALS pathologies in transfected cells [28].
Endogenous mutant FUS that recapitulates autopsied ALS pathology is iNeuron-specific To develop more accurate disease models for ALS, we trans-differentiated ALS patient fibroblasts into induced neurons (iNeurons) by repressing a single RNA-binding polypyrimidine-tract-binding (PTB) protein. To generate human iNeurons, we infected both patient and control fibroblasts with a lentivirus repressing PTBP1, according to and modified from our recently published methods [16]. The subsequent culture conditions are provided in the schematic overview in Fig. 4a. In confocal cellular immunostaining assays, the cells exhibited typical neuronal morphology, and nearly all cells were strongly positive for TUJ1 (the early neuronal marker βIII-tubulin). Within a day of neuronal induction, the cells were positive for TUJ1, and from days 5-21, an increase in MAP2 (a neuronal dendrite marker) immunostaining was observed (Fig. 4b). The matured morphology of iNeurons with dendritic branching was confirmed by MAP2, NeuN (a neuronal nuclei marker), and synapsin (a neuronal synapse marker) immunostaining at day 10 of neuronal induction (Fig. 4c). The percentages of TUJ1-positive iNeurons from the controls and the three ALS patients with different types of FUS mutations were similar (Fig. 4d). Fig. 3 Endogenous FUS is partially mislocalized in patient fibroblasts with the G504Wfs*12 and R495* mutations. a Primary fibroblast cultures examined by confocal microscopy. A representative control image shows intense staining for FUS (green) in the nuclei (DAPI) and the stress granule marker eIF4G (red) in the cytoplasm. Patients with the G504Wfs*12 and R495* mutations near the NLS region also show a majority of the FUS protein in the nuclei with a slight increase of cytoplasmic FUS. Under oxidative stress conditions, cytoplasmic FUS-positive inclusion bodies of the G504Wfs*12 and R495* mutants co-localized with eIF4G stress granules (red). Cells were counterstained with the nuclear marker DAPI (blue). Scale bars = 25 μm. Bar graphs represent b the numbers of stress granules and c the numbers of FUS-positive stress granules (SGs). Data are from three experiments (mean ± SEM, n = 20). One-way ANOVA followed by Tukey multiple comparisons test; **p < 0.001; N.S., not significant. d Cell fractionation analysis of cultured fibroblasts from ALS patients and controls showing increased cytoplasmic expression of FUS in the G504Wfs*12 and R495* patients compared with a representative control and the Q519E patient. The upper band of FUS in the nuclear fraction of the FUS (p.R495*) patient fibroblasts presumably represents the allele without a mutation, and the lower band indicates the allele with the truncated R495* fragment. Lamin B2 and GAPDH are loading controls for the nuclear and cytoplasmic fractions, respectively. e HEK-293 cells were transfected with green fluorescent protein (GFP)-tagged wild-type FUS or FUS containing the ALS-associated mutations and treated with vehicle or 0.5 mM arsenite for 30 min. The cells were then processed for immunofluorescence analysis. Localization of GFP-tagged wild-type FUS or the indicated FUS mutants (green) and eIF4G stress granules (red) is shown. Cytosolic eIF4G co-localizes with FUS aggregates after oxidative stress. GFP (green) and eIF4G (red) show an increased overlap between mutant FUS (p.G504Wfs*12, p.R495*) and eIF4G as compared to wild-type FUS (WT) and eIF4G. Nuclei are shown by DAPI staining. Scale bars = 10 μm.
f Rat E18 primary cortical neurons were cultured for 21 days and transfected with constructs expressing wild-type FUS or the ALS-associated FUS mutants (green). Redistribution of mutant FUS aggregates (green) into eIF4G (red) granules under oxidative stress is demonstrated. Nuclei are shown by DAPI staining. Scale bars = 25 μm. In control iNeurons, endogenous FUS was predominantly nuclear (Fig. 5a and b, left panels). In contrast, the patient iNeurons with the G504Wfs*12 or R495* mutation exhibited reduced endogenous FUS immunoreactivity in the nucleus along with increased cytoplasmic FUS. Considering that FUS was predominantly distributed in the nucleus of patient fibroblasts, FUS expression in the iNeuron models seems to mirror the FUS neuropathology found in ALS patients more closely than that observed in patient fibroblasts. Intriguingly, the FUS (p.Q519E) patient also showed cytoplasmic localization of FUS with less nuclear distribution in the iNeuron model. To determine whether the cytosolic FUS (p.Q519E) could be recruited to stress granules in iNeurons, we treated iNeurons with arsenite and, in line with the results for the cytoplasmic FUS in the G504Wfs*12 and R495* patient iNeurons, we observed co-localization of the FUS (p.Q519E) mutant with arsenite-induced stress granules, which was further validated by the detection of co-localization with the Ras-GTPase-activating protein SH3 domain binding protein (G3BP), another known component of stress granules (Fig. 5b, right panels). Co-localization of the cytosolic FUS inclusions with eIF4G under oxidative stress was also confirmed and quantified (Additional file 5: Figure S5, and Fig. 5c, d). These findings suggest that, unlike patient-derived fibroblasts and transfected cell models, only patient iNeurons are able to fully capture the neuropathology of FUS mutations with a disrupted NLS region. Conclusions Mutations in FUS have been strongly implicated as a genetic cause of ALS [2,29]. In this study, we performed functional analysis of three different FUS mutations found in ALS patients, including the two de novo mutations (p.G504Wfs*12, p.R495*) we previously identified by a trio study in sporadic ALS [30] and a novel variant (p.Q519E) identified by multi-gene panel testing in familial ALS (Table 1). All of these mutations are located in the C-terminal region that contains the nuclear localization signal (NLS). FUS accumulation in neuronal cytoplasmic inclusions, along with a degree of nuclear clearance, is a histopathological hallmark of patients with FUS-mediated ALS, especially for mutations located at the NLS region [2,31]. Consistently, we show for the first time that FUS (p.G504Wfs*12) exhibited the accumulation of cytoplasmic FUS and the depletion of nuclear FUS in patient brain and spinal cord motor neurons. The autopsy results demonstrated typical ALS-FUS features of cytoplasmic aggregation and nuclear clearance of FUS in neurons, which have also been described in autopsies of patients with other FUS mutations in the NLS region. To date, cultured patient fibroblasts have been used as cellular models for disease studies. Induced pluripotent stem cell (iPSC)-derived neurons from patients with a FUS mutation appear to provide a suitable model for understanding the pathophysiological mechanisms of FUS mutations; however, one of the problems with skin fibroblast models is that some common FUS-associated pathological hallmarks found in autopsy cases are not consistently identified in patient fibroblasts [13].
Although iPSC-based models are useful for identifying the molecular and cellular defects underlying neuronal abnormality and are instrumental for in vitro drug screening for therapeutic effects, the process of generating iPSC-derived neurons from human fibroblasts is intricate. To develop disease models more efficiently, we directly converted the fibroblasts from patients with FUS mutations into induced neurons (iNeurons) by repressing a polypyrimidine-tract-binding (PTB) protein. As PTB is naturally down-regulated during neuronal induction in development, PTB repression enhanced the neurogenesis program in the fibroblasts [16]. As shown in the present data, the iNeuron is a rapid and highly disease-relevant cell model. Compared with the predominantly nuclear FUS distribution in patient fibroblasts carrying mutations in the NLS region, iNeurons demonstrated a clear increase in the cytoplasmic distribution and a concurrent decrease in the nuclear distribution of mutant FUS. Moreover, cytosolic aggregates of FUS could be induced under oxidative stress conditions. The analysis of iNeurons from the FUS (p.G504Wfs*12) patient recapitulated all key features of FUS pathology found in the patient's brain and spinal cord motor neurons, thus confirming iNeurons as a more disease-relevant in vitro model that accurately mirrors the disease pathology of the patient. Intriguingly, the FUS (p.Q519E) patient, whose endogenous FUS was distributed only in the nucleus in fibroblast models and transiently transfected cells, demonstrated cytosolic mislocalization and aggregation of FUS only in the iNeuron model. These findings further support this new model as a useful research tool for studying ALS-FUS pathogenesis. FUS proteinopathies in ALS neuronal degeneration have been poorly understood due to the lack of clinically relevant cell models for the disease. The identification of disease-causing genes and the development of patient-specific and disease-relevant cell models for functional analysis are critical for advancing our understanding of the pathophysiology of ALS. Studies using patient iNeurons may reveal additional features of FUS pathology in the cytoplasm that may have escaped previous studies on patient fibroblasts [11]. Similarly, FUS mutations for which neither patient fibroblasts nor cDNA-transfected cells display typical FUS pathology may have distinct pathologic features, which can now be dissected in iNeurons. ALS-FUS patient fibroblast models present endogenous cytoplasmic FUS incorporation into stress granules; however, FUS in patient fibroblasts is predominantly expressed in the nucleus. Murine neurons transiently transfected with mutant FUS constructs revealed both a decrease in nuclear and an increase in cytosolic FUS, and, upon stress, cytosolic FUS could be induced into stress granules. Yet, murine neurons may be insufficient to capture all key mechanisms of neuronal pathology in human brain or spinal cord. The development of more disease-relevant experimental models from ALS patients that recapitulate the characteristics of neuronal dysfunction found in human post-mortem tissues will open new doors to both understanding the pathophysiologic mechanisms of ALS-FUS and developing new therapeutic strategies. Therefore, simple, reliable, and reproducible iNeuron models are promising in that they may greatly accelerate ALS research. Subjects Three ALS patients with different types of FUS mutations were enrolled in this study.
We have recently identified FUS mutations (p.G504Wfs*12, p.R495*, and p.Q519E) by direct sequencing and multi-gene panel testing [17,19,32]. These patients showed onset at ages 27 to 34 with varying disease progression. Skin fibroblasts were obtained from these ALS patients with a disrupted NLS region and from three healthy controls. Autopsy tissues were obtained from two patients: one ALS-FUS patient (p.G504Wfs*12) and one sporadic ALS patient without any known mutation in FUS, C9orf72, SOD1, ALS2, SPG11, UBQLN2, DAO, GRN, SQSTM1, SETX, MAPT, TARDBP, or TAF15. The clinical and genetic findings are summarized in Table 1. The study protocol was approved by the Institutional Review Board of Hanyang University Hospital, and written informed consent was obtained from all patients involved in the study (IRB# 2011-R-63). Structural modelling For the structural analysis, we sought an applicable protein structure in the PDB (ID: 4FDD), which contains the Transportin-1 and FUS domains. Because the FUS domain includes the Q519 residue, the influence of the Q519E mutation on the complex can be examined. The PDB complex consists of Transportin-1 (chain A: residue numbers 371 to 890) and FUS (chain B: residue numbers 498 to 526). The missing part (residues 321 to 370) and the N-terminal region (residues 1 to 320) of Transportin-1 were removed from the original PDB structure because they are not relevant to direct interactions with the FUS domain. The missing FUS residues 498 to 506 were generated and minimized to find their local minima while keeping the remaining atomic coordinates unchanged. To examine the effect of the Q519E mutation on FUS-Transportin-1 binding, a hydrogen bonding analysis was performed between the FUS and Transportin-1 structures. Because the structure has no hydrogen atoms, we used an implicit hydrogen bonding analysis with the following loose criteria: a bond distance below 5 Å between the acceptor and donor atoms, and an angle above 90° among the acceptor, the donor, and the prior atom connected to the donor atom. The analysis was performed in CHARMM (Chemistry at Harvard Macromolecular Mechanics) [33], and the structure was visualized using Jmol (an open-source Java viewer for chemical structures in 3D; http://www.jmol.org/). Immunohistochemistry and immunofluorescence Autopsied samples of brain and spinal cord were obtained from one ALS-FUS patient (p.G504Wfs*12), one sporadic ALS patient, and one healthy control. Immunohistochemistry was performed on 5 μm thick paraffin sections. Tissues were deparaffinized, rehydrated in serial changes of xylene and graded ethanol, and autoclaved for 10 min in 10 mM citric acid, pH 8.0. Sections were then blocked with 10% normal goat serum (vol/vol) in PBS. For immunostaining, rabbit polyclonal antibodies reactive to FUS (Abnova) were applied on the precentral motor cortex and postcentral gyrus, and mouse antibodies against FUS (Proteintech) were used on spinal cord tissue. The sections were colorimetrically developed using the 3,3'-diaminobenzidine (DAB) substrate kit (Vector Labs) for 1 min, counterstained with haematoxylin (Sigma-Aldrich), dehydrated, and coverslipped in Permount medium. Images were acquired with a Leica DM5000B microscope. Fig. 5 Endogenous FUS is mislocalized to the cytoplasm and is incorporated into cytoplasmic stress granules in response to arsenite in patient iNeurons.
a A representative control shows intense staining for FUS (green) in the nuclei (DAPI) in TUJ1-positive (red) iNeurons at day 10 of neuronal induction, whereas the patients show a majority of the FUS protein in the cytoplasm. Cells were counterstained with the nuclear marker DAPI (blue). Scale bars = 50 μm. b Confocal images of vehicle-treated iNeurons (left panels) compared with cells treated with 0.5 mM arsenite for 30 min (right panels) at day 10 are shown. A representative control shows FUS protein predominantly localized to the nuclei. The ALS-FUS patient with the Q519E mutation recapitulated the FUS neuropathology only in iNeurons: iNeurons from the patient show a majority of the FUS protein (green) in the cytoplasm. In response to oxidative stress conditions, cytoplasmic FUS-positive inclusion bodies (green) in iNeurons were co-localized with G3BP stress granules (red). Cells were fixed and probed by immunofluorescence; nuclei are shown by DAPI (blue). Scale bars = 25 μm. Bar graphs represent c the numbers of stress granules and d the numbers of FUS-positive stress granules (SGs). Data are from three experiments (mean ± SEM, n = 20). One-way ANOVA followed by Tukey multiple comparisons test; **p < 0.001; N.S., not significant. Plasmids and site-directed mutagenesis N-terminally GFP-tagged wild-type human FUS cDNA was cloned into the pReceiver vector (Genecopoeia). To make the mutant constructs (p.Q519E, p.G504Wfs*12, p.R495*), in vitro mutagenesis of the GFP-tagged FUS cDNA was conducted using the EZchange™ site-directed mutagenesis kit (Enzynomics) according to the manufacturer's protocol. HEK-293 cells and primary rat neurons were transiently transfected with GFP-tagged wild-type or mutant human FUS cDNA using Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. After 48 hrs, the cells were fixed, in the presence or absence of stress, for immunofluorescence staining as described below. For oxidative stress induction, vehicle (water) or a 1 M stock solution of sodium arsenite (Sigma-Aldrich) dissolved in water was added to the media at a final concentration of 0.5 mM for up to 30 min. For hyperosmotic stress induction, vehicle (growth media) or 0.4 M sorbitol (Sigma-Aldrich) was dissolved directly into the growth media for up to 1 hr. Conversion of human skin fibroblasts to iNeurons Fibroblasts were obtained from forearm skin with a punch biopsy (Table 1). Fibroblasts were cultured and maintained in DMEM supplemented with 20% FBS, non-essential amino acids (all from Gibco), sodium bicarbonate (Sigma-Aldrich), and 1% (vol/vol) Penicillin/Streptomycin/Fungizone (Cellgro). In all experiments, passage-matched fibroblasts (passages 3-5) were used. Fibroblasts were seeded at a density of 1 × 10⁴ cells/cm² and used for experiments after cell synchronization by serum starvation at matched time points. Immunocytochemistry and confocal microscopy Fibroblasts, HEK-293 cells, primary rat neurons, and iNeurons were washed with 1× PBS, fixed with 4% paraformaldehyde (PFA) for 15 min at room temperature, and then washed three more times with PBS. Cells were permeabilized by incubation in 0.3% Triton X-100 for 10 min at room temperature, washed with PBS, and then blocked for 1 hr in 5% normal goat serum (Vector Labs). Cells were incubated with primary antibodies for 2 hrs at room temperature, washed three times with 1× PBS, and incubated with secondary antibodies for 1 hr at room temperature.
After three additional washes with 1× PBS, nuclei were stained with DAPI. Coverslips were mounted on glass slides with Fluoromount-G (SouthernBiotech). The primary antibodies used included mouse monoclonal antibodies against the C-terminus of FUS (Santa Cruz Biotechnology), FUS (Proteintech), and G3BP (BD Transduction Laboratories), and rabbit polyclonal antibodies against eIF4G (Santa Cruz Biotechnology) and FUS (Abnova). For neuronal cell markers, a mouse monoclonal antibody reactive to β-tubulin III (TUJ1; Covance) and rabbit polyclonal antibodies to MAP2 (Cell Signaling Technology), NeuN (Millipore), and Synapsin I (Chemicon) were used. Secondary antibodies were Alexa Fluor 488-conjugated and/or TRITC-conjugated anti-mouse or anti-rabbit antibodies (Gibco). Images were acquired with a Leica TCS SP5 confocal microscope. The stress granules were counted manually. Twenty cells from each patient's fibroblasts or iNeurons were chosen based on DAPI staining of nuclei (n = 3). Significance between stress granule formations was calculated using one-way ANOVA followed by the Tukey multiple comparisons test. Nuclear-cytoplasmic fractionation and immunoblot analysis Cell fractionation was performed using the NE-PER Nuclear and Cytoplasmic Extraction Reagents kit (Thermo Fisher Scientific) according to the manufacturer's protocol. Nuclear and cytoplasmic extracts from fibroblasts were analyzed by Western blotting. Equal amounts of protein from each sample were separated by 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to a PVDF membrane (GE Healthcare). Membranes were blocked with 5% skim milk. The primary antibodies used were mouse monoclonal antibodies against Lamin B2 (Abcam) and rabbit polyclonal antibodies against FUS (Abnova) and GAPDH (Santa Cruz Biotechnology). The membranes were probed with horseradish peroxidase-conjugated secondary antibodies (Santa Cruz Biotechnology) and developed using West-Q Chemiluminescent Substrate Plus Kits (GenDEPOT). Additional files Additional file 1: Figure S1. Additional file 2: Figure S2. FUS is distributed in the cytoplasm in microglia but is absent in astrocytes. FUS (green) is (a) apparently not expressed in GFAP-positive astrocytes (red) and is (b) cytoplasmic in Iba-1-positive microglia (red, arrows) in the precentral gyrus of a normal control (CTL 4, top), the sporadic ALS patient (middle), and the FUS (p.G504Wfs*12) patient (bottom). The boxed region in the left panel is enlarged in the right panels. Cells were counterstained with the nuclear marker DAPI (blue). Scale bars = (a) 50 μm for the merged left panels and 10 μm for the right panels, and (b) 25 μm. (PDF 594 kb) Additional file 3: Figure S3. Additional file 4: Figure S4. Endogenous FUS is partially redistributed to the cytoplasm in response to sorbitol. Primary fibroblasts of a representative control and the patient with the Q519E mutation show intense staining for FUS (green) in the nuclei (DAPI) and the stress granule marker eIF4G (red) in the cytoplasm. Patients with the G504Wfs*12 and R495* mutations also show a majority of the FUS protein in the nuclei with a slight increase of cytoplasmic FUS (left panels). Cells treated with 0.4 M sorbitol for 1 hr are shown in the right panels. In response to sorbitol stress, a slight decrease of nuclear FUS and an increase of cytoplasmic FUS-positive inclusion bodies co-localized with eIF4G stress granules were observed.
The accumulation of cytoplasmic FUS granules in mutant fibroblasts was much greater than that in healthy controls. Cells were counterstained with the nuclear marker DAPI (blue). Scale bars = 10 μm. (PDF 821 kb) Additional file 5: Figure S5. Endogenous FUS cytoplasmic incorporation into the stress granule marker eIF4G in response to arsenite in patient iNeurons. Immunocytochemistry performed on vehicle-treated iNeurons (left panels) compared with cells treated with 0.5 mM arsenite for 30 min (right panels) at day 10 is shown. A representative control shows FUS protein predominantly localized to the nuclei. All three ALS-FUS patients show a majority of the FUS protein (green) in the cytoplasm of iNeurons. Cytoplasmic FUS-positive inclusion bodies (green) were detectable in eIF4G-positive stress granules (red) in patients. Cells were fixed and probed by immunofluorescence; nuclei are shown by DAPI (blue). Scale bars = 25 μm.
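As a side note on the quantification described in the Methods, the one-way ANOVA followed by Tukey's multiple-comparison test on per-cell stress-granule counts can be sketched in a few lines of Python. The group means and counts below are invented placeholders (only the design of n = 20 cells per group follows the paper), and scipy.stats.tukey_hsd requires SciPy ≥ 1.8.

```python
# A minimal sketch of the reported statistics: one-way ANOVA followed by
# Tukey's test on per-cell stress-granule (SG) counts; counts are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.poisson(2, 20)   # hypothetical SG counts per cell, 20 cells/group
g504w   = rng.poisson(8, 20)
r495x   = rng.poisson(7, 20)
q519e   = rng.poisson(2, 20)

f, p = stats.f_oneway(control, g504w, r495x, q519e)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.2e}")

tukey = stats.tukey_hsd(control, g504w, r495x, q519e)  # all pairwise comparisons
print(tukey)
```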
8,045.8
2016-01-22T00:00:00.000
[ "Biology", "Medicine" ]
Reciprocal regulation of integrin β4 and KLF4 promotes gliomagenesis through maintaining cancer stem cell traits Background The dismal prognosis of patients with glioma is largely attributed to cancer stem cells, which play pivotal roles in tumour initiation, progression, metastasis, resistance to therapy, and relapse. Therefore, understanding how these populations of cells maintain their stem-like properties is critical for developing effective glioma therapeutics. Methods RNA sequencing analysis was used to identify genes potentially involved in regulating glioma stem cells (GSCs). Integrin β4 (ITGB4) expression was validated by quantitative real-time PCR (qRT-PCR) and immunohistochemical (IHC) staining. The role of ITGB4 was investigated by flow cytometry, mammosphere formation, transwell, colony formation, and in vivo tumorigenesis assays. The reciprocal regulation between integrin β4 and KLF4 was investigated by chromatin immunoprecipitation (ChIP), dual-luciferase reporter assay, immunoprecipitation, and in vivo ubiquitylation assays. Results In this study, we found that ITGB4 expression was increased in GSCs and human glioma tissues. Upregulation of ITGB4 was correlated with glioma grade. Inhibition of ITGB4 in glioma cells decreased the self-renewal abilities of GSCs and suppressed the malignant behaviours of glioma cells in vitro and in vivo. Further mechanistic studies revealed that KLF4, an important transcription factor, directly binds to the promoter of ITGB4, facilitating its transcription and contributing to increased ITGB4 expression in glioma. Interestingly, this increased expression enabled ITGB4 to bind KLF4, thus attenuating its interaction with its E3 ligase, the von Hippel-Lindau (VHL) protein, which subsequently decreases KLF4 ubiquitination and leads to its accumulation. Conclusions Collectively, our data indicate the existence of a positive feedback loop between KLF4 and ITGB4 that promotes GSC self-renewal and gliomagenesis, suggesting that ITGB4 may be a valuable therapeutic target for glioma. Electronic supplementary material The online version of this article (10.1186/s13046-019-1034-1) contains supplementary material, which is available to authorized users. Background Glioma is the most common primary malignant brain tumour of the central nervous system. Despite great advances in therapeutic techniques for treating glioma, such as surgery, radiotherapy, and chemotherapy, patients with glioblastoma (GBM) still have an average survival of only 12-15 months [1][2][3][4]. Accumulating evidence suggests that gliomas are functionally heterogeneous and harbour a subset of tumour cells with stem cell characteristics, including the preferential expression of stem cell markers, enhanced self-renewal ability, and multi-lineage differentiation potential. These cells are termed glioma stem cells (GSCs) and are highly capable of initiating tumour growth or repopulating tumours after treatment [5][6][7][8]. Recently, studies have increasingly demonstrated that GSCs are highly adaptive to challenging conditions such as nutrient restriction, hypoxia, or chemotherapeutic agent exposure, and actively interact with microenvironmental factors to evade antitumour immune responses, promote tumour angiogenesis, and drive tumour invasion. Because of these characteristics, GSCs are considered to be responsible for tumour recurrence and the poor outcomes of glioma patients [9][10][11].
Therefore, investigation of the key regulators involved in maintaining these GSC traits is of great importance for understanding glioma progression and developing novel treatment approaches. Integrin β4 (ITGB4), also known as CD104, is a laminin-5 receptor that is predominantly expressed in squamous epithelial cells, endothelial cells, immature thymocytes, Schwann cells, and fibroblasts of the peripheral nervous system [12]. In tumours, ITGB4 was first discovered as a tumour-specific antigen. Subsequent studies demonstrated that increased expression levels of ITGB4 were correlated with malignant progression and poor survival rates in squamous cell carcinomas (SCCs) of the skin, lung, head and neck, and cervix [13][14][15]. Further studies have reported that high expression levels of ITGB4 were found in several types of cancer, including breast, bladder, colon, ovarian, pancreatic, prostate, and thyroid, and were linked to poor prognosis [16][17][18]. In tumour tissues, phosphorylation of the cytoplasmic tail of ITGB4 leads to its release from hemidesmosomes and its interaction with growth factor receptors, which promotes the invasion and metastasis of tumour cells [18]. Although ITGB4 has been reported to promote tumourigenesis in many cancers, its role in glioma is still unknown. Here, we show for the first time that ITGB4 expression is increased in GSCs and glioblastoma tissues. Elevated levels of ITGB4 maintained the stem-like properties of GSCs, promoted glioma cell migration and tumorigenesis, and were associated with glioma grade. Further mechanistic studies revealed that KLF4, an important transcription factor, could directly bind to the promoter of ITGB4, facilitating its transcription and contributing to increased ITGB4 expression in glioma. Simultaneously, we found that ITGB4 interacted with KLF4 and decreased its binding to the E3 ligase VHL in glioma cells, which subsequently enhanced KLF4 stability and increased KLF4 expression. Thus, our study reveals that a novel feedback loop exists between KLF4 and ITGB4, which contributes to GSC self-renewal and glioma tumourigenesis. Flow cytometry Antibodies for CD133 were purchased from Miltenyi Biotec. Briefly, 10 μL of the antibodies were used to label 10⁶ cells per 100 μL of buffer for 30 min, in the dark, in a refrigerator (2°C-8°C), and the labeled cells were analysed (BD Accuri C6). For the ALDH1 assay, the ALDH1+ population was detected with an ALDEFLUOR kit (Shanghai Stem Cell Technology Co. Ltd., Shanghai, China) following the manufacturer's instructions. The ALDH1+ and ITGB4+ cells were analysed by flow cytometry (BD Accuri C6) or sorted (BD FACSCanto II). Mammosphere formation assay Spheres were enriched from LN229 and U251 cells by culturing 200 to 1000 cells/mL in serum-free DMEM-F12 medium (Gibco) supplemented with B27 (1:50, Invitrogen) and 20 ng/mL EGF and bFGF. Non-treated tissue culture flasks were used to reduce cell adherence and support growth as undifferentiated tumour spheres. Cells were cultured for 2 weeks, and the number of spheres with a diameter of more than 100 μm in each well was counted. Dual-luciferase reporter assay The assay was performed as previously described [19]. ChIP assay The ChIP assay was performed as previously described [20]. Immunoprecipitation and in vivo KLF4 ubiquitylation assay Cells were transfected with the indicated plasmids using Lipofectamine 3000 (Invitrogen) reagent according to the manufacturer's protocol.
For immunoprecipitation assays, cells were lysed with NP40 lysis buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 1% NP40, 0.5% deoxycholate) supplemented with protease-inhibitor cocktail (Biotool). Immunoprecipitations were performed using the KLF4 antibody and protein A/G agarose beads (Santa Cruz) at 4 °C. The immunocomplexes were then washed twice with 200 μL PBS. Both lysates and immunoprecipitates were examined using the indicated primary antibodies, followed by detection with the corresponding secondary antibodies and WesternBright ECL chemiluminescent detection reagent (Advansta). For in vivo ubiquitylation assays, HA-ubiquitin was transfected, using Lipofectamine 3000, into LN229 cells in which ITGB4 had been knocked down or overexpressed. Twenty-four hours later, the cells were treated with 20 μM of the proteasome inhibitor MG132 (Calbiochem) for 8 h. Cells were then lysed with NP40 lysis buffer and incubated with anti-KLF4 antibody for 3 h and with protein A/G agarose beads (Santa Cruz) for a further 6 h at 4 °C. The beads were then washed three times with PBS buffer. The proteins were released from the beads by boiling in SDS-PAGE sample buffer and analysed by immunoblotting with an anti-Ub monoclonal antibody. Protein half-life assay For the KLF4 half-life assay, LN229 cells, with or without stable ITGB4 expression or the indicated shRNAs, were treated with CHX (Sigma, 10 μg/mL) for the indicated durations before collection. The cell lysates were examined using the indicated primary antibodies, followed by detection with the corresponding secondary antibodies and WesternBright ECL chemiluminescent detection reagent (Advansta). Colony formation assay LN229 and U251 cells, with or without ITGB4 knockdown, were harvested and pipetted into single-cell suspensions in complete culture medium. The suspensions were diluted and seeded at 500 or 1000 cells per well of a 6-well plate, which was incubated at 37 °C with 5% CO2 for 2 weeks. The colonies were then stained with 0.04% crystal violet-2% ethanol in PBS and photographed. Cell migration assay Cell migration assays were performed in 24-well transwell plates with 8-μm pore polyethylene terephthalate membrane filters (Corning) separating the lower and upper culture chambers. In brief, LN229 or U251 cells were plated in the upper chamber at 1 × 10^4 cells per well in serum-free DMEM medium. The bottom chamber contained DMEM medium with 10% FBS. Cells were allowed to migrate for 24 h in a humidified chamber at 37 °C with 5% CO2. After the incubation period, the filter was removed and non-migrating cells on the upper side of the filter were detached using a cotton swab. Filters were fixed with 4% formaldehyde for 15 min, and cells located on the lower side of the filter were stained with 0.1% crystal violet for 20 min and photographed. 
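The readout of the CHX chase described above is typically densitometry of the immunoblot bands followed by a first-order decay fit. The following is a minimal sketch of such a fit; the time points and intensity values are hypothetical placeholders (normalized to the 0-h band), not data from this study:

```python
# Sketch: estimating a protein half-life from a CHX chase, assuming densitometry
# values (e.g., from ImageJ) normalized to the 0-h time point. All numbers are
# hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

hours = np.array([0.0, 2.0, 4.0, 8.0])          # CHX treatment durations (h)
intensity = np.array([1.00, 0.62, 0.40, 0.17])  # hypothetical KLF4/loading-control ratios

def decay(t, k):
    """First-order decay, I(t) = exp(-k * t), with I(0) fixed at 1."""
    return np.exp(-k * t)

popt, _ = curve_fit(decay, hours, intensity, p0=[0.1])
k = popt[0]
half_life = np.log(2) / k
print(f"decay constant k = {k:.3f} /h, estimated half-life = {half_life:.2f} h")
```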
Tumour growth assay Animal studies were conducted in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals, with the approval of the Animal Research Committee of Dalian Medical University. Male nude mice (4-6 weeks of age, 18-20 g) were obtained from the SPF Laboratory Animal Center of Dalian Medical University (Dalian, China) and were randomly divided into the indicated groups. The mice in each group (n = 6) were subcutaneously injected with the indicated cells. After 15 days, tumour size was measured every 5 days with Vernier callipers and converted to tumour volume (TV) according to the following formula: TV (mm³) = (a × b²)/2, where a and b are the largest and smallest diameters, respectively. All animals were killed 5 weeks after injection, and the transplanted tumours were removed, weighed, and fixed for further study. Tissue microarrays and immunohistochemistry Glioma tissue microarrays were purchased from Alenabio and Shanghai Outdo Biotech Company. These contained a total of 112 glioma tissues and 8 normal tissues. Immunohistochemistry was performed as previously described [21]. The characteristics of the patients and their tumours were collected through review of medical records and pathology reports. Informed consent was obtained with the approval of the ethics committee of Taizhou Hospital of Zhejiang Province. All of the methods in this study were in accordance with the approved guidelines, and all of the experimental protocols were approved by the ethics committee of Taizhou Hospital of Zhejiang Province. For immunohistochemistry, sections were subjected to antigen retrieval by microwave heating at 95 °C in citrate buffer (pH 6.0). The antibodies specific for KLF4 and ITGB4 were diluted according to the manufacturer's instructions. The degree of immunostaining was reviewed and scored by two independent observers. The proportion of stained cells and the intensity of the staining were used as the criteria for evaluation. For each case, at least 1000 tumour cells were analysed. For each sample, the proportion of KLF4- and ITGB4-expressing cells varied from 0 to 100%, and the intensity of staining varied from weak to strong. One score was given according to the percentage of positive cells: <5% of the cells, 1 point; 6-35%, 2 points; 36-70%, 3 points; >70%, 4 points. Another score was given according to the intensity of staining: negative staining, 1 point; weak staining (light yellow), 2 points; moderate staining (yellowish brown), 3 points; strong staining (brown), 4 points. A final score was then calculated by multiplying the two scores. If the final score was equal to or greater than four, protein expression in the tumour was considered high; otherwise, it was considered low. 
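Both quantification rules above are simple enough to express as code. Below is a minimal sketch of the tumour-volume formula and the composite IHC score; the example inputs are hypothetical, and the boundary handling at 5-6% follows the cutoffs as written in the text:

```python
# Sketch of the two quantification rules stated above: the tumour-volume formula
# TV = (a * b^2) / 2 and the composite IHC score (proportion score x intensity
# score, "high" if >= 4). Example inputs are hypothetical.

def tumour_volume(a_mm: float, b_mm: float) -> float:
    """a = largest diameter, b = smallest diameter, both in mm; returns mm^3."""
    return (a_mm * b_mm ** 2) / 2

def proportion_score(percent_positive: float) -> int:
    """Points for the percentage of positive cells, per the cutoffs in the text."""
    if percent_positive <= 5:
        return 1
    if percent_positive <= 35:
        return 2
    if percent_positive <= 70:
        return 3
    return 4

def ihc_expression(percent_positive: float, intensity_score: int) -> str:
    """intensity_score: 1 = negative ... 4 = strong, as scored by the observers."""
    final = proportion_score(percent_positive) * intensity_score
    return "high" if final >= 4 else "low"

print(tumour_volume(12.0, 8.0))   # 384.0 mm^3
print(ihc_expression(40, 3))      # 'high' (3 x 3 = 9)
```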
Fig. 1 ITGB4 expression was increased in GSCs. a-c GSC-enriched populations were obtained from LN229 cells by sphere formation assay. The altered genes between spheroid and monolayer cultures were detected by RNA sequencing analysis; the upregulated genes are listed. d-g The expression levels of ITGB4, Oct4, and Nanog were analysed by western blotting and qRT-PCR assays. Data represent the mean ± SD of three independent experiments. *** p < 0.001 vs. control. h-k ALDH1-positive cells were obtained from LN229 and U251 cells by flow cytometry sorting. The expression levels of ITGB4, Oct4, and Nanog were analysed by western blotting and qRT-PCR assays. Data represent the mean ± SD of three independent experiments. *** p < 0.001 vs. control.

Fig. 2 High expression of ITGB4 in glioma patients was associated with glioma grades. a-c Representative images from immunohistochemical staining of ITGB4. The expression levels of ITGB4 were compared between normal tissues (n = 8) and glioma tissues (n = 112), and the association between ITGB4 and tumour grade was analysed. * p < 0.05 vs. control. d ITGB4 mRNA expression in normal tissues (n = 23) and glioblastoma tissues (n = 81) was analysed; the data were extracted from the Oncomine database. e The mRNA levels of ITGB4 in normal tissues (n = 3) and glioblastoma tissues were analysed by qRT-PCR.

Fig. 3 ITGB4 knockdown suppressed stem-like properties of glioma cells. a-d ITGB4+ and ITGB4− cells were isolated from LN229 and U251 cells by flow cytometry sorting. The expression levels of Oct4 and Nanog were analysed by western blotting and qRT-PCR assays. Data represent the mean ± SD of three independent experiments. *** p < 0.001 vs. control. e-h GSCs were enriched from LN229 and U251 cells by sphere formation assay; ITGB4 was then knocked down in the GSCs using siRNA. The expression levels of ITGB4, Oct4, and Nanog were analysed by western blotting and qRT-PCR assays. Data represent the mean ± SD of three independent experiments. ** p < 0.01, *** p < 0.001 vs. control. i-n ITGB4 was knocked down in LN229 and U251 cells. The mammosphere-forming abilities, ALDH1-positive populations, and CD133-positive populations were analysed. Data represent the mean ± SD of three independent experiments. ** p < 0.01, *** p < 0.001 vs. control.

We compared ITGB4 mRNA expression in normal tissues and in glioma tissues by searching the Oncomine database for the gene symbol "ITGB4" with a fold change >1.5. The data for 81 glioblastoma tissues and 23 normal brain tissues were obtained from the Sun Brain dataset (n = 180; probe 204989_s_at); data for diffuse astrocytoma tissues (n = 7; Sun Brain, probe 214292_at), anaplastic astrocytoma tissues (n = 19; Sun Brain, probe 211905_s_at), and classic medulloblastoma (n = 46; Pomeroy Brain (n = 85), probe X53587_at) were obtained from the same database.

Fig. 4 ITGB4 knockdown suppressed glioma cell migration and proliferation. a-d ITGB4 was knocked down in LN229 and U251 cells. Cell migration and proliferation were analysed by transwell and colony formation assays. Data represent the mean ± SD of three independent experiments. *** p < 0.001 vs. control. e-g LN229 cells with or without ITGB4 knockdown were subcutaneously injected into nude mice (n = 6 in each group) for tumour formation; representative bright-field images of the tumours in mice implanted with the indicated cells are shown. After 5 weeks, mice receiving transplants of the indicated cells were sacrificed, and the tumour volume and weight were calculated. ** p < 0.01, *** p < 0.001 vs. control.

Statistics and data analyses Statistical evaluations were performed using GraphPad Prism 5. Data are shown as mean ± SD. Multiple comparisons between treatment groups and controls were performed using Dunnett's test, and statistical significance between groups was calculated using the least significant difference (LSD) test in SPSS 17.0 software (IBM). The results of western blotting were analysed using ImageJ software. Values of p < 0.05 were considered statistically significant. 
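As an illustration of the group-versus-control comparisons described above, the sketch below applies Dunnett's test to hypothetical readouts (e.g., sphere counts). It assumes SciPy ≥ 1.11, which provides scipy.stats.dunnett; all numbers are placeholders, not data from this study:

```python
# Sketch of the multiple-comparison step described above: each treatment group
# is compared against a shared control with Dunnett's test.
# Requires SciPy >= 1.11 for scipy.stats.dunnett. Values are hypothetical.
import numpy as np
from scipy.stats import dunnett

control = np.array([100.0, 96.0, 104.0])  # e.g., shControl sphere counts
sh1 = np.array([55.0, 61.0, 49.0])        # shITGB4-1 (hypothetical)
sh2 = np.array([48.0, 52.0, 44.0])        # shITGB4-2 (hypothetical)

res = dunnett(sh1, sh2, control=control)
for name, p in zip(["shITGB4-1", "shITGB4-2"], res.pvalue):
    flag = "(significant)" if p < 0.05 else ""
    print(f"{name}: p = {p:.4f} {flag}")
```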
ITGB4 expression is elevated in GSC-enriched populations To identify genes that are differentially expressed in GSCs, we first conducted a mammosphere formation assay to enrich GSCs from the LN229 glioma cell line for subsequent mRNA sequencing analysis. Compared with cells grown in monolayer culture, we found 156 upregulated genes and 81 downregulated genes in GSCs (Fig. 1a and b). Among these genes, ITGB4 expression was significantly increased in GSCs, which was confirmed by subsequent western blotting and qRT-PCR assays. The upregulation of the GSC markers Oct4, Nanog, Sox2, CD133, and CD44 is shown as a positive control (Fig. 1c-g). To further confirm this, we next enriched glioma stem cells from LN229 and U251 cells by an ALDH1-positive sorting assay and found that ALDH1+ cells also exhibited much higher ITGB4 mRNA and protein expression levels than ALDH1− cells (Fig. 1h-k). Together, these data suggest that ITGB4 expression is elevated in GSC-enriched populations.

Fig. 5 KLF4 upregulated ITGB4 expression. a-c KLF4 was overexpressed in LN229 cells. Gene expression profiles were obtained by RNA sequencing analysis; the mRNA levels of ITGB4 are listed. d-g KLF4 was overexpressed in LN229 and U251 cells. The expression levels of ITGB4 and KLF4 were detected by western blotting and qRT-PCR assays. Data represent the mean ± SD of three independent experiments. *** p < 0.001 vs. control. h-k KLF4 was knocked down in LN229 and U251 cells. The expression levels of ITGB4 and KLF4 were detected by western blotting and qRT-PCR assays. Data represent the mean ± SD of three independent experiments. *** p < 0.001 vs. control.

ITGB4 is highly expressed in human glioma tissues and is positively correlated with glioma grades To better understand the role of ITGB4 in the development of human glioma, we further examined ITGB4 expression in human glioma samples (n = 112; World Health Organization (WHO) grade II-IV) and non-neoplastic brain tissue samples (n = 8) by immunohistochemical (IHC) staining. The IHC staining indicated that ITGB4 expression levels were increased in human glioma samples relative to normal brain tissues (Fig. 2a-b). Interestingly, increased ITGB4 levels were found in high-grade gliomas, and high ITGB4 expression was significantly correlated with increased tumour grade (Fig. 2c). Subsequently, the mRNA levels of ITGB4 were assessed in 81 glioblastoma tissues and 23 normal brain tissues, using gene expression data obtained from the Oncomine database. Correspondingly, compared with normal brain tissues, ITGB4 mRNA levels were upregulated in glioblastoma tissues (Fig. 2d). Similar results were obtained in diffuse astrocytoma tissues (n = 7), anaplastic astrocytoma tissues (n = 19), and classic medulloblastoma (n = 46) (Additional file 1: Figure S1A-C).

Fig. 6 KLF4 binds to the promoter of ITGB4. a Schematic illustration of the pGL3-based reporter constructs used in luciferase assays to examine the transcriptional activity of ITGB4. b-c Parts of the ITGB4 promoter, named P1, P2, and P3, were individually transfected into LN229 and U251 cells with or without KLF4 overexpression, and luciferase activity was measured. Data represent the mean ± SD of three independent experiments. *** p < 0.001 vs. control. d-e Parts of the ITGB4 promoter, named P1, P2, and P3, were individually transfected into LN229 and U251 cells with or without KLF4 knockdown, and luciferase activity was measured. Data represent the mean ± SD of three independent experiments. ** p < 0.01, *** p < 0.001 vs. control. f The potential KLF4 binding site was identified with JASPAR; schematic illustration of the wild-type KLF4 binding site (BS) and the matching mutant (BSM) used in luciferase assays. g-h The wild-type promoter (BS) or the matching mutant (BSM) was individually transfected into LN229 and U251 cells with or without KLF4 overexpression, and luciferase activity was measured. Data represent the mean ± SD of three independent experiments. ** p < 0.01 vs. control. i-j ChIP analysis showing the binding of KLF4 to the promoter of ITGB4 in LN229 cells with or without KLF4 overexpression or knockdown. An isotype-matched IgG was used as a negative control. 
To further verify this, we examined the mRNA levels of ITGB4 in seven glioma tissues and three normal brain tissues from our own institution. Similarly, increased ITGB4 mRNA expression was observed in the glioma tissues (Fig. 2e). ITGB4 is an important mediator of the stem-like properties of GSCs To assess the contribution of ITGB4 to glioma stem-like properties, we first obtained ITGB4-positive cells from LN229 and U251 cells by fluorescence-activated cell sorting (FACS). Compared with ITGB4-negative cells, ITGB4-positive cells exhibited much higher Oct4 and Nanog expression (Fig. 3a-d). To further investigate the role of ITGB4 in GSCs, we enriched cancer stem cells from LN229 and U251 cells by mammosphere formation assay and then knocked down ITGB4 expression in these cells using siRNA. Compared with the control group, ITGB4 knockdown remarkably decreased the expression of the stemness markers Oct4 and Nanog (Fig. 3e-h). Next, we stably knocked down ITGB4 expression in LN229 and U251 cells using lentivirus expressing short-hairpin RNA (shRNA). As shown in Additional file 2: Figure S2A-B, stable cell lines expressing these shRNAs showed significantly reduced ITGB4 levels. We then performed mammosphere formation, ALDH1-positive, and CD133-positive sorting assays to investigate the role of ITGB4 in promoting self-renewal ability, a key characteristic of GSCs. The sphere formation efficiency was dramatically reduced upon ITGB4 depletion, as indicated by a decrease in spheroid numbers (Fig. 3i-j). Additionally, the ALDH1+ and CD133+ populations were markedly decreased following the depletion of ITGB4 (Fig. 3k-n). Together, these data imply that ITGB4 is necessary for promoting cancer stem-like properties. ITGB4 knockdown suppressed glioma cell migration and proliferation in vitro and in vivo Given that ITGB4 suppression diminished GSC properties in vitro, we then examined whether ITGB4 knockdown could affect glioma cell migration and proliferation in vitro and in vivo. To this end, we used the LN229 and U251 cell lines and stably knocked down ITGB4 with two independent shRNAs. In the subsequent transwell assay, the migratory capabilities of LN229 and U251 cells depleted of ITGB4 were markedly decreased compared with control cells (Fig. 4a-b). Furthermore, the numbers of colonies formed by LN229 and U251 cells with ITGB4 knockdown were also remarkably reduced compared with control cells (Fig. 4c-d). To examine the involvement of ITGB4 expression in glioma tumorigenesis in vivo, we implanted LN229 cells stably expressing control shRNA or shRNA targeting ITGB4 into nude mice. As illustrated in Fig. 4e-g, the size and weight of xenograft tumours were significantly reduced by ITGB4 knockdown. KLF4 upregulates ITGB4 expression in glioma cells KLF4 (also known as GKLF) is a member of the Krüppel-like factor (KLF) family of zinc finger proteins [22]. Our recent studies have indicated that KLF4 upregulates MGLL and BIK in HCC and prostate cancer cells [23][24][25]. To screen for KLF4-regulated genes in glioma, we overexpressed KLF4 in LN229 cells for subsequent mRNA sequencing analysis. Interestingly, among the upregulated genes, we found that ITGB4 was significantly elevated in KLF4-overexpressing cells (Fig. 5a-c). To further confirm this, we first detected the mRNA and protein levels of ITGB4 in KLF4-overexpressing LN229 and U251 cells. Compared with control cells, overexpression of KLF4 significantly increased ITGB4 expression (Fig. 5d-g). 
Conversely, inhibition of KLF4 dramatically decreased ITGB4 expression (Fig. 5h-k). KLF4 directly binds to the promoter of ITGB4 To identify the KLF4-binding regions on the ITGB4 promoter, we first cloned the upstream sequence of ITGB4 and different truncations of it. We then inserted them into pGL3-based luciferase reporter plasmids, named P1-P3 (Fig. 6a), and transfected them into LN229 and U251 cells with or without KLF4 overexpression. Compared with control cells, the luciferase activities of P1 and P3 were augmented in KLF4-overexpressing cells; however, this increase was abolished when P2 was transfected (Fig. 6b-c). To further verify this, the truncations were transfected into LN229 and U251 cells with or without KLF4 knockdown. We found that KLF4 depletion led to a decrease in luciferase activity from P1 and P3; however, this decrease disappeared when P2 was transfected into KLF4 knockdown cells (Fig. 6d-e). Taken together, these results indicate that the region from −500 to 0 bp (P3) is a key region for the promotion of ITGB4 by KLF4. Previous reports have indicated that KLF4 is a zinc finger-type transcription factor that usually binds to the GC-rich elements of promoters [26]. To identify potential KLF4 binding sites, we inspected the sequence of the ITGB4 promoter with the JASPAR software and found a putative KLF4 binding site on the ITGB4 promoter. To verify that this potential KLF4-binding site was indeed responsive to KLF4, two pGL3-based luciferase reporter plasmids, named BS and BSM, were established (Fig. 6f). These plasmids were individually transfected into LN229 and U251 cells with or without KLF4 overexpression. As shown in Fig. 6g and h, the luciferase activity of BS, but not BSM, was significantly increased in KLF4-overexpressing cells. In addition, subsequent chromatin immunoprecipitation (ChIP) assays showed that chromatin fragments corresponding to the putative KLF4 binding site were specifically present in the anti-KLF4 immunoprecipitates from LN229 cells. This binding was increased when KLF4 was overexpressed, whereas it was decreased when KLF4 was knocked down (Fig. 6i-j). 
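For readers unfamiliar with how such motif scans work: JASPAR represents a transcription factor's sequence preference as a position weight matrix (PWM), and candidate sites are windows of the promoter that score above a threshold. The sketch below illustrates the idea with a toy 4-bp matrix and sequence; neither the scores nor the sequence are the real KLF4 JASPAR matrix or the ITGB4 promoter:

```python
# Conceptual sketch of a promoter scan for a transcription-factor binding site,
# as performed with JASPAR above. The 4-bp "PWM" and the sequence are toy
# placeholders, not the real KLF4 matrix or the ITGB4 promoter.

PWM = {  # log-odds score per base (rows) and motif position (columns)
    "A": [-1.0, -2.0, -1.5, -2.0],
    "C": [ 0.2, -1.0,  1.1,  0.9],
    "G": [ 1.2,  1.3, -0.5,  0.8],
    "T": [-2.0, -1.5, -1.0, -1.5],
}
MOTIF_LEN = 4

def score_window(seq: str) -> float:
    """Sum the per-position log-odds scores for one MOTIF_LEN window."""
    return sum(PWM[base][i] for i, base in enumerate(seq))

def scan(promoter: str, threshold: float = 2.5):
    """Yield (position, window, score) for windows scoring above threshold."""
    for i in range(len(promoter) - MOTIF_LEN + 1):
        window = promoter[i:i + MOTIF_LEN]
        s = score_window(window)
        if s >= threshold:
            yield i, window, s

promoter = "TTGGGCGGGGCATT"  # toy GC-rich fragment
for pos, window, s in scan(promoter):
    print(f"hit at {pos}: {window} (score {s:.2f})")
```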
Fig. 7 ITGB4 enhanced KLF4 stability. a-b ITGB4 was knocked down in LN229 and U251 cells. The protein levels of ITGB4 and KLF4 were analysed by western blotting assay. c LN229 and U251 cells with or without ITGB4 knockdown were treated with MG132 or left untreated. The protein levels of ITGB4 and KLF4 were analysed by western blotting assay. d-g LN229 cells with ITGB4 knocked down or overexpressed were treated with CHX (10 μg/mL) for the indicated times, and the half-life of KLF4 was measured. Data represent the mean ± SD of three independent experiments. ** p < 0.01, *** p < 0.001 vs. control. h-i LN229 cells, with or without ITGB4 knockdown or overexpression, were transfected with the indicated constructs and then treated with MG132 for 8 h before collection. The whole-cell lysate was subjected to immunoprecipitation with anti-KLF4 antibodies and western blotted with anti-Ub antibodies to detect ubiquitylated KLF4. j LN229 and U251 cell lysates were subjected to immunoprecipitation with control IgG or anti-KLF4 antibodies. The immunoprecipitates were then detected using the indicated antibodies. k-l ITGB4 was knocked down or overexpressed in LN229 cells. The cell lysates were subjected to immunoprecipitation with control IgG or anti-KLF4 antibodies. The immunoprecipitates were then detected using the indicated antibodies.

ITGB4 interacts with KLF4 and enhances its stability In addition to identifying KLF4 as a regulator of ITGB4, we also found that a positive feedback loop exists between KLF4 and ITGB4. As shown in Fig. 7a and b, knockdown of ITGB4 using siRNA led to KLF4 downregulation, and the decrease in KLF4 was restored by the proteasome inhibitor MG132, indicating that ITGB4 affects KLF4 expression in a proteasome-dependent manner (Fig. 7c). To further validate this finding, we treated the indicated cells with the protein synthesis inhibitor cycloheximide (CHX). Notably, depletion of ITGB4 led to a prominent decrease in the stability of endogenous KLF4 protein (Fig. 7d-e), whereas overexpression of ITGB4 enhanced the stability of KLF4 in LN229 cells (Fig. 7f-g). Additionally, we found that knockdown of ITGB4 increased the ubiquitylation of KLF4 in glioma cells (Fig. 7h), while overexpression of ITGB4 decreased the ubiquitylation of KLF4 (Fig. 7i). To uncover the molecular mechanism by which ITGB4 enhances KLF4 stability, we first investigated whether ITGB4 could interact with KLF4. Binding assays suggested that ITGB4 interacts with KLF4 in glioma cells and that the two proteins co-localize in cells (Fig. 7 and Additional file 3: Figure S3). Subsequently, we found that the binding of KLF4 to VHL, an E3 ligase of KLF4, was increased in ITGB4 knockdown cells (Fig. 7k). Conversely, the interaction between KLF4 and VHL was decreased in ITGB4-overexpressing cells (Fig. 7l). Taken together, our data suggest that ITGB4 interacts with KLF4 and prevents its degradation. Reciprocal regulation between KLF4 and ITGB4 plays an essential role in glioma stem cell self-renewal and tumourigenesis Having identified the feedback loop between KLF4 and ITGB4, we next asked whether their reciprocal regulation facilitates GSC self-renewal, proliferation, and migration. We first knocked down ITGB4 in LN229 and U251 cells with or without KLF4 overexpression. The expression levels of Oct4 and Nanog were analysed by western blotting and qRT-PCR. As shown in Fig. 8a-d, KLF4 overexpression elevated the expression of Oct4 and Nanog; however, this upregulation disappeared when ITGB4 was knocked down. Next, to assess the contribution of KLF4 to GSC properties through the regulation of ITGB4, CD133-positive sorting, mammosphere formation, transwell, and plate colony formation assays were performed.

Fig. 8 KLF4 and ITGB4 played an essential role in glioma tumorigenesis. a-d ITGB4 was knocked down in LN229 and U251 cells with or without KLF4 overexpression. The expression levels of ITGB4, Oct4, Nanog, and KLF4 were analysed by western blotting and qRT-PCR assays. Data represent the mean ± SD of three independent experiments. *** p < 0.001 vs. control. e-h The CD133-positive populations and mammosphere-forming abilities were analysed. Data represent the mean ± SD of three independent experiments. ** p < 0.01 vs. control. i-l Cell migration and proliferation were analysed by transwell and colony formation assays. Data represent the mean ± SD of three independent experiments. *** p < 0.001 vs. control. m-n The cells were subcutaneously injected into nude mice (n = 6 in each group) for tumour formation; representative bright-field images of the tumours in mice implanted with the indicated cells are shown. After 5 weeks, mice receiving transplants of the indicated cells were sacrificed, and the tumour volume and weight were calculated. ** p < 0.01, *** p < 0.001 vs. control. 
As shown in Fig. 8e-l, the elevations of the CD133+ population, sphere formation efficiency, migration ability, and colony numbers induced by KLF4 were reversed when ITGB4 was knocked down. To further confirm this, we stably overexpressed KLF4 in LN229 cells with or without ITGB4 knockdown and then implanted these cells into nude mice. As shown in Fig. 8m and n, KLF4 overexpression promoted glioma growth, as indicated by the increase in the size and weight of xenograft tumours; however, this increase was abolished when ITGB4 was depleted.

Fig. 9 KLF4 was positively associated with ITGB4 expression in glioma tissues. a-c Representative images from the immunohistochemical staining of KLF4. The expression levels of KLF4 were compared between normal tissues (n = 8) and glioma tissues (n = 112), and the association between KLF4 and tumour grade was analysed. * p < 0.05, *** p < 0.001 vs. control. d-e Representative images from the immunohistochemical staining of KLF4 and ITGB4 in glioma tissues (n = 112). The association between KLF4 and ITGB4 was analysed (p < 0.001, R² = 0.7131).

KLF4 is closely associated with glioma grades and ITGB4 expression in glioma tissues To evaluate the clinical relevance of the KLF4-ITGB4 axis, we first examined the expression of KLF4 in human glioma samples (n = 112; World Health Organization (WHO) grade II-IV) and non-neoplastic brain tissue samples (n = 8) by immunohistochemical (IHC) staining. The results showed that KLF4 displayed higher expression levels, to varying degrees, compared with non-neoplastic brain tissues, and high expression of KLF4 was significantly correlated with increased tumour grade (Fig. 9a-c). In addition, we further analysed the correlation between KLF4 and ITGB4 in human glioma tissues (n = 112). As shown in Fig. 9d and e, KLF4 expression was closely associated with ITGB4 expression (p < 0.001, R² = 0.7131). Discussion Most patients with glioma experience recurrence and poor prognosis, which are believed to be closely related to the existence of GSCs. Therefore, it is necessary to understand how GSCs maintain their stem-like properties and to find GSC-related therapeutic targets to improve glioma prognosis [27][28][29][30][31]. In this study, our findings showed that ITGB4 expression was increased in GSCs and glioma tissues, and elevated ITGB4 levels were correlated with glioma grades. Subsequently, we found that ITGB4 knockdown decreased the self-renewal abilities of GSCs and suppressed glioma cell migration and proliferation in vitro and in vivo. Further mechanistic studies revealed that KLF4, an important transcription factor, directly binds to the promoter of ITGB4, facilitating its transcription and contributing to increased ITGB4 expression in glioma. Interestingly, when ITGB4 levels were increased, ITGB4 was able to interact with KLF4 and thus decrease its binding to the E3 ligase VHL, leading to KLF4 accumulation in glioma. There is increasing evidence that the overexpression of ITGB4 is correlated with an aggressive phenotype and poor prognosis in breast cancer, lung cancer, pancreatic cancer, cervical cancer, and gastric cancer [17]. Similarly, we found that the mRNA and protein levels of ITGB4 were increased in human glioma and GSCs. Increased ITGB4 levels played an oncogenic role in glioma and were correlated with glioma grades. 
Inhibition of ITGB4 expression in glioma cells reduced the self-renewal abilities of GSCs and suppressed glioma cell migration and proliferation in vitro and in vivo. Recently, transmembrane protein 268 (TMEM268) was reported to increase ITGB4 expression by enhancing its stability in gastric cancer cells [32,33], and TAp73 has been shown to transcriptionally upregulate ITGB4 expression [34]. Here, we found that KLF4 directly binds to the promoter of ITGB4, facilitating its transcription and contributing to the increase in ITGB4 in glioma. KLF4 is a member of the Krüppel-like factor (KLF) family of zinc finger proteins [22]. Dysregulation of KLF4 has been observed in a number of human cancers, including gastrointestinal, pancreatic, bladder, and lung cancers. Ectopic expression of KLF4 has been reported to suppress cell proliferation, induce apoptosis, and promote cell-cycle arrest, indicating that KLF4 has a tumour suppressor function in a variety of malignancies and that its downregulation may play an essential role in tumourigenesis [35][36][37][38][39][40]. For example, KLF4 was reported to suppress cell migration and invasion in esophageal cancer [41,42]. In gastric cancer, KLF4 inhibited cell proliferation and metastasis by downregulating β-catenin expression [43][44][45]. Recently, KLF4 was reported to transcriptionally repress caveolin-1 expression and thereby inhibit metastasis of pancreatic cancer [46]. However, in squamous cell carcinoma, breast cancer, osteosarcoma, and glioma, KLF4 was shown to promote cell growth and cellular dedifferentiation and to inhibit cell apoptosis [47][48][49]. Consistently, our data indicated that KLF4 promoted gliomagenesis by upregulating ITGB4 expression. Interestingly, we also found that a positive feedback loop exists between ITGB4 and KLF4: ITGB4 was able to interact with KLF4 and enhance its stability. Previous studies have indicated that the E3 ligase VHL and FOXB33 promoted KLF4 degradation in breast cancer [50,51]. Our data suggest that ITGB4 binds to KLF4 and suppresses its interaction with VHL. Conclusions Taken together, our data suggest the existence of a positive feedback loop between KLF4 and ITGB4 that promotes GSC self-renewal and gliomagenesis, and also implicate ITGB4 as a valuable therapeutic target for glioma.
7,947.8
2019-01-18T00:00:00.000
[ "Medicine", "Biology" ]
Noninvasive cross-sectional observation of three-dimensional cell sheet-tissue-fabrication by optical coherence tomography Cell sheet engineering allows investigators/clinicians to prepare cell-dense three-dimensional (3-D) tissues, and various clinical trials with these fabricated tissues have already been performed for regenerating damaged tissues. Cell sheets are easily manipulated, and 3-D tissues can be rapidly fabricated by layering the cell sheets. This study used optical coherence tomography (OCT) to noninvasively analyze the following processes: (1) adhesion between layered cell sheets, and (2) the beating and functional interaction of cardiac cell sheet-tissues for fabricating functional, thicker 3-D tissues. The tight adhesion and functional coupling between layered cell sheets could be observed cross-sectionally and in real time. Importantly, the noninvasive, cross-sectional analyses provided by OCT make it possible to fabricate 3-D tissues while confirming the adherence and functional coupling between layered cell sheets. OCT technology would thus contribute to cell sheet engineering and regenerative medicine. Introduction Damaged tissues have already been clinically treated with a variety of regenerative therapies using functional cells and bioengineered tissues [1,2]. We have proposed scaffold-free tissue engineering, called "cell sheet engineering", utilizing temperature-responsive culture dishes, which possess reversible hydrophilic/hydrophobic properties that are simply controlled by culture temperature [3]. Cell sheets are comprised of only cells and a biological extracellular matrix (ECM), which means that cell-dense three-dimensional (3-D) tissue can be fabricated by simply layering cell sheets without any need for scaffolds [4][5][6][7]. Three-dimensional cell sheet-tissues have been applied to regenerate damaged tissues, and cell sheet-therapy has already been used clinically in six different fields [8][9][10][11][12][13][14][15][16][17]. The presence of ECM is thought to promote the tight, rapid attachment between individual layered cell sheets [9,18]. Owing to the technical difficulty, there have been very few studies able to observe cross-sections of cell sheet-tissues noninvasively to analyze adhesion and functional communication [19,20]. Three-dimensional tissues and their microstructures can be observed in cross-section by optical coherence tomography (OCT) in real time [21]. The technology has been applied in several clinical fields, where its safety, non-invasiveness, and feasibility have been confirmed; at present, OCT has become an important method in clinical examination [22][23][24][25][26][27][28]. Recently, we have developed an OCT system that can observe living cell sheets in cross-section [29]. In this study, the in vitro fabrication of 3-D tissues with cell sheet engineering, as well as the beating and functional coupling of 3-D cardiac tissues, were analyzed noninvasively by OCT. This technology could be an invaluable method in the fields of cell sheet engineering, tissue engineering, and regenerative medicine. Materials and methods All animal experiments were performed in accordance with the experimental procedures approved by the Committee for Animal Research of Tokyo Women's Medical University. 
Cell culture and cell sheet preparation C2C12 murine skeletal myoblasts (Sumitomo Dainippon Pharma, Osaka, Japan) and NIH3T3 murine embryonic skin fibroblasts (ATCC® CRL-1658™) [30] were maintained in culture medium [supplemented with 10% fetal bovine serum (Japan Bio Serum, Nagoya, Japan) and 1% penicillin-streptomycin (Invitrogen Life Technologies, CA, USA)]; 6.0 × 10^5 cells were seeded onto a 35-mm temperature-responsive culture dish (UpCell® dish) (CellSeed, Tokyo, Japan) and then cultured for 3 days at 37 °C. Rat neonatal cardiac cell sheets were fabricated using UpCell® dishes according to a previous report [7]. To harvest these cell sheets, the culture dishes were placed in a separate CO2 incubator set at 20 °C. To fabricate 3-D tissues, cell sheets were layered on a 35-mm polystyrene culture dish (Corning, NY, USA) by pipetting, as described in previous reports [6,7]. Optical coherence tomography (OCT) An OCT system for analyzing cell sheets has recently been established [29]. The adhesion between a cell sheet and a polystyrene culture dish, and between layered cell sheets, was observed at 37 °C. Spaces between (1) a cell sheet and the dish or (2) layered cell sheets, identified from the intensity of the OCT signal, are displayed in red in the images. The vertical resolution of the OCT was approximately 9 μm, and the horizontal resolution was approximately 20 μm. The beating of cardiac cell sheets was observed and analyzed at 32 fps (frames per second). Within cardiac cell sheets, areas were determined to be beating where the correlation of OCT signals at intervals of 90 ms was lower than a predetermined level; the beating areas were marked with green colored markers. Observation of C2C12 and NIH3T3 cell sheets by OCT A cell sheet with culture medium was transferred onto a polystyrene culture surface as described in previous reports [4][5][6][7]. After the cell sheet was transferred onto the culture dish, the medium was removed to facilitate spreading of the cell sheet. After a short-term cultivation of less than 30 min, the spaces were found to decrease rapidly, and when no spaces were observed, the transferred cell sheet had assumed a smooth, planar form (Fig. 1B). The rapid time-course decrease in spaces between the cell sheet and the dish was clearly recorded (Video 1). When new medium was added to the cell sheet to prevent drying out, the cell sheet continued to adhere to the dish (Video 1), showing a tight attachment between the cell sheet and the dish. Similar adhesion processes were also observed by OCT for C2C12 cell sheets on the dish, and tight attachment between the cell sheet and the dish was confirmed (data not shown). We have been attempting to develop cell culture surfaces with higher functionality that can precisely control the attachment/detachment of cells by modulating the chemical structure of the surface; for example, hydrophilically modified cell culture surfaces can accelerate cell sheet detachment [31][32][33]. At present, the surfaces that have been developed are mainly evaluated by top-view photography. Because OCT allows us to analyze the attachment/detachment of cell sheets cross-sectionally and noninvasively, the technology can serve as an optimal system to assess and quantify cell culture surfaces that accelerate the attachment/detachment of cell sheets. To observe and analyze the cross section and time course of the adhesion between layered cell sheets, cell sheets were layered. 
After the first C2C12 cell sheet had adhered onto the culture surface, another C2C12 sheet was layered onto it, and the medium was removed to facilitate spreading of the second cell sheet. After layering, a rapid decrease in the spaces between the two cell sheets was clearly observed within 30 min by OCT (Fig. 2 and Video 2). When new medium was added to the cell sheets to prevent drying out, the two cell sheets continued to adhere (Video 2), indicating a tight attachment between the cell sheets. These results show that OCT can detect the adhesion between layered cell sheets, as well as between a cell sheet and the culture surface. Therefore, OCT technology can also be used as an assessment system to quantify the search for culture methods that accelerate the attachment between multi-layered cell sheets biologically and physically. C2C12 cell sheets were stacked to form a multi-layer tissue while simultaneously observing their cross section by OCT. After layering two C2C12 cell sheets, a third cell sheet was stacked onto the double-layer cell sheet. Just after layering, spaces between the third cell sheet and the double-layer cell sheet could be clearly observed (Fig. 3A); then, after a short incubation of less than 30 min, complete attachment between the cell sheets was observed (Fig. 3B). Using a similar procedure, harvested cell sheets were successfully stacked into a quintuple-layer tissue, and the same adhesion processes between the multi-layered cell sheets were clearly detected (Fig. 3). Only a few spaces between the cell-sheet layers were detected, showing tight adhesion within the multi-layered cell sheet constructs. When cell sheets are layered to fabricate 3-D tissues, whether adhering a cell sheet to the culture surface or adhering layered cell sheets to each other, the medium should be removed and, after a suitable incubation time, new medium added to avoid drying out [4][5][6][7]. This manipulation has previously been guided by experience, and its feasibility and efficacy were confirmed here by OCT. In addition, the incubation time between removing the medium and adding new medium has been largely dependent on the experience of the investigators. Importantly, OCT allows investigators and clinicians to manipulate cell sheets while noninvasively confirming the adhesion between a cell sheet and the culture surface, or between layered cell sheets, during 3-D tissue fabrication. Furthermore, the thicknesses at arbitrary points in the cell sheet-tissues could be easily measured, as shown in Fig. 3. Observation of rat cardiac cell sheets by OCT A rat cardiac cell sheet detaching from a temperature-responsive culture dish was analyzed cross-sectionally. The cell sheet detached from both edges, and the detachment was recorded as movie data by OCT (Video 3), which shows the detachment of a beating cell sheet. The beating of a detached cardiac cell sheet was then analyzed by OCT. Within the cell sheet, beating areas were marked with green colored markers where the correlation of OCT signals over short intervals (90 ms) was lower than the predetermined level. Fig. 4 and Video 4 clearly show the propagation of the green areas, namely the beating areas within the cell sheet, indicating that the transmission of action potentials within a cardiac cell sheet can be noninvasively detected in cross-section by OCT. 
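The green-marker rule above lends itself to a compact implementation: compare each image patch between frames separated by 90 ms (about three frames at 32 fps) and flag patches whose correlation falls below a threshold. The following is a minimal sketch under assumed parameters; the patch size, threshold, and toy frames are placeholders, not values from the actual OCT system:

```python
# Sketch of the beating-detection rule described in the methods: a region is
# flagged as "beating" when the correlation between OCT frames 90 ms apart
# (about 3 frames at 32 fps) drops below a preset level. Patch size, threshold,
# and the toy frames are assumptions.
import numpy as np

def beating_mask(frame_t, frame_t90, patch=8, threshold=0.9):
    """Return a boolean map marking low-correlation (i.e., moving) patches."""
    h, w = frame_t.shape
    mask = np.zeros((h // patch, w // patch), dtype=bool)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            a = frame_t[i:i + patch, j:j + patch].ravel()
            b = frame_t90[i:i + patch, j:j + patch].ravel()
            r = np.corrcoef(a, b)[0, 1]
            mask[i // patch, j // patch] = r < threshold
    return mask

rng = np.random.default_rng(0)
f0 = rng.random((64, 64))                 # toy OCT frame at time t
f1 = f0.copy()
f1[16:32, 16:32] = rng.random((16, 16))   # simulate tissue motion in one region
print(beating_mask(f0, f1).astype(int))   # the moved block is flagged
```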
Next, harvested cardiac cell sheets were layered to form a double-layer tissue while being observed by OCT, which showed that the cell sheets appeared to beat synchronously (Video 5). These data showed that the 3-D transmission of beating within a multi-layered cardiac cell sheet can be observed by OCT, and they suggest a functional coupling between layered cell sheets, confirming previous observations in our laboratory [4,7,19]. Such noninvasive observation will also contribute to the electrophysiology of cardiac tissues. At present, we are preparing a detailed investigation of the electrical and functional coupling processes between layered cardiac cell sheets using a combination of the OCT system and a multiple-electrode extracellular recording system. Clinical trials of autologous skeletal myoblast-sheet transplantation into patients with heart disease are now underway. The first patient, who suffered from dilated cardiomyopathy, received autologous cell sheet therapy and is now in good clinical condition [12]. In this first clinical therapy, quadruple-layered myoblast sheets were used. It is generally accepted that enormous numbers of cells (on the order of 10^9 cells per patient) are necessary for treating conditions such as cardiovascular disease and diabetes [34], and layered cell sheets make it possible to transplant enormous numbers of cells onto target tissues. OCT observation allowed us to detect tight and complete adhesion between multi-layered cell sheets noninvasively (Figs. 2 and 3). OCT can thus be a powerful tool for analyzing the quality of engineered tissues in clinical cell sheet-therapy. While the therapeutic effects of adult stem/progenitor cells, including skeletal myoblasts, are generally thought to be mainly due to the paracrine effects of various factors secreted from implanted cells [35], beating cardiac cells and cardiac tissue are expected to contribute to the mechanical support of damaged heart tissue via electrical and functional couplings as well as their paracrine effects [36,37]. In an effort to create a more advanced regenerative therapy, attempts at engineering beating myocardial tissue using cardiac cell sheets have already been made [4]. In fact, multi-layered cardiac cell sheets have shown good therapeutic effects in rat models [38,39]. In addition, we have also succeeded in fabricating spontaneously beating human cardiac cell sheets using human induced pluripotent stem cells (hiPSCs) [20,40], and hiPSC-derived cardiac cell sheets have been shown to be feasible and safe in a large animal model [41]. In the near future, cardiac cell sheets will be used clinically for regenerating damaged heart tissues. This study showed that the beating of cardiac cell sheet-tissues can be detected cross-sectionally by OCT (Videos 4 and 5). OCT could also be used to evaluate the beating and functional coupling of engineered cardiac tissues. Recently, we observed the cell-sheet-transfer process and the adhesion between a cell sheet and target tissue by OCT in a rat model [29]. At present, to evaluate the potential for clinical use, we are attempting to observe and analyze the transplantation of cell sheets onto beating heart tissue in a porcine model by OCT. OCT could also be used clinically as a method to observe and analyze adhesion and functional coupling between transplanted cell sheets and target tissues, as well as to assess the quality of engineered tissues before transplantation. 
In this study, the dynamics of layered cell sheets were observed noninvasively by OCT. OCT technology allowed us to analyze in real time, and in detail, the cross-sectional adhesion between (1) multi-layered cell sheets and (2) beating 3-D cardiac tissues. Using this method, we were able to confirm the rapid adhesion and functional coupling of 3-D tissues. In addition, OCT observation could allow investigators and clinicians to fabricate three-dimensional tissues while confirming the adherence and functional coupling between layered cell sheets. We are confident that OCT technology can be used as a powerful method in cell sheet engineering and its clinical application.
3,024
2015-05-12T00:00:00.000
[ "Biology", "Materials Science", "Engineering" ]
The Common Good According to Great Men of Prayer and Economists: Comparisons, Connections, and Inspirations for Economics This paper aims to present and compare contemporary concepts of the common good formulated by economists with reference to the understanding of the common good by the great men of prayer: Augustine of Hippo; Thomas Aquinas; Jacques Maritain; and Popes John XXIII, John Paul II, and Francis. Introduction The concept of the common good, despite its ancient origins, remains a contemporary and intellectually challenging topic situated at the intersection of philosophy, Catholic social teaching (CST), and economics. Given the interdisciplinary nature of the common good, this paper adopts a historical and interdisciplinary approach, along with the descriptive method. This allows for the presentation of the most important ideas of economists against the background of philosophers and, especially, representatives of CST, contributing to a better understanding of the nature and character of economic concepts of the common good. The primary objective of this research is to present and compare key contemporary concepts of the common good formulated by economists, with reference to the understanding of the common good by the great men of prayer: Augustine of Hippo; Thomas Aquinas; Jacques Maritain; and Popes John XXIII, John Paul II, and Francis. It seeks to determine in what direction the economic theory of the common good can develop, taking into account the inspirations drawn from the most prominent theorists of the common good in CST. The choice of and focus on the main CST representatives was not accidental. The authors of the most important concepts of the common good were (or still are) undoubtedly great men of prayer. Today, most of them are saints of the Catholic Church, but even those not yet canonized are widely recognized as people of God, serving God and humanity. Undoubtedly, prayer had, or continues to have, a central place in the lives of these people. Indeed, true men of prayer have access to the wisdom of God, who shows them the nature and meaning of the reality they encounter every day; they have some insight into how God himself sees things and matters. Therefore, it seems that one should look closely at what the great people of prayer in the Church understood by the common good and be inspired by their thoughts when creating theories in the social sciences, including economics. The article consists of four main parts. Following the introduction, Section 2 of the study provides a historical overview of the most important concepts of the common good, mostly proposed by CST, from antiquity to the present day, distinguishing various traditions of perceiving the common good. Section 3 presents the concepts of the common good proposed by economists, characterizing the four most significant contemporary approaches. Then, in the fourth part of the article, the ideas of economists about the common good are compared and assessed in light of the concept of the common good within the CST framework. The study ends by defining and comparing the main characteristics of the analyzed concepts of the common good, offering insights into potential future research directions on the category of the common good in economics, consistent with the thought of the great men of prayer. 
The Contribution of Great Men of Prayer to the Concept of the Common Good Since the earliest days of philosophical and socioeconomic thought, the concept of the common good has been subject to intense scrutiny. Plato in De Republica stated that it is the responsibility of the state to ensure the common good, known as κοινό καλό in Greek, by defining it and convincing citizens through persuasion or force to build it. Therefore, according to Plato, the statutory law is the ultimate arbiter of the common good, which may sometimes conflict with the interests of individuals (Zamelski 2012). On the other hand, Aristotle, a student and critic of Plato, took a different approach in his Ethica Nicomachea. He did not view the common good as being at odds with the good of individuals but rather saw it as encompassing both the good of the polis and the good of individual citizens. Aristotle believed that the state existed to enable individuals to achieve their ultimate goal, or telos, which was personal happiness or eudaimonia, attainable through various virtues (Zamelski 2012). Additionally, Aristotle contended that the good of the state was a more perfect and encompassing good than that of the individual (Sadowski 2010). Christian thought, drawing upon the philosophical achievements of the ancient Greeks and Romans, has played a significant role in the development of the concept of the common good. Augustine of Hippo's understanding of the common good (Latin: bonum commune) differs significantly from that of Plato. Augustine posited in De Civitate Dei that it is not the state but the community that defines the common good through its affections, because the pursuit of the common good determines the existence of the community (Sadowski 2010). Thus, the common good cannot be imposed on citizens by the state; it must be chosen and recognized by the citizens themselves. Centuries later, Thomas Aquinas took a more comprehensive and penetrating approach to the idea of the common good, building upon Aristotle's concepts. Like Aristotle, Aquinas viewed humans as social beings and always considered them as members of a community rather than as individuals in isolation (Sadowski 2010). He therefore emphasized the interrelatedness of the good of the individual and the good of the community, defining the common good as a shared goal that each member of the community accepts as their own good and a motive for their actions, because it serves to achieve their aims and perfection (Zamelski 2012). The author of Summa Theologiae regarded the common good as a distinct value, not simply the sum of individual goods, and considered it higher than the good of the individual, provided that it belonged to the same hierarchy of goods and goals (Sadowski 2010). For instance, a community could not expect its members to sacrifice their personal goal of salvation for a particular common good, as God, their individual goal, is also the highest common good (Latin: bonum commune separatum), hierarchically above all other common goods. 
It is important to note that, in Aquinas's view, the superiority of the common good does not threaten the primacy of the human person as an individual and a member of the community (Sadowski 2010). Aquinas believed that the social order unites individual aspirations for the common good and connects the human (social and individual) dimension of the common good with God as the highest common good through the order of the universe. As such, not only individuals and the community but also authorities play an important role in the pursuit of the common good, owing to the significance of the social order. Only a few centuries later, the concept of the common good was significantly redefined by Enlightenment philosophers such as Hobbes, Locke, and Rousseau. They approached the common good from different anthropological assumptions, resulting in diverse interpretations. Bruni and Zamagni (2017) note that differences in the understanding of the common good also stem from varying Christian traditions, such as the Lutheran, Calvinist, and Catholic approaches. Additionally, these differences resulted from a departure from the theological and teleological approach to the common good, which placed God as the ultimate goal of every human being (De George 2004). Instead, subsequent philosophers emphasized the importance of either the individual (as an end in themselves), as seen in utilitarianism or the doctrine of human rights, or society or the state, as developed in later philosophical currents such as Marxism and Hegelianism. Therefore, in contrast to the integral approach of the medieval Christian thinkers, the definitions of the common good proposed later by Enlightenment thinkers shifted towards emphasizing either the role of the individual (liberal, individualist conceptions) or the role of society (socialist, collectivist conceptions). This departure broke the inherent bond between the development of the individual and the good of the community (society or state) that was present in earlier Christian thought (Lutz 1999). The concept of the common good, which integrates the individual and community dimensions, was reintroduced through the social teaching of the Catholic Church. Pope John XXIII defined the common good by referring to St. Thomas Aquinas's understanding, stating that it is "all those social conditions which favor the full development of human personality" (John XXIII 1961, p. 65). The Second Vatican Council also affirmed this approach, stating in Gaudium et Spes that the common good encompasses "the sum of those conditions of social life which allow social groups and their individual members relatively thorough and ready access to their own fulfillment" (Second Vatican Council 1965, p. 26). This definition has become increasingly universal, encompassing the rights and obligations resulting from respect for the entire human race. As such, each social group must consider the needs and legitimate aspirations of other groups, as well as the general well-being of humanity as a whole. 
According to Enderle (2018), there are four aspects of this definition of the common good. First, it pertains to the conditions of social life rather than being a substantive goal of all people in society (German: Gemeingut), meaning it is an instrumental value (Dienstwert) rather than an internal value (Selbstwert). Second, these conditions are necessary for both social groups and their individual members to realize their life plans and achieve self-fulfillment. Third, the common good encompasses all of these social conditions. Finally, globalization and increased interdependence across the world mean that these conditions apply to all of humanity. Jacques Maritain, drawing on the thought of Thomas Aquinas and the teachings of the Church, developed the concept of the common good within the framework of Christian personalism. Maritain's conception posits that the common good is "common to the whole and the parts, the persons" (Maritain 1951, p. 11), thus existing as a distinct entity rather than being the sum of individual goods. Although the common good is deemed more important than the individual goods of society's members, this does not imply the subordination of individuals to society at all times. Firstly, the primacy of the common good, according to Thomas Aquinas, applies to values of the same order (e.g., economic values), not to the higher values of the human person (cognitive, spiritual, moral, and ideological values), since their infringement would lead to the destruction of the common good. Secondly, no society is an end in itself, since it exists for the people who constitute it, and they are separate entities, persons, with their own dignity, autonomy, rights and freedoms, and their own aspirations for higher education and life tasks. Therefore, the common good must not conflict with the good of the individual but should instead contribute to the good and development of each person. Thus, it is not "the proper good of the whole", which, "like the hive with respect to its bees, relates the parts to itself alone and sacrifices them to itself", but it is "common to both the whole and the parts into which it flows back and which, in turn, must benefit from it" (Maritain 1966, p. 51). Consequently, the rulers of society cannot wholly subjugate individual persons, as the latter can be directly and entirely subordinated solely to their ultimate goal, i.e., God, who surpasses all created common goods. Therefore, the authority necessary for the implementation of the idea of the common good must serve the common good and not attempt to control it. Karol Wojtyła (later known as Pope John Paul II) developed the concept of the common good in light of Christian personalism. He posited that the common good is a distinct social good and not the mere sum of the individual goods of the members of society. It determines the good of individuals, and there is no conflict between the "true common good" and the "true good of the person" (Wojtyła 2018, pp. 
According to John Paul II, the most crucial aspect of the concept of the common good is respect for the human person's dignity and rights. In this regard, it is imperative to establish the appropriate conditions for each member of society to fulfill their calling. This principle, in turn, translates into the prosperity and economic development of a given community. Thus, according to John Paul II, the common good serves as the cornerstone of the social order, and it is the obligation of society to create conditions that allow for the full development of all its members (John Paul II 1993). John Paul II made numerous references to the common good in his socioeconomic encyclicals, namely Laborem Exercens (John Paul II 1981), Sollicitudo Rei Socialis (John Paul II 1987), and Centesimus Annus (John Paul II 1991). He proposed a broad and comprehensive definition for it as "the good of all and of each individual, because we are all really responsible for all" (John Paul II 1987, p. 38). John Paul II emphasized that the common good should not be confused with attitudes that prioritize individual benefits or the interests of one's own social group through class conflict (John Paul II 1987, 1991). He regarded the common good as a higher value, which provides direction for the necessary transformation of spiritual attitudes that serve as the basis of all human relationships (John Paul II 1987).

Pope Francis's encyclicals Laudato Si' (Francis 2015) and Fratelli Tutti (Francis 2020) focus on the practical aspects of the common good, emphasizing its importance as a relational good rooted in social relationships and human interactions. The encyclicals use metaphors such as "a common home" and "brotherhood" to highlight the interconnectedness of humanity and the need for mutual care and responsibility. In Laudato Si', Pope Francis employs the metaphor of the common good as a common home. This metaphor is introduced in the title itself, and at the beginning he references St. Francis of Assisi to describe "our sister and mother Earth" in its natural and climatic dimensions. By doing so, he extends the concept of the common good to include not only human beings but also common resources and all of creation. Here, the common good is envisioned as a common home that fosters feelings of love and care. Laudato Si' emphasizes the interconnectedness between human beings and the environment, calling for a renewed sense of global solidarity and recognizing that environmental degradation disproportionately affects the poor and vulnerable.

In Fratelli Tutti, another metaphor of the common good is introduced, that of the brotherhood (Fratelli) that exists between people. Considering others as brothers and sisters is crucial for building the common good, as it emphasizes the principle of inclusion and the removal of barriers that contradict this principle, such as all forms of exclusion (e.g., of immigrants and people with disabilities). These goals are still present in development strategies, but due to the pandemic they have become even more significant, as rising inequality and unemployment have intensified the plight of vulnerable people. According to Francis, worsening conditions for the most vulnerable individuals lead to the erosion of the principle of fraternity, which, in turn, undermines the common good expressed by this principle.
In summary, it is worth emphasizing that the contribution of the great people of prayer to the understanding of the common good has largely shaped the core values and principles of CST. These include "first and foremost, the primacy of the human person with his transcendent dignity; solidarity, understood as a fraternal relationship for the common good; the principle of subsidiarity, which guarantees the right and duty to participate responsibly in common decisions", and "the principle of the common good", interpreted as "the defence of the quality of human life, in the sense of both the ecology of the natural environment and the spiritual ecology, which advocates respect not only for the material but also for the higher moral and spiritual needs of human life, both individually and collectively" (Marek and Jabłoński 2021, p. 2).

The Concept of the Common Good According to Economists

Although the concept of the common good has not been extensively discussed in economic theory, a few modern economists have given it attention, including Sen, Tirole, and Ostrom, who were awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. Sen (2008), in his capability approach, recognized that the primary goal of economic policy should not be limited to GDP growth but should rather focus on providing individuals with greater opportunities to choose different "functionings", i.e., "parts of the state of a person, in particular the various things that he or she manages to do or be in leading a life" (Sen 1996, p. 57). The capability approach is implicitly linked to the idea of the common good, as Sen (1992, p. 40) conceptualizes capability, i.e., "the various combinations of functionings that a person can achieve", as oriented towards the freedom to choose different functionings ("beings and doings"), which aligns with liberal theories of the common good. In this sense, the common good is not directly linked to the well-being of the community as a whole and its members, but rather only to the diverse and varying good of its members (cf. Argandoña 2013).

Tirole, in his book Economics for the Common Good (Tirole 2017), explicitly refers to the concept of the common good and asks how economics can contribute to achieving it. However, he does not define the common good in detail, stating in general terms that it is "our collective aspiration for society" and understanding it as the "well-being of the community" (Tirole 2017, pp. 2, 5).

Tirole suggests that economics can contribute to the pursuit of the common good in two ways. Firstly, it can focus on the ultimate goals inscribed in the concept of the common good and distinguish them from the means or instruments used to achieve them. This way, the means do not become ends in themselves, losing sight of the ultimate goal of the common good. Secondly, economics can assist in developing tools such as institutions and policies that can help achieve the common good once "a definition of the common good has been agreed upon" (Tirole 2017, p. 5). Therefore, according to Tirole, "economics, like other human and social sciences, does not seek to usurp society's role in defining the common good", but "works toward the common good" because "its goal is to make the world a better place" (Tirole 2017, p. 5).
Tirole emphasizes the importance of reconciling individual interests with the general interest in the pursuit of the common good. He asserts that while the common interest permits the private use of goods for individual well-being, it does not allow their abuse at the expense of others. In Tirole's perspective, economics is a science that incorporates both individual and collective dimensions, analyzing situations where individual interest can be compatible with the quest for collective well-being and those where it can hinder that quest (Tirole 2017, p. 4).

In summary, Tirole sees the common good as a form of the well-being of the community chosen by society. Thus, the common good, as understood by Tirole, in contrast to Sen's idea, is not directly the good of the members of the community but, first and foremost, the good of the community itself.

It is noteworthy that the concept of the common good was of interest not only to political philosophers but also to early economists such as Genovesi and Smith, the classical economist often regarded as the father of political economy. Owing to their differing assumptions about human nature, which may have been influenced by distinct Christian traditions, they arrived at differing understandings of the common good. Genovesi believed that the common good arose from the cultivation of virtues among the members of society, while Smith viewed it as the aggregate of individual values attained (Bruni and Zamagni 2017).

A noteworthy critic of classical economics was Sismondi (de Sismondi 1819), the first economist explicitly recognized as a founder of social and humanistic political economy. This school of economics places significant emphasis on the concept of the common good and aims to connect the common good and the individual good, drawing on the original ancient and medieval attempts. The humanistic version of the economics of the common good, promoted by Sismondi, fits into the concept of "civil economy" developed by economists such as Genovesi (1769), Dragonetti (1788), Ruskin (1901), Loria (1910), and Fuà (1993), among others, who also referred to the ancient meaning of the common good (Bruni and Zamagni 2017).

Sismondi is referred to by Lutz (1999) in a volume with the same title as the above-mentioned book by Tirole but with a distinctive and revealing subtitle: Two Centuries of Economic Thought in the Humanist Tradition. Lutz (1999, p. 125) has a completely different approach to defining the common good than Tirole, arguing that it "cannot be distilled from the actual preferences of members of society." Lutz further observes that the existence of diverse tastes and beliefs does not imply the absence of a common good, nor does it render its definition impossible. The solution, according to Lutz, is to find norms, values, or principles acceptable to every rationally thinking person in order to define the common good. Such principles would provide an objective criterion for determining the different meanings of the particular types of "functionings" and "capabilities" required to lead a good life, overcoming the weaknesses of Sen's concept (cf. Anderson 1995; Nussbaum 1988).
Lutz (1999, pp. 128, 130, 135-36) posits two "principles of normative economics" as objective measures within his concept of "human welfare economics":

• "material sufficiency", implying that subsistence and physical health are fundamental human values and that every member of society has an equal right to these goods by virtue of their humanity;
• "respect for human dignity", asserting that every member of society has an equal and rightful claim to human dignity and to being treated accordingly.

Lutz (1999, p. 137) elaborates on the values and moral laws associated with these principles and defines the common good as "good that is equally shared or equally belonging to each and every member of society." This includes the right to personal freedom, basic well-being, and respect for human dignity. As an economist, Lutz (1999, p. 139) outlines the concept of an "economics of the common good" as the translation of these principles and moral laws into economic rights, social policies, and institutions. The economic rights that follow from these principles are the right to the necessities of life (referring to material sufficiency), the right to "economic democracy" (i.e., treating workers with dignity, based on respect for human dignity), and the right of future generations to material sufficiency and respect for human dignity (Lutz 1999, p. 139). These rights are implementable within a specific social vision of a market economy, which prioritizes, firstly, a "humanistic enterprise system" that gives human and social capital a special emphasis in the organization of the economy and considers free capital markets as merely instrumental to social welfare (Lutz 1999, p. 141).

Secondly, Lutz emphasizes the necessity for the economy to be integrated with society, rather than the reverse, to prevent social decisions from being subordinated to impersonal economic competition. He asserts that the achievement of the common good under such a framework can be facilitated by a proactive and protective role played by the government (Lutz 1999, p. 141). Accordingly, Lutz's concept posits the common good as the amalgamation of the same good for each member of the community, rather than as directly correlated with the welfare of the community as a whole.
Presently, the notion of the common good in economic theory is mainly implicit in the context of the commons and public good issues, which require a distinct approach from market theory. Although the Nobel Memorial Prize in Economic Sciences winner Ostrom does not explicitly mention the common good, her concept of "the commons" possesses its essential characteristics. While Hardin (1968) underscores the dichotomy between individual and common interests in the situation of jointly owned property, the "tragedy of the commons", it is Ostrom (1990) who reconciles them, demonstrating how the individual interest can align with the interest of the community, on the condition that the individual adheres to the rules and actions for the common good, which do exist (Ostrom 1990, 1999). The more effectively governing rules regulate community life, the better the long-term results of individual choices. In other words, the tragedy of Hardin's pasture need not end poorly for either the participants or the community. Rather than state distribution or market forces, it is civil society working for the common good that should govern these resources. Such governance may supersede market-driven rationality and lead to positive outcomes for both the community and individuals, resolving the issue of free but limited resources (Bruni and Zamagni 2017).

Ostrom is also critical of Hardin's anthropological assumption, based on the homo oeconomicus model, and favors Aristotle's homo politicus or the homo sociologicus model. This alternative model suggests that humans are capable of respecting community norms and rules and act in accordance with the values prevalent in society. Ostrom's concept of the commons and the papal concepts of the common good are both rooted in a fundamentally different understanding of human nature compared to the mainstream economic view. Rather than being solely self-interested actors, humans are considered to be socially and culturally embedded individuals (cf. Polanyi 1977), who pursue not only material utility but also nonmaterial goals and values that exist within the context of their society. The image of individuals as socially and culturally embedded, pursuing aspirations that resonate with their society's values and fulfilling their visions beyond utility, is characteristic of humanistic economics, within which the primary objective is the comprehensive development (flourishing) of individuals (Horodecka 2015). This shift in perspective highlights the importance of taking into account the role of cultural norms and values in shaping economic behavior, and it challenges the traditional economic assumption of individual rationality based solely on self-interest.
Discussion

In Sen's view, the concept of the common good is an important overarching goal and a criterion for evaluating economic policy: to provide people with opportunities to realize their potential and thus to offer them opportunities for development. It is undoubtedly very valuable to emphasize that the individual is ultimately at the center of economic policy. Nevertheless, a weakness of Sen's approach, when compared to the concepts of the common good in CST, is that it does not explicitly emphasize the communal dimension of the common good. Sen's liberal approach may stem from his observation that an overemphasis on the common good and the adaptation of individuals to cultural conditions can effectively hinder individual development. Sen gives the example of a woman who accepts violence and lacks the will to pursue education because of the community's expectations of her. Therefore, the concept of capabilities, which unfolds at the individual level, becomes a litmus test for community issues, as both the reluctance and the inability of some women to pursue education may be the result of the social conditions in which the individual finds herself. Coming from a country where violence and the very low status of women go hand in hand, Sen argues not only that individuals should develop goals but also that these actions should be supported by the state, whose role is to 'enable' individuals to achieve their own goals. In such a case, individual development serves the progress of the country but may give the community the impression that it violates its common good, in terms of the shared values that bind it together. However, this is not the common good in the sense we are writing about if it significantly limits individual development.

Unlike Sen, Tirole recognizes the danger of individuals pursuing their own good at the expense of the common good of which they are a part. However, Tirole's view of the common good as the "ultimate goal" also differs from CST. According to CST and Christian philosophy, individuals should contribute to the common good while striving for personal well-being, with the common good supporting them in this endeavor. The potential drawback of overemphasizing the community dimension of the common good is that it overlooks the fact that the common good should facilitate the full development of individuals within the community. This is consistent with Tirole's assertion that the common good is defined by a given society. Such a relativistic concept of the common good may endorse a community that imposes values incompatible with the objective truth about human nature and its eternal vocation with God, since certain subjective common goods may not correspond to certain objective goods of individuals.
Elinor Ostrom's approach is closer to Tirole's than to Sen's. She emphasizes the importance of the community for the narrowly defined common good, such as the commons. Within this framework, the community establishes principles and rules that reconcile individual and common interests while allowing for individual autonomy. In contrast to the previous two perspectives, Ostrom's concept is integral, emphasizing both the individual and the communal dimensions of the common good. It portrays not only the community as the guarantor of the common good but also individuals who, in pursuing their own interests, contribute to the creation of the common good. However, Ostrom's concept shares the weaknesses of the previously discussed approaches in that the common good is subjective and lacks objective criteria for determining what constitutes it.

An attempt to create such criteria can be found in the work of Lutz, who defines the common good on the basis of these criteria. His approach to the common good is, therefore, universalist. He also gives examples of common goods, including respect for human dignity, thus drawing on the concept of the common good in Christian philosophy and Catholic social teaching. A weakness of his concept, however, is the lack of emphasis on the communal dimension of the common good, which brings his approach closer to Sen's proposal in this respect. In these concepts, there is no emphasis on the fact that the individual is rooted in the community in which he lives, as emphasized primarily by the communitarians (Sandel, MacIntyre, Walzer). For them, the collectively shared concept of the good is the foundation for community and individual development, a perspective that is also indirectly present in representatives of CST. This undoubtedly has implications for understanding the category of the common good.

Conclusions and Future Directions

The concept of the common good has ancient roots and was revitalized by Christianity. Its philosophical foundations were laid by St. Thomas Aquinas, who drew from Aristotle's teachings and incorporated them into Catholic social doctrine. Jacques Maritain and John Paul II made significant contributions to the development of the common good by linking it with Christian personalism. Pope Francis has since adapted this concept to address contemporary global challenges, such as environmental threats and growing social inequality, in a more practical and secular manner.

The common good, according to Christian philosophy and Catholic social teaching, has two dimensions: personal and communal. This means that the true common good serves both the good of the individual members of the community (a small group, a local community, the citizens of a state) and the community itself. There cannot be a contradiction between the dimensions of the common good, so the concept of the common good has an integral character (it is two-dimensional). It is also a universalistic (objectivist) approach because it refers to objective criteria for determining what the common good is.
In reviewing the different concepts of the common good presented in this paper, several dichotomies in the understanding of the common good have become apparent, including the following:

• Universalist (objectivist) versus relativistic (subjectivist) approaches;
• Emphasis on the individual (personal) and/or the group (community) dimension;
• Integral (two-dimensional or three-dimensional common good, with the third dimension referring to God) versus nonintegral (one-dimensional, one-sided common good) approaches;
• The common good as an end and/or a means to an end (or the conditions for achieving the end).

Sen argues that the common good (veiled in his capability approach) is the ultimate goal; it has a distinctly individual dimension and a nonintegral character. Tirole defines the common good as the goal pursued by society; it too has a nonintegral nature, but the focus is on the community. Ostrom's concept of the commons sheds light on the role of conditions, such as principles and rules, in achieving the common good, which arises through reconciling individual interests with the interests of the community, indicating the integral nature of the common good. Meanwhile, Lutz's concept emphasizes the individual dimension and nonintegral nature of the common good, with the universality of his approach attempting to establish objective criteria for defining the common good, in contrast to the relativistic approaches of Sen, Tirole, and Ostrom.

Economic theories tend to favor one-dimensional and relativistic concepts of the common good. However, proposing multidimensional and universalist ideas within economic theories could be valuable and is worth attempting. In particular, given the growing need for practical applications of the concept of the common good, an illustrative example is the concept of the "economy for the common good" (Dolderer et al. 2021), which is rooted in both economics and CST. Moreover, the contributions of the great men of prayer (within CST) to the idea of the common good provide inspiration and can enrich economic theories of the common good, e.g., by enabling a more comprehensive understanding of how to reconcile individual and societal interests. Undoubtedly, the concept of the common good presents an intriguing area of inquiry for economists, as it offers insights into the interdependence between individual development and the attainment of societal objectives. By adopting the lens of the common good, it becomes evident that there is a need for an alternative to the mainstream economic model, which still primarily relies on the utilitarian-based homo oeconomicus paradigm. This alternative is found in the concept of the human nature of homo persona: a person embedded in society rather than a socially alienated individualist, whose development is linked to the values and virtues of a given society, while also being a separate and autonomous entity with inherent worth and dignity that can strive towards self-realization through ongoing personal growth.
Exploring Students' Visual Thinking: Examining Students' Sequence and Series Analysis Through the Lens of Visual Cognitive Styles

Visual thinking skill has an important role in solving mathematics problems. Visual thinking can be applied not only to the topic of geometry but also to other topics, including sequence and series. Visual cognitive styles are related to mathematical abilities and types of tasks. This research aims to analyze students' visual thinking skill viewed through visual cognitive styles on the topic of sequence and series using a grounded theory approach. The steps in this study were developing a visual cognitive style questionnaire and a visual thinking skill test, asking students to fill in the questionnaire and complete the test, and analyzing the results of the questionnaires and the students' answers. The subjects of this research were 2 students in the eleventh grade (16-17 years old). The results showed that students with spatial visualization have better visual thinking skill than students with object visualization. Students with spatial visualization are very good at working on problems involving the indicator of transforming problems or concepts into visual forms, whereas students with object visualization are very good at drawing the nth term of a sequence. The analysis in this research can be used as material for consideration in making learning designs on the topic of sequence and series that are in accordance with the cognitive styles of students.

INTRODUCTION

Students are expected to master visual thinking skills in solving mathematical problems [1]. Visual thinking has an important role in the development of thinking, mathematical understanding, and the transition from the abstract to the concrete in solving mathematical problems. Arcavi defines visual thinking as the ability, the process, and the product of creation, interpretation, use of, and reflection upon images and diagrams, in our minds, on paper, or with the help of technology, with the aim of describing and communicating information, thinking about and developing ideas that were not previously known, and advancing understanding [2]. Therefore, visual thinking ability can be interpreted as a person's ability to understand and interpret information involving relevant images, to process images, and to transform ideas into visual forms.

Visual thinking skill is interesting to discuss because studies have found that students experience limitations and difficulties in building visual representations. Students' difficulties include understanding problems, drawing diagrams, reading charts correctly, understanding formal conceptual mathematics, and mathematical problem solving [2,3,4]. Students can understand a problem better when they can produce visual images that represent the situation in the mathematical problems they face. Visual thinking skill can help students state mathematical problems in their own language.

Each individual has characteristics that distinguish them from other individuals. Each individual has their own way of receiving and processing information when solving problems. This is what is known as cognitive style. Cognitive style is a typical, consistent way of receiving and processing information to be used in solving problems. Cognitive style related to students' habits of using sensory devices is divided into two types, namely verbal and visual cognitive styles [5]. The visual cognitive style consists of spatial visualization and object visualization.
The visual cognitive style has a relationship with mathematical abilities and certain types of tasks. Visual thinking skill can have a relationship with other mathematical abilities, and differences in visual cognitive styles have an effect on different topics of mathematics and tasks [6,7]. Therefore, it is important to identify whether students are spatial visualizers or object visualizers. Identifying when and which visual images are used can help students understand mathematics, and it is useful for teachers in designing learning that is more suitable for each student. Research on visual thinking skills has often used the subject of geometry [7,8], even though visual thinking can also be used in other subjects: experts in mathematics use visual thinking skills to solve not only geometry problems but also various other problems in mathematics [4].

Taking the problems described above into account, this research aims to analyse students' visual thinking viewed through visual cognitive styles on the topic of sequence and series. The results of this study can be used as supporting information to develop further learning designs.

METHOD

This research is a qualitative study with a grounded theory approach. The stages of grounded theory are open coding, axial coding, and selective coding. The subjects of this research were 2 students in the eleventh grade (16-17 years old). The students came from a public school in Bandung. This research aims to analyse students' visual thinking viewed through visual cognitive styles on the topic of sequence and series. To do so, first, we developed a visual cognitive styles questionnaire and a visual thinking skill test on the topic of sequence and series. We used the Object-Spatial Imagery Questionnaire (OSIQ), which contains 30 statements [9]. The visual thinking skill test contains three tasks based on the indicators of visual thinking skill (see Table 1). Next, we asked the students to fill in the questionnaire in 20 minutes and to solve the test in 60 minutes. Then, we analysed the students' answers.

Table 1. Visual Thinking Skill Test
No  Indicator                                            Task
1   Transform the problem or concept into visual form    Note the following series. a. Draw an image that represents the series. b. Can you see the result of the series from the image?
2   Obtain specific information from images              Note the following images. a. Compose a sequence that represents the number of the squares in the images. b. Find the 5th term of the sequence.
3   Construct images as help in solving problems         A ball has a bounce power of 65% as high as the preceding height. If the ball is dropped from a height of 5 m, find the height it reaches after the 3rd bounce.
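As an illustration of how the OSIQ results might be turned into a classification, here is a minimal sketch under our assumptions: the paper does not describe its scoring rule, and the split of the 30 items into an object-imagery scale and a spatial-imagery scale of 15 items each follows the published OSIQ design. The responses below are hypothetical.

```python
def classify_visualizer(responses_object, responses_spatial):
    """Classify a student as an object or spatial visualizer from
    OSIQ Likert responses (1-5), by comparing subscale means."""
    mean_object = sum(responses_object) / len(responses_object)
    mean_spatial = sum(responses_spatial) / len(responses_spatial)
    if mean_spatial > mean_object:
        return "spatial visualization"
    return "object visualization"

# Hypothetical responses for one student (15 items per subscale)
obj = [3, 4, 2, 3, 3, 4, 2, 3, 3, 2, 4, 3, 3, 2, 3]
spa = [4, 5, 4, 3, 4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 4]
print(classify_visualizer(obj, spa))  # -> spatial visualization
```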
RESULTS AND DISCUSSION

In this research, there is one student with spatial visualisation and one student with object visualisation. This classification of cognitive styles is based on the results of the visual cognitive style questionnaire. The students were given three tasks about sequence and series based on the visual thinking skill indicators. For Task 1, the student with object visualisation used squares to represent the series (see Figure 1). He divided a square into 4 equal parts to represent 1/4. Then, he made another square and divided it into 16 parts to represent 1/16, and so on. He found that the nth term of the series is 1/4^n. However, we cannot see from the figures he created that the sum of the series equals 1/3. This indicates that the student with object visualisation sees a problem part by part. This is in line with the findings of Kozhevnikov, who stated that students with object visualisation use imagery to construct high-quality images of the shapes of individual objects [10].

Figure 2 shows the answer of the student with spatial visualisation in Task 1. He used a single square to construct the representation of the series. First, he divided the square into 4 equal parts to represent 1/4. Then, he divided the part representing 1/4 into 4 equal parts to represent 1/16, and so on. The shaded part represents the series, and the shaded part is equal to each of the other parts: we can see from the figure that there are 3 equal parts. So, we can read the result of the series from the figure he made. This means he understands the relation between the image and the series and implies that the student with spatial visualisation sees a problem as a whole. The student with spatial visualisation creates a visual representation of the series such that the representation itself gives important information for solving the problem [11]. This is related to the research of Kozhevnikov et al., who found that students with spatial visualisation use imagery to represent and transform spatial relations [10].

In Task 2, it was found that the student with object visualisation could not determine all the squares in the images (see Figure 3). He only saw 4 squares in the second term and 9 squares in the third term, while the student with spatial visualisation could see all the squares in the images correctly (see Figure 4). This implies that students with object visualisation tend to see images globally, as a single perceptual unit, whereas students with spatial visualisation tend to see images part by part. Students with object visualisation process images holistically, while students with spatial visualisation encode and process images analytically, using spatial relations to arrange and analyze the components [10].
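For reference, the series behind Task 1, as reconstructed from both students' drawings (terms 1/4, 1/16, ..., i.e., (1/4)^n), is geometric, and its closed form yields the 1/3 that the spatial visualizer's shading makes visible:

```latex
\[
  \sum_{n=1}^{\infty} \left(\frac{1}{4}\right)^{n}
  = \frac{1/4}{1 - 1/4}
  = \frac{1}{3}.
\]
```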
Task 2 asked students to find the 5th term of the sequence. Neither the student with spatial visualisation nor the student with object visualisation used a general-term formula; they used the concept of sequence and series directly. A sequence is a function that has a pattern, so they looked for the pattern to solve the problem. The student with object visualisation saw the pattern of the sequence from the first three terms and noticed that U_n = n^2. The student with spatial visualisation determined the differences between consecutive terms, noticed that the difference is quadratic, and so found that U_n = U_(n-1) + n^2. This implies that students with visual cognitive styles show a relation with creativity, because they solved the problem in their own way. It seems that this is because the question asks only for a term that is not too large, so the students felt it was easier not to use a formula in solving it. This led them to be more creative and to provide different solutions to the problem [8,7].

In Task 3, the student with object visualisation used the general-term formula for a geometric sequence to solve the problem (see Figure 5). He looked for the third term of the sequence to answer the task. However, the height of the ball after the third bounce is actually the fourth term of a geometric sequence with ratio 0.65 and first term 5. He made a mistake because he did not construct an image of the bouncing ball to make the task easier to solve. In contrast, the student with spatial visualisation made an image of the bouncing ball, so he knew what the task asked and what was needed to solve it.

The student with spatial visualisation did not use the general-term formula for a geometric sequence to solve Task 3 (see Figure 6). He knew that the height of the ball after each bounce is 65% of the preceding height, so he noticed that the height after a bounce is U_n = 65% × U_(n-1). We call this a recursive formula: a formula that uses the previous term to obtain the next one. This means the student with spatial visualisation uses visual representation as an aid in solving the task. It is easier to solve the task using a visual representation, and mistakes can be avoided. In line with this, Stylianou and Silver stated that using visual representation is a viable strategy for problem solving because it can facilitate the problem-solving process [4]. The student with spatial visualisation solved the task in a different way, and a related study found similar results: one of the characteristics of students with spatial visualisation is creativity [10].

Based on the students' answers on the visual thinking skill test, we found that the student with spatial visualisation has better visual thinking skill in solving sequence and series tasks. The student with spatial visualisation is very good at making an image to represent a series. The student with object visualisation made visual representations separately, term by term, which indicates that he sees a problem part by part; meanwhile, the student with spatial visualisation made a visual representation of the series in a single image, thinking about and seeing the problem as a whole.
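A short worked check of the two tasks (our arithmetic, based on the values reported above; for the square-counting sequence we assume U_1 = 1 and read the "all squares" count as the pattern U_n = U_(n-1) + n^2):

```latex
\[
  U_n = U_{n-1} + n^2 \;\Rightarrow\; U_5 = 1 + 4 + 9 + 16 + 25 = 55,
  \qquad\text{vs.}\qquad U_n = n^2 \;\Rightarrow\; U_5 = 25;
\]
\[
  h_n = 0.65\,h_{n-1},\quad h_0 = 5\ \text{m}
  \;\Rightarrow\; h_3 = 5 \times 0.65^{3} \approx 1.37\ \text{m}.
\]
```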
In the case of obtaining specific information from an image, object visualizers tend to encode images globally, as a single perceptual unit, which they process holistically. As a result, they can miss information that is displayed implicitly in the images. On the other side, spatial visualizers tend to encode and process images analytically, part by part, using spatial relations to arrange and analyse the components. This means students with spatial visualisation can extract the full information content of the images because they are thorough when processing them.

When solving the sequence and series tasks, the students with visual cognitive styles largely did not use the formulas of sequence and series. The student with object visualisation only used the general-term formula for a geometric sequence. This may be because the questions ask only for lower-order terms, so the students found it easier to solve the tasks without the formulas. This indicates that students can provide different strategies to solve a task. It also implies that visual cognitive styles might have a relationship with creative skill.

CONCLUSION

The research question addressed in this paper concerns students' visual thinking skill viewed through visual cognitive styles on the topic of sequence and series. The results lead to the following conclusions. The student with spatial visualisation has better visual thinking skill in solving sequence and series tasks. The student with object visualisation made visual representations separately, term by term, whereas the student with spatial visualisation made a visual representation of the series in a single image. Object visualizers tend to encode images globally, as a single perceptual unit; in contrast, spatial visualizers tend to encode and process images analytically, part by part. The students with visual cognitive styles largely did not use the formulas of sequence and series when solving the tasks.

Figure 1. The answer of the student with object visualisation in Task 1.
Figure 3. The answer of the student with object visualisation in Task 2.
Figure 5. The answer of the student with object visualisation in Task 3.
Novel p-n Li2SnO3/g-C3N4 Heterojunction With Enhanced Visible Light Photocatalytic Efficiency Toward Rhodamine B Degradation

The design of highly efficient and stable photocatalysts to utilize solar energy is a significant challenge in photocatalysis. In this work, a series of novel p-n heterojunction photocatalysts, Li2SnO3/g-C3N4, was successfully prepared via a facile calcining method and exhibited superior photocatalytic activity toward the photodegradation of Rhodamine B solution under visible light irradiation as compared with pure Li2SnO3 and g-C3N4. The maximum kinetic rate constant of photocatalytic degradation of Rhodamine B within 60 min was 0.0302 min−1, and the composites still retained excellent performance after four successive recycles. Chemical reactive species trapping experiments and electron paramagnetic resonance demonstrated that hydroxyl radicals (·OH) and superoxide radicals (·O2−) were the dominant active species in the photocatalytic oxidation of Rhodamine B solution, while holes (h+) played only a minor role. We demonstrated that the enhancement of the photocatalytic activity could be assigned to the formation of a p-n junction photocatalytic system, which benefitted the efficient separation of photogenerated carriers. This study provides a visible light-responsive heterojunction photocatalyst with potential applications in environmental remediation.

INTRODUCTION

The presence of harmful and toxic substances in aqueous solution poses severe risks to human health and ecosystems. The purification of waste water is an urgent priority and a major research theme in environmental science (Shannon et al., 2008; Damasiewicz et al., 2012). As a promising technique for the oxidation of pollutants, semiconductor-based photocatalysis, which uses solar energy to drive chemical reactions, has an important role in environmental remediation (Chen et al., 2010). Among semiconductor photocatalysts, layered metal oxides have attracted much attention owing to their low cost, photostability, and oxidation capability (Osada and Sasaki, 2009; Lei et al., 2014; Haque et al., 2018). Recently, the semiconductor Li2SnO3 has been applied as a UV light-responsive photocatalyst with excellent photocatalytic performance and chemical stability. As a state-of-the-art layered photocatalyst, the compound features a conventional [Li1/3Sn2/3O2]− anion layered structure in the a-b planes, while the remaining Li+ cations embed in the interlayer spaces to balance the charge (Howard and Holzwarth, 2016). The resulting charge density distribution generates an electrostatic field perpendicular to the laminar direction, promoting the separation of photo-induced carriers to drive photocatalysis. In addition, the valence band edge of Li2SnO3 is positive enough to oxidize organic pollutants. However, similar to many other oxide semiconductors, Li2SnO3 can only absorb UV light, and its harvesting of solar energy is poor owing to its wide intrinsic optical band gap (∼3.7 eV), limiting its photocatalytic activity. Constructing visible light-responsive Li2SnO3-based heterojunction photocatalysts to make full use of sunlight is thus an important goal. This kind of system, where a heterojunction is formed between a visible light- and a UV light-responsive photocatalyst, has received some attention in recent years (Pan et al., 2012; Wu et al., 2017; Liu et al., 2018; Qiao et al., 2018; Wang et al., 2018a; Hafeez et al., 2019).
For instance, ZnFe2O4/TiO2 heterojunctions exhibited outstanding photocatalytic degradation of bisphenol A under visible light irradiation (Nguyen et al., 2019). A CdS/SrTiO3 nanodots-on-nanocubes heterojunction presented excellent visible light photocatalytic performance for H2 evolution (Yin et al., 2019). Notably, Dong et al. successfully synthesized an insulator-based core-shell SrCO3/BiOI heterojunction structure, and this nanocomposite displayed an unprecedentedly high photocatalytic NO removal performance (Wang et al., 2018b). Therefore, the heterojunction strategy clearly provides opportunities to utilize wide-band-gap semiconductors with excellent intrinsic photophysical properties as visible light-responsive photocatalysts. Among the best known classes of such catalysts are p-n heterojunctions, which have been extensively studied to optimize their photocatalytic activity. Their catalytic mechanism is based on an internal electric field established at the interface of the p-n junction, which promotes the efficient separation of photogenerated carriers (Dong et al., 2018; Dursun et al., 2018; Wang et al., 2018; Zeng et al., 2019). Mott-Schottky plots measured by electrochemistry demonstrate that Li2SnO3 is a p-type semiconductor. Therefore, to improve its photocatalytic performance, it is necessary to couple Li2SnO3 with n-type, visible light-responsive semiconductors to build p-n heterojunction systems, which would simultaneously realize high utilization rates of solar energy and efficient separation of photogenerated carriers. Among numerous n-type photocatalytic semiconductors, g-C3N4 is a promising candidate for its tunable photo-response and effective charge carrier transportation properties. As a photocatalyst, g-C3N4 has been widely investigated owing to its excellent properties, including a layered graphite-like structure, a visible light-responsive band gap (∼2.7 eV), facile preparation, low toxicity, and high photostability (Wang et al., 2009; Ong et al., 2016; Wen J. Q. et al., 2017; Lu et al., 2018; Zhang et al., 2018; Li X. B. et al., 2019). Furthermore, as an n-type semiconductor, g-C3N4 has been coupled with p-type semiconductors to enhance photocatalytic activity, as in CuBi2O4/g-C3N4 (Guo et al., 2017), Bi4Ti3O12/g-C3N4 (Guo et al., 2016), and LaFeO3/g-C3N4.

EXPERIMENTAL SECTION

Synthesis of g-C3N4, Li2SnO3, and Li2SnO3/g-C3N4 Heterojunctions

g-C3N4 was prepared by annealing melamine in a muffle furnace. Briefly, 5 g of melamine was heated in a closed crucible at a rate of 4.5 °C/min to 560 °C and maintained there for 2 h. Then, the furnace was turned off and cooled to room temperature naturally. Pure Li2SnO3 was synthesized from a mixture of Li2CO3 and SnO2 with a molar ratio of 3.3/3.0. The mixed reactants were ground together in a mortar for 30 min. Then, the mixture was heated at 850 °C for 6 h. The Li2SnO3/g-C3N4 (LSO-CN) heterojunctions with different mass ratios were prepared by a traditional solid state method. Samples with initial mass ratios of g-C3N4 to LSO-CN of 70, 80, 85, 90, and 95 wt% were prepared and labeled LSO-CN-70, LSO-CN-80, LSO-CN-85, LSO-CN-90, and LSO-CN-95, respectively. Taking LSO-CN-85 as an example, 0.03 g of Li2SnO3 powder, 0.17 g of g-C3N4, and 1 mL of ethanol were mixed and ground together for 10 min. The resultant mixture was heated at 500 °C for 2 h in a covered crucible.
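As a bookkeeping aid, the following is a minimal sketch (ours, not from the paper) that reproduces the component masses for a target g-C3N4 weight fraction; the 0.20 g total batch mass is inferred from the LSO-CN-85 example above and is an assumption for the other compositions.

```python
def batch_masses(cn_wt_percent: float, total_mass_g: float = 0.20):
    """Split a batch into g-C3N4 and Li2SnO3 masses for a target
    g-C3N4 weight fraction (e.g., 85 for LSO-CN-85)."""
    m_cn = total_mass_g * cn_wt_percent / 100.0
    m_lso = total_mass_g - m_cn
    return m_cn, m_lso

for pct in (70, 80, 85, 90, 95):
    m_cn, m_lso = batch_masses(pct)
    print(f"LSO-CN-{pct}: g-C3N4 = {m_cn:.3f} g, Li2SnO3 = {m_lso:.3f} g")
# LSO-CN-85 -> 0.170 g g-C3N4 and 0.030 g Li2SnO3, matching the text
```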
Characterization

Powder X-ray diffraction (PXRD) was performed on a PANalytical X'pert powder diffractometer equipped with a PIXcel detector and CuKα radiation (40 kV and 40 mA). A scanning step width of 0.01° and a scanning rate of 0.1° s−1 were applied to record the patterns in the 2θ range of 6-90°. A JEOL JSM-6700F field emission scanning electron microscope (SEM) was employed to investigate the surface morphologies. Transmission electron microscopy (TEM) and high-angle annular dark field (HAADF) images and energy-dispersive spectra (EDS) of Li2SnO3 were recorded on a Talos F200S G2 microscope to characterize the microstructures of the samples. The UV-vis diffuse reflectance spectroscopy (UV-vis DRS) data were collected at room temperature using a powder sample with BaSO4 as a standard on a Shimadzu UV-3150 spectrophotometer over the spectral range 200-800 nm. The Fourier transform infrared (FT-IR) spectra were obtained using a Nicolet 360 spectrometer with 2 cm−1 resolution in the range of 500-4,000 cm−1. Fluorescence spectra were measured on a Hitachi F-7000 fluorescence spectrophotometer to detect the concentration of hydroxyl radicals (·OH), in which the fluorescence emission spectrum (excited at 316 nm) of the solution was measured every 15 min during the photocatalytic reaction. The solid-state photoluminescence (PL) spectra were acquired using a Fluorolog-TCSPC luminescence spectrometer with an excitation wavelength of 325 nm. In the electron paramagnetic resonance (EPR) experiments, 10 mg of the LSO-CN-85 sample and 40 µL of 5,5-dimethyl-1-pyrroline N-oxide (DMPO) were dispersed into 1 mL of deionized water (DMPO-·OH) or methanol (DMPO-·O2−), and then irradiated with visible light (λ > 420 nm) for 5 and 10 min, respectively. Electrochemical measurements were conducted on a CHI 660E workstation. A Pt plate, a calomel electrode, and the sample LSO-CN-85 coated on indium tin oxide (ITO) served as the counter electrode, reference electrode, and working electrode, respectively, in a three-electrode cell. Electrochemical impedance spectroscopy (EIS) was carried out using an alternating voltage of 5 mV amplitude over the frequency range of 10^5-0.1 Hz at the open circuit voltage in 0.5 M Na2SO4. For the analysis of transient photocurrent responses, a 300-W Xe lamp (cut-off λ > 420 nm; CEL-HXF300, Beijing Aulight) and Na2SO4 were employed as the light source and electrolyte, respectively. The Mott-Schottky curves were measured in Na2SO4 solution with an amplitude perturbation of 5 mV at a frequency of 1,000 Hz.

Photocatalytic Activity Measurement

The photocatalytic performance of the LSO-CN composites was evaluated by the degradation of RhB. The light irradiation source was the above-mentioned Xe lamp with a filter (λ ≥ 420 nm) laid on top of the reaction vessel. The light source was kept 7 cm away from the top of the reaction vessel, and the reactant solution was maintained at room temperature by a flow of cooling water during the photocatalytic reaction. Before irradiation, the photocatalyst powder (30 mg) and RhB solution (10 mg L−1, 100 mL) were fully stirred in the dark for 1 h to establish the adsorption-desorption equilibrium. Then, the reaction mixture was exposed to the light, and 5 mL samples of the suspension were extracted at given time intervals and separated by centrifugation. The concentration of the RhB solution was determined by UV-vis spectrometry at its maximum absorption peak of 554 nm.
Typically, the trapping experiments of active species were carried out as follows: 30 mg of LSO-CN-85 and dye solution (10 mg/L, 100 mL) were mixed. Then, 10 mL of 2-propanol (IPA), 0.1 mmol of disodium ethylenediaminetetraacetic acid (EDTA), and 0.1 mmol of ascorbic acid were added in sequence to trap hydroxyl radicals (·OH), holes (h+), and superoxide radicals (·O2−), respectively. Additionally, the trapping experiments monitored by fluorescence spectroscopy were carried out as follows: 30 mg of LSO-CN-85 and 8.3 mg of terephthalic acid (TA) were dissolved in 100 mL of NaOH solution (2 mmol/L); the solution was then stirred for 60 min in the dark and irradiated by the 300-W Xe lamp.

Results and Discussion

The crystallographic structure and phase purity of the as-synthesized samples were confirmed by PXRD. As presented in Figure 1, one small peak at 13.1° and one strong peak at 27.4° for pure g-C3N4 were assigned to the (100) and (002) crystal planes, respectively, in good accordance with previous reports (Hou et al., 2013). For Li2SnO3, the XRD pattern matched well with the monoclinic phase (JCPDS No. 00-031-0761). The two characteristic peaks of g-C3N4 gradually decreased in intensity with increasing Li2SnO3 content in the LSO-CN composites, whereas the peak intensity of Li2SnO3 strengthened gradually, reflecting the co-existence of Li2SnO3 and g-C3N4 in these heterojunctions. Further, the compositions of Li2SnO3, g-C3N4, and the LSO-CN heterojunction photocatalysts were confirmed by FT-IR. As shown in Figure 1B, for pure Li2SnO3, characteristic absorption peaks appeared at 519, 1,430, 1,495, and 3,435 cm−1, and the peak located at 519 cm−1 was assigned to the stretching vibrations of the Sn-O-Sn and Sn-O groups. In the FT-IR spectrum of g-C3N4, the peak located at 807 cm−1 was assigned to the breathing vibration mode of the triazine units. The absorption peaks in the range of 1,000-1,800 cm−1 were ascribed to the C=N and aromatic C-N stretching vibration modes, whereas the peaks ranging from 3,000 to 3,500 cm−1 originated from the N-H stretching vibrations. The main characteristic peaks of the LSO-CN heterojunctions were similar to those of pure g-C3N4 because of the relatively weak vibration intensity of Li2SnO3. Notably, however, compared with g-C3N4, the characteristic peaks at 1,241, 1,320, 1,413, and 1,631 cm−1 of sample LSO-CN-85 were shifted to higher wavenumbers, which indicated possible interfacial interactions involving electron transfer in these LSO-CN heterostructures (Figure S1).

SEM measurements were carried out to examine the morphology of the as-synthesized photocatalysts. Evidently, the as-prepared Li2SnO3 photocatalysts (Figure 2a) exhibited irregular bulk morphologies with an average particle length of ∼6 µm. Figure 2b presents the existence of large aggregates of g-C3N4 with a folded thin-sheet morphology. After combining Li2SnO3 and g-C3N4 into a heterojunction, irregular aggregates of Li2SnO3 were observed to adhere to g-C3N4 (Figure 2c), and the SEM-EDS element mapping showed a homogeneous distribution of Sn, O, C, and N throughout the heterojunction (Figure 2d). The intimate contact at the heterojunction between Li2SnO3 and g-C3N4 can be further observed in the representative HAADF-TEM image in Figure 3a. Meanwhile, the interface formed after the addition of Li2SnO3 into the LSO-CN-85 heterojunction can be clearly seen in the HRTEM image (Figure 3b).
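The quoted 2θ positions can be cross-checked against the well-known g-C3N4 d-spacings with Bragg's law; a small sketch (ours), assuming the standard CuKα wavelength of 1.5406 Å for the diffractometer described above:

```python
import math

WAVELENGTH = 1.5406  # Å, CuKα radiation (standard value; assumed here)

def d_spacing(two_theta_deg: float) -> float:
    """Bragg's law, n*lambda = 2*d*sin(theta), solved for d with n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH / (2.0 * math.sin(theta))

for peak in (13.1, 27.4):  # g-C3N4 (100) and (002) reflections
    print(f"2theta = {peak:5.1f} deg -> d = {d_spacing(peak):.3f} Å")
# -> ~6.75 Å (in-plane packing) and ~3.25 Å (interlayer stacking)
```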
Notably, no distinct lattice fringes could be observed in g-C3N4 because of its low crystallinity, whereas distinct lattice fringes with spacings of 0.25 and 0.29 nm were found in Li2SnO3, ascribed to the (131) and (−113) planes, respectively. This kind of heterojunction system would be expected to reduce the recombination probability of photo-induced carriers and improve the photocatalytic activity. Additionally, TEM-EDS elemental mapping was performed to further authenticate the hybridization of the p-type and n-type semiconductors. As presented in Figure 3c, the elements Sn, O, C, and N were distributed uniformly across the assemblies, in good accordance with the results of SEM-EDS. In summary, the above analysis by powder XRD, FT-IR, SEM, and TEM demonstrated that a heterojunction interface was successfully formed between Li2SnO3 and g-C3N4.

The light absorption ability of the as-prepared samples was determined via UV-vis diffuse reflectance spectroscopy to evaluate the optical band gaps. As shown in Figure 4A, pure Li2SnO3 presented a typical absorption edge at ∼340 nm, and the estimated band gap energy Eg was about 3.64 eV (Figure 4B, black trace). For pure g-C3N4 (Figure 4A, red trace), the absorption edge extended to 451 nm, and the corresponding calculated optical band gap Eg was 2.75 eV (Figure 4B, red trace). The obtained Eg values of Li2SnO3 and g-C3N4 were in excellent accordance with previous reports (Guo et al., 2017). Compared with pure g-C3N4, the LSO-CN-85 heterojunction displayed a blue shift of the absorption band, which would be favorable for efficient separation of the photo-induced carriers, thus leading to a higher photocatalytic performance.

The photocatalytic activities of the as-synthesized samples were evaluated by RhB photodegradation under visible light (λ ≥ 420 nm). The measured photocatalytic activities of the LSO-CN composites are presented in Figure 5. As can be seen in Figure 5A, without catalysts the photodegradation of the RhB solution under visible light was almost undetectable. The photodegradation rate in the presence of Li2SnO3 alone was only slightly higher, attributed to its wide intrinsic optical band gap. Meanwhile, pure g-C3N4 achieved a modest photodegradation rate of just 36% within 60 min of irradiation. However, the photocatalytic activity of Li2SnO3/g-C3N4 was remarkably influenced by the Li2SnO3 content, and all of the LSO-CN composites exhibited superior photocatalytic activities for RhB photodegradation compared with the parent compounds g-C3N4 and Li2SnO3. Among these composites, LSO-CN-85 had the optimal photocatalytic activity, with a photocatalytic degradation efficiency of 86% under visible light within 60 min. Figure 5B presents the photocatalytic reaction kinetics of the as-synthesized samples, in which the experimental data can be described by a pseudo-first order model expressed by the following formula (Hailili et al., 2018; Xie et al., 2018):

ln(C0/C) = k t,

where C0 and C are the RhB concentrations in solution at times 0 and t, respectively, and k is the fitted kinetic rate constant. The plots of ln(C0/C) against the irradiation time t are nearly straight lines, which reveals that all the photocatalysts followed pseudo-first order kinetics in the photodegradation of the RhB solution.
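A minimal sketch (ours) of how such pseudo-first-order rate constants are fitted, using illustrative C/C0 values rather than the paper's raw data; the slope of ln(C0/C) versus t recovers k:

```python
import numpy as np

# Irradiation times (min) and illustrative normalized concentrations C/C0
t = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
c_over_c0 = np.array([1.00, 0.74, 0.55, 0.40, 0.30, 0.22, 0.16])

# Pseudo-first-order model: ln(C0/C) = k * t
y = np.log(1.0 / c_over_c0)

# Least-squares slope through the origin gives k (min^-1)
k = np.sum(t * y) / np.sum(t * t)
print(f"fitted k = {k:.4f} min^-1")  # ~0.03 min^-1 for these values,
                                     # comparable to the 0.0302 min^-1
                                     # reported for LSO-CN-85
```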
The kinetic rate constants of Li2SnO3 and g-C3N4 were 0.0006 and 0.0057 min−1, respectively. For the Li2SnO3/g-C3N4 heterojunctions, the corresponding kinetic rate constants of LSO-CN-70, LSO-CN-80, LSO-CN-85, LSO-CN-90, and LSO-CN-95 were fitted as 0.0208, 0.0203, 0.0302, 0.0167, and 0.0108 min−1, respectively. The kinetic rate constant of LSO-CN-85 was the highest, ∼50 and ∼5 times those of pure Li2SnO3 and g-C3N4, respectively. To evaluate the stability of the photocatalytic performance, cycling experiments on the heterojunction LSO-CN-85 were carried out. As indicated in Figure 5D, the photocatalytic activity exhibited no obvious loss after four successive cycles of photodegradation of the RhB solution, and the XRD patterns recorded during the cycling experiments still matched well with pristine LSO-CN-85 (Figure S2), both suggesting that the LSO-CN heterojunction photocatalyst was stable during the photocatalytic reaction process.

To quantify the separation efficiency of the photo-induced carriers, measurements of solid-state photoluminescence, photocurrent responses, and electrochemical impedance spectroscopy were performed. Figure 6A presents the PL spectra of Li2SnO3, g-C3N4, and LSO-CN-85 excited at 325 nm. For Li2SnO3, no obvious emission peak was observed in the range of 400-600 nm, whereas for g-C3N4, strong fluorescence was centered at ∼460 nm. Generally, a weaker emission intensity in a PL spectrum indicates a higher separation efficiency of photo-induced carriers, implying a lower recombination rate. For the heterojunction LSO-CN-85, the PL intensity was considerably lower than that of g-C3N4, indicating strong suppression of the recombination of photo-induced carriers in the heterojunction. Further, the photocurrent responses of the as-prepared samples were determined during four on/off visible light irradiation cycles in Na2SO4 electrolyte. As presented in Figure 6B, g-C3N4 had a markedly low transient photocurrent response because of the high recombination rate of photo-induced carriers, while Li2SnO3 exhibited the lowest photocurrent density, ascribed to its wide band gap. However, for the LSO-CN-85 heterostructure, the photocurrent density increased notably, indicating remarkably enhanced efficiency in the separation and transportation of photo-induced carriers. Next, EIS was performed to explore the conductive properties of the as-prepared samples under visible light (Figure 6C). As is well known, in Nyquist plots a smaller arc radius represents a lower impedance and a higher efficiency of charge transfer. Notably, the LSO-CN-85 heterostructure had a smaller arc radius than the parent compounds Li2SnO3 and g-C3N4, which further testified to the effective separation of photo-induced carriers after forming the heterojunction. Hence, based on the above results, the Li2SnO3/g-C3N4 heterostructure was able to promote the transfer and separation of the photo-induced carriers, leading to the enhancement of photocatalytic activity under visible light.
The flat-band potentials were calculated using the Mott-Schottky equation (Gelderman et al., 2007; Cho et al., 2009; Boltersdorf et al., 2016), which for an n-type semiconductor is commonly written as

1/C² = [2/(e εr ε0 Nd)] (V − Vfb − κB T/e),

with the sign of the slope reversed for a p-type semiconductor, where C is the space-charge capacitance, εr and ε0 are the dielectric constant of the semiconductor and the vacuum permittivity, e is the electronic charge, Nd is the carrier density, and V, Vfb, κB and T are the applied voltage, flat-band potential, Boltzmann constant and temperature, respectively. Here, Vfb was obtained as the x-intercept of the Mott-Schottky plot (1/C² = 0) as a function of the applied potential. The flat-band potential Vfb corresponds to the conduction band edge for an n-type semiconductor and to the valence band edge for a p-type semiconductor. As indicated by the negative and positive slopes of the Mott-Schottky plots in Figures 7A,B, Li2SnO3 is a p-type semiconductor, while g-C3N4 is n-type. The corresponding Vfb of Li2SnO3 and g-C3N4 were determined to be 2.27 and −1.1 V vs. the saturated calomel electrode (SCE), respectively, and these potentials were calibrated to the reversible hydrogen electrode (RHE) scale through the following equation (Ke et al., 2017; Lin et al., 2018; Xu et al., 2019):

V_RHE = V_SCE + 0.0591 × pH + V0_SCE,

where V_RHE is the calibrated potential vs. RHE, V0_SCE equals 0.245 V, and V_SCE is the experimental value; the reported potentials correspond to near-neutral pH. Thus, the Vfb of Li2SnO3 and g-C3N4 were 2.92 and −0.45 V vs. RHE after calibration. Herein, the flat-band potential (identified with the quasi-Fermi level) is taken to lie 0.1 V below the conduction band minimum (CBM) for an n-type semiconductor and 0.1 V above the valence band maximum (VBM) for a p-type semiconductor. Therefore, the VBM of Li2SnO3 and the CBM of g-C3N4 were 3.02 and −0.55 V, respectively. Referring to the optical band gaps estimated from the UV-vis DRS curves, the CBM of Li2SnO3 and the VBM of g-C3N4 were calculated to be −0.62 and 2.20 V, respectively. Trapping experiments on the reactive species were carried out to explore the photocatalytic mechanism of the LSO-CN-85 heterojunction. As shown in Figure 8, a dramatic suppression of the photodegradation efficiency was observed after adding IPA and ascorbic acid, indicating that ·OH and ·O2− were the main participants in the photocatalytic reaction. In contrast, the introduction of EDTA had only a weak influence on the photodegradation rate, demonstrating that h+ played a minor role in degrading the RhB solution. The reactive species were also probed by fluorescence spectroscopy; the increase of fluorescence intensity with prolonged irradiation time was consistent with the results of the trapping experiments (Figure S3). To further investigate the active species ·OH and ·O2− during the photocatalytic process, EPR measurements were performed. As presented in Figure 9, no EPR signal was detected in the dark, whereas the signals of ·OH and ·O2− increased remarkably when the light was turned on. These results further confirm the presence of ·OH and ·O2− during the photocatalytic process. Based on the above analysis, the proposed photocatalytic mechanism of the LSO-CN-85 heterojunction is presented in Figure 10. As revealed by the Mott-Schottky and UV-vis DRS results, the band alignments of p-type Li2SnO3 and n-type g-C3N4 before formation of the interface are as presented in Figure 10a.
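The band-edge bookkeeping above is compact enough to verify numerically. The sketch below (Python) reproduces the reported band positions; the 0.1 V quasi-Fermi offset follows the text, and the SCE-to-RHE conversion assumes pH 7, which is an assumption consistent with, but not stated in, the source.

```python
# SCE -> RHE conversion (pH 7 assumed, per the calibration described above)
V0_SCE, PH = 0.245, 7.0

def sce_to_rhe(v_sce: float) -> float:
    return v_sce + 0.0591 * PH + V0_SCE

vfb_lso = sce_to_rhe(2.27)    # p-type Li2SnO3 ->  ~2.93 V vs. RHE (reported 2.92)
vfb_cn  = sce_to_rhe(-1.10)   # n-type g-C3N4  -> ~-0.44 V vs. RHE (reported -0.45)

# Quasi-Fermi level taken 0.1 V from the nearest band edge (per the text)
vbm_lso = vfb_lso + 0.1       #  ~3.02 V
cbm_cn  = vfb_cn - 0.1        # ~-0.55 V

# Close the gaps with the optical Eg values from UV-vis DRS
cbm_lso = vbm_lso - 3.64      # ~-0.62 V
vbm_cn  = cbm_cn + 2.75       #  ~2.20 V

print(f"Li2SnO3: CBM {cbm_lso:.2f} V, VBM {vbm_lso:.2f} V vs. RHE")
print(f"g-C3N4 : CBM {cbm_cn:.2f} V, VBM {vbm_cn:.2f} V vs. RHE")
```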
First, when p-type Li2SnO3 and n-type g-C3N4 were combined to form the p-n heterostructure, the Fermi level of Li2SnO3 tended to rise and that of g-C3N4 tended to descend until an equilibrium state was reached. As a result, the CB edge of Li2SnO3 became higher than that of g-C3N4, and a built-in electric field was generated in the space-charge region, with Li2SnO3 charged negatively and g-C3N4 charged positively (Figure 10b).

FIGURE 10 | Schematic of charge transfer between p-type Li2SnO3 and n-type g-C3N4 (a) before contact and (b) after contact, forming the p-n heterojunction.

Second, once the Li2SnO3/g-C3N4 heterojunction was irradiated with visible light, photo-induced electrons and holes were generated in g-C3N4; they could not be excited in Li2SnO3 owing to its intrinsically wide band gap. As a result, the inner electric field at the p-n heterojunction interface pushed the holes in the VB of g-C3N4 toward the VB of Li2SnO3. Meanwhile, the generated electrons remained in the conduction band of g-C3N4, where the accumulated electrons reacted with O2 adsorbed on the surface of the heterojunction to form ·O2− and ·OH, which in turn degraded RhB in the aqueous solution. In this way, the photogenerated electrons and holes were efficiently separated and the recombination rate was decreased. In addition, a dye-sensitization effect also operates in this system: electrons photoexcited to the LUMO level of the RhB molecule (Dong et al., 2014) readily transfer to the CB of g-C3N4, increasing the accumulation of electrons there and further enhancing the photodegradation performance.

CONCLUSION

A novel LSO-CN heterojunction photocatalyst, comprising p-type Li2SnO3 and n-type g-C3N4, was successfully prepared by a facile calcination method. The obtained LSO-CN heterojunctions were characterized by PXRD, SEM, TEM, FT-IR, and UV-vis DRS. The optimum photodegradation performance was that of the heterojunction LSO-CN-85, i.e., 86% degradation of RhB after 60 min of visible-light irradiation, with a rate constant ∼5 times that of g-C3N4. The photo-induced active radicals ·O2− and ·OH played the dominant role in the photocatalytic RhB degradation over the LSO-CN-85 heterojunction photocatalyst. Photoelectrochemical measurements were carried out to elucidate the photocatalytic mechanism. The enhanced photocatalytic performance is attributed to the successful formation of a p-n heterojunction between Li2SnO3 and g-C3N4, which greatly promotes the efficient separation of photo-induced carriers.

DATA AVAILABILITY STATEMENT

All datasets generated for this study are included in the article/Supplementary Material.

AUTHOR CONTRIBUTIONS

YL and DY proposed the topic and designed the project. BZ and MW completed the characterization. MW, QY, and XL completed the experiments. YW analyzed the results. DY, BZ, and YL composed the manuscript. All authors participated in the discussion of the results and made important contributions to this work.
6,227.2
2020-02-11T00:00:00.000
[ "Chemistry", "Environmental Science", "Materials Science" ]
Determination of Iron in Some Selected Iron-Containing Tablets Using Redox Titration Iron is an essential trace element, required for haemoglobin formation and for the oxidative processes of living tissues. A comparative determination of iron was carried out by redox titration on five samples of iron-containing capsules, in which the iron is present as ferrous fumarate. The weight of ferrous fumarate, Fe(C4H2O4), as well as of elemental Fe2+, in milligrams per gram was determined for each capsule. The titrant was potassium permanganate (KMnO4), which serves as the oxidizing agent. The results indicated that Chemiron contained 144±4.61 mg/g against a label value of 150 mg/g; Astyfer contained 74.6±5.69 mg/g against a label value of 80 mg/g; Ferrobin Plus contained 246±2.36 mg/g against a label value of 300 mg/g; Emivite Super contained 102±3.64 mg/g against a label value of 100 mg/g; and Maxiron contained 265±1.73 mg/g against a label value of 250 mg/g. Based on these results, it can be concluded that Maxiron has the highest iron content and is therefore the most suitable supplement for adults lacking iron, whereas Astyfer, with the lowest iron content, is the most suitable for infants, who require only a small amount of supplemental iron. Introduction Iron is among the most abundant metals in the Earth's crust and occurs naturally in water and in many foodstuffs; it is also found in the human body. [1] About 70% of the iron in the average adult is found in haemoglobin, the protein of red blood cells that transports oxygen from the lungs to the tissues, while the remaining 30% is present as storage iron in the kidney, liver, spleen and bone marrow. Myoglobin, in muscle cells, accepts, stores, transports and releases oxygen. Iron is an essential element required for various biological functions. [1] The average daily nutritional requirement of iron is 8-11 mg/d for males and 16-27 mg/d for adult females, the higher female requirement reflecting iron losses through regular menstrual blood flow. [2] Iron is essential to virtually all living organisms and is integral to multiple metabolic functions. [3] It is an essential trace element, required for haemoglobin formation and the oxidative processes of living tissues. [4][5] Iron is required for many vital functions such as reproduction, wound healing, cellular growth, oxidative metabolism, and muscle activity. Its main role is to carry oxygen to the tissues where it is needed, although it is also used in DNA synthesis and in protection against microbes and against free radicals, which promote the development of heart disease and damage blood cholesterol levels. [6] Nutritional deficiency of folate is commonly associated with an inadequate diet; pregnant women are at particularly high risk of folate deficiency because pregnancy significantly increases the folate requirement, especially during periods of rapid fetal growth. [7][8] Low iron intake can lead to iron deficiency, which manifests in disorders such as anaemia, cheilosis, dyspnea, irritability, impaired memory and increased susceptibility to infection. [9][10][11] Anaemia is a major public health problem worldwide, with a prevalence of 43% in developing countries and 9% in developed nations. [12][13]
Iron deficiency anaemia in young children is the most prevalent form of micronutrient deficiency worldwide. [14] In such conditions an iron supplement is needed alongside iron-rich foodstuffs such as egg yolk, fish, kidney, wheat, maize, spinach, pheasant and meat. Iron supplementation can be administered as tablets, capsules or injections, and the iron content may vary from one pharmaceutical formulation to another. The pharmacopoeial range of iron per tablet is 48-54 mg, since iron overdose can lead to convulsions, multisystem organ failure, coma and even death. [15] Medicinally, iron is required as a dietary supplement in conditions of iron deficiency associated with secondary anaemia. A satisfactory intake of iron can be ensured by a suitable diet, because foods such as liver, kidney, egg yolk and spinach are rich in iron; nevertheless, it is sometimes necessary to supplement dietary iron with iron tablets. Since iron is the key component of haemoglobin, taking an appropriate amount of an iron supplement helps build red blood cells and reverses anaemia caused by too little iron in the body. [16][17] Most iron tablets bought at pharmacies contain iron(II) salts, typically ferrous fumarate or ferrous sulphate; examples of such tablets, by trade name, are Astyfer, Chemiron, Fesolate, etc. [18] The identification of iron (Fe2+) in iron-containing drugs is governed by the ability of Fe2+ to reduce strong oxidizing agents such as potassium permanganate (KMnO4) to Mn2+, the iron itself being oxidized from Fe2+ to Fe3+; the end point of the reaction is marked by the appearance of a persistent pale pink colour in the solution. [19] The aim of this study is to determine the quantity of iron in selected iron-containing drugs using redox titration, and then to compare the labelled amount of iron with the quantity determined experimentally. Materials The samples were iron-containing tablets obtained from Passmac Gold Pharmacy, No. 5 Kilgori Road, beside Intercity Bank, Sokoto State, Nigeria. The samples were labelled 1, 2, 3, 4, 5. All chemicals used were of analytical grade. Redox Titration The principle governing the identification of iron (Fe2+) in iron-containing drugs is the ability of Fe2+ to reduce strong oxidizing agents such as potassium permanganate (KMnO4) to Mn2+, the iron itself being oxidized from Fe2+ to Fe3+, with the end point marked by a persistent pale pink colour. [19] For standardization, a pipetted volume of ammonium iron(II) sulphate, (NH4)2Fe(SO4)2·6H2O, solution acidified with dilute H2SO4 was placed in a conical flask. The burette was filled with the prepared 0.02 M KMnO4 solution, which was titrated against the pipetted (NH4)2Fe(SO4)2·6H2O solution until a persistent pale pink colour was observed, indicating the end point of the reaction. The corresponding titre value was recorded, and the procedure was repeated twice more, each time noting the titre value. Determination of the Amount of Iron in the Iron-Containing Medications Five (5) iron capsules were weighed and transferred into a 250 cm³ volumetric flask, and distilled water was added up to the mark to dissolve the contents into solution.
A pipette was used to measure 20 cm³ of the solution into a conical flask, which was titrated against the standardized 0.02 M KMnO4 until a persistent pale pink colour was observed, indicating the end point of the reaction. The corresponding titre value was carefully recorded, and the procedure was repeated for each of the samples. The titre values obtained were used to calculate the amount of iron in the samples. Chemical reaction:

MnO4− + 5Fe2+ + 8H+ → Mn2+ + 5Fe3+ + 4H2O

Results and Discussion The results obtained from the analysis are summarized in Table 1. Although the capsules were produced by different manufacturers, samples 1, 2, 3, 4 and 5 have the same iron formulation, namely iron fumarate. It can also be seen from the table that for some of the capsules the determined concentration of Fe is quite comparable with the labelled/standard concentration. The discrepancies may be due to incomplete transfer or dissolution of the sample, which can interfere with observation of the end point. Moreover, inaccuracies of measurement, a faulty weighing apparatus, improper mixing of components or splashing of the samples may also produce such variations. If none of the above occurred, the differences may simply reflect a marketing strategy of the manufacturers. Conclusion Based on the results of the analysis of samples 1-5, it can be concluded that Maxiron (265 mg/g) has the highest amount of iron. Hence Maxiron capsules can be regarded as the most suitable for groups lacking iron (e.g., menstruating and pregnant women and those who have suffered blood loss in accidents). Astyfer, which has the lowest amount of iron, is the best supplement for infants, who require only a very low amount of supplemental iron.
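To make the stoichiometry of the reaction above concrete, the sketch below (Python) works through the implied calculation. The titre volume and capsule mass are hypothetical placeholders, not values from Table 1, which is not reproduced in the text.

```python
# Stoichiometry: MnO4- + 5 Fe2+ + 8 H+ -> Mn2+ + 5 Fe3+ + 4 H2O
# => moles of Fe2+ in the aliquot = 5 * moles of MnO4- delivered.

C_KMNO4 = 0.02          # mol/dm^3, standardized permanganate
TITRE_CM3 = 4.3         # hypothetical titre volume, cm^3
ALIQUOT_CM3 = 20.0      # aliquot taken from the flask, cm^3
FLASK_CM3 = 250.0       # volumetric flask, cm^3
CAPSULE_MASS_G = 2.0    # hypothetical total mass of capsules dissolved, g
M_FE = 55.85            # molar mass of iron, g/mol

n_mno4 = C_KMNO4 * TITRE_CM3 / 1000.0               # mol of MnO4- used
n_fe_aliquot = 5.0 * n_mno4                         # mol Fe2+ in the aliquot
n_fe_total = n_fe_aliquot * FLASK_CM3 / ALIQUOT_CM3 # scale to the whole flask

mg_fe_per_g = n_fe_total * M_FE * 1000.0 / CAPSULE_MASS_G
print(f"Iron content: {mg_fe_per_g:.0f} mg Fe per g of sample")
# ~150 mg/g for these placeholders, the order of the label values above
```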
1,979.6
2019-09-30T00:00:00.000
[ "Chemistry", "Medicine" ]
Charge radii of exotic potassium isotopes challenge nuclear theory and the magic character of N = 32 Nuclear charge radii are sensitive probes of different aspects of the nucleon-nucleon interaction and of the bulk properties of nuclear matter, providing a stringent test and challenge for nuclear theory. Experimental evidence has suggested a new magic neutron number at N = 32 (refs. 1-3) in the calcium region, whereas the unexpectedly large increases in the charge radii [4,5] open new questions about the evolution of nuclear size in neutron-rich systems. By combining the collinear resonance ionization spectroscopy method with β-decay detection, we were able to extend charge radii measurements of potassium (Z = 19) isotopes beyond N = 32, up to the exotic 52K (t1/2 = 110 ms), produced in minute quantities. Here we provide a charge radius measurement of 52K, the first beyond N = 32 in this region; it shows no signature of magic behaviour at N = 32 in potassium. The results are interpreted with two state-of-the-art nuclear theories. Coupled-cluster theory, applied here for the first time to a long sequence of isotopes using newly developed nuclear interactions, reproduces the odd-even variations in the charge radii but not the notable increase beyond N = 28. This rise is well captured by Fayans nuclear density functional theory, which, however, overestimates the odd-even staggering effect in the charge radii. These findings highlight our limited understanding of the nuclear size of neutron-rich systems, and expose pressing problems that are present in some of the best current models of nuclear theory.
The charge radius is a fundamental property of the atomic nucleus. Though it scales globally with the nuclear mass as A^{1/3}, the nuclear charge radius additionally exhibits appreciable isotopic variations that result from complex interactions between protons and neutrons. Indeed, charge radii reflect various nuclear structure phenomena such as halo structures [6,7], shape staggering [8] and shape coexistence [9], pairing correlations [10,11], neutron skins [12] and the occurrence of nuclear magic numbers [5,13,14]. The term 'magic number' refers to the number of protons or neutrons corresponding to completely filled shells, resulting in enhanced stability and a relatively small charge radius. In the nuclear mass region near potassium, the isotopes with neutron number N = 32 have been proposed to be 'magic', based on an observed sudden decrease in binding energy beyond N = 32 [2,3] and the high excitation energy of the first excited state in 52Ca [1]. The nuclear charge radius is also a sensitive indicator of 'magicity': a sudden increase in the charge radii of isotopes is observed after crossing the classical neutron magic numbers N = 28, 50, 82 and 126 [5,13-15]. Therefore, the experimentally observed strong increase in the charge radii between N = 28 and N = 32 in the calcium [4] and potassium [5] chains, and in particular the large radii of 51K and 52Ca (both having 32 neutrons), have attracted significant attention. One aim of the present study is therefore to shed light on several open questions in this region: how does the nuclear size of very neutron-rich nuclei evolve, and is there any evidence for the 'magicity' of N = 32 from nuclear size measurements? We furthermore provide new data to test several newly developed nuclear models, which aim at understanding the evolution of the nuclear charge radii of exotic isotopes with a large neutron-to-proton imbalance. So far, all ab initio nuclear methods, which allow systematically improvable calculations based on realistic Hamiltonians with nucleon-nucleon and three-nucleon potentials, have failed to explain the enhanced nuclear sizes beyond N = 28 in the calcium isotopes [4,16]. Meanwhile, nuclear density functional theory (DFT) using Fayans functionals has been successful in predicting the increase in the charge radii of isotopes in the proton-magic calcium chain [11], as well as the kinks in the proton-magic tin and lead chains [13]. These state-of-the-art theoretical approaches have predominantly been used to study the charge radii of even-Z isotopes; only very recently could the charge radii of odd-Z Cu (Z = 29) isotopes be investigated with the A-body in-medium similarity renormalization group (IM-SRG) method and Fayans-DFT [10]. Laser spectroscopy techniques yield the most accurate and precise measurements of the charge radius for radioactive nuclei. These highly efficient and sensitive experiments at radioactive ion beam facilities have expanded our knowledge of nuclear charge radii across the nuclear landscape [17]. Laser spectroscopy achieves this in a nuclear-model-independent way by measuring the small perturbations of the atomic hyperfine energy levels caused by the electromagnetic properties of the nucleus. Although these hyperfine structure (hfs) effects are as small as one part in a million of the total transition frequency, they can nowadays be measured with remarkable precision and efficiency, even for short-lived, weakly produced, exotic isotopes [10].
To enhance the sensitivity of the high-resolution, optically detected collinear laser spectroscopy method that was previously used to measure the mean-square charge radii of the potassium isotopes [5,18,19], we used the Collinear Resonance Ionization Spectroscopy (CRIS) setup at the ISOLDE facility at CERN. This allows very exotic isotopes to be studied with the same resolution as the optically detected method [10,20]. Relevant details of the ISOLDE radioactive beam facility and the CRIS setup are depicted in Fig. 1 (see Methods). The CRIS method relies on step-wise resonant laser excitation and ionization of atoms. For this experiment, a narrowband laser was used to excite potassium atoms from one of the hfs components of the atomic ground state into a hyperfine level of an excited state. From there, a further excitation and subsequent ionization were induced by broadband high-power laser beams, as discussed in Ref. [21]. The resulting ions were deflected away from the remaining (neutral) particles in the beam and detected with an ion detector. This method allows nearly background-free ion detection, and thus has very high sensitivity, provided that the contaminating beam particles are not ionized (e.g., through collisional ionization or by the high-power non-resonant laser beams). By counting the ions as a function of the laser frequency, the energy differences between the atomic hyperfine transitions were measured. If measurements are performed on more than one isotope, the difference in mean-square charge radius of these isotopes can be obtained from the difference in the hfs centroid frequencies of the two isotopes (the isotope shift) with mass numbers A and A′: δν^{A,A′} = ν^{A′} − ν^{A}. In order to apply the CRIS method to a light element such as potassium, where the optical transition exhibits a lower sensitivity to the nuclear properties, the long-term stability and accurate measurement of the laser frequency had to be investigated. The relevant developments are presented in Ref. [22], where the method was validated by measuring the mean-square charge radii of 38-47K with high precision. For the most exotic isotope there was an additional challenge: a large isobaric contamination at mass A = 52, measured to be 2×10^4 times more intense than the 52K beam of interest. The resulting detected background rate was an order of magnitude higher than that of the resonantly ionized 52K ions. In addition, this background rate fluctuated strongly in time, making a measurement with ion detection impossible (see the hfs spectrum in Fig. 1(a)). Taking advantage of the short half-life of 52K (t1/2 = 110 ms) and the fact that the isobaric contamination is largely due to the stable isotope 52Cr, an alternative detection setup was developed that can distinguish the stable contamination from the radioactive 52K. A thin and a thick scintillator detector were installed behind the CRIS setup (Fig. 1). These detectors were used to count the β-particles emitted in the decay of 52K. With this setup, the fluctuations in the background rate and the signal-to-background ratio were significantly improved, as seen in Fig. 1(b). The obtained hfs spectrum of 52K is presented in Fig. 1(c). Note that the hfs spectra of 47-51K were re-measured with the standard CRIS method, and 50,51K were measured with both ion and β detection, allowing a consistent calculation of the isotope shifts of 47-52K (see Table II in Methods for details).
The changes in the mean-square charge radii δ⟨r²⟩ are calculated from the isotope shift δν^{A,A′} via

δν^{A,A′} = F δ⟨r²⟩^{A,A′} + (K_NMS + K_SMS)(m_{A′} − m_A)/(m_A m_{A′}),

where F, K_NMS and K_SMS are the atomic field shift, normal mass shift and specific mass shift factors, respectively (see Methods for details). Previously published charge radii of potassium isotopes [5,18,19,21,23] were extracted from the isotope shifts using an F-value calculated with a non-relativistic coupled-cluster method and an empirically determined K_SMS value, as reported in Ref. [24]. We employ the recently developed analytic response relativistic coupled-cluster (ARRCC) theory [25], an advanced atomic many-body method (see Methods for details), to calculate both the F and K_SMS constants. The newly calculated atomic field shift factor, F = −107.2(5) MHz/fm², is in good agreement with the literature value F = −110(3) MHz/fm², and is more precise. More importantly, the specific mass shift, a highly correlated atomic parameter, could be calculated from microscopic atomic theory for the first time. The calculated value, K_SMS = −14.0(22) GHz u, is more precise than the empirical value, K_SMS = −15.4(38) GHz u from Ref. [24], and shows good agreement with it.

Table I. Evaluated experimental isotope shifts δν^{39,A}, differences in mean-square charge radii δ⟨r²⟩^{39,A}, and charge radii R_ch of the nuclei 36-52K (columns: A, N, I^π, δν^{39,A} in MHz, δ⟨r²⟩^{39,A} in fm², R_ch in fm). Systematic errors are reported in square brackets. The evaluation procedure is discussed in the Methods.

Table I presents the isotope shifts, changes in mean-square charge radii, and absolute charge radii of 36-52K, extracted using these new atomic constants. The isotope shifts and charge radii have been re-evaluated using all available data, as described in the Methods section. In Fig. 2(a) these charge radii are compared with values obtained using the atomic factors taken from Ref. [24]. Good agreement is obtained, while the systematic error due to the uncertainty of the atomic factors is clearly reduced. A future measurement of the absolute radii of radioactive potassium isotopes by non-optical means (e.g., electron scattering at the SCRIT facility [26]) would help reduce the systematic uncertainties further. Previously, the nuclear spin and parity of 52K were tentatively assigned as I^π = (2−), based on the very weak feeding of the 52Ca ground state [27]. Here, we have analysed our data assuming two alternative spin options as well. Given that the I = 1 and I = 3 assumptions produce unrealistically small and large charge radii (see Fig. 2(a)), our study further supports the I = 2 assignment. The inset in Fig. 2(a) compares the changes in mean-square charge radii (relative to the radius of the isotope with neutron number N = 28) of several isotopic chains in this mass region, up to Z = 26. A remarkable observation is that the charge radii beyond N = 28 follow the same steep increasing trend, irrespective of the number of protons in the nucleus. Beyond N = 32, data are only available for potassium (this work) and manganese (Z = 25) [28]. Both charge radius trends are very similar, with no signature of the characteristic kink that would indicate 'magicity' at N = 32. Previously, ab initio coupled-cluster (CC) calculations based on the NNLOsat interaction [29] were used to describe the nuclear charge radii of calcium isotopes [4,12].
At that time, calculations in this framework could only be performed for spherical isotopes near 'doubly magic' nuclei. While those calculations predicted the absolute charge radii near 40,48Ca very well, they failed to reproduce the observed large charge radii around neutron number N = 32. In Fig. 2(b) we compare the experimental data to CC calculations that start from a symmetry-breaking reference state, which allows us to compute the charge radii of all potassium isotopes (see Methods for details). Results obtained with the NNLOsat interaction significantly overestimate the experimental data near stability, where the experimental uncertainties on the total radii are smallest. This interaction was fitted to experimental binding energies and charge radii of selected nuclei up to mass number A = 25 [29]. Therefore, a new ∆NNLO_GO(450) interaction was developed, which includes pion physics and effects of the ∆(1232) isobar. This interaction is constrained by properties of only light nuclei with mass numbers A ≤ 4 and by nuclear matter at the saturation point (i.e., its saturation energy and density, and its symmetry energy; see Methods for details). By virtue of including saturation properties, CC calculations using the ∆NNLO_GO(450) interaction yield a more accurate description of the potassium charge radii near stability, as shown in Fig. 2(b). However, both the NNLOsat and ∆NNLO_GO(450) interactions still underestimate the steep increase observed beyond N = 28. This is better visualised by plotting the differences in mean-square charge radii relative to 47K (with neutron number N = 28), δ⟨r²⟩^{47,A}, shown in Fig. 2(c). The systematic uncertainties near N = 28 are strongly reduced by this choice of reference. It is also worth noting that, below N = 28, both interactions agree well with the experimental δ⟨r²⟩^{47,A} within the systematic uncertainty. What can be the reason for the underestimation of the charge radii for N > 28? The reference state in the CC calculations is the axial Hartree-Fock state. For nearly spherical nuclei, a general (triaxial) cranked HFB state that breaks time-reversal and gauge symmetries could perhaps provide a better reference. Also, since the angular momentum was not restored, the associated correlations are missing from the CC results. DFT is the method of choice for heavy systems, and nuclei with Z ≈ 20 lie in the region where both methods, DFT and CC, can be successfully applied. Our DFT calculations use the Fayans functional Fy(∆r,HFB) [30] (see Methods for details), which was developed with a focus on charge radii. This method closely reproduces the absolute charge radii of the calcium isotopes, including the steep increase beyond N = 28 [11]. Furthermore, Fy(∆r,HFB) reproduces the absolute radii of the magic tin [13] and cadmium [31] chains as well as of the odd-Z copper isotopes [10]. It should be noted, however, that the potassium isotopes constitute a lighter system, in which polarization effects are expected to be stronger than in the heavier copper isotopes. To account for this, we extended the Fayans-DFT framework to allow for deformed HFB solutions. The isotopic chain contains odd-odd nuclei, and the present DFT and CC treatments do not allow a clean spin selection for these; consequently, the HFB calculations provide an averaged description of the odd-odd isotopes. As seen in Fig. 2(b),
except for the neutron-poor side, the Fy(∆r,HFB) calculations reproduce the average global trend rather well, in particular the steep increase above N = 28. However, this model grossly overestimates the odd-even staggering. The odd-even staggering in the potassium isotopes is significantly reduced with respect to the calcium isotopes. This is very well captured by the CC calculations, which describe the many-body correlations in detail (see Methods); in nuclear DFT, local many-body correlations are treated less precisely. In summary, this work presents the first measurement of nuclear charge radii beyond the proposed 'magic' number N = 32 in the calcium region. This was achieved by combining collinear resonance ionization spectroscopy with β-decay detection, enabling the exotic isotope 52K to be studied despite its short half-life, low production rate and poor beam purity. Taking advantage of recent developments in atomic calculations, precise charge radii of the potassium chain were extracted. No sudden change in the 52K charge radius is observed; thus no signature of 'magicity' at N = 32 is found. The comparison with nuclear theory predictions for the demanding case of the potassium isotopes helps to uncover the strengths of, and open problems in, current state-of-the-art nuclear models. CC calculations based on new nucleon-nucleon potentials derived from chiral effective field theory, optimized to few-body properties as well as to nuclear saturation properties, describe very well the absolute nuclear charge radii of the potassium isotopes near stability, and also the small odd-even staggering. However, the steep rise in charge radii above N = 28 remains underestimated. The similar performance of the ∆NNLO_GO(450) and NNLOsat interactions suggests that the charge radii beyond N = 28 are insensitive to the details of chiral interactions at next-to-next-to-leading order, and that some crucial ingredient is lacking in these many-body methods. The Fayans-DFT model captures the general trend across the measured isotopes and reproduces the absolute radii rather well, including the steep increase up to N = 33; however, it overestimates the odd-even staggering significantly. These findings highlight our limited understanding of the size of neutron-rich nuclei, and will undoubtedly trigger further developments in nuclear theory as demanding nuclear data on charge radii keep uncovering problems with the best current models. METHODS The CRIS technique: The schematic layout of the CRIS setup is presented in Fig. 1.
The mass-selected ions were cooled and bunched in the ISCOOL device, which operated at 100 Hz, the duty cycle of the CRIS experiment. The ion bunches typically had a 6 µs temporal width, corresponding to a spatial length of around 1 m. First, the bunched ion beam was neutralized in the CRIS beamline through collisions with potassium atoms in the charge-exchange cell (CEC). The remaining ions were deflected just after the CEC with an electrostatic deflector plate. In addition, atoms produced in highly excited states through the charge-exchange process were field-ionized and deflected out of the beam. The beam of neutral atoms passed through a differential pumping region and arrived in the 1.2-m interaction region (IR), maintained at a pressure of 10^−10 mbar. Here the atom bunch was collinearly overlapped with three laser pulses, which were used to stepwise excite and ionize the potassium atoms. A detailed study of this particular resonance ionization scheme can be found in Ref. [21]. In the IR, ions can also be produced by non-resonant processes, introducing higher background rates [22]. Normally, the ions created in the IR are guided towards an ion detector (MagneToF); this technique was used for the measurement of 47-51K. The 51K isotope was produced at a rate of less than 2000 particles per second, and its hfs spectrum was measured in less than 2 hours. The study of 52K, however, required a still more selective detection method. This isotope, produced at a rate of about 360 particles per second, is an isobar of the most abundant stable chromium isotope, which is the main contaminating species in the A = 52 beam, with an intensity of 6×10^6 particles per second. In order to avoid detecting the non-resonantly ionized 52Cr, the CRIS setup was equipped with a decay detection station placed behind the end flange of the beamline. The MagneToF detector was removed from the path of the ion beam, and the ionized bunches were implanted into a thin aluminium window of 1 mm thickness, which transmits β particles with energies above 0.6 MeV. The decay station behind this window consisted of a thin and a thick scintillator detector (A and B in Fig. 1) for coincidence detection of β-particles. The dimensions of the detectors were 1 mm × 6 cm × 6 cm and 6 cm × 6 cm × 6 cm. The β counts detected in coincidence were recorded by the data acquisition system (DAQ) together with the laser frequency detuning. The DAQ recorded the number of events in the detectors with a timestamp. The timestamps of the proton bunches impinging on the ISOLDE target were also recorded and used to define the time gates in the data analysis. Laser system: A three-step resonance ionization scheme was used in this experiment.
The laser light for the first excitation step was produced by a continuous-wave (cw) titanium-sapphire (Ti:Sa) laser (M-Squared SolsTiS) pumped by an 18-W laser at 532 nm (Lighthouse Photonics). In order to avoid optical pumping to dark states due to long interaction times, this cw light was "chopped" into 50-ns pulses at a repetition rate of 100 Hz using a Pockels cell [20]. The wavelength of this narrowband laser was tuned to probe the hfs of the 4s ²S1/2 → 4p ²P1/2 transition at 769 nm. Atoms in the excited 4p ²P1/2 state were subsequently excited further, to the 6s ²S1/2 atomic state, by a pulsed dye laser (Spectron PDL SL4000) with a spectral bandwidth of 10 GHz. This dye laser was pumped by a 532-nm Nd:YAG laser (Litron TRLi 250-100) at a 100-Hz repetition rate. The fundamental output of the same Nd:YAG laser (1064 nm) was used for the final non-resonant ionization. The arrival of the ion bunches and laser pulses in the interaction region was synchronized and controlled using a multi-channel pulse generator (Quantum Composers 9520 Series). Charge radii extraction: The perturbation of the atomic states caused by the different nuclear charge distributions of the isotopes leads to small differences in the atomic transition frequency, δν^{A,A′}, between the centroids (ν^A, ν^{A′}) of the hfs of two isotopes with mass numbers A and A′. The isotope shifts of 48-52K were extracted from the hfs spectra of 47-52K, analyzed using the SATLAS [32] Python package, and are displayed in the third column of Table II, along with all available results from the literature. More details on the analysis can be found in Ref. [21]. The changes in the nuclear mean-square charge radii of 36-52K can then be extracted from the isotope shifts using

δ⟨r²⟩^{A,A′} = [δν^{A,A′} − K_MS (m_{A′} − m_A)/(m_A m_{A′})] / F,

where K_MS = K_NMS + K_SMS and F are the atomic mass shift and field shift factors, and m_A, m_{A′} stand for the nuclear masses of isotopes A and A′. The nuclear mass was obtained by subtracting the mass of the electrons from the experimentally measured atomic mass reported in Ref. [33]. The atomic constants, K_MS and F, were calculated using the atomic ARRCC method described below. The root-mean-square charge radii of these isotopes are then

R_ch(A) = sqrt( R²_39 + δ⟨r²⟩^{39,A} ),

where R_39 is the charge radius of 39K taken from Ref. [34].
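A minimal numerical sketch of this extraction, assuming the two relations above, could look as follows (Python). The field and specific mass shift factors are those quoted in the text; the combined K_MS is approximated by K_SMS alone, and the isotope shift, masses and reference radius are hypothetical placeholders.

```python
import math

F = -107.2e6        # field shift factor, Hz per fm^2 (from the text)
K_MS = -14.0e9      # mass shift factor, Hz*u; K_SMS only -- the K_NMS
                    # contribution is omitted here (illustrative assumption)
R39 = 3.437         # fm; placeholder reference radius of 39K (actual value
                    # is taken from Ref. [34])

def delta_r2(dnu_hz: float, m_a: float, m_ap: float) -> float:
    """Change in mean-square charge radius (fm^2) from an isotope shift (Hz)."""
    mass_term = K_MS * (m_ap - m_a) / (m_a * m_ap)
    return (dnu_hz - mass_term) / F

def r_ch(dr2_39A: float) -> float:
    """Absolute charge radius from delta<r^2> relative to 39K."""
    return math.sqrt(R39**2 + dr2_39A)

# Hypothetical example: a -100 MHz isotope shift between 39K and 47K
dr2 = delta_r2(-100e6, 38.964, 46.962)   # nuclear masses in u (placeholders)
print(f"delta<r^2> = {dr2:.3f} fm^2, R_ch = {r_ch(dr2):.3f} fm")
```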
Evaluation of the isotope shifts and charge radii: The isotope shifts of the potassium isotopes have been measured with several different techniques over many years, ranging from magneto-optical trap experiments [23] to laser spectroscopy of thermal [18,35,36] and accelerated beams [5,19,21], relying on photon and ion detection. The results of Refs. [18,19,23,35,36] are referenced to the stable isotope 39K and are presented in the second column of Table II. The isotope shifts of Refs. [5,21] and of this work, shown in the third column of Table II, were extracted with respect to 47K. The systematic error of each experiment is given in curly brackets. Note that the systematic uncertainties in collinear laser spectroscopy experiments are mostly related to the inaccuracy of the acceleration voltage; in this work, the systematic uncertainty was rendered negligible by using the laser scanning approach [21] and a well-calibrated high-precision voltage divider from PTB (with a relative uncertainty of 5 × 10^−5). In order to compile a consistent data set with a reliable evaluation of uncertainties, the following steps were taken: 1) The isotope shifts obtained with respect to 47K were recalculated relative to 39K, in order to link all data to the same reference. For this, the weighted average of all available δν^{39,47} isotope shifts from Refs. [5,21] was used. These re-referenced values are listed in the fifth column of Table II, and their uncertainty is increased by the additional error associated with δν^{39,47} (bold value in column six). The systematic errors are always combined using the linear model [37], σ = σ_sys + σ_stat. 2) Next, the final isotope shift of each potassium isotope, δν^{39,A} (shown in the sixth column of Table II), was calculated as the weighted average x̄ of the available results x_i,

x̄ = Σ_i (x_i/σ_i²) / Σ_i (1/σ_i²),

where σ_i is the total uncertainty of the i-th measurement. The error of the weighted mean was obtained as

σ_x̄ = [Σ_i (1/σ_i²)]^{−1/2},

and possible over- or under-dispersion was accounted for by scaling,

σ = σ_x̄ · sqrt(χ²_red),

where χ²_red is the reduced chi-squared. 3) These evaluated isotope shifts were used to extract the changes in the mean-square charge radii (column 7 of Table II), using the new theoretical values for the atomic field and mass shift factors obtained in this work. The absolute charge radii of all potassium isotopes (last column of Table II) were then calculated relative to the absolute radius of stable 39K [34], using the R_ch relation given above.
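The averaging procedure in step 2 is easy to mirror in code. The sketch below (Python; the three sample measurements are hypothetical) computes the weighted mean and its chi-square-scaled uncertainty:

```python
import numpy as np

def weighted_mean(x: np.ndarray, sigma: np.ndarray):
    """Inverse-variance weighted mean with chi-square-scaled uncertainty."""
    w = 1.0 / sigma**2
    mean = np.sum(w * x) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    # Reduced chi-squared of the individual results about the mean
    chi2_red = np.sum(w * (x - mean) ** 2) / (len(x) - 1)
    # Scale the error for over- (or under-) dispersed data
    return mean, err * np.sqrt(chi2_red)

# Hypothetical isotope-shift measurements (MHz) of one isotope
shifts = np.array([856.2, 857.1, 855.5])
errors = np.array([0.8, 1.2, 0.9])
mean, err = weighted_mean(shifts, errors)
print(f"delta_nu = {mean:.2f} +/- {err:.2f} MHz")
```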
Atomic coupled-cluster calculations: The wave function of an atomic state with a closed-shell core and one valence electron can be expressed using the CC ansatz

|Ψ_v⟩ = e^S |Φ_v⟩,

where |Φ_v⟩ is the mean-field wave function and S is the CC excitation operator. We further divide S = T + S_v to distinguish electron correlations that do not involve the valence electron (T) from those that do (S_v). In the analytic response procedure, the first-order energy of the atomic state is then evaluated with respect to the perturbing field-shift and mass-shift operators (see Ref. [25] for details).

Nuclear coupled-cluster calculations: The nuclear CC computations employed a model space of N_max = 12 major harmonic-oscillator shells with oscillator frequency ħΩ = 16 MeV; the three-body interaction carries an additional cutoff on the allowed three-particle configurations. This model space is sufficient to converge the radii of all the potassium isotopes considered in this work to within ∼1%. In this work we calculate the expectation value of the squared intrinsic point-proton radius, i.e., the operator Ô = (1/Z) Σ_{i<j} (r_i − r_j)² δ_{t_z,−1}. The coupled-cluster expectation value of Ô is given by

⟨Ô⟩ = ⟨Φ| (1 + Λ) e^{−T} Ô e^{T} |Φ⟩,

where ⟨Φ|(1 + Λ)e^{−T} is the left ground state and Λ is a linear combination of one-hole-one-particle and two-hole-two-particle de-excitation operators; see, e.g., Ref. [39] for details. To obtain the charge radii we add finite proton/neutron size, relativistic Darwin-Foldy, and spin-orbit corrections to the point-proton radii. We note that the spin-orbit corrections are computed consistently as an expectation value within the CC approach, see Ref. [12]. The chiral interactions we used are NNLOsat [29], which has been constrained by nucleon-nucleon properties and by the binding energies and charge radii of nuclei up to oxygen; it includes terms up to next-to-next-to-leading order in the Weinberg power counting. The newly constructed ∆NNLO_GO(450) interaction includes ∆-isobar degrees of freedom, has a cutoff of 450 MeV, and is likewise limited to next-to-next-to-leading-order contributions. Its construction starts from the interaction of Ref. [40], and its low-energy constants are constrained by the saturation density, energy and symmetry energy of nuclear matter, by pion-nucleon scattering [41], by nucleon-nucleon scattering, and by the A ≤ 4 nuclei. A second interaction, ∆NNLO_GO(394), was constructed similarly but with a cutoff of 394 MeV. To examine the sensitivities of the changes in the mean-square charge radii, Fig. 3a compares results from the newly developed interactions with NNLOsat. While there are differences below N = 28, all interactions yield essentially identical results beyond N = 28, suggesting that the charge radii beyond N = 28 are insensitive to details of the chiral interactions at next-to-next-to-leading order. To shed more light on this finding, Fig. 3b compares results from deformed mean-field (MF) calculations with CCSD computations for two different interactions. The 1.8/2.0(EM) interaction [42] contains contributions at next-to-next-to-next-to-leading order and thereby differs from the interactions used in this work. First, for the ∆NNLO_GO(394) interaction, MF and CC yield essentially the same results for N > 28, even though CC includes many more wave-function correlations. Second, for the 1.8/2.0(EM) interaction, the MF and CC results differ significantly through the strong odd-even staggering, which is a correlation effect. None of the interactions explains the dramatic increase of the charge radii beyond N = 28.
DFT calculations: For the DFT part of this work, we use the non-relativistic Fayans functional in the form of Ref. [43]. This functional is distinguished from other commonly used nuclear DFT functionals by additional gradient terms in two places, namely in the pairing functional and in the surface energy. The gradient terms allow, among other features, a better reproduction of the isotopic trends of charge radii [44]. This motivated a refit of the Fayans functional to a broad basis of nuclear ground-state data with additional information on the changes in mean-square charge radii in the calcium chain [4,30]. We use here Fy(∆r,HFB) from Refs. [4,30], which employed the latest data on calcium radii. Only with rather strong gradient terms is one able to reproduce the trends of the radii in calcium at all, in particular their pronounced odd-even staggering, although with a slight tendency to exaggerate the staggering. It was found later that Fy(∆r,HFB) performs very well in describing the trends of the radii in the cadmium and tin isotopes [13,31]. Here we test it again for the potassium chain, next to calcium. For all practical details of our Fayans-DFT calculations, we refer the reader to Ref. [30]. All of the above-mentioned calculations with Fy(∆r,HFB) were done in spherical representation. That can be questioned for nuclei far from shell closures, the more so for odd nuclei, where the odd nucleon induces a certain quadrupole moment. The new feature of the present calculations is that we use a code in axial representation, which allows for deformation if the system favours it. Fig. 2 shows the results from the deformed code [45], adapted for the Fayans functional. In Fig. 4 we illustrate the effect of deformation by comparison with spherical calculations. The effects are small, but they show that there is no uncertainty due to symmetry restrictions. The lack of angular-momentum projection in DFT calculations induces a systematic error on the charge radii; we have estimated it from the angular-momentum spread to be below 0.005 fm on average, and thus of no consequence for the predicted trends.

Figure 1. Left: The CRIS setup at ISOLDE, CERN. The nuclei of interest were produced via various nuclear reactions after a 1.4-GeV proton beam impinged onto a UCx target. These diffused out of the target into an ion source and underwent surface ionization. The ion beam was then mass-separated using the High Resolution Separator (HRS), and subsequently cooled and bunched in a linear Paul trap (ISCOOL). The bunched ion beam was guided towards the CRIS beamline, where the ions were first neutralized in a charge-exchange cell filled with potassium vapour. The neutral atoms were then delivered to the interaction region, where the bunched atom beam was collinearly overlapped with the laser pulses to achieve resonance laser ionization. The ionized radioactive potassium ions could then be detected using either a MagneToF ion detector, shown in (a), or plastic scintillator detectors (b). (c) The hfs of 52K measured with the scintillator detectors. Panels (a), (b) and (c) show the detected events as a function of the laser frequency detuning.
Figure 2. (a) Changes in the mean-square charge radii of potassium isotopes obtained using the newly calculated atomic field shift (F) and specific mass shift (K_SMS) factors, as well as the values from Ref. [24]. The red and gray bands indicate the uncertainties originating from these atomic constants, respectively. The inset shows the changes in the mean-square charge radii of neighbouring elements; the agreement of the data for the different isotopic chains above N = 28 is striking. (b) Comparison of the measured charge radii of potassium isotopes with nuclear coupled-cluster calculations using two interactions (NNLOsat and ∆NNLO_GO(450)) derived from chiral effective field theory, and with Fayans-DFT calculations using the Fy(∆r,HFB) energy density functional. (c) Changes in the mean-square charge radii of potassium relative to 47K, for which the systematic uncertainties near N = 28 largely cancel.

Figure 3. Changes in the mean-square charge radii of potassium isotopes calculated with the newly developed interactions and with NNLOsat, compared with (a) experimental data and (b) deformed mean-field (MF) calculations and CCSD computations for two different interactions.

Figure 4. Charge radii along the potassium isotopic chain from spherical as well as deformed calculations with the functional Fy(∆r,HFB), compared with experimental data.
7,687.4
2020-12-03T00:00:00.000
[ "Physics" ]
Imperfection of the Contractual Relations in the Regional Agrarian Sector Daria Sergeevna Benz The article examines the problems of imperfection and inefficiency of the contractual relations in the agrarian sector of Russian regions, which undermine the foundations of regional food security. In the conditions of sanctions and new economic and political threats, the development of the agrarian complex is the highest-priority economic task facing Russia. The level of food security is ultimately determined by the behaviour of the actors in the agrarian sector: farmers, manufacturers, merchandisers, consumers and the government. In this article we introduce a technique for evaluating the level of regional food security. As the two key conditions of food security we take, first, the price affordability of key food products and, second, the sufficiency of their consumption. Using the developed technique, we analysed the state of food security in Chelyabinsk Region (Russia). In the final part of the article we develop a number of models showing the roots of the imperfection of the contractual relations in the agrarian sector. Introduction 1. Contractual relations are the dominant form of economic relations in modern business, and the problems of the Russian agrarian sector are especially pressing today. Recent months have been marked by an abrupt growth of political and economic risks. The sanctions imposed on Russian entities by the European Union and the USA have raised the cost of borrowed capital. Under these sanctions Russia has to maintain economic and social stability; in order to withstand the aggressive pressure of the international environment, the country, its regions and its municipalities have to achieve sustainable development of all economic sectors. The agrarian sector is strategically significant for the Russian economy: according to experts, approximately 8.5% of gross domestic product is produced in this sector, 3.4% of fixed assets are concentrated in it, and more than 11% of all employees work in it (Skrynnik, 2010). There are three objectives of our research: to develop a technique for evaluating the level of regional food security; to analyse and assess the condition of the agrarian sector in Chelyabinsk Region; and to construct a number of models of the efficiency of contractual relations. As the subjects of the contractual relations we consider the following parties: farmers, manufacturers, merchandisers, consumers and the government (Figure 1). By contractual relations in the agrarian sector we mean a special kind of economic relations between at least two subjects concerning the purchase and sale of foodstuffs on the basis of a developed system of institutions. The essential characteristics of the contractual relations are the following: there is an object whose property rights are transferred; there are at least two subjects; and there is a mechanism of enforcement of the agreement. We define the efficiency of the contractual relations in the agrarian chain as the ability of these relations to satisfy the economic interests of their subjects. These relations are based on the distribution of the ultimate agricultural product's price. In other words, maximizing the efficiency of the contractual relations implies the determination of the optimum price, that is, the price that harmonizes the interaction between the four subjects mentioned.
Considering the problem of the efficiency of contractual relations, we use the method of dichotomy and divide all contractual relations into perfect and imperfect. The contractual relations in the regional agrarian sector are perfect if the maximum level of regional food security is reached. As the criteria of the regional food security level we accept the fulfilment of the following two conditions: sufficiency of consumption of key foodstuffs, and price affordability. Thus, if both conditions are met, we can call the contractual relations in the regional agrarian sector perfect, or effective. If at least one of the two conditions is not satisfied, the efficiency of the contractual relations, along with the state of food security, is insufficient. And, finally, if neither condition is satisfied, we can speak of an extremely negative level of regional food security. Research Methodology 2. An assessment of the quality of contractual relations must, first of all, make it possible to assess the state of food security. For the assessment of the state of regional food security we use two coefficients: K1, the coefficient of sufficiency of consumption of key foodstuffs, and K2, the coefficient of price affordability. The first coefficient (K1) is the sum of the ratios of the actual consumption of each product (Qi_a) to the standard consumption of the corresponding product (Qi_s):

K1 = Σ_{i=1..n} Qi_a / Qi_s, (1)

where Qi_a is the actual consumption of the i-th product, Qi_s is the standard consumption of the i-th product, and n is the number of products in the consumer's food basket. Standards of consumption are established with regard to regional climatic features and are calculated for the able-bodied population. The standards of consumption of key food products are defined by the Concept of food security of the Ural Federal District for the period till 2020 [2], developed within the Doctrine of food security of the Russian Federation for the period till 2020 [3]. The recommended standards of consumption are presented in Table 1. The calculation of K1 rests on two simplifications: 1) the consumer's food basket consists of the six key products listed in Table 1; 2) the actual consumption cannot exceed the recommended standard, since otherwise the buyer consumes an excess quantity of the product, which does not influence food security in any way. Based on the first simplification, we transform Formula (1) as follows:

K1 = Σ_{i=1..6} Qi_a / Qi_s, (2)

The numeration of products in Formula (2) corresponds to the numbering of products in Table 1. Relying on the second simplification (each ratio in the sum is capped at 1), we conclude that the coefficient of sufficiency of consumption K1 varies from 0 to 6.
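As a small worked illustration of Formula (2) with the cap of simplification 2, consider the sketch below (Python); the standards and consumption figures are hypothetical placeholders, since Tables 1 and 3 are not reproduced in the text.

```python
# Hypothetical standard and actual annual per-capita consumption for the
# six key products of Table 1 (units as in the source tables).
standard = [100.0, 120.0, 300.0, 75.0, 260.0, 15.0]   # Q_i^s (placeholders)
actual   = [ 90.0, 100.0, 250.0, 80.0, 200.0, 12.0]   # Q_i^a (placeholders)

# Formula (2) with each ratio capped at 1 (simplification 2):
k1 = sum(min(a / s, 1.0) for a, s in zip(actual, standard))
print(f"K1 = {k1:.2f} out of a maximum of 6")   # ~5.14 for these placeholders
```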
For the calculation of the coefficient of price affordability K2 we use the following formula:

K2 = (Σ_{i=1..n} S_i) / AI, (3)

where S_i is the consumer expenditure on the standard quantity of the i-th product, AI is the weighted-average income per capita, and n is the number of products in the consumer's food basket. The calculation of coefficient K2 is based on the following simplifications: 1) in the numerator we consider not the actual expenditure, but the expenditure on the standard quantity (otherwise the calculation of this coefficient loses its sense); 2) the consumer's food basket consists of the six products mentioned in Table 1. Based on these simplifications, we transform Formula (3) as follows:

K2 = (Σ_{i=1..6} S_i) / AI, (4)

where S_i is the consumer expenditure on the standard quantity of the i-th product, according to the numbering of products in Table 1. The coefficient K2 can theoretically vary from zero to infinity; in practice, however, it takes finite values. If the coefficient K2 has a value below 10%, we consider the regional food security situation perfect. We take the range from 10% to 20% as an acceptable level, and the interval from 20% to 30% as a satisfactory condition of food security. Thus, we take the value of 30% as the critical level of coefficient K2: if the share of consumer expenditure on food products exceeds 30%, the condition of regional food security is unsatisfactory. The matrix presented in Table 2 allows us to draw a final conclusion on the condition of regional food security. Table 2. Matrix for evaluating the condition of regional food security. The Source: Developed by the authors. Main Body. Results. 3. In this part of the article we evaluate the condition of food security in Chelyabinsk Region. The quantities of consumption of the main food products per capita are specified in Table 3. The Source: According to sample surveys. To calculate the coefficient K1 (Formula 2) we use the data from Table 3; the results are shown in Table 4. The Source: Evaluated by the authors. Table 4 shows that the coefficient K1 has no high volatility over time: it remains constantly in the range from 4 to 5 units, which points to a satisfactory condition of food security. However, considering the greatest possible value of the coefficient (6 units), we may conclude that an excellent condition of regional food security is still far off. For an end-to-end assessment we should also calculate the coefficient K2. For the calculation of this coefficient according to Formula (4) we need the weighted-average income per capita (the denominator of the coefficient). Here a question of sampling arises: if we calculate the coefficient for employed persons, we take the salary into account, assuming that the population has no income other than the salary, or that other incomes are extremely insignificant. If instead we take the whole population of Chelyabinsk Region as the sample, we should consider unemployed residents along with employees. At first we follow the first way. Data for January 2015 on the average number of workers and their average earnings by type of economic activity are shown in Table 5. To calculate the numerator of coefficient K2 (Formula 4) we should estimate the price level for each food product. The average prices of the six key food products are shown in Table 6. The Source: According to the website http://chelstat.gks.ru. Let us estimate the annual cost of the consumer basket of the six specified products, based on the standard quantities of consumption.
In terms of monthly expenses, this annual sum corresponds to 4,366.41 rubles per month. Taking the average monthly earnings of employed persons (Table 5) as the denominator yields the first estimate of K2. If in the denominator we consider instead the average income per capita of all inhabitants of Chelyabinsk Region, 27,045.77 rubles as calculated from the data in Table 5, the coefficient equals K2 = 4,366.41 / 27,045.77 ≈ 16.1%. For summing up we use the matrix represented in Table 2. The results of the calculation of K2 over employed persons (1) and over the overall population of Chelyabinsk Region (2) are shown in Table 7. Thus, we can conclude that the condition of food security in Chelyabinsk Region is acceptable or satisfactory.

Today the problem of contractual relations' efficiency in the agrarian chain (Figure 2) reduces to the analysis of the efficiency of interaction among four subjects: farmers (1), manufacturers (2), merchandisers (3) and the ultimate consumer (4).

Figure 2. Contract interactions between subjects of an agrarian chain.

Let us assume that the objective function of the first three subjects is maximization of the selling price (P1, P2, P3), while the objective function of the consumer is maximization of the utility (U3) derived from the product. Figure 2 also shows the factors of contractual relations' efficiency: for subjects 1-3 the selling prices (P1, P2, P3) are factors of efficiency growth, while their expenses (S1, S2, S3) are factors of efficiency decrease. For the consumer the situation is reversed: the higher the price of the product (P3), the lower the contractual efficiency for the consumer, as his utility (U3) decreases.

Suppose that each of the three subjects (the farmer, the manufacturer, the merchandiser) has his own function of contractual relations' efficiency E, depending on the selling price P:

Ei = Ei(Pi),

where Ei is the contractual relations' efficiency for the i-th subject and Pi is the selling price of the i-th subject, i = 1, 2, 3, 4. We first construct the efficiency functions for the first three subjects of the agrarian chain. Taking into account the positive dependence of efficiency on price, the general form of the function is linear:

Ei = ki · Pi,

where ki is the price elasticity coefficient of contractual relations' efficiency for the i-th subject. For the three subjects examined, this elasticity depends on the level of the cost price Si: ki = ki(Si), decreasing as Si grows. Graphically, the dependence of the contractual relations' efficiency of subjects 1-3 on the selling price is shown in Figure 3. The gradient angles demonstrate the three subjects' price elasticity of contractual relations' efficiency; the factor of costs is the key determinant of this elasticity. Summing the curves E1, E2 and E3 vertically, total efficiency is reflected by the curve E1-3. The ultimate consumer also has his own curve of contractual relations' efficiency (E4). The peculiarity of this curve is its negative slope (Figure 4): the lower the price of consumption, the higher the consumer's satisfaction, and this idea is reflected by the curve E4. The key factor of the price elasticity of contractual relations' efficiency for the consumer is his utility: product utility reflects the importance of the good for the consumer, and this elasticity is derived from the elasticity of demand for food products.
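A minimal numerical rendering of this curve model follows, anticipating the total curve constructed in the next paragraph. All slopes and the consumer intercept are hypothetical placeholders; the paper calibrates nothing here.

```python
# Illustrative sketch of the linear efficiency curves (Figures 3-5).
# Hypothetical coefficients; a higher cost price S_i is assumed to
# lower the elasticity k_i, per the text.

import numpy as np

p = np.linspace(0.0, 10.0, 101)     # selling-price axis

k1, k2, k3 = 0.9, 0.6, 0.4          # hypothetical elasticities of subjects 1-3
e1, e2, e3 = k1 * p, k2 * p, k3 * p

e13 = e1 + e2 + e3                  # vertical summation: curve E1-3
e4 = 8.0 - 1.2 * p                  # consumer curve with negative slope
te = e13 + e4                       # total curve TE

print(te[0], te[-1])                # 8.0 at P = 0, 15.0 at P = 10
```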
Let us determine the total curve of contractual relations' efficiency for all the subjects examined (TE). The total curve is obtained by adding the two curves E1-3 and E4; Figure 5 shows the resulting curve TE. If some factor changes the elasticity of contractual relations' efficiency for subjects 1-3, it displaces the curve E1-3 and with it the total curve (Figure 6). Likewise, if the elasticity of contractual relations' efficiency for the consumer changes, this factor displaces the curve E4; Figure 7 shows the new total curve TE2.

4. Discussions

The problem of contractual relations is a relatively new one in economic science: the theory of contracts gained currency within the new institutional school. Scientists, however, paid attention to the problem of food security long ago. As early as the physiocrats, led by F. Quesnay, it was assumed that the main source of wealth is agriculture; work in agriculture they called productive, unlike any other, "sterile", work. The surplus product is created directly by the work of farmers and provides welfare for the whole society; thus, the central sector of any economy is the agrarian one (Quesnay, 1972). Malthus, T. was the first economist to point at the danger of overpopulation of the Earth, taking into account the law of the diminishing productiveness of soil (Malthus, 1820); he pointed at the necessity of marriage and birth rate regulation.

As for the present time, the end of the XX century was marked by unprecedented growth of the planet's population, especially in developing countries, which caused numerous energy crises and environmental problems.

Conway, G. and Barber, E. define food security as the guaranteed access of all inhabitants to food products sufficient for a healthy and active life at any time. According to some researchers, the major factor limiting this access is the inaccessibility of energy, which especially concerns developing countries. The authors mentioned above see the solution in a state policy directed to support of the agro-industrial complex, the fight against poverty, and birth rate regulation (Conway & Barber, 1990).

According to Gorbacheva, A. and Kupchenko, A., food security can be characterized as the ability of a country to produce enough food; the food has to be of satisfactory quality and safe for the life and health of the population, and the government has to pay particular attention to the low-income population (Gorbacheva & Kupchenko, 2013). These authors mark out the following components of food security: 1) physical availability of food; 2) economic availability of food; 3) safety and quality of food.

When Mikhaylushkin, P. and Barannikov, A. define food security, they point at the negative influence of external factors: even in conditions of political and economic risks the government has to ensure food security (Mikhaylushkin & Barannikov, 2013).

As for Russian scientists, nobody has researched contractual relations' efficiency in the agrarian sector specifically. The new institutional theory of contracts became widespread in the 1970s. For example, Macneil, I.
was the first to give a classification of contracts. He defined contracts as "mini-societies with an extensive range of norms not limited to those directly connected with the act of exchange" (Campbell, 2004). Macneil's classification divides contractual relations into classical, neoclassical and relational. Discreteness, comprehensiveness of the agreement and transparency of all future circumstances are the characteristics of the classical contract; in the classical contract the formal clauses come first, and it is, as a rule, short-term. Long-term contracts are characterized by uncertainty, and such contracts are not transparent; Macneil, I. called them neoclassical. Neoclassical contracts are connected with the incompleteness of any agreement.

Williamson, O. constructed types of governance of contractual relations and considered questions of opportunism as well as the economy of transaction costs (Williamson, 1979).

The special approach of Hodgson, G. to contractual relations is the following: he connects them with the transfer of property rights, and not just with a bilateral transfer of goods, services or money between agents (Hodgson, 1988).

Jensen, M. and Meckling, W. focused attention on such factors of contractual relations' efficiency as information asymmetry and behavioral opportunism (Jensen & Meckling, 1976). These factors also result in the incompleteness of contracts described by Hart, O. (Hart & Moore, 1988).

As for Russian scientists, Auzan, A., Bendukidze, K., Benz, D., Kozlova, E., Kudryashova, E., Kuzminov, Y., Oleynik, A., Popov, E., Silova, E., Tambovtsev, V., Shastitko, A. and Yudkevich, M. research problems of contractual relations' efficiency. For example, Benz, D., Kozlova, E. and Silova, E. show the interconnection between opportunism, the quality of corporate institutions and the imperfection of contractual relations (Benz, Kozlova & Silova, 2014). In this article we touch upon the question of contractual relations' imperfection in the agrarian sector; it represents our attempt to unite the new institutional approach to contractual relations' imperfection with the problem of regional food security.

5.

Today, one of the most pressing problems of the Russian economy is the problem of food security. Among the key problems we can name the following: the growth rate of food prices outpacing the growth rate of the population's income, which increases the share of consumers' expenses on food in their total income; the poor quality of some food products, which influences the health and life expectancy of the population; and the low efficiency of some agricultural enterprises, including insufficient financial stability, low labor productivity, lack of investment and technological lag. The key reason for the imperfection of contractual relations in the Russian agrarian sector is low competition among landowners. In view of the import embargo on food from abroad, Russian agro-industrial companies and merchandisers today face an essential decrease in competition; the resulting absence of stimuli to expand output and raise quality reduces overall economic efficiency. The lack of access of companies to cheap credit for the expansion of production also plays a negative role for the whole agrarian sector.
Summing up the results, we can point at the key problem of Russian agrarian policy: the lack of agreement between the participants of the agricultural market. Notably, consumers have to buy extremely expensive products, while the farmer, being the first subject in the agrarian chain, sometimes sells his output even at a loss. Hence the inefficient redistribution that leads to low contractual relations' efficiency. The underdevelopment of institutional forms of interaction between the subjects mentioned dictates the necessity of government intervention. The graphical models of contractual relations' efficiency constructed here will help to make detailed research in the field of developing the mechanism of government support of agriculture.

6. Thanks

This article was prepared within the President's Grant MK-6017.2015.6. We express gratitude to the President of Russia V. Putin for his trust and for the allocation of the Grant for the research of the problem of inefficiency of contractual relations in the Russian agrarian sector. We also express special gratitude to the Director of the Institute of Economics of Industries, Business and Administration (Chelyabinsk State University), Doctor of Economics, Professor V. Barkhatov, for his help in obtaining the President's Grant and his constant help in our scientific research.

Figure 1. The subjects of the contractual relations in the agrarian sector.
Figure 4. The curve of contractual relations' efficiency for the ultimate consumer (E4); as before, the gradient angles show the elasticity of contractual relations' efficiency for the ultimate consumer.
Figure 5. The total curve (TE) of contractual relations' efficiency for all the subjects of contractual relations.
Figure 6. Shift of the total curve of contractual relations' efficiency (TE) as a result of a factor's impact on the elasticity of the curve E1-3.
Figure 7. Shift of the total curve of contractual relations' efficiency (TE) as a result of a factor's impact on the elasticity of the curve E4.
Table 1. Quantities of standard consumption of key food products in the Ural Federal District.
Table 3. The quantities of consumption of the main food products, kg per capita.
Table 4. The coefficient of sufficiency of key foodstuff consumption (K1).
Table 5. Data on the average number of workers and their average earnings by type of economic activity. Note: after calculating the weight-average income per capita based on the data in Table 5, we received the value of 27,045.77 rubles.
Table 7. The results of the assessment of the food security condition in Chelyabinsk Region.
4,551.4
2015-09-04T00:00:00.000
[ "Agricultural and Food Sciences", "Economics", "Political Science" ]
Sepia, a taxonomy oriented read classifier in Rust

Summary

Here we present Sepia, a fast and accurate read classifier. It is implemented in Rust, has the ability to switch between different taxonomies, can detect inconsistencies in taxonomies, and can estimate similarities between organisms in a query dataset and reference sequences in the index.

Statement of need

Bioinformatics tools to infer the taxonomic composition of sets of biological sequences are quintessential for taxonomic profiling and contamination checking. A variety of tools have been developed to accomplish taxonomic profiling, and they can be roughly classified into two categories: (i) read alignment based tools such as MIDAS (Nayfach et al., 2016) and MetaPhlAn (Segata et al., 2012), which map reads against a database of reference sequences or taxon-specific marker sequences, and (ii) read classification based tools, such as Kraken (Wood & Salzberg, 2014), Kraken 2 (Wood et al., 2019) and Centrifuge (Kim et al., 2016). Many read classifiers use a least common ancestor (LCA) approach, in which reads in common among sibling taxa get assigned to the rank and taxon those siblings share; e.g., a k-mer in common with both Escherichia and Salmonella, both members of the family Enterobacteriaceae, would instead be associated in the database with the higher order taxon Enterobacteriaceae. While all these tools rely heavily on taxonomies, changing the taxonomies (e.g., correcting wrong placements of accessions or adopting a novel taxonomy) is not easily done. In response, we wrote Sepia, a taxonomic read classifier that addresses rapidly changing developments in taxonomy (e.g., the genome-based GTDB taxonomy (Parks et al., 2018)) and in algorithm development. There are three areas where Sepia directly addresses issues needing improvement: (i) taxonomy, (ii) classification accuracy, and (iii) the ability to perform fast batch classification for multiple datasets.

Taxonomy

Taxonomic read classifiers that use an LCA approach work on the assumption that biological taxonomies reflect evolutionary relationships, thus making it possible to use taxonomies as a predictive framework in read classification. Inclusion of artificial taxa that are not supported by a genome-based phylogeny (e.g., garbage bin taxa or taxa with an unknown placement, 'incertae sedis') or artifactual errors (e.g., a genus placed erroneously in the wrong family in some but not all of the accessions in the taxonomy) has a deleterious effect on the accuracy of LCA-based classification algorithms, potentially leading to some taxa not being classified at all or consistently being classified as the wrong taxon. To address these issues Sepia uses human-readable taxonomy strings as input. While building the index, putative ambiguities or inconsistencies in the taxonomy are flagged and logged to a file for the user to address.
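To make the LCA idea concrete, here is a minimal sketch of deriving an LCA from human-readable taxonomy strings. This is an illustration, not Sepia's actual Rust code, and the semicolon-delimited lineage format is an assumption for the example.

```python
# Minimal illustration of LCA resolution over human-readable taxonomy
# strings (assumed semicolon-delimited lineages; not Sepia's implementation).

def lca(lineages):
    """Return the deepest taxonomy prefix shared by all lineages."""
    split = [l.split(";") for l in lineages]
    common = []
    for ranks in zip(*split):          # walk ranks from root toward leaf
        if len(set(ranks)) == 1:       # all lineages agree at this rank
            common.append(ranks[0])
        else:
            break                      # first disagreement ends the LCA
    return ";".join(common)

e_coli = "Bacteria;Proteobacteria;Gammaproteobacteria;Enterobacterales;Enterobacteriaceae;Escherichia"
s_ent  = "Bacteria;Proteobacteria;Gammaproteobacteria;Enterobacterales;Enterobacteriaceae;Salmonella"

# A k-mer shared by both genera is assigned to their common family:
print(lca([e_coli, s_ent]))            # ends in ...;Enterobacteriaceae
```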
Sequence similarity and classification accuracy

Read classifiers tend to mis- or overclassify reads, especially in situations where many reads represent a taxon that is not present in the indexed reference dataset. To address this issue Sepia records the per-read k-mer ratio, which is the ratio of k-mers supporting the proposed classification to the total number of k-mers used for the classification. The average k-mer ratio correlates strongly with the Average Nucleotide Identity (ANI) between reference strains in an index and strains in a query dataset (Figure 1), and low k-mer ratios can be used to remove over- or misclassified reads after classification.

Batch classification

While the process of taxonomic read classification is usually fast, the time needed to load a large index into RAM (e.g., the 47,894 accessions of the GTDB rs202 reference sequences require a minimum of 98 GB of RAM) can take longer than the actual classification process. When a read classifier is used repeatedly on a batch of sequence datasets, this can make its use time-prohibitive. To overcome this expensive hurdle, a batch classify function is included in Sepia: a user-generated file listing the sequence datasets of multiple samples is used as input, the index is read into RAM only once, and the sequence data of the individual samples are subsequently classified.

Implementation

Sepia is written in the Rust programming language. The k-mer or minimizer index is an implementation of the compact hash table described by Wood et al. (2019). Briefly, this hash table consists of a fixed array of 32-bit hash cells to store key-value pairs with a generic hash function (i.e., murmurHash3 (Appleby, 2011) in Kraken 2 and Sepia) and a load factor of 70% for collision resolution, thus needing considerably more space than the key-value pairs alone. Alternatively, the user can choose an experimental index with a perfect hash function (Limasset et al., 2017) as implemented in Rust (https://github.com/10XGenomics/rust-boomphf). A perfect hash function maps a set of actual key values to the table without any collisions, thereby potentially decreasing the space requirement of the hash table compared to the compact hash table of Kraken 2.
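The per-read k-mer ratio described above can be sketched as follows. This is a schematic only: a simple majority vote stands in for the taxonomy-aware scoring used by Kraken-style classifiers, the toy k and index are hypothetical, and Sepia itself works on minimizers in a compact hash table.

```python
# Schematic per-read classification with a k-mer ratio (illustrative;
# not Sepia's actual implementation).

from collections import Counter

def classify_read(seq, index, k=5):
    """Classify one read by majority k-mer vote and report the k-mer
    ratio: k-mers supporting the chosen taxon / total k-mers in the read.
    A low ratio flags a probable over- or misclassification."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    votes = Counter(index[m] for m in kmers if m in index)
    if not votes:
        return "unclassified", 0.0
    taxon, supporting = votes.most_common(1)[0]
    return taxon, supporting / len(kmers)

# Toy index with k=5 (real indexes map minimizers to LCA-resolved taxa):
index = {"ACGTA": "Escherichia", "CGTAC": "Escherichia",
         "GTACG": "Enterobacteriaceae"}
print(classify_read("ACGTACG", index))   # ('Escherichia', ~0.67)
```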
Next, taxonomy information is encoded into unsigned 32-bit values such that higher order taxa always have a lower value than lower order taxa, allowing rapid set operations to infer LCAs for a set of k-mers or minimizers within a single read or read pair. There is no limit to the number of taxonomic levels in a taxonomy string. This allows the user to combine different taxonomies for different taxonomic domains (e.g., NCBI viral taxonomy combined with GTDB taxonomy for Archaea and Bacteria). The unsigned 32-bit encoded taxonomy is compactly stored in a directed acyclic graph, the direction being from child to ancestral node. This allows rapid look-up of lineages and generation of sets for LCA inference. Upon completion, Sepia produces two files: (i) a file with the per read or read pair classification, and (ii) a summary file reporting, per taxon, the read count, the average k-mer ratio and the total length of all reads classified as that taxon. Using simulated and real datasets we found Sepia to be very similar in performance to Kraken 2: we found no significant differences in the sensitivity and specificity of read classifications. Kraken 2 was between 30% and 15% faster for Illumina short read datasets (HiSeq 125 bp paired-end and MiSeq 300 bp paired-end, respectively), whereas Sepia and Kraken 2 classified reads at a similar speed for an Oxford Nanopore dataset (NCBI SRA accession SRR15372305).
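A sketch of the ordering property just described: if ids are assigned in breadth-first order, every ancestor receives a lower id than its descendants, and with child-to-parent links the LCA of a set of ids is the largest id lying on every lineage. The toy taxonomy below is an assumption for illustration, not Sepia's on-disk format.

```python
# Illustrative taxonomy encoding with the "ancestors get lower ids"
# invariant, plus child -> parent links for lineage look-ups.

from collections import deque

def encode(children, root):
    """Assign ids in BFS order; return (id_of, parent_id)."""
    id_of, parent_id = {root: 0}, {0: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child in children.get(node, []):
            id_of[child] = len(id_of)              # strictly increasing
            parent_id[id_of[child]] = id_of[node]  # child -> ancestor edge
            queue.append(child)
    return id_of, parent_id

def lineage(tid, parent_id):
    out = {tid}
    while tid != 0:
        tid = parent_id[tid]
        out.add(tid)
    return out

def lca(tids, parent_id):
    shared = set.intersection(*(lineage(t, parent_id) for t in tids))
    return max(shared)   # deepest shared ancestor has the highest id

children = {"root": ["Enterobacteriaceae"],
            "Enterobacteriaceae": ["Escherichia", "Salmonella"]}
id_of, parent_id = encode(children, "root")
print(lca([id_of["Escherichia"], id_of["Salmonella"]], parent_id))
# -> id of Enterobacteriaceae
```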
1,412.6
2021-12-24T00:00:00.000
[ "Biology", "Computer Science", "Environmental Science" ]
Targeted depletion of PIK3R2 induces regression of lung squamous cell carcinoma

Oncogenic mutations in the PI3K/AKT pathway are present in nearly half of human tumors. Nonetheless, inhibitory compounds of the pathway often induce pathway rebound and tumor resistance. We find that lung squamous cell carcinoma (SQCC), which accounts for ~20% of lung cancer, exhibits increased expression of the PI3K subunit PIK3R2, which is expressed at low levels in normal tissues. We tested a new approach to interfere with PI3K/AKT pathway activation in lung SQCC. We generated tumor xenografts of SQCC cell lines and examined the consequences of targeting PIK3R2 expression. In tumors with high PIK3R2 expression, and independently of PIK3CA, KRAS or PTEN mutations, PIK3R2 depletion induced lung SQCC xenograft regression without triggering PI3K/AKT pathway rebound. These results validate the use of PIK3R2-interfering tools for the treatment of lung squamous cell carcinoma.

INTRODUCTION

Lung cancer is one of the main causes of death worldwide [1-3]. There are two major lung cancer histological types: small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC), the latter comprising three subtypes: adenocarcinoma, squamous cell carcinoma (SQCC) and large cell lung carcinoma [4-6]. Approximately 20% of lung cancers are SQCC, and survival rates for lung SQCC remain low; the development of therapies for this tumor type is thus a major research objective [2]. In the last decade, the introduction of next-generation sequencing aided the detailed description of the lung cancer mutational profile. A variety of agents that target EGFR (epidermal growth factor receptor), ALK (anaplastic lymphoma kinase) or ROS1 (c-ros oncogene 1) have improved the outcome of lung cancer patients bearing specific driver mutations. These studies also provided a list of less frequent potential driver mutations, which could be relevant in a proportion of patients; a number of trials are currently testing new molecularly targeted compounds [2-9]. For NSCLC, however, effective targeted therapies have mainly improved the treatment of adenocarcinoma [2]. The SQCC mutational profile is complex and distinct from that of other NSCLC. The most frequent alterations found affect TP53, NFE2L2, KEAP1, several genes involved in squamous cell differentiation, CDKN2A and RB1, together with FGFR1 amplifications and other mutations at low frequencies; class IA phosphoinositide 3-kinase (PI3K) is also deregulated in SQCC [4,8,10,11]. In addition to PTEN mutation or PIK3CA/PIK3CB amplification, PI3K activity is enhanced by mutations or deregulated expression of its regulatory subunits. We previously showed that p85β and p85α have non-redundant functions, a distinct subcellular localization, and a different pattern of expression in normal and transformed cells: p85α is more abundant in normal cells, whereas p85β levels are enhanced in melanoma, breast and colon cancer [24,25]. p85β exhibits a higher affinity for the enzyme substrate (PI(4,5)P2); moreover, whereas p85α fully inhibits the activity of associated p110 and functions as a tumor suppressor, p85β/p110 complexes show residual activity in the absence of growth factors, and p85β exhibits oncogenic activity [24,26]. Although p85β overexpression accelerated tumor progression in the mouse [24], it was unknown whether depletion of p85β in an already developed tumor might induce tumor regression.
Here we show that PIK3R2 expression is increased in human lung SQCC, and that its depletion induces SQCC tumor regression, supporting the development of PIK3R2-interfering tools as a therapy for lung SQCC.

We tested whether the increase in PIK3R2 levels was also observed in lung SQCC cell lines and translates into enhanced p85β protein expression. We determined p85α and p85β levels, as well as those of the p110α and p110β catalytic subunits, in ten lung SQCC cell lines (described in Supplementary Table S1). As controls, we used an adenocarcinoma line (H1703), a normal epithelial cell line (MCF10), and an advanced melanoma cell line (BLM) with increased p85β expression [25]. In addition, we included the human T cell lymphoblastoid Jurkat cell line (JK), which contains similar levels of p85α and p85β [29]. We assigned a value of 1 to the signal intensity of p85α and of p85β in the JK lanes and referred the signal intensities of the different lines to those of JK cells. In addition, we determined the relative p85β/p85α content by calculating their ratio in each cell line compared to that of JK cells (which has a ratio of 1). SQCC lines expressed the p110α and p110β isoforms; since p85 protects p110 from degradation [27], cells with higher p85 levels also had increased p110 levels (Figure 1A). In the case of the regulatory subunits, MCF10 epithelial cells had higher p85α than p85β levels; the H1703 adenocarcinoma and three of the ten SQCC lines (HCC15, H1869, H2170) had similar p85α and p85β levels; the remaining seven SQCC lines had higher levels of p85β, with p85β/p85α ratios varying from 2 to >10 (Figure 1B). Thus, both lung SQCC tumors and more than 50% of the SQCC cell lines preferentially expressed PIK3R2.

PIK3R2 depletion reduces PI3K pathway activation in lung SQCC cells

The expression of p85β/p110α complexes in NIH3T3 cells enhances basal PI3K pathway activation (in the absence of growth factors) [24]. Considering that SQCC shows greater p85β than p85α levels, we tested the effect of reducing PIK3R2 expression on PI3K pathway activity in two representative SQCC cell lines, H226 (p85β/p85α ratio >10) and CaLu-1 (p85β/p85α ratio ~3). We cloned a short hairpin PIK3R2-specific sequence into an inducible lentiviral vector (pLKO-TetON) and obtained stable clones that expressed PIK3R2 shRNA after doxycycline induction (72 h) (Figure 2). Cells were examined in exponential growth, after serum starvation (2 h), and after 30 or 60 min of stimulation with serum. The majority of the regulatory form in H226 cells is p85β, and its depletion moderately reduced p110β and p110α levels; this effect was not as evident in CaLu-1 cells, which suggests that the remaining p85α in these cells is sufficient to stabilize p110 (Figure 2). In a similar assay, PIK3R1 silencing did not markedly affect pErk levels (a moderate reduction was seen in H226). In addition, p85α depletion reduced p110β and p110α levels in both cell lines (more clearly in CaLu-1), which suggests that p85α protects p110 from degradation more effectively than p85β. Activation of most PI3K effectors was either unaffected (pPRAS40) or increased (T308-pAkt, T389-pp70S6K) after p85α depletion (Supplementary Figure S1), with the exception of S473-pAkt in CaLu-1 cells, which appeared to require both p85β and p85α expression for optimal phosphorylation (Figure 2). This finding might be explained by the p85α contribution to p110 stability. The increased activation of the remaining PI3K effectors after PIK3R1 silencing, despite the reduction of p110 levels, suggests that p85α depletion causes enrichment in p85β/p110 complexes (Figure 2), which enhance PI3K activation [24]. These observations show that p85β is the main regulatory isoform that mediates PI3K pathway activation in H226 and CaLu-1 lung SQCC cells.

Figure 1 legend: Cell lines were maintained in exponential growth and lysed, and lysates were adjusted for equal protein loading. Extracts (50 μg) were resolved by SDS-PAGE and examined by western blot (WB) using the indicated antibodies. Mr, relative mobility. p85β signals in the different lanes were measured, normalized to the loading control and compared to the p85β signal in Jurkat cells, considered 1; p85α signals were examined similarly and compared to the signal in Jurkat cells, also considered 1, as these cells contain comparable amounts of p85α and p85β. The graph at the bottom left shows the mean ± SEM of p85α and p85β signals normalized to those of Jurkat cells. To define the excess of p85β compared to p85α, we calculated the p85β/p85α signal intensity ratio (each compared to that of Jurkat cells). The graph at the bottom right shows the mean ± SEM of p85β/p85α ratios; SQCC lines with ratios <2 are in grey. Dashed lines indicate p85β/p85α ratios of 1 and 2.

PIK3R2 depletion triggers SQCC apoptosis and xenograft regression

In breast and colon cancers, PIK3R2 overexpression correlates with tumor grade [24]. As PIK3R2 expression is increased in SQCC tumors, we postulated that interference with PIK3R2 in an established SQCC tumor would halt tumor progression. To examine the effect of reduced PIK3R1 or PIK3R2 expression in lung SQCC tumor xenografts, we infected different cell lines with an inducible lentiviral vector encoding a short hairpin specific for PIK3R1 or PIK3R2; stable clones expressed these shRNAs after doxycycline induction. We optimized protocols for xenografts of the different cell lines in immunodeficient mice (see Methods); as controls we used SQCC cell lines expressing the pLKO empty vector (doxycycline-treated) or cell lines expressing PIK3R1/2 shRNA (without doxycycline). We first tested the effect of PIK3R1 silencing on the growth of tumor xenografts derived from H226, H520 and CaLu-1 cells, with p85β/p85α ratios >2. In response to doxycycline, the cells showed reduced PIK3R1 levels, although this treatment did not significantly reduce tumor size (Figure 3A; Supplementary Figure S2A). We also analyzed the consequences of PIK3R2 depletion in tumors of cells with a p85β/p85α ratio of ~1: in HCC15 and H2170, PIK3R2 depletion did not affect tumor size (Figure 3B; Supplementary Figure S2B). H1869 cells were resistant to lentiviral infection and could not be examined. We next analyzed the effect of depleting PIK3R2 in tumor xenografts from lung SQCC cells with preferential PIK3R2 expression. Seven of the ten SQCC cell lines exhibited increased expression of PIK3R2. All of them were analyzed except SW900, whose extremely low division rate made it difficult to select clones expressing PIK3R2 shRNA. We detected two types of response after PIK3R2 depletion: tumors derived from four lines were almost completely eliminated (H2882, H520, H226, CaLu-1), whereas tumors derived from the remaining lines regressed without complete elimination (Figure 4; Supplementary Figure S3B). The response to PIK3R2 depletion was detected in SQCC cell lines with predominant PIK3R2 expression, but it was not proportional to the p85β/p85α ratio, and it did not correlate with PTEN, KRAS or PIK3CA mutation or with higher PIK3CA or PIK3CB expression (Figure 4, Supplementary Table S2).
For SK-MES-1 and EPLC, results were similar in scid/scid mice and in scid/beige mice, which lack NK cells [34] (Figure 4), excluding an effect of NK cells on the responses. To examine the effect of reducing PIK3R2 levels at the cellular level, we prepared a set of xenografts from H226 and CaLu-1 cells, and mice were sacrificed at experiment midterm, when the tumor size began to diminish. Histological examination of these tumor samples showed a low mitotic index (Figure 5, red arrows), which was even lower after PIK3R2 depletion. Apoptotic figures were few in control or PIK3R1 shRNA-treated tumors and numerous in PIK3R2-depleted tumors (Figure 5). Immunostaining of Ki-67 and active caspase 3 in control, PIK3R1 and PIK3R2 shRNA-treated samples showed a lower number of dividing cells and a larger number of apoptotic cells after PIK3R2 shRNA treatment (Figure 5). These results indicate that PIK3R2 depletion in SQCC lines decelerates cell growth and induces cell death.

In one of the cell lines, we tested whether rescue of p85β expression restores tumor growth; we also tested the efficacy of prolonged treatment. We used CaLu-1 cells expressing doxycycline-inducible PIK3R2 shRNA. In this assay, we allowed the tumors to grow to ~100 mm³ before beginning the treatment with doxycycline in the drinking water (2 mg/ml). The reduction of tumor size was slower than when treating smaller tumors (Figure 6A, ~100 mm³ tumors, versus Figure 4A, ~50 mm³ tumors), which suggests that the accessibility of doxycycline in large tumors might be suboptimal, so that the treatment reduced tumor size more gradually. At ~1 month of treatment, most of the tumors were very small; the animals were randomly divided into two groups, one maintained on and the other deprived of doxycycline. All the tumors maintained on doxycycline remained small for another month, although they showed a slow but continuous moderate increase in size (Figure 6A). Among tumors deprived of doxycycline we detected two clearly distinct behaviors: one set remained small and behaved similarly to tumors maintained on doxycycline, whereas the other set grew at significantly higher rates and to larger sizes (Figure 6A). The tumors with a marked regrowth index corresponded to those that restored p85β expression, while the set of tumors with moderate growth expressed very low p85β levels (Figure 6B). These results suggest that tumors remain responsive after a two-month treatment, although p85β depletion is not sufficient for complete and stable tumor disappearance; in addition, restoration of p85β expression results in marked tumor regrowth (Figure 6A, B). In conclusion, PIK3R2 depletion reduced the growth of all SQCC tumors with enhanced p85β levels and was sufficient to trigger tumor regression in the six SQCC lines with this phenotype (Figure 4). The shrinkage of SQCC-derived tumors supports the development of an approach aimed at decreasing PIK3R2 action for the treatment of SQCC.

PIK3R2 depletion decelerates cell cycle progression

We also examined the effect of depleting PIK3R2 on cell cycle progression in responsive (H226 and CaLu-1) and non-responsive (HCC15) cells. We BrdU-labeled newly synthesized DNA in exponentially growing cells, removed the BrdU, and collected the cells at various time points after BrdU deprivation (chase) to follow the progression of BrdU+ S phase cells through the cell cycle. This method permits examination of the cell cycle in asynchronous cultures, since tumor cells are difficult to synchronize in the G0/G1 phases.
The proportion of H226 cells labeled with BrdU during the 90 min pulse period was similar in control and PIK3R1-depleted cells, but was approximately 30% lower in PIK3R2-depleted cells, supporting a slower G0/G1 transition to S phase. In addition, whereas the majority of control H226 cells made the S to G2/M transition similarly to PIK3R1-depleted cells, and exited from G2/M to G0/G1 similarly (moderately faster in the case of controls), PIK3R2-depleted cells entered G2/M and accumulated in this phase without progressing to G0/G1 at ~9 h (Figure 6C). H226 cells have a p85β/p85α ratio >10; this behavior might reflect the need for a p85 molecule to complete cytokinesis [35]. In CaLu-1 cells (with a p85β/p85α ratio of ~3), cells were not sequestered in G2/M phase, possibly due to the presence of p85α; nonetheless, PIK3R1 depletion moderately accelerated and PIK3R2 depletion significantly decelerated cell cycle progression, as estimated by the rate of S phase exit and return to the G0/G1 phases (Figure 6C). PIK3R2 depletion did not affect the S to G2/M transition or the exit from G2/M to G0/G1 in HCC15 cells (Figure 6C). This shows that p85β depletion selectively decelerates cell cycle progression in PIK3R2 shRNA-sensitive cells.

Figure 3 legend: PIK3R1 depletion does not trigger lung SQCC xenograft regression. A. H226, CaLu-1 and H520 lung SQCC cell lines expressing inducible PIK3R1 shRNA were cultured (72 h) and extracts examined by WB to confirm PIK3R1 shRNA efficiency. Cell lines were expanded in culture and injected subcutaneously into scid/scid mice (~10⁷ cells in 100 μl PBS plus 100 μl Matrigel). The top graph illustrates the size of each tumor at different times after initiation of treatment; n.s., not significant, two-way ANOVA test. The bottom graph shows the percent change in size from beginning to end of treatment; n.s., not significant, Chi square test. Mr, relative mobility. B. Lung SQCC cell lines SK-MES and EPLC, with similar p85β and p85α levels, were infected with viruses encoding inducible control or PIK3R2 shRNA; clones were selected and xenografts established in scid/beige mice as above. We compared tumor growth after doxycycline addition to the drinking water. In WB, we tested PIK3R2 shRNA efficiency in reducing p85β levels. The graph indicates the size of each tumor at different times after initiation of treatment; differences between control and treated tumors were analyzed using a two-way ANOVA test; n.s., not significant.

Figure 4 legend: Cells expressing inducible PIK3R2 shRNA were cultured (72 h) at the indicated doxycycline concentrations. Sample extracts were examined by WB to confirm PIK3R2 shRNA efficiency in reducing p85β levels (top). Cell lines were expanded in culture and injected subcutaneously into scid/beige mice (~10⁷ cells in 100 μl PBS plus 100 μl Matrigel). Tumors developed for several days until they reached ~50 mm³; we then included doxycycline (2 mg/ml) in the drinking water and measured tumor size three times per week. As controls, we used tumors generated with cell lines expressing the empty vector and treated with doxycycline (control 1) or generated with cell lines expressing inducible PIK3R2 shRNA without doxycycline (control 2); both controls gave comparable results. The graphs in A (left) and B indicate the size of each tumor at different times after treatment; differences between control and treated mouse tumors were analyzed with a two-way ANOVA test. The graphs in A (right) show the percent change in tumor size, comparing final with initial (pre-treatment) tumor size; groups were compared using the Chi square test.

Figure 6 legend: A. Xenografts were established using CaLu-1 cells expressing inducible PIK3R2 shRNA. When tumors reached a volume of ~100 mm³, mice were treated with doxycycline for ~one month, after which we randomly divided the animals into two groups; doxycycline was withdrawn in one group and maintained in the other. The graph shows the mean ± SEM size of tumors kept on doxycycline (white squares) as well as those deprived of the treatment, divided into slow-growing (grey) and fast-growing (black) tumors. Differences between groups were analyzed using a two-way ANOVA test. B. WB confirmed p85β re-expression in fast-growing tumors and low p85β levels in slow-growing tumors. The graph shows p85β signal intensity (corrected for the loading control) in arbitrary units (A.U.). Mr indicates relative mobility. C. CaLu-1 or H226 cells expressing control, PIK3R1 or PIK3R2 shRNA were cultured alone or with doxycycline (5 μg/ml, 72 h). A non-responsive cell line (HCC15, control) expressing PIK3R2 shRNA was treated similarly. Cells were labeled with BrdU (20 μM, 1 h; 90 min in the case of H226), then deprived of BrdU and chased for different times. We examined by flow cytometry the percentage of BrdU+ cells that progressed from 2N (G0/G1) to 2-to-4N (S) and to 4N DNA content (G2/M phases) (mean percentage, n = 3) (left panels). The middle panels represent the proportion of cells remaining in S phase at different times compared to the maximum, considered 100%. The right panels show the percentage of cells in G0/G1 at different times. *P<0.05; differences in cell cycle distribution were examined using the Chi square test; the percentages of S or G0/G1 cells were compared using Student's t test.

PIK3R2 depletion does not induce PI3K pathway rebound

To analyze the status of the PI3K pathway in shRNA-treated tumors, we tested extracts from control, PIK3R1 and PIK3R2 shRNA-treated H226, CaLu-1 and H520 tumor samples. Depletion of PIK3R2, but not of PIK3R1, reduced pT308-Akt, pS473-Akt, pPRAS40 and pp70S6K levels without notably affecting pErk/Erk levels (Figure 7A). Physiological PI3K activation is followed by the induction of negative feedback mechanisms that maintain the transient nature of PI3K activation [12-22]. A major problem of PI3K inhibitor-based therapies is that they also abrogate these feedback inhibitory mechanisms. The combined effect of reducing enzyme as well as feedback pathway activities often results in net pathway reactivation after a few hours [12-22]. In some cases, this effect is observed with p110α inhibitors alone and is corrected by the combined addition of p110α plus p110β inhibitors; in other models, even pan-PI3K inhibitors induce pathway reactivation after long incubation periods [36-38]. Prolonged PIK3R2 shRNA treatment induced stable PI3K pathway inhibition (Figure 7A), which might be the result of a net reduction in the levels of PI3K molecules that are stabilized by p85, or might indicate that negative feedback pathways do not act in lung SQCC lines. To determine whether PIK3R2 depletion is advantageous compared to PI3K inhibition, we tested whether the inhibitors induce PI3K pathway reactivation after extended treatment.
We optimized the doses of these compounds for short-term inhibition of PI3K effectors; we tested rapamycin, a pan-PI3K inhibitor (Ly294002, Ly), a p110β-specific inhibitor (TGX221, TGX) and a p110α inhibitor (PIK75, PIK) [39]. Rapamycin inhibited only the mTOR substrate pS6K, while PIK and TGX reduced pAkt and pp70S6K levels in a dose-dependent manner (Supplementary Figure S4). We then tested whether prolonged treatment with Ly, TGX, PIK or a TGX/PIK combination could reactivate PI3K effectors. In H226 cells, treatment with optimal Ly or TGX doses for 48 h increased pAkt and pPRAS40 levels compared to the 1 h treatment; although PIK or TGX/PIK maintained low pAkt and pPRAS40 levels after 48 h, they increased total Erk and pErk levels (Figure 7B). In H520 and CaLu-1 cells, all inhibitors failed to restrain the pathway after incubation for 48 h, as estimated by the phosphorylation levels of PI3K effectors, which were higher at 48 h than after 1 h of treatment (Figure 7B). In H520 and CaLu-1 cells, we also detected a compensatory increase in pErk and total Erk levels after incubation with PIK and TGX/PIK (Figure 7B). Not all effectors recovered equally in the different treatments and cell lines. For instance, the TGX/PIK combination was more efficient than either inhibitor alone in maintaining low pathway activity, although reactivation was detected at 48 h compared to 1 h in H520 and CaLu-1 cells. Decreased pPRAS40 levels were paralleled by a reduction in total PRAS40 levels, and PIK treatment blocked pS6K phosphorylation less effectively in CaLu-1 than in the other cell lines, although with longer duration (Figure 7B). Together, for most PI3K effectors examined, we detected an increase in phosphorylated forms after 48 h of incubation compared to 1 h (Figure 7B). In contrast to PIK3R2 depletion, which stably inhibited the PI3K pathway, sustained treatment with PI3K inhibitors resulted in PI3K pathway reactivation.

DISCUSSION

Targeted therapies are improving the clinical management of lung adenocarcinoma; however, they have been less efficient for lung squamous cell carcinoma, which makes up one-fifth of lung cancers. Increased expression of PIK3R2, which encodes the regulatory subunit p85β, enhances basal PI3K pathway activation and parallels tumor progression in melanoma, colon cancer and breast carcinoma [24,25]. In the present study, we show that SQCC tumors also exhibit increased PIK3R2 expression. Moreover, we demonstrate that interference with PIK3R2 expression in established SQCC xenografts reduces tumor survival. In particular, interference with PIK3R2 expression triggered tumor shrinkage in all the cell lines with predominant p85β expression. PIK3R2 depletion induced cell death and stably inhibited the PI3K pathway, without inducing pathway rebound. The greater sensitivity of some tumors to PIK3R2 depletion was independent of PIK3CA or PIK3CB expression levels and of KRAS, PTEN and PIK3CA mutation. These results support the development of therapies oriented to reduce the action of PIK3R2 in lung SQCC.

Progress in whole-genome characterization has allowed the generation of comprehensive profiles of genetic alterations in SQCC and the identification of new therapeutic targets [2-11]. In addition to targets suggested by specific SQCC somatic DNA modifications, altered expression of WT proteins is also considered for clinical management. This is the case for MET tyrosine kinase receptor overexpression, which was observed in 29% of lung SQCC patients [2,40].
Immunotherapy is another promising lung cancer treatment [41]. Nonetheless, the poor prognosis of lung cancer, and of SQCC in particular, encourages the development of alternative strategies.

Figure 7 legend: PIK3R2 depletion, but not PI3K inhibitors, induces stable PI3K pathway inhibition. A. Tumor xenografts were obtained using H226, CaLu-1 and H520 lung SQCC cells expressing inducible PIK3R1 or PIK3R2 shRNA. When tumors reached ~50 mm³, mice were treated with doxycycline in the drinking water (2 mg/ml); they were sacrificed when tumors began to diminish. Normalized tumor extracts were examined by WB with the indicated antibodies. B. H226, CaLu-1 and H520 cells in exponential growth were incubated with vehicle (DMSO, 1:10³ V:V), Ly294002 (5 μM), PIK75 (200 nM), TGX221 (30 μM), or PIK75 (200 nM) plus TGX221 (30 μM) for the last 1 h of culture or for 48 h; extracts were tested by WB. Mr indicates relative mobility. In both A and B, the pAkt or pPRAS40 signal was measured, normalized to the loading control (Akt or tubulin, respectively), and compared to that in controls (DMSO, considered 100%). Graphs show the percentage of signal compared to maximal as mean ± SD at 48 h of treatment. Arrows at the right side of the bars show the percentage of signal compared to maximal detected after 1 h of treatment. *P<0.05; **P<0.01; unpaired Student's t test with Welch correction.

One of the most promising targets in oncology is PI3K. PIK3CA copy number gains are present at higher frequencies in SQCC than in lung adenocarcinoma [8]. Expectations are high for PI3K inhibitors in cancer. Nonetheless, although results are encouraging for PI3Kδ inhibitors in some hematopoietic tumors [14], the results of a recent clinical trial using buparlisib (a pan-PI3K inhibitor) in NSCLC patients (preselected to exhibit PI3K pathway activation) showed that only 3% (SQCC and non-squamous) responded by reducing tumor size [16,17]. This is not exceptional: except for some combined therapies that include PI3K, Akt and mTOR inhibitors, e.g., for breast and prostate cancer subtypes [36,37,42], treatment of solid tumors with these inhibitors has often been ineffective [12,16,17,42,43]. Better patient selection criteria [16] and the identification of pathway rebound and resistance mechanisms [22] have increased interest in combined therapy. As commented above, deregulated WT gene expression (as is the case for the MET receptor) has also been targeted successfully for therapy [2]. Considering that SQCC presents amplification of the chromosomal region that includes PIK3R2, which encodes p85β [10,11], and the observation that PIK3R2 expression is often increased in clinical SQCC (Figure 1), we proposed that PIK3R2 could be a promising target for lung SQCC. We based this proposal on the previous observation that p85β and p85α exhibit distinct subcellular localizations and different patterns of expression in normal and transformed cells (p85α is the more abundant isoform in normal cells, whereas p85β expression is enhanced in some tumors) [24,25]. At the molecular level, p85β exhibits a higher affinity for the enzyme substrate (PI(4,5)P2) than p85α, and a lower capacity to restrain p110 activity in the absence of growth factors [24]. In agreement with these features, overexpression of p85β, but not of p85α, induced cell transformation [24,44].
Finally, although it was unknown whether depletion of p85β in an already developed tumor might induce tumor regression, pik3r2-deficient mice exhibit reduced colon cancer formation, whereas p85β overexpression accelerated tumor progression in the mouse [24]. Here we show that depletion of PIK3R2, but not of PIK3R1, induced regression of established SQCC tumors exhibiting high PIK3R2 levels (Figures 3 and 4). Several new avenues could be pursued to further dissect the differential functions of p85α and p85β in cancer: first, analysis of the differential regulation of their expression; second, examination of p85α- and p85β-associated proteins pre- and on-treatment; and finally, ascertaining the differential protein binding to the N-terminal region (the least conserved region) of each of the two proteins.

PIK3R2 depletion in SQCC tumors with high PIK3R2 expression caused sustained PI3K inhibition without inducing pathway reactivation (Figures 4 and 7), which would reduce the probability of resistance, an important drawback of enzyme inhibitors [13-16, 19-23, 35, 38]. The absence of pathway reactivation might be explained by the fact that PIK3R2 depletion eliminates one of the targets of the feedback inhibitory pathways, the p85 molecule [38]. In a clinical setting, the p85β expression level could potentially be examined in circulating cancer cells. Although the tumor response to PIK3R2 depletion required enhanced PIK3R2 expression, the magnitude of the PIK3R2 silencing effect in different tumor cells was not strictly proportional to p85β expression levels, suggesting that other tumor features modulate this response. Nonetheless, the tumor response to PIK3R2 depletion did not require PIK3CA, PTEN or KRAS mutation, and depletion caused shrinkage of all tumors examined with enhanced p85β expression. We show that tumors remain responsive after a two-month treatment, although p85β depletion was not sufficient for complete and stable tumor disappearance (Figure 6); rescue of p85β expression resulted in tumor regrowth. We have tested the consequences of depleting PIK3R2 only in lung SQCC lines; it is possible that other tumor types showing enhanced PIK3R2 expression at advanced phases, such as breast and colon carcinoma or melanoma [24,25], might also benefit from a therapeutic strategy based on PIK3R2 depletion. One of the alterations found in SQCC is PIK3CA mutation or amplification [8]. Although none of the tested cell lines had PIK3CA mutations (Supplementary Table S2), CaLu-1 cells exhibit a K-Ras mutation that enhances PI3Kα activity [45] and responded to PIK3R2 depletion. Future studies will conclude definitively whether or not PIK3CA alteration affects p85β dependence in lung SQCC.

The strength of antisense or interfering RNA regulators of gene expression has been demonstrated extensively [46,47], although their use as therapeutic tools in patients has been hampered by the difficulty of delivering the RNA to the tumor; this might be facilitated in tumors accessible by aerosol therapy, such as lung SQCC [46,47]. The use of small molecules that impair p85β/p110 complex formation might also be an alternative; we have developed a FRET-based screening assay to search for such compounds (not shown). Finally, the CRISPR-Cas approach, which has been successful in mice [48], might also be useful for PIK3R2 deletion in lung SQCC with enhanced PIK3R2 expression.
p85α:p85β ratios could be examined in the clinic by quantitative PCR, since PIK3R2 mRNA levels are proportional to p85 protein content [29]; in addition, current techniques for gene copy number variation analysis could be applied to identify the SQCC cases with an amplified PIK3R2-containing region. The p85α:p85β expression ratio might also be examined in patient biopsy samples, either by simultaneous analysis of the expression of both proteins with highly specific antibodies to the human proteins, or by reverse phase protein arrays (RPPA) applied to formalin-fixed, paraffin-embedded (FFPE) tissue. Although further technical improvements are needed, several siRNA targets and antisense oligonucleotides are currently under study and hold promise for the incorporation of these therapies into medical applications [49-52]. Their development for PIK3R2 inhibition is justified by the clear potential of PIK3R2 depletion for the treatment of lung SQCC tumors, as it induced tumor regression without triggering PI3K pathway reactivation.

Tumor xenografts

All procedures using mouse models were approved by the Ethics Committee of the Centro Nacional de Biotecnología (CNB-CSIC) and were carried out in accordance with EU and Spanish legislation (RD 53/2013). Approximately 10⁷ cells (stably transfected with shRNA-p85α, shRNA-p85β or the empty vector) were mixed (50%) with Matrigel (BD) in a total volume of 0.2 ml and inoculated subcutaneously into the flank of 8-week-old female mice under isoflurane anesthesia. We used the scid/scid mouse strain (BALB/cJHanHsd-Prkdc scid), which develops neither T nor B cells, as well as (in the case of SK-MES and EPLC) the scid/beige mouse strain (C.B-17/IcrHsd-Prkdc scid Lyst bg-J), which lacks T, B and NK cells (both strains from Harlan Laboratories). Tumors developed for 3 to 7 days until the volume of most ranged from 75 to 100 mm³, after which we included doxycycline (2 mg/ml) in mouse drinking water containing 5% saccharose. We weighed the mice and measured the tumors with calipers twice weekly, and calculated the volume as V = (smaller side length² × larger side length)/2. As endpoints, we established a) loss of 20% of initial weight, b) tumors greater than 1500 mm³, or c) complete loss of treated tumors for two weeks, indicating tumor regression.

Histopathology and immunohistochemical analysis

Tissue samples for histopathology and immunohistochemistry were fixed in 10% buffered formalin, paraffin-embedded, sectioned (4 μm) and hematoxylin/eosin-stained. Indirect immunohistochemical staining was performed on formalin-fixed, paraffin-embedded tissue sections using the streptavidin-biotin peroxidase complex [55]. Antibodies used for immunostaining were anti-Ki-67 (1:100; MIB-1, Dako) and anti-cleaved caspase 3 (1:100; 9661 rabbit IgG; Cell Signaling). Briefly, tissues were deparaffinized and sections hydrated in a graded ethanol series. Endogenous peroxidase activity was quenched with 3% hydrogen peroxide (10 min). To retrieve the Ki-67 and cleaved caspase 3 antigens, the tissues were placed in citrate buffer (10 mM, pH 6) or EDTA buffer (0.5 mM, pH 8) for 20 min, and sections were incubated with primary antibody (overnight, 4°C). After rinsing with Tris-buffered saline (TBS), sections were incubated with the peroxidase-based EnVision complex (Dako). Peroxidase activity was demonstrated by diaminobenzidine staining.
Finally, sections were washed in water, lightly counterstained with hematoxylin, dehydrated and mounted with DPX (distyrene, a plasticizer, and toluene-xylene). As controls, the primary antibody was omitted or samples were incubated with an isotype control antibody. The percentage of positive cells was evaluated by scoring 10 randomly selected 40× fields under light microscopy (Nikon Y-THS).

Statistical analyses and in silico studies

PIK3R2 mRNA expression was analyzed in different human lung cancer types using Oncomine, as described (www.oncomine.org), and we represented the data obtained from Bhattacharjee et al., 2001 [30]. Gel band intensity was quantitated with ImageJ software (NIH). Significance was calculated with ANOVA, Chi square and Student's t tests using GraphPad Prism 5.0 (GraphPad Software).
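The quantifications described in the Methods reduce to simple arithmetic. The sketch below (hypothetical numbers, not the paper's data) shows the loading-control normalization used for the p85β/p85α ratios relative to the Jurkat reference lane, and the caliper-based tumor volume and percent-change calculations.

```python
# Illustrative arithmetic for the paper's quantifications (hypothetical
# inputs; the actual values came from ImageJ densitometry and calipers).

def normalized_signal(band, loading_control, reference_band, reference_control):
    """Band intensity corrected for loading and expressed relative to the
    Jurkat reference lane (assigned a value of 1)."""
    return (band / loading_control) / (reference_band / reference_control)

def tumor_volume(smaller_mm, larger_mm):
    """V = (smaller side length^2 x larger side length) / 2, in mm^3."""
    return (smaller_mm ** 2) * larger_mm / 2

def percent_change(initial_mm3, final_mm3):
    return 100.0 * (final_mm3 - initial_mm3) / initial_mm3

# Hypothetical densitometry values (arbitrary units):
p85b = normalized_signal(band=420, loading_control=300,
                         reference_band=210, reference_control=300)
p85a = normalized_signal(band=95, loading_control=300,
                         reference_band=190, reference_control=300)
print(round(p85b / p85a, 1))                  # p85beta/p85alpha ratio: 4.0

print(tumor_volume(5.0, 8.0))                 # 100.0 mm^3
print(round(percent_change(100.0, 42.0), 1))  # -58.0 (% change under treatment)
```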
7,562.8
2016-11-08T00:00:00.000
[ "Biology", "Medicine" ]
Analysis of the Spatial Properties of Correlated Photon in Collinear Phase-Matching

In this paper, the spatial properties of correlated photons in collinear phase-matching in the process of spontaneous parametric down conversion (SPDC) are researched. Based on the study of the phase-matching angle, the non-collinear angle and the correlated photon wavelength, a theoretical model of the non-collinear angular variation is derived, which can be used to estimate and predict the width of the correlated photon ring. The experimental measurement is carried out with a CMOS camera, and the measurement results are consistent with the theoretical simulation results, which verifies the theoretical derivation. Meanwhile, the change of the correlated photon divergence angle outside the crystal is studied: the closer the wavelength is to degeneracy, the smaller the measured divergence angle, in agreement with the theoretical simulation. The results of the study serve as a reference for the evaluation of the spatial properties of correlated photons and lay a foundation for the measurement of the correlated photon number rate and the calibration of photodetectors.

Introduction

Spontaneous parametric down conversion (SPDC) is a nonlinear optical process in which a higher-energy photon from a laser beam splits into a pair of lower-energy photons in a nonlinear crystal [1-5]. The photon pairs are produced simultaneously, but the properties of the individual photons are free and different, depending on the electric field and intensity of the higher-energy photon during the down conversion process [6,7]. The two photons, called entangled photons or correlated photons [8], have a strict entanglement property in polarization, time and space [9], which has been applied in quantum cryptography, quantum simulation and quantum metrology [10-17]. In recent years, this property has been used for the absolute measurement of the quantum efficiency of photodetectors in the time-correlated single photon counting regime, without relying on calibrated standards [18-22]. During this process, the emission cone properties of the correlated photons, such as the spectral range, non-collinear angle and divergence angle, play an important role in the selection of the photosensitive surface of the photodetector and the determination of its position. The SPDC down conversion process has two types, depending on whether the correlated photons have the same or orthogonal polarizations. If the produced correlated photon pairs have the same polarization, opposite to the pump, the process is labelled type-I down conversion, and vice versa, type-II down conversion [23-25]. Moreover, when the pair of photons generated propagates in the same direction as the pump photons, this is collinear phase-matching; conversely, when the photon pairs travel in a different direction than the pump, it is non-collinear phase-matching. Collinear phase-matching is a special case of non-collinear phase-matching.
Theoretical Estimation of the Correlated Photon Ring Properties The SPDC process, in which the pump photon of energy $\hbar\omega_p$ splits into a correlated photon pair of signal and idler with energies $\hbar\omega_s$ and $\hbar\omega_i$ within the uniaxial crystal BBO (β-BaB2O4: barium metaborate), follows the energy and momentum conservation rules [35]:

$$\hbar\omega_p = \hbar\omega_s + \hbar\omega_i \quad (1)$$

$$\mathbf{k}_p = \mathbf{k}_s + \mathbf{k}_i \quad (2)$$

where $\hbar$ is the reduced Planck constant; $\omega_j$ and $\mathbf{k}_j$ are the angular frequency and the wave vector for the pump, signal and idler modes (j = p, s, i). Equation (2) is also known as the phase-matching condition. In order to achieve the maximum conversion efficiency, Equations (1) and (2) need to be met simultaneously. Figure 1 shows the correlated photon ring profile produced when a pump beam interacts with the BBO crystal. Equation (2) can be rewritten in terms of the longitudinal and transverse components of the pump and parametric (signal and idler) wave vectors:

$$k_s \cos\alpha + k_i \cos\beta = k_p \quad (3)$$

$$k_s \sin\alpha = k_i \sin\beta \quad (4)$$

where α and β are the angles of the signal and idler with the pump beam, respectively; α (or β) is the non-collinear angle in the crystal, and the case α = β = 0° is called collinear phase-matching, a limit case of the non-collinear interaction scheme. With $k_j = 2\pi n_j/\lambda_j$, where $n_j$ and $\lambda_j$ stand for the refraction index and wavelength, the relation of the three wavelengths of signal (s), idler (i) and pump (p) follows from Equation (1):

$$\frac{1}{\lambda_p} = \frac{1}{\lambda_s} + \frac{1}{\lambda_i} \quad (5)$$

Figure 1. A pump beam interacts with the BBO crystal to produce a correlated photon ring, and the profile is recorded by the CMOS camera photosensitive surface. In this paper, the SPDC process occurs for a pump wavelength of 266 nm interacting with the nonlinear crystal. It is assumed that the generated parametric photons have a wavelength range of 400 nm-794 nm, and the signal wavelength λs is 400 nm-532 nm.
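A quick numerical check of Equation (5) for the 266 nm pump used in the paper; a minimal sketch in plain Python, with the wavelength values taken from the text.

```python
def idler_wavelength(lam_pump_nm: float, lam_signal_nm: float) -> float:
    """Idler wavelength from energy conservation, Eq. (5): 1/lp = 1/ls + 1/li."""
    return 1.0 / (1.0 / lam_pump_nm - 1.0 / lam_signal_nm)

# Signal range quoted in the paper: 400 nm (edge) and 532 nm (degenerate).
print(idler_wavelength(266.0, 400.0))  # ~794 nm, matching the stated range
print(idler_wavelength(266.0, 532.0))  # 532 nm, the degenerate pair
```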
Then, the sum of the wave-vector moduli of the parametric photons is $K_{sum}$:

$$K_{sum} = k_s + k_i = 2\pi\left(\frac{n_s}{\lambda_s} + \frac{n_i}{\lambda_i}\right) \quad (6)$$

where $n_s$ and $n_i$ are the refraction indices of the parametric photons. For negative uniaxial BBO crystals with $n_e < n_o$, type I down conversion refers to generated two-photon pairs in the ordinary state, whose refractive index does not depend on the propagation direction. However, the pump beam is in the extraordinary state, and its refractive index $n_p(\theta)$ depends on the propagation direction of the light [36]:

$$\frac{1}{n_p^2(\theta)} = \frac{\cos^2\theta}{(n_p^o)^2} + \frac{\sin^2\theta}{(n_p^e)^2} \quad (7)$$

where $n_p^o$ and $n_p^e$ respectively represent the principal refractive indices of the pump beam in the crystal, which can be calculated from the Sellmeier formulas of the BBO crystal [37,38]. Here θ is the angle between the wave-vector direction of the extraordinary light (e light) and the optical axis of the crystal, also known as the phase-matching angle. The variation of $K_{sum}$ and θ at different signal wavelengths is illustrated in Figure 2. It is clearly observed that $K_{sum}$ decreases with λs while θ increases with λs; that is, with the increase of the signal wavelength, the sum of the wave-vector moduli of the parametric photons decreases while the phase-matching angle increases. As a result, the maximum value $K_{sum\text{-}max}$ is 3.9735 × 10^7 m⁻¹ at 400 nm and the minimum value $K_{sum\text{-}min}$ is 3.9547 × 10^7 m⁻¹ at 532 nm. When the pump wave-vector modulus is at its minimum, $k_p(\theta) = K_{sum\text{-}min}$, then according to Equations (6) and (7) the minimum refractive index $n_p(\theta)_{min}$ and the maximum phase-matching angle $\theta_{max}$ are 1.6742 and 47.6339°, respectively. Similarly, for $k_p(\theta) = K_{sum\text{-}max}$, the maximum refractive index $n_p(\theta)_{max}$ and the minimum phase-matching angle $\theta_{min}$ are 1.6822 and 44.4721°, respectively. As the phase-matching angle changes between 44.4721° and 47.6339°, the wavelengths of the signal and idler output from the back of the crystal also change, as shown in Figure 3. In the case of collinear phase-matching, the parametric wavelengths are 400 nm (for the signal photon) and 794 nm (for the idler photon) at 44.4721°.
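The numbers above can be approximately reproduced from Equations (5)-(7) together with published Sellmeier coefficients for BBO. The sketch below uses one commonly quoted coefficient set, which is an assumption on my part — refs. [37,38] may use a slightly different set, so the last digits will differ from the paper's 44.4721°/47.6339°.

```python
import numpy as np

# A commonly quoted Sellmeier set for BBO (lambda in micrometers);
# assumed here, possibly differing slightly from refs. [37,38].
def n_o(lam):
    return np.sqrt(2.7405 + 0.0184 / (lam**2 - 0.0179) - 0.0155 * lam**2)

def n_e(lam):
    return np.sqrt(2.3730 + 0.0128 / (lam**2 - 0.0156) - 0.0044 * lam**2)

def phase_matching_angle(lam_s, lam_p=0.266):
    """Collinear type-I angle: the e-pump index must equal K_sum*lam_p/(2*pi)."""
    lam_i = 1.0 / (1.0 / lam_p - 1.0 / lam_s)                        # Eq. (5)
    k_sum = 2 * np.pi * (n_o(lam_s) / lam_s + n_o(lam_i) / lam_i)    # Eq. (6)
    n_req = k_sum * lam_p / (2 * np.pi)                              # required n_p(theta)
    no_p, ne_p = n_o(lam_p), n_e(lam_p)
    s2 = (n_req**-2 - no_p**-2) / (ne_p**-2 - no_p**-2)              # invert Eq. (7)
    return np.degrees(np.arcsin(np.sqrt(s2)))

print(phase_matching_angle(0.400))  # close to the paper's 44.4721 deg
print(phase_matching_angle(0.532))  # close to the paper's 47.6339 deg
```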
In order to satisfy the conservation of energy and momentum in Equations (1) and (2), with the increase of the phase-matching angle θ, λs increases while λi decreases, up to θ = 47.6339°, where λs = λi = 532 nm (the degenerate photon pair). Of course, not only the parametric photon pairs with strict collinear phase-matching, but also those satisfying the non-collinear phase-matching condition, can be emitted. The key point of the correlated photon rate measurement and detector calibration using the SPDC is to determine the location and properties of the correlated photon ring. For this, it is important to concentrate on the study of the spatial distribution properties of the correlated photons within and outside the BBO crystal, here for type I phase-matching with degenerate-wavelength collinear operation (λs = λi = 2λp = 532 nm). When the pump beam interacts with the crystal surface at a certain angle, the correlated photons that satisfy the phase-matching requirements are produced; the photons at different wavelengths correspond to different non-collinear angles and then travel behind the crystal along different divergence angles. The emission is conical in space and ring-shaped on the photosensitive surface of the CMOS camera, which is perpendicular to the direction of the pump beam propagation. Moreover, the degenerate correlated photon pair co-propagates with the pump, that is, at 0° non-collinear angle, and its projection on the camera's photosensitive surface is beam-like, as shown in Figure 1. From Equation (2), with perfect phase-matching, the phase mismatch of the three wave vectors (pump, signal and idler) in the crystal is zero: ∆k = 0, where ∆k = k_s + k_i − k_p. In practical applications, many factors such as pump beam dispersion, cutting angle deviation and crystal temperature instability make ∆k ≠ 0, which reduces the conversion efficiency of the crystal. In general, the absolute value of the maximum allowed phase mismatch is specified as |∆k| = π/L, where L is the length of the interaction of the three waves in the crystal [29,39]. From the previous analysis, it can be determined that in the process of SPDC, a crystal at a phase-matching angle meeting the conditions can emit correlated photons over a wide spectral range, as shown in Figure 3. The 266 nm laser pumps the BBO crystal at a phase-matching angle of 47.6339°, generating correlated photons at 532 nm collinearly while the others are non-collinear, and the spatial distribution characteristics are investigated. Although the spectral range emitted from the crystal is wide, the non-collinear angle variations of the different-wavelength correlated photons that satisfy the phase-matching criteria span a limited range, set as ∆α (for the signal) and ∆β (for the idler). The smaller the value of ∆α (or ∆β), the narrower the correlated photon ring received on the CMOS camera's photosensitive surface, and conversely the wider.
In this part, assume that the three waves completely satisfy the phase-matching condition (θ = 47.6339°), that is, ∆k = k_s cos α0 + k_i cos β0 − k_p = 0, where the non-collinear angles of the signal and idler are α0 and β0, respectively. As the correlated photon wavelengths change, the corresponding non-collinear angle changes as shown in Figure 4. It can be seen that the correlated photons of different wavelengths propagate along different non-collinear angles. When the wavelengths at both ends approach the degenerate wavelength of 532 nm, the non-collinear angle decreases, reaching 0° at 532 nm. This behavior conforms to the condition of complete phase matching. In the actual process, if the direction of the pump beam in the crystal remains the same, a signal propagating along the non-collinear angle (α0+∆α) corresponds to an idler along (β0+∆β). At this point, the phase mismatch ∆k_act satisfies:

$$\Delta k_{act} = k_s \cos(\alpha_0 + \Delta\alpha) + k_i \cos(\beta_0 + \Delta\beta) - k_p \quad (8)$$

where α is the actual propagation direction of the signal, i.e., the sum of the non-collinear angle α0 at complete phase matching and the non-collinear angular variation ∆α. Similarly, β is the actual propagation direction of the idler, the sum of the non-collinear angle β0 at complete phase matching and the variation ∆β.
In order to study the variation of the signal non-collinear angle, ∆k_act is expanded in a Taylor series (Equation (9)). Taking the maximum phase mismatch for ∆k_act, the expansion of Equation (9) through second-order approximation can be rewritten as Equation (10). Simplifying Equation (10), the variation of the signal non-collinear angle (∆α) can be written in closed form (Equation (12)), with coefficients determined by the refractive indices and wavelengths of the three waves and by the angles α0 and β0. In a similar way, the variation of the idler non-collinear angle can be deduced, with a form similar to Equation (12). According to the above reasoning, the variation of the non-collinear angle of the correlated photon is bound up with the non-collinear angle at complete phase matching and with the interaction length of the three waves in the crystal. Experimental Measurement In this part, an experimental platform is built to measure the properties of the angular distribution of the correlated photons generated by a BBO crystal, which verifies the theoretical treatment of the non-collinear angular variation experimentally. The layout of the experimental measurement setup is shown in Figure 5. The pump source is a continuous laser at 266 nm with a maximum power of up to 55 mW. The quartz prism (vertex angle of 70°) and the dichroscope (reflectivity of 98% at the wavelength of 266 nm) eliminate the residual 532 nm light in the 266 nm pump beam. Moreover, G-T denotes the Glan-Taylor polarizer, made of α-BBO and coated with a 200 nm to 400 nm anti-reflection film, which modifies the polarization ratio of the pump, increasing the proportion of e light in type-I SPDC and further increasing the number of correlated photons generated. The zero-order half-wave plate introduces a half-wavelength phase difference to switch the correlated photon generation process on and off. A plano-convex lens with a focal distance of 150 mm focuses the pump beam at the entrance of the nonlinear crystal. The nonlinear crystal, a negative uniaxial BBO, is 2 mm long, cut for type-I collinear phase-matching at θ = 47.63°.
Of course, the crystal phase-matching condition can also be recast as n(ω) = n_p(2ω), where n(ω) and n_p(2ω) denote the refractive indices of the correlated photon and the pump, since k_{s,i} = n(ω)·ω/c (k_{s,i} and ω are, respectively, the wave number and frequency of the correlated photon, and c is the speed of light in vacuum). This shows that the phase-matching condition can be regarded as requiring the refractive index of the correlated photon in the crystal to equal that of the pump. Since most nonlinear optical crystals in the visible exhibit normal dispersion, in which the higher the wave frequency the greater the refractive index, namely n(ω) < n_p(2ω), the phase-matching condition cannot be satisfied when the beams propagate in an isotropic medium. However, for the anisotropic BBO crystal, due to the birefringence effect, two waves with different refractive indices are allowed to propagate in the same direction, and phase matching can be realized through this effect in the normal dispersion range. The long-pass filter (with a transmittance of over 95% across the spectrum from 400 nm to 794 nm) and the band-pass filter, denoted LP-F and BP-F respectively, serve to eliminate any residual radiation from the continuous laser and to select the wavelength of the correlated photons. Moreover, the non-collinear angular variation and the divergence angular distribution are recorded by means of a CMOS camera, whose effective area is 13.312 mm × 13.312 mm with a 4 × 4 readout mode, placed behind the BP-F. The spectral distribution of the correlated photons emitted from the back of the BBO crystal is observed, and the results are presented in Figure 6. It can be found that the signal and idler wavelengths of 472 nm and 610 nm form a pair of correlated photons. The same ring images appear as in the theoretical analysis, in which the spot in the central area of each image is composed of 266 nm pump light that was not eliminated completely. Although the detection sensitivity of the CMOS camera from 400 nm to 794 nm is at least 6 times higher than at 266 nm, the correlated photon number is 8 orders of magnitude lower than the pump, so the ring cannot be seen without the filters. The theoretical calculations, experimental measurements, and detailed explanations are developed in the next section. Results Analysis According to the theoretical analysis, the correlated photon ring width is related to the non-collinear angular variation; that is, the ring width at different wavelengths is different, and the corresponding non-collinear angular variation is also different. In this section, Equation (12) is first used to calculate the non-collinear angular variation at nine wavelengths: 415 nm, 472 nm, 550 nm, 572 nm, 605 nm, 610 nm, 685 nm, 700 nm, and 710 nm; we find that the theoretical results have a floating range.
Then, the ring profile measured by the CMOS camera is processed, the width values are obtained with image processing techniques, and they are finally converted into the non-collinear angular variation inside the crystal (before the photons exit). Figure 7a shows the theoretical simulations and experimental measurements of the non-collinear angular variation at different correlated photon wavelengths. The black and red curves represent the theoretical values and the measured values of the non-collinear angular variation, respectively. Figure 7b illustrates the error bars of the experimental data. It can be observed that the measured values are less than the theoretical simulation, a consequence of the theory being derived under the premise of the largest phase mismatch. The theory thus provides reference and guidance for predicting the maximum width and position of correlated photon rings. The non-collinear angular variations at 472 nm, 550 nm and 572 nm are theoretically simulated to be higher than at the other wavelengths, and the experimental results are consistent with the theory. Meanwhile, with the decrease of the signal wavelength and the increase of the idler wavelength, the difference between the theoretical and measured values of the non-collinear angular variation decreases. At the wavelengths of 685 nm and 550 nm, the differences between theory and measured data are the minimum, 0.0168°, and the maximum, 0.1906°, respectively. The reason is that the closer the non-collinear angle is to the degenerate wavelength, the greater the influence of experimental conditions such as the phase-matching angle, pump incident angle, and pump diameter. To determine the position and size of the ring before using correlated photons to carry out photodetector calibration, another physical quantity describing the spatial properties of the correlated photons, the divergence angle, is introduced. It is the included angle between the outer circle of the correlated photon ring and the pump beam emitted from the back of the crystal, which is half of the apex angle of the spatial propagation cone; in Figure 6 it is expressed as θ_s / θ_i, representing the divergence angle of the signal or idler, respectively. The non-collinear angle is the angle between the propagation direction of the correlated photon in the crystal and the pump beam, while the divergence angle is its counterpart outside the crystal; the conversion between the two angles can be made by Snell's Law. Therefore, measured values of the divergence angles at different wavelengths are obtained from the correlated photon rings in the above experiment, while the theoretical simulations require combining Equations (3)-(5) and Snell's Law, as presented in Figure 8.
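A minimal sketch of the angle conversion just described, assuming the crystal exit face is perpendicular to the pump beam so that Snell's law reads sin(θ_out) = n·sin(α); the refractive index value below is illustrative.

```python
import numpy as np

def divergence_angle_deg(alpha_deg, n_inside):
    """Convert the in-crystal non-collinear angle to the external divergence
    angle via Snell's law, assuming an exit face normal to the pump beam."""
    return np.degrees(np.arcsin(n_inside * np.sin(np.radians(alpha_deg))))

# Example: a 3 deg internal angle at n ~ 1.66 exits at roughly 5 deg.
print(divergence_angle_deg(3.0, 1.66))
```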
At both ends of the degenerate wavelength of 532 nm (415 nm ≤ λ < 532 nm and 550 nm ≤ λ ≤ 710 nm), the divergence angle decreases as the wavelength of the signal increases, while the relationship between the idler wavelength and the divergence angle is diametrically opposite: it increases as the wavelength of the idler increases. When the correlated photon wavelengths are 472 nm, 550 nm and 572 nm (near degeneracy), the measured average of the divergence angle differs by 1.73° from the theoretical value. On the one hand, there is an error of 0.25° between the cutting angle and the phase-matching angle, which causes the degeneracy wavelength to shift. On the other hand, the effect of the non-collinear angle on the phase-matching angle of the degenerate collinear wavelength is significant. It can be seen from Figure 7 that the mean non-collinear angular variation of the three wavelengths is 0.0156°, which is greater than for the others. Discussions The main result of this work is based on the quantitative expression of the phase-matching condition, and the theory studies the second-order change of the non-collinear angle with the wavelength of the correlated photons. At the same time, the parameters characterizing the spatial properties (the non-collinear angle of the correlated photon, spectral width, divergence angle) are quantitatively described and analyzed without varying parameters such as the pump and the crystal. In the experiment, a pair of correlated photon images can be observed and recorded, which verifies that the process of producing the correlated photons in SPDC satisfies the law of energy conservation.
However, previous works mainly studied the spatial distribution of correlated photons produced by varying the pump parameters (such as focusing the pump beam and changing the pump spectral width) during the process of SPDC, which influenced the spatial symmetry of the two-photon field [25,[40][41][42]. Also, a qualitative study was implemented for the spatial characteristics of correlated photons under conditions in which the diameter, near field and far field of the pump were considered [43]. Moreover, the spectral characteristics of type-I SPDC in the three common phase-matching configurations (collinear degenerate, noncollinear degenerate and collinear nondegenerate) were reported, and the relationship between spectroscopy and photon degree-of-freedom entanglement was studied, which provided theoretical guidance for the application of quantum ghost imaging [44], interference effects [45,46], and other techniques. The entangled two-photon field generated by SPDC phase matching in a BBO crystal was explored in depth by Karan et al. [47], who summarized the effects of various crystal and pump parameters on the entangled two-photon field, in order to illustrate how various experimental parameters affect the photon pairs generated. Finally, the two-photon wave function was derived based on angle-orbital angular momentum, and the two-photon angular Schmidt spectrum was calculated. From the perspective of quantum electrodynamics, Forbes et al. [48] quantitatively analyzed the degenerate down conversion mechanism of localized and nonlocalized sources, providing a reasonable explanation for the position-momentum uncertainty of correlated photon propagation. The spatial properties of the SPDC in a type-I BBO uniaxial crystal with degenerate-wavelength collinear phase-matching have been investigated theoretically and experimentally. Based on the quantitative expression of the phase-matching condition, the relationship between the non-collinear angle and the correlated photon wavelength has been presented. Also, the theoretical expression of the non-collinear angular variation has been derived and analyzed, and the results have a floating range of 0.0235°-0.2032°. Then, the spatial divergence angle outside the crystal is calculated. The correlated photon rings are measured by the CMOS camera, and the measured images can be converted to the corresponding measurements of non-collinear angular variations and divergence angles. The experimental measurement results verify the correctness of the numerical simulation of the non-collinear angular variation within this range. As for the divergence angle, the measurements at both ends of the degenerate wavelength 532 nm (except for the three nearby wavelengths) are consistent with the theoretical simulation. The mean non-collinear angular variation of the three correlated photon wavelengths near the degenerate wavelength is larger than the others, which also has a great influence on the phase-matching angle of the degenerate-wavelength collinear configuration. Moreover, at the phase-matching angle of 47.63°, the measured divergence angle decreases with increasing signal wavelength and increases with increasing idler wavelength, in agreement with the theoretical analysis. The theoretical model of the spatial properties of the correlated photon pairs established in this paper lays a foundation for a deeper understanding, in advance, of the positions and characteristics of the parametric photons in the photodetector calibration process.
Data Availability Statement: Data are available on request due to restrictions (e.g., privacy or ethical). The data presented in this study are available on request from the corresponding author. The data are not publicly available because the Chinese National Key Research and Development project has not been completed and some of the data involved in this article are still confidential. Conflicts of Interest: The authors declare no conflict of interest.
7,820.4
2021-01-07T00:00:00.000
[ "Physics" ]
Weak ergodicity breaking of receptor motion in living cells stemming from random diffusivity Molecular transport in living systems regulates numerous processes underlying biological function. Although many cellular components exhibit anomalous diffusion, only recently has the subdiffusive motion been associated with nonergodic behavior. These findings have stimulated new questions for their implications in statistical mechanics and cell biology. Is nonergodicity a common strategy shared by living systems? Which physical mechanisms generate it? What are its implications for biological function? Here, we use single particle tracking to demonstrate that the motion of DC-SIGN, a receptor with unique pathogen recognition capabilities, reveals nonergodic subdiffusion on living cell membranes. In contrast to previous studies, this behavior is incompatible with transient immobilization and therefore it cannot be interpreted according to continuous time random walk theory. We show that the receptor undergoes changes of diffusivity, consistent with the current view of the cell membrane as a highly dynamic and diverse environment. Simulations based on a model of ordinary random walk in complex media quantitatively reproduce all our observations, pointing toward diffusion heterogeneity as the cause of DC-SIGN behavior. By studying different receptor mutants, we further correlate receptor motion to its molecular structure, thus establishing a strong link between nonergodicity and biological function. These results underscore the role of disorder in cell membranes and its connection with function regulation. Due to its generality, our approach offers a framework to interpret anomalous transport in other complex media where dynamic heterogeneity might play a major role, such as those found, e.g., in soft condensed matter, geology and ecology. I. INTRODUCTION Cell function relies heavily on the occurrence of biochemical interactions between specific molecules. Encounters between interacting species are mediated by molecular transport within the cellular environment. A fundamental mode of transport for molecules in living cells is represented by diffusion, a motion characterized by random displacements. The quantitative study of diffusion is thus essential for understanding molecular mechanisms underlying cellular function, including target search [1], kinetics of transport-limited reactions [2,3], trafficking and signaling [4]. These processes take place in complex environments, crowded and compartmentalized by macromolecules and biopolymers. A prototypical example is the plasma membrane, where the interplay of lipids and proteins with cytosolic (e.g., the actin cytoskeleton) and extracellular (e.g., glycans) components generates a highly dynamic and heterogeneous organization [5]. The diffusion of a single molecule j, whose position x_j is sampled at N discrete times t_i = i∆t, is often characterized by the time-averaged mean-square displacement (T-MSD):

$$\overline{\delta_j^2}(t_{lag}) = \frac{1}{N-n}\sum_{i=1}^{N-n}\left[x_j(t_i + n\Delta t) - x_j(t_i)\right]^2, \qquad t_{lag} = n\Delta t,$$

which for a Brownian particle scales linearly in the time-lag t_lag. Application of fluorescence-based techniques to living cells has evidenced striking deviations from Brownian behavior in the nucleus [6], cytoplasm [7][8][9][10] and plasma membrane [11,12]. Indeed, numerous cellular components show anomalous subdiffusion [13], characterized by a power law dependence of the MSD ∼ t^β, with β < 1 [14][15][16].
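A minimal sketch of the T-MSD estimator defined above for a single 1D trajectory sampled at interval Δt; a synthetic Brownian walk stands in for the experimental data.

```python
import numpy as np

def t_msd(x, dt, max_lag_frac=0.25):
    """Time-averaged MSD of one trajectory x sampled every dt seconds."""
    n_max = int(len(x) * max_lag_frac)
    lags = np.arange(1, n_max + 1)
    msd = np.array([np.mean((x[n:] - x[:-n]) ** 2) for n in lags])
    return lags * dt, msd

# Synthetic Brownian trajectory: D = 0.1 um^2/s, 60 frames/s, 500 frames.
dt, D = 1 / 60, 0.1
x = np.cumsum(np.sqrt(2 * D * dt) * np.random.randn(500))
t_lag, msd = t_msd(x, dt)
# For 1D Brownian motion, MSD ~ 2*D*t_lag, i.e., log-log slope beta ~ 1.
print(np.polyfit(np.log(t_lag), np.log(msd), 1)[0])
```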
Owing to the implications of molecular transport for cellular function and the widespread evidence of subdiffusion in biology, major theoretical efforts have been devoted to understanding its physical origin. Subdiffusion is generally understood to be the consequence of molecular crowding [17], and several models have been developed to capture its main features. In general, subdiffusion can be obtained from models of energetic and/or geometric disorder, such as: (i) the continuous-time random walk (CTRW), i.e., a walk with waiting times between steps drawn from a power-law distribution [18]; (ii) fractional Brownian motion, i.e., a process with correlated increments [19]; (iii) obstructed diffusion, i.e., a walk on a percolation cluster or a fractal [15]; (iv) diffusion in a spatially and/or temporally heterogeneous medium [20][21][22]. Some of these models have been associated with relevant biophysical mechanisms such as trapping [23], the viscoelastic properties of the environment [24] or the presence of barriers and obstacles to diffusion [25]. Advances in single particle tracking (SPT) techniques have allowed the recording of long single-molecule trajectories and have revealed very complex diffusion patterns in living cell systems [5,11]. Recently, it has been shown that some cellular components show subdiffusion associated with weak ergodicity breaking (wEB) [9,10,12], with the most obvious signature being the non-equivalence of the T-MSD and the ensemble-averaged MSD (E-MSD). The experimentally determined ensemble-averaged MSD over a time interval m∆t is defined by:

$$\langle x^2(m\Delta t)\rangle = \frac{1}{J}\sum_{j=1}^{J}\left[x_j(t_1 + m\Delta t) - x_j(t_1)\right]^2,$$

where J is the number of observed single-particle trajectories and t_1 is the starting time, taken relative to the first point of each trajectory. Moreover, ergodicity breaking has been further confirmed by the presence of aging [26,27], i.e., the dependence of statistical quantities on the observation time. Based on these findings, several stochastic models presenting nonstationary (and thus nonergodic) subdiffusion have been proposed [20,[28][29][30][31]. Among these, CTRW has been used to model nonergodic subdiffusion in living cells [9,10,12] and has begun to provide theoretical insight into the physical origin of wEB in biological systems [28], associating the nonergodic behavior with the occurrence of particle immobilization with a heavy-tailed distribution of trapping times. At the same time, these intriguing findings have generated new questions: Is nonergodic subdiffusion a strategy shared by other biological systems? Can biophysical mechanisms other than trapping lead to similar behaviors? What is its functional relevance? Elucidating these issues is crucial to unravel the role of nonergodic subdiffusion in cellular function. The main aim of the present work is to explore other forms of transport in biological systems to provide answers to these questions. Here we used SPT to study the diffusion of a prototypical transmembrane protein, the pathogen-recognition receptor DC-SIGN; its stable expression in Chinese Hamster Ovary (CHO) cells reproduces the essential features of the receptors naturally occurring on dendritic cells [34,35], thus serving as a valid model system. To characterize its dynamics, we performed video microscopy of quantum-dot-labeled DC-SIGN stably transfected in CHO cells in epi-illumination configuration (Fig. 1A-B, see Appendix A for details on cell culture and labeling procedures). In order to follow the standard biology nomenclature and to differentiate it from its mutated forms, in this manuscript we refer to the full receptor as the wild-type DC-SIGN (wtDC-SIGN).
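A companion sketch for the ensemble average defined above: the E-MSD pools the squared displacement from the start of each trajectory across many particles (synthetic trajectories again stand in for the experimental ones).

```python
import numpy as np

def e_msd(trajs, dt):
    """Ensemble-averaged MSD over J trajectories, displacement from t1."""
    trajs = np.asarray(trajs)               # shape (J, N)
    disp2 = (trajs - trajs[:, :1]) ** 2     # squared displacement from start
    m = np.arange(1, trajs.shape[1])
    return m * dt, disp2.mean(axis=0)[1:]

# J = 200 synthetic Brownian trajectories, N = 300 frames at 60 fps.
dt, D = 1 / 60, 0.1
steps = np.sqrt(2 * D * dt) * np.random.randn(200, 300)
t, msd = e_msd(np.cumsum(steps, axis=1), dt)
print(np.polyfit(np.log(t), np.log(msd), 1)[0])  # ~1: ergodic Brownian case
```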
We tracked quantum dot positions with nanometer accuracy by means of an automated algorithm [36]. We acquired more than 600 trajectories, all longer than 200 frames with some as long as 2000 frames, at a camera rate of 60 frame·s⁻¹, to allow the evaluation and comparison of time- and ensemble-averaged MSDs. The T-MSD of individual trajectories displayed a linear behavior (β ∼ 1), consistent with pure Brownian diffusion (Fig. 1C). The fitting of the average T-MSD provided a value β = 0.95 ± 0.05. In addition, the distribution of the exponents β obtained by nonlinear fitting of the T-MSDs of the individual trajectories (inset of Fig. 1D) showed an average β = 0.98 ± 0.06. Since the T-MSD values corresponding to different trajectories were broadly scattered, for each trajectory we calculated the diffusion coefficient D_s by a linear fit of the T-MSD at time lags < 10% of the trajectory duration [37]. As expected, the resulting values of D_s were found to have a very broad distribution, spanning more than two orders of magnitude (Fig. 1D). However, in marked contrast with the T-MSD, the E-MSD deviated significantly from linearity, showing subdiffusion with an exponent β = 0.84 ± 0.03 (Fig. 1E). The difference between the scalings of T-MSD and E-MSD is a clear signature of wEB [38]. To inquire whether DC-SIGN dynamics also exhibits aging, we computed the time-ensemble-averaged MSD (TE-MSD) by truncating the data at different observation times T:

$$\langle\overline{\delta^2}\rangle(t_{lag}, T) = \frac{1}{J}\sum_{j=1}^{J}\overline{\delta_j^2}(t_{lag}; T),$$

where each T-MSD is evaluated over the trajectory truncated at T, and extracting the corresponding diffusion coefficient D_TE by linear fitting [37]. In systems with uncorrelated increments, it can be shown under rather general assumptions that D_TE ∼ T^{β−1} [21,39]. The observed D_TE indeed scaled as a power law with an exponent of −0.17 ± 0.05 (Fig. 1F), yielding a value of β in good agreement with the exponent determined from the E-MSD. These results thus demonstrate that wtDC-SIGN dynamics exhibits aging.
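A sketch of the aging analysis just described: truncate each trajectory at increasing observation times T, fit the short-lag TE-MSD for D_TE, and check the scaling D_TE ∼ T^(β−1). Brownian surrogates give an exponent near 0; nonergodic data would give a negative exponent, as in Fig. 1F.

```python
import numpy as np

def d_te(trajs, dt, T_frames):
    """D from the time-ensemble-averaged MSD of trajectories truncated
    at T_frames, fitted at short lags (< 10% of the observation time)."""
    cut = trajs[:, :T_frames]
    lags = np.arange(1, max(3, T_frames // 10))
    msd = np.array([np.mean([np.mean((x[n:] - x[:-n]) ** 2) for x in cut])
                    for n in lags])
    slope = np.polyfit(lags * dt, msd, 1)[0]
    return slope / 2.0  # 1D Brownian convention: MSD = 2*D*t

dt, D = 1 / 60, 0.1
trajs = np.cumsum(np.sqrt(2 * D * dt) * np.random.randn(100, 2000), axis=1)
Ts = [200, 500, 1000, 2000]
Ds = [d_te(trajs, dt, T) for T in Ts]
print(np.polyfit(np.log(Ts), np.log(Ds), 1)[0])  # ~0 here; beta - 1 in general
```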
III. FAILURE OF THE CTRW MODEL The motion of some biological components, including the Kv2.1 potassium channel in the plasma membrane [12], lipid granules in yeast cells [9] and insulin-containing vesicles in pancreatic β-cells [10], has been reported to exhibit subdiffusion compatible with the coexistence of an ergodic and a nonergodic process. The nonergodic part of the process has been modeled within the framework of the CTRW [28,38,39]. CTRW is a random walk in which a particle performs jumps whose lengths have a finite variance, but between jumps the walker remains trapped for random dwell times, distributed with a power-law probability density ∼ t^{−(1+β)}, which for β ≤ 1 has an infinite mean. The duration of trapping events is independent of the previous history of the system. The energy landscape of this process is characterized by potential wells with a broad depth distribution. Such energetic disorder yields nonergodicity, since no matter how long one measures, deep traps cause dwell times on the order of the measurement time. Within the biological context, these traps have generally been associated with chemical binding to stationary cellular components (e.g., the actin cytoskeleton [12] or microtubuli [10]). In SPT experiments, the limited localization accuracy for determining the particle position sets a lower limit for the diffusivity value that can experimentally be measured. In our case, this lower threshold lies at D_th = 6 · 10⁻⁴ µm²s⁻¹. Therefore, a segmentation algorithm [41] was applied to the x- and y-displacements of our trajectories in order to detect events with diffusivity lower than D_th. Surprisingly, transient trapping was only detected over less than 5% of the total recording time (Fig. 2A-C). The detected trapping events displayed an average duration of 330 ± 30 ms (Fig. 2D). An alternative analysis, based on the transient confinement zone algorithm [42], gave comparable results [43]. In order to understand the nature of these trapping events, we attempted to fit their distribution by means of both an exponential and a power-law distribution function ∼ t^{−(1+β)}, as expected for CTRW [12]. The power-law pdf provided a better fit to the data, yielding an exponent β = 0.83 ± 0.05. In addition, we constructed the distribution of escape times by identifying the duration of the events in which a trajectory remains within a given radius R_TH (Fig. 2E). For a CTRW, the long-time dynamics is dominated by anomalous trapping events and, as a result, this quantity is expected to be independent of R_TH [12]. In strong contrast to the CTRW model, the escape times we measured showed a clear dependence on R_TH.
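The power-law fit of trapping times mentioned above can be done with a simple maximum-likelihood (Hill-type) estimator for a pdf ∝ t^(−(1+β)) above a lower cutoff; a sketch under the assumption of a sharp cutoff t_min, not necessarily the estimator used in the paper.

```python
import numpy as np

def powerlaw_beta_mle(times, t_min):
    """MLE of beta for p(t) = beta * t_min**beta * t**-(1+beta), t >= t_min."""
    t = np.asarray(times)
    t = t[t >= t_min]
    return len(t) / np.sum(np.log(t / t_min))

# Sanity check on synthetic data with beta = 0.83 (inverse-CDF sampling).
beta_true, t_min = 0.83, 0.05
u = np.random.rand(5000)
samples = t_min * u ** (-1.0 / beta_true)
print(powerlaw_beta_mle(samples, t_min))  # close to 0.83
```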
IV. DC-SIGN DISPLAYS CHANGES OF DIFFUSIVITY Recently, diffusion maps of the cell membrane have shown the presence of patches with strongly varying diffusivity [36,44,45]. Based on this evidence, we have recently proposed a class of models describing ordinary Brownian motion with a diffusivity that varies randomly but is constant over time intervals or spatial patches of random size [21]. These models describe anomalous diffusion and wEB in complex and heterogeneous media, such as the cellular environment, without invoking transient trapping. To address whether the observed nonergodic dynamics of DC-SIGN can be described within this theoretical framework, we further analyzed individual trajectories by means of a change-point algorithm to detect variations of diffusivity in time [41]. In brief, the algorithm is a likelihood-based approach to quantitatively recover time-dependent changes in diffusivity, based on the calculation of maximum likelihood estimators for the determination of diffusion coefficients and the application of a likelihood ratio test for the localization of the changes. Notably, DC-SIGN trajectories displayed Brownian motion with relatively constant diffusivity over intervals of varying length, with significant changes between these intervals (Fig. 3A-C). Similar features were identified in a large fraction of trajectories, with 63% showing at least one diffusivity change (Fig. 3D), in qualitative agreement with the models of random diffusivity [21]. To obtain a comprehensive understanding of our data, we considered an annealed model in which randomly diffusing particles undergo sudden changes of diffusion coefficient [21]. The distribution of diffusion coefficients D that a particle can experience is assumed to have a power-law behavior ∼ D^{σ−1} for small D (with σ > 0) and a fast decay for D → ∞. Given D, the transit time τ (i.e., the time a particle moves with a given D) is taken to have a probability distribution with mean ∼ D^{−γ} (with −∞ < γ < ∞). Since the motion during the transit time τ is Brownian, particles explore areas of radius r ∼ √(τD), and the radius of the region explored with such a diffusion coefficient has a probability distribution with mean ∼ D^{(1−γ)/2}. We performed in silico experiments of 2D diffusion (Fig. 4A-B), assuming a distribution of diffusion coefficients D given by:

$$P_D(D) = \frac{D^{\sigma-1}\, e^{-D/b}}{b^{\sigma}\,\Gamma(\sigma)}, \quad (4)$$

and a conditional distribution of transit times τ given by:

$$P(\tau|D) = \frac{D^{\gamma}}{k}\, \exp\!\left(-\frac{\tau D^{\gamma}}{k}\right), \quad (5)$$

where b and k are dimensional constants and Γ(x) is the Gamma function. The remarkable agreement between simulations and experimental data strongly supports heterogeneous diffusion as the origin of DC-SIGN nonergodicity. It must be noticed that, in contrast to CTRW, our model does not assume particle immobilization, but a continuous distribution of diffusivity, with P_D(D) ∼ D^{σ−1} for small D. However, from the experimental point of view, it is not possible to distinguish immobilization from very slow diffusion. In fact, the limited localization accuracy of SPT experiments translates into a lower limit for the diffusivity value D_th that can be detected. Therefore, in our analysis, trajectories, or portions of trajectories, with diffusivity lower than this threshold value (D_th = 6 · 10⁻⁴ µm²s⁻¹) are identified as immobile, as shown in Fig. 2A-B. From the model described above, the distribution of the duration of these "apparent" immobilizations can be calculated as:

$$P_{imm}(\tau) = \int_0^{D_{th}} P(\tau|D)\, P_D(D)\, \mathrm{d}D. \quad (6)$$

We neglect here the possibility that the trajectory of an in-silico particle contains two consecutive segments characterized by diffusivities D_i and D_{i+1} which are both smaller than D_th, as this probability is vanishingly small in the parameter regime of our setup. Independently of D_th, the integral in Eq. (6) scales asymptotically as τ^{−1−β} with β = σ/γ, providing for the distribution of immobilization events the same behavior predicted by the CTRW [12]. Therefore, the distribution of immobilization times in Fig. 2D is fully compatible with the prediction of our model, further confirming its agreement with the experimental data.
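A Monte Carlo sketch of the apparent-immobilization statistics implied by Eq. (6), under the assumption that Eqs. (4) and (5) take the gamma/exponential forms written above: segments with D below D_th are "immobile", and their durations should develop a power-law tail with β = σ/γ. The parameter values are illustrative only.

```python
import numpy as np

sigma, gamma, b, k = 0.8, 1.0, 0.05, 1e-3   # illustrative parameters only
D_th, n = 6e-4, 2_000_000

D = np.random.gamma(sigma, b, size=n)        # Eq. (4): P_D ~ D^(sigma-1) e^(-D/b)
tau = np.random.exponential(k * D ** -gamma) # Eq. (5): mean transit time ~ D^-gamma
imm = tau[D < D_th]                          # apparently immobile segments

# Hill estimator of the tail exponent above a cutoff.
t_min = np.quantile(imm, 0.9)
tail = imm[imm >= t_min]
beta_hat = len(tail) / np.sum(np.log(tail / t_min))
print(beta_hat, sigma / gamma)               # tail exponent vs sigma/gamma
```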
V. DYNAMICS OF RECEPTOR MUTANTS From the structural point of view (Fig. 5A), DC-SIGN is a tetrameric transmembrane protein, with each of the four subunits comprising: (i) an extracellular domain that allows binding of the receptor to pathogens, i.e., the ligand binding domain; (ii) a long neck region; and (iii) a transmembrane part followed by a cytoplasmic tail that allows interactions with inner cell components and facilitates the uptake and internalization of pathogens [46,47]. Moreover, DC-SIGN contains a single N-glycosylation site mediating interactions with glycan-binding proteins [43]. To gain insight into the molecular mechanisms of DC-SIGN nonergodic diffusion, we generated three mutated forms of the receptor (Fig. 5A). These mutations have been reported to modify the interaction of DC-SIGN with other cellular components, strongly affecting DC-SIGN function [43,48,49]. The N80A mutant lacks the N-glycosylation site. This defect hinders interactions of DC-SIGN with components of the extracellular membrane that bind to sugars [43]. The ∆35 mutant lacks a significant part of the cytoplasmic tail, preventing interactions with cytosolic components such as actin [48]. Finally, the ∆Rep mutant lacks part of the neck region, abrogating interactions between different DC-SIGN molecules [49]. We found that each mutation has a very different effect on the dynamics of the receptor (for the N80A mutant, see Fig. 5C-F). Overall, these results demonstrate that the molecular structure of the receptor strongly influences its diffusive behavior on the cell membrane and the occurrence of weak ergodicity breaking. VI. NONERGODICITY AND BIOLOGICAL FUNCTION Together with our previous biophysical studies on DC-SIGN [43,49], the data and analysis presented in this paper allow us to link the dynamical behavior of DC-SIGN to its functional role in pathogen capture and uptake (known as endocytosis). In terms of steady-state organization, wtDC-SIGN, N80A and ∆35 preferentially form nanoclusters on the cell membrane, which are crucial for regulating pathogen binding [43,49], whereas removal of the neck region (∆Rep) reduces nanoclustering and binding efficiency to small pathogens, such as viruses [49]. Our results thus show that the diffusive behavior of the receptor is strongly linked to nanoclustering, but not merely due to size-dependent diffusivity and/or time-dependent cluster formation and breakdown. In fact, dual-color SPT experiments performed at high labeling density do not reveal correlated motion between nearby DC-SIGN nanoclusters, excluding the occurrence of dynamic nanocluster coalescence [43]. Moreover, although superresolution imaging has revealed that wtDC-SIGN, N80A and ∆35 form nanoclusters with similar distributions of size and stoichiometry [43,49], our dynamical data evidence significant differences in their diffusion patterns (Figs. 1 and 5). Our data are in fact consistent with the view of the plasma membrane as a highly dynamic and heterogeneous medium, where wEB stems from the enhanced ability of DC-SIGN nanoclusters to interact with the membrane environment, including components from the outer and inner membrane leaflets. This interaction is inhibited (or strongly reduced) in the case of the ∆Rep mutant, since it does not form nanoclusters [49]. As a result, the motion of ∆Rep is Brownian and ergodic, and interestingly this dynamic behavior correlates with its impaired pathogen binding capability [34,49]. In contrast, we observed that both wtDC-SIGN and N80A, which show a similar degree of nanoclustering [43], exhibit wEB. However, the distribution of diffusivity of N80A is significantly broader than that of wtDC-SIGN, and is shifted towards lower diffusivity values (Fig. 5C-F). This increased heterogeneity correlates with altered interactions of N80A with extracellular components, resulting from the removal of the glycosylation site. Indeed, we have recently shown that the N80A mutant has a reduced capability to interact with extracellular sugar-binding partners [43]. Thus, it appears that the extracellular milieu next to the membrane contributes to the degree of dynamical heterogeneity sensed by the receptor. Remarkably, this correlation also extends to the functional level, as we have recently shown that interactions of DC-SIGN with extracellular sugar-binding proteins influence encounters of DC-SIGN with the main endocytic protein clathrin. In turn, this results in reduced clathrin-dependent endocytosis of the receptor and its pathogenic ligands [43]. Finally, the ∆35 mutant exhibits nanoclustering [49] and wEB similar to those of wtDC-SIGN. From the biological point of view, however, this mutant is not able to interact with cytosolic components in close proximity to the inner membrane leaflet, including actin [48]. Therefore, in contrast to the extracellular influence observed for the N80A mutant, the results obtained for the ∆35 mutant indicate that interactions with the actin cytoskeleton, responsible for the CTRW-like behavior of other proteins [12], do not play a major role in DC-SIGN wEB.
Nevertheless, it should be mentioned that the reduced endocytic capability of the ∆35 mutant could not be uniquely attributed to its dynamic behavior on the cell membrane, but rather to its impaired interaction with downstream partners involved in internalization [48,49]. VII. CONCLUSIONS AND OUTLOOK We have demonstrated that the DC-SIGN receptor displays subdiffusive dynamics characterized by weak ergodicity breaking; further studies might provide a deeper comprehension of these mechanisms at the molecular level. The model used to interpret our data provides a flexible and realistic framework to describe anomalous motion in cell membranes. Although in the present work we have focused our simulations on time-dependent changes of diffusivity, similar conclusions can be obtained assuming a spatial dependence, with constant diffusivity on membrane patches of random size [21]. The current data do not allow discrimination between the two scenarios. The application of techniques that combine dynamic and spatial mapping at high labeling conditions [45,50] would be necessary to verify the occurrence of spatial maps of diffusivity. In addition, numerical simulations of spatially dependent random diffusivity require the construction of 2-dimensional diffusivity maps consistent with the model's probability densities, which is a non-trivial task. While the work presented here focuses on the cell plasma membrane, we point out that these results have much broader implications. In fact, our model and analysis are very general and can be applied to any diffusive system that shows wEB, in order to investigate the role that heterogeneous diffusivity plays in the observed anomalies. Fundamental questions about the nature of anomalous and nonergodic diffusion in disordered media arise in many fields, such as the life sciences [28], soft condensed matter [51,52], ultracold gases [53,54], geology [55] and ecology [56]. Our work provides an alternative conceptual framework and specific tools for answering these questions. During measurements, cells were kept at 37 °C and in a 5% CO2 atmosphere using a temperature controller (TC-324B, Warner Instrument) and a digital CO2 controller (DGTCO2BX, Okolab). Trajectories were analyzed with custom Matlab code based on the algorithm described in Ref. [36]. In order to avoid artifacts in trajectory reconnection caused by quantum dot blinking dynamics and/or a high local density of quantum dots, each trajectory was terminated at the first video frame not displaying a clearly identifiable bright spot in the surroundings of the quantum dot localization obtained in the previous frame. Similarly, the trajectory reconstruction was also interrupted if the presence of multiple bright spots did not allow unambiguous identification of the same quantum dot in successive video frames. As a further check against false-positive reconnections, trajectories were overlaid on raw movies and visually inspected. Appendix C: Data analysis Time-, ensemble- and time-ensemble-averaged mean-squared displacements were calculated as described in [12]. Exponents of the E-MSD, the average T-MSD and D_TE were obtained by linear fitting of the log-log transformed data. Errors were calculated as the 99% confidence interval of the fitting parameters. Short-time diffusion coefficients were extracted from the linear fit of the first 10% of the points of the T-MSD curves [37]. Measurements of the apparent diffusion of quantum dots on fixed cells and glass coverslips were used to estimate the smallest detectable diffusivity.
Short-time diffusion coefficients were obtained as described above for trajectories of immobilized quantum dots, and the corresponding probability distribution was calculated. 95% of the immobilized quantum dot trajectories showed values lower than 6 · 10⁻⁴ µm²/s, which was therefore set as the threshold (D_th) for classifying a trajectory as mobile. Dynamical changes in the motion of DC-SIGN receptors were identified by application of the change-point algorithm described in Ref. [41]. In brief, the trajectories were recursively segmented and a maximum-likelihood-ratio test was applied to the trajectory displacements (∆x, ∆y) in order to identify sudden changes of diffusivity. The critical values for Type I error rates were set to a confidence level of 99%, corresponding to a 1% probability of a false-positive identification of a change-point. For each dynamical region identified by the algorithm, the short-time diffusion coefficient was calculated from a linear fit of the first 10% of the points of the corresponding MSD curves [37]. Regions showing a short-time diffusion coefficient lower than D_th were considered compatible with transient immobilization. Appendix D: Simulations Simulated trajectories (500 per parameter set) were obtained by generating random diffusion coefficients D according to the probability distribution given in Eq. (4). For each diffusion coefficient, the corresponding transit time τ was calculated as a random number drawn from the distribution given in Eq. (5). Particle coordinates r = {x, y} were generated as:
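The update rule following the truncated sentence above is not shown in the text; the sketch below assumes the standard Brownian step x(t+Δt) = x(t) + sqrt(2·D·Δt)·ξ with ξ ~ N(0,1), together with the gamma/exponential forms assumed earlier for Eqs. (4) and (5). Parameter values are illustrative.

```python
import numpy as np

def simulate_trajectory(n_frames, dt, sigma, gamma, b, k):
    """Annealed random-diffusivity walk: Brownian segments whose D is redrawn
    from Eq. (4) after a transit time drawn from Eq. (5) (forms as assumed)."""
    x = np.zeros((n_frames, 2))
    frame = 0
    while frame < n_frames - 1:
        D = np.random.gamma(sigma, b)                  # Eq. (4)
        tau = np.random.exponential(k * D ** -gamma)   # Eq. (5)
        seg = max(1, min(int(tau / dt), n_frames - 1 - frame))
        steps = np.sqrt(2 * D * dt) * np.random.randn(seg, 2)
        x[frame + 1:frame + seg + 1] = x[frame] + np.cumsum(steps, axis=0)
        frame += seg
    return x

# 500 trajectories per parameter set, as stated above (parameters illustrative).
trajs = [simulate_trajectory(300, 1/60, 0.8, 1.0, 0.05, 1e-3) for _ in range(500)]
print(trajs[0].shape)
```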
Goldstone bosons and the Englert-Brout-Higgs mechanism in non-Hermitian theories In recent work Alexandre, Ellis, Millington and Seynaeve have extended the Goldstone theorem to non-Hermitian Hamiltonians that possess a discrete antilinear symmetry such as $PT$. They restricted their discussion to those realizations of antilinear symmetry in which all the energy eigenvalues of the Hamiltonian are real. Here we extend the discussion to the two other realizations possible with antilinear symmetry, namely energies in complex conjugate pairs or Jordan-block Hamiltonians that are not diagonalizable at all. In particular, we show that under certain circumstances it is possible for the Goldstone boson mode itself to be one of the zero-norm states that are characteristic of Jordan-block Hamiltonians. While we discuss the same model as Alexandre et al., our treatment is quite different, though their main conclusion that one can have Goldstone bosons in the non-Hermitian case remains intact. In their paper Alexandre et al. presented a variational procedure for the action in which the surface term played an explicit role, to thus suggest that one has to use such a procedure in order to establish the Goldstone theorem in the non-Hermitian case. However, by taking certain fields that they took to be Hermitian to actually either be anti-Hermitian or be made so by a similarity transformation, we show that we are then able to obtain a Goldstone boson using a completely standard variational procedure. Since we use a standard variational procedure we can readily extend our analysis to a continuous local symmetry by introducing a gauge boson. We show that when we do this the gauge boson acquires a non-zero mass by the Higgs mechanism in all realizations of the antilinear symmetry, except the one where the Goldstone boson itself has zero norm, in which case, and despite the fact that the continuous local symmetry has been spontaneously broken, the gauge boson remains massless. I. INTRODUCTION Following work by Bender and collaborators [1][2][3][4][5] it has become apparent that quantum mechanics is much richer than conventional Hermitian quantum mechanics. However, if one wishes to maintain probability conservation, one needs to be able to define an inner product that is time independent. The reason that one has any freedom at all in doing this is because the Schrödinger equation i∂_t|ψ⟩ = H|ψ⟩ only involves the ket state and leaves the bra state unspecified. While the appropriate bra state is the Hermitian conjugate of the ket when the Hamiltonian is Hermitian, for non-Hermitian Hamiltonians a more general bra state is needed. However, one cannot define an inner product that is time independent for any non-Hermitian Hamiltonian. Rather, it has been found ([6,7] and references therein) that the most general Hamiltonian for which one can construct a time-independent inner product is one that has an antilinear symmetry, and in such a case the required bra state is the conjugate of the ket state with respect to that particular antilinear symmetry. When a Hamiltonian has an antilinear symmetry its energy eigenspectrum can be realized in three possible ways: all eigenvalues real and the eigenspectrum complete; some or all of the eigenvalues in complex conjugate pairs with the eigenspectrum still being complete; or the eigenspectrum incomplete, with the Hamiltonian then being of non-diagonalizable, and thus necessarily non-Hermitian, Jordan-block form.
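The constraint that antilinearity places on a spectrum is easy to see numerically: any matrix that commutes with an antilinear operator has eigenvalues that are either real or come in complex conjugate pairs. A toy check for the simplest antilinear operator, complex conjugation K itself (a sketch, not tied to any specific Hamiltonian of this paper):

import numpy as np

rng = np.random.default_rng(0)

# A real matrix H commutes with the antilinear operator A = K (complex
# conjugation), since K H K = H* = H.  Its eigenvalues must therefore be
# real or come in complex conjugate pairs.
H = rng.standard_normal((6, 6))               # real, non-symmetric, non-Hermitian
eig = np.linalg.eigvals(H)
assert np.allclose(np.sort_complex(eig), np.sort_complex(eig.conj()))  # spectrum closed under E -> E*
print(np.round(np.sort_complex(eig), 3))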
Of these three possible realizations only the first can also be achieved with a Hermitian Hamiltonian, and while Hermiticity implies the reality of energy eigenvalues, there is no theorem that would require a non-Hermitian Hamiltonian to have complex eigenvalues, with Hermiticity only being sufficient for the reality of eigenvalues but not necessary. 1 The necessary condition for the reality of energy eigenvalues is that the Hamiltonian have an antilinear symmetry [7][8][9][10], while the necessary and sufficient condition is that in addition all energy eigenstates are eigenstates of the antilinear operator [3]. Interest in non-Hermitian Hamiltonians with an antilinear symmetry was first triggered by the work of Bender and collaborators [1,2], who found that the eigenvalues of the Hamiltonian H = p² + ix³ are all real. This surprising reality was traced to the fact that the Hamiltonian possesses an antilinear PT symmetry (P is parity and T is time reversal), under which PpP = −p, PxP = −x, TpT = −p, TxT = x, TiT = −i. In general for any Hamiltonian H with an antilinear symmetry A (i.e. with AH = HA), when acting on H|ψ⟩ = E|ψ⟩ one has AH|ψ⟩ = AHA⁻¹A|ψ⟩ = HA|ψ⟩ = E*A|ψ⟩. Thus for every eigenstate |ψ⟩ of H with energy E there is another eigenstate A|ψ⟩ of H with energy E*. II. SPONTANEOUSLY BROKEN NON-HERMITIAN THEORY WITH A CONTINUOUS GLOBAL SYMMETRY The model introduced in [13] consists of two complex (i.e. charged) scalar fields φ₁(x) and φ₂(x), with an action, given in (1), in which the star symbol denotes complex conjugation, and thus Hermitian conjugation since neither of the two scalar fields possesses any internal symmetry index. Since the action is not invariant under complex conjugation, it is not Hermitian. It is however invariant under the CPT transformation given in (2), and thus has an antilinear symmetry. 4 Since one can construct the energy-momentum tensor T^µν by the variation T^µν = 2(−g)^(−1/2) δI(φ₁, φ₂, φ₁*, φ₂*)/δg_µν with respect to the metric g_µν of the covariantized form of the action (momentarily replace ordinary derivatives by covariant ones and replace the measure by d⁴x(−g)^(1/2)), it follows from general coordinate invariance that a so-constructed energy-momentum tensor is automatically covariantly conserved in solutions to the equations of motion that follow from stationarity of the same action. Then, since one can set H = ∫d³x T^00, it follows that the associated Hamiltonian is time independent. Moreover, since the metric is CPT even, then since the action is CPT invariant it follows that the Hamiltonian is CPT invariant too. The Hamiltonian associated with (1) thus has an antilinear CPT symmetry. 5 In regard to (2), we note here that for φ₂ the transformation is not the conventional CPT transformation of scalar fields that is used in quantum field theory (one in which all scalar field CPT phases are positive [14]) but a similarity transformation of it. We will need to return to this point below, but for the moment we just use (2) as is. As written, the action given in (1) is invariant under the electric charge transformation given in (3), to thus possess a standard Noether current that is conserved in solutions to the equations of motion (36) and (37) associated with (1).
We note here that the authors of [13] used a non-standard Euler-Lagrange variational procedure (one which involves a non-trivial surface term) to obtain a non-standard set of equations of motion and a non-standard current (one that is not a Noether current associated with an invariance of the action), one that is nonetheless conserved in solutions to this non-standard set of equations of motion, and we discuss this issue in Sec. IV. However, we shall use a standard variational procedure and a standard Noether current approach. With the potential of the field φ₁ being of the form of a double-well potential, in its non-trivial minimum the scalar field φ₁ would acquire a non-trivial vacuum expectation value. This would then break the electric charge symmetry spontaneously, and one would thus wonder whether there might still be a massless Goldstone boson despite the lack of Hermiticity. As shown by the authors of [13] for the current they use and by us here for the above j^µ, in both cases a Goldstone boson is indeed present. To study the dynamics associated with the action given in (1) we have found it convenient to work in the component basis given in (5), in which φ₁ is decomposed into Hermitian components χ₁ and χ₂, and φ₂ into ψ₁ and ψ₂. 4 The study of [7,12] shows that for relativistic actions such as that given in (1) CPT must be an invariance, a point we elaborate on further below. In their paper the authors of [13] took T to conjugate fields. While T does conjugate wave functions in quantum mechanics, conventionally in quantum field theory T does not conjugate q-number fields (it only conjugates c-numbers). Rather, it is charge conjugation C that conjugates fields. Thus what the authors of [13] refer to as PT is actually CPT, just as required by the analysis of [7,12]. However, none of the conclusions of [13] are affected by this. 5 Just as is familiar from Hermitian quantum field theory, one can also construct the same metric-derived energy-momentum tensor from the translation invariance of the action and the equations of motion of the fields, since nothing in that construction actually requires Hermiticity. The advantage of using the metric approach, which also is not sensitive to Hermiticity, is that it ensures that the Hamiltonian that is obtained has the same transformation properties under CPT symmetry as the starting action. Because of the transformations in the φ₂ sector that are given in (2), ψ₁ has to have odd T parity while ψ₂ has to have even T parity. (However, their C parities are standard, with ψ₁ having even C parity while ψ₂ has odd C parity.) We discuss this pattern of T parity assignments further below, where we will make a commutation-relation-preserving similarity transformation that will effect ψ₁ → −iψ₁, ψ₂ → −iψ₂, to thus change the signs of their T and CPT parities. 7 The PT-symmetric p² + ix³ theory is actually CPT symmetric, since p and x are C even and charge conjugation plays no role in non-relativistic systems. Given a mode with λ₀ = 0 (the determinant in the (χ₂, ψ₁) sector of M being zero), then just as noted in [13], the presence of a massless Goldstone boson is apparent, and the Goldstone theorem is thus seen to hold when a non-Hermitian Hamiltonian has an antilinear symmetry. 8 If we restrict the sign of the factor in the square root in λ± to be positive (the case considered in [13]), then all mass eigenvalues are real.
However, we note that we obtain a mode with λ₀ = 0 regardless of the magnitude of this factor, and thus even obtain a Goldstone boson when the factor in the square root term is negative and the mass eigenvalues appear in complex conjugate pairs. Moreover, as we show in Sec. III below, when the factor in the square root term is zero, in the (χ₁, ψ₂) sector the matrix M becomes Jordan block. The Goldstone boson mode is thus present in all three of the eigenvalue realizations that are allowed by antilinearity (viz. antilinear symmetry). Moreover, technically we do not even need to ascertain what the antilinear symmetry might be, since as shown in [7,10], once we obtain an eigenvalue spectrum of the form that we have obtained in the (χ₁, ψ₂) sector, the mass matrix must admit of an antilinear symmetry. Thus antilinearity implies this particular form for the mass spectrum, and this particular form for the mass spectrum implies antilinearity. Finally, we note that if in the (χ₂, ψ₁) sector we set µ⁴ = m₂⁴, then not only does λ₁ become zero just like λ₀, but as we show in Sec. III the entire sector becomes Jordan block, with the Goldstone boson eigenfunction itself then having the zero norm that is characteristic of Jordan-block systems. III. EIGENVECTORS OF THE MASS MATRIX To discuss the eigenvector spectrum of the mass matrix M, it is convenient to introduce the PT-theory V operator. Specifically, it was noted in [7][8][9] that if a time-independent Hamiltonian has an antilinear symmetry there will always exist a time-independent operator V that obeys the so-called pseudo-Hermiticity condition V H = H†V. If V is invertible (this automatically being the case for any finite-dimensional matrix such as the mass matrix M of interest to us here), then H and H† are isospectrally related according to H† = V H V⁻¹, to thus have the same set of eigenvalues. Since such an isospectral relation requires that the eigenvalues of H be real or in complex pairs, pseudo-Hermiticity is equivalent to antilinearity. If H is not Hermitian, one has to introduce separate right- and left-Schrödinger equations in which H acts to the right or to the left. Then from the relation i∂_t|n⟩ = H|n⟩ obeyed by solutions to the right-Schrödinger equation we obtain −i∂_t⟨n| = ⟨n|H†, with ⟨n| then not being a solution to the left-Schrödinger equation as it does not obey −i∂_t⟨n| = ⟨n|H. Consequently in the non-Hermitian case the standard Dirac norm ⟨n(t)|n(t)⟩ = ⟨n(0)|e^(iH†t) e^(−iHt)|n(0)⟩ is not time independent (i.e. not equal to ⟨n(0)|n(0)⟩), and one cannot use it as an inner product. However, the V norm constructed from V is time independent since i∂_t⟨n(t)|V|n(t)⟩ = ⟨n(t)|(V H − H†V)|n(t)⟩ = 0. Since we can set ⟨L_n| = ⟨n|V, as per (13), we see that it is the state ⟨n|V that is a solution to the left-Schrödinger equation and not the bra ⟨n| itself. Moreover, from (13) we obtain −i∂_t⟨n|V = ⟨n|H†V = ⟨n|V H, to thus confirm the time independence of the V norm. Through the V operator then we see that time independence of inner products and antilinear symmetry are equivalent. Given that ⟨L_n| = ⟨n|V is a solution to the left-Schrödinger equation, in the event that it is also a left-eigenvector of H and |R_n⟩ is a right-eigenvector of H, in the antilinear case the completeness relation is given not by Σ_n |n⟩⟨n| = I but by Σ_n |R_n⟩⟨L_n| = I instead. As shown in [15], when charge conjugation is separately conserved, the left-right ⟨R_n|V|R_m⟩ V-norm is the same as the overlap of the right-eigenstate |R_n⟩ with its PT conjugate (like PT conjugation, Hermitian conjugation is also antilinear).
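For a finite-dimensional matrix the V operator can be constructed directly from the right-eigenvectors, and the time independence of the V norm checked explicitly. A sketch with our own toy matrix, chosen to be non-Hermitian but with a real spectrum:

import numpy as np

# A non-Hermitian matrix with an entirely real spectrum (eigenvalues 2 ± 0.5).
H = np.array([[2.0, 1.0],
              [0.25, 2.0]])
evals, R = np.linalg.eig(H)              # columns of R are right-eigenvectors
Rinv = np.linalg.inv(R)
V = Rinv.conj().T @ Rinv                 # V = (R^-1)† R^-1 obeys V H = H† V
assert np.allclose(V @ H, H.conj().T @ V)

def U(t):                                # e^{-iHt} built from the eigenbasis
    return R @ np.diag(np.exp(-1j * evals * t)) @ Rinv

psi0 = np.array([1.0, 2.0j])
for t in (0.0, 0.7, 1.9):
    psi = U(t) @ psi0
    print(f"t={t:.1f}  Dirac norm={np.vdot(psi, psi).real:.4f}  "
          f"V norm={np.vdot(psi, V @ psi).real:.4f}")

The printed Dirac norm drifts with t while the V norm stays fixed, which is precisely the content of i∂_t⟨n(t)|V|n(t)⟩ = 0 above.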
And more generally, the V-norm is the same as the overlap of a state with its CPT conjugate [7]. In the special case where all the eigenvalues of a Hamiltonian are real and the eigenspectrum is complete, the Hamiltonian must either already obey H = H† or be transformable by a (non-unitary) similarity transformation S into one that does, according to SHS⁻¹ = H′ = H′†. For the primed system one has right-eigenvectors that obey H′|R′_n⟩ = E_n|R′_n⟩, with the eigenstates of H and H′ being related by |R′_n⟩ = S|R_n⟩. On normalizing the eigenstates of the Hermitian H′ to unity, we obtain ⟨R′_n|R′_m⟩ = ⟨R_n|S†S|R_m⟩ = δ_(m,n). With H′ = H′† we obtain H†S†S = S†SH. We can thus identify S†S with V when all energy eigenvalues are real and H is diagonalizable, and as noted in [7], can thus establish that the V norm is the S†S norm, so that in this case ⟨L_n|R_m⟩ = ⟨R_n|V|R_m⟩ = ⟨R_n|S†S|R_m⟩ = δ_(m,n) is positive definite. 9 The interpretation of the V norms as probabilities is then secured, with their time independence ensuring that probability is preserved in time. Having now presented the general non-Hermitian formalism, a formalism that holds in both wave mechanics and matrix mechanics [3], and holds in quantum field theory [7], we can apply it to the mass matrix M given in (9). And while this matrix does arise in a quantum field theory, all that matters in the following is that it has a non-Hermitian matrix structure. The matrix M breaks up into two distinct two-dimensional blocks, and we can describe each of them by a generic two-dimensional matrix N built out of parameters A, B and C, where A, B and C are all real. The matrix N is not Hermitian but does have a PT symmetry if we set P = σ₃ and T = K, where K effects complex conjugation. The eigenvalues of N are given by Λ± = C ± (A² − B²)^(1/2), and they are real if A² > B² and in a complex conjugate pair if A² < B², just as required of a non-Hermitian but PT-symmetric matrix. Additionally, the relevant S and V operators can be constructed explicitly (S being the transformation given in (22)), and they effect the diagonalization of N regardless of whether A² − B² is positive or negative (if A² is less than B², then while not Hermitian, SNS⁻¹ is still diagonal). However, as we elaborate on below, we note that if A² − B² is zero then S and V become undefined. Other than at A² − B² = 0, the matrix N′ = SNS⁻¹ is diagonal, and with N being given by N = S⁻¹N′S, the right-eigenvectors of N that obey NR± = Λ±R± are given by the columns of S⁻¹, and the left-eigenvectors of N that obey L±N = Λ±L± are given by the rows of S. Given the right-eigenvectors one can also construct the left-eigenvectors by using the V operator. When A² > B² the left-eigenvectors can be constructed as ⟨L±| = ⟨R±|V, and these eigenvectors are normalized according to the positive definite ⟨L_n|R_m⟩ = ⟨R_n|V|R_m⟩ = δ_(m,n), i.e. according to L±R± = 1, L∓R± = 0. In addition, N and the identity I can be reconstructed as N = Λ₊|R₊⟩⟨L₊| + Λ₋|R₋⟩⟨L₋| and I = |R₊⟩⟨L₊| + |R₋⟩⟨L₋|, to thus be diagonalized in the left-right basis. When A² − B² is negative, the quantity (A − B)^(1/2) is pure imaginary, and since ⟨R| is the Hermitian conjugate of |R⟩, in the A² < B² sector up to a phase we have ⟨L±| = ⟨R∓|V. In a quantum theory with the mass matrix serving as the Hamiltonian, |R±⟩ would evolve as e^(−i(C±iD)t) = e^(−iCt±Dt), where D = (B² − A²)^(1/2), while ⟨L±| would evolve as ⟨R∓|V, i.e. as e^(iCt∓Dt). As had been noted in general in [7] and as found here, the only overlaps that would be non-zero would be ⟨L±|R±⟩ = ⟨R∓|V|R±⟩ = ±i, and they would be time independent. Since ⟨L±| = ⟨R∓|V, these matrix elements would be transition matrix elements between growing and decaying states. Such transition matrix elements are not required to be positive or to even be real.
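The three regimes can be illustrated with any real-parameter 2×2 family having the eigenvalues Λ± = C ± (A² − B²)^(1/2); the paper's explicit matrix is not reproduced in this copy, so the family below is a representative stand-in with the same spectral structure, not the authors' exact form.

import numpy as np

def N(A, B, C=2.0):
    # Representative family with eigenvalues C ± sqrt(A^2 - B^2); an assumed
    # stand-in for the paper's generic block, chosen only for its spectrum.
    return np.array([[C + A, B],
                     [-B, C - A]])

for A, B, label in [(1.0, 0.5, "A^2 > B^2: real pair"),
                    (0.5, 1.0, "A^2 < B^2: complex conjugate pair"),
                    (1.0, 1.0, "A^2 = B^2: Jordan block")]:
    evals, R = np.linalg.eig(N(A, B))
    # The right-eigenvectors sit in the columns of R (the analog of S^-1);
    # cond(R) diverging signals S and V becoming undefined.
    print(f"{label}: eigenvalues {np.round(evals, 3)}, cond = {np.linalg.cond(R):.2e}")

The diverging condition number of the eigenvector matrix at A² = B² is the numerical signature of S and V becoming undefined there, with N no longer diagonalizable.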
While all of these eigenstates and the S and V operators are well defined as long as A² is not equal to B², at A² = B² they all become singular. Moreover, at A² = B² the vectors R₊ and R₋ become identical to each other (i.e. equal up to an irrelevant overall phase), and equally L₊ and L₋ become identical too. The matrix N thus loses both a left-eigenvector and a right-eigenvector at A² = B², to then only have one left-eigenvector and only one right-eigenvector. At A² = B² the two eigenvalues become equal (Λ₊ = Λ₋ = C) and have to share the same left- and right-eigenvectors. The fact that S becomes singular at A² = B² means that N cannot be diagonalized, with its eigenspectrum being incomplete. N thus becomes a Jordan-block matrix that cannot be diagonalized. 10 Even though all of L±, R± become singular at A² = B², N still has one left-eigenvector L and one right-eigenvector R, each defined up to an arbitrary normalization, and no matter what that normalization might be, they obey the zero norm condition LR = 0 that is characteristic of Jordan-block matrices. Even though the eigenspectrum of N is incomplete, the vector space on which it acts is still complete. One can take extra states R′ and L′ with L′R′ = 0, so that R and R′ span the space on which N acts to the right, while L and L′ span the space on which N acts to the left. Comparing now with (9), we can read off the A, B and C appropriate to the (χ₁, ψ₂) sector and to the (χ₂, ψ₁) sector, these identifications being the ones given in (29) and (30), and from (29) and (30) the eigenvalues given in (11) follow. For the (χ₁, ψ₂) sector we thus have two eigenvectors with real eigenvalues if (2m₁²m₂² − m₂⁴ − 3µ⁴)² − 4µ⁴m₂⁴ > 0, eigenvalues in a complex conjugate pair if this quantity is negative, and a non-diagonalizable Jordan-block structure if it is zero. Since none of this affects the (χ₂, ψ₁) sector, for all three of the possible classes of eigenspectra associated with a non-Hermitian Hamiltonian with an antilinear symmetry we obtain a massless Goldstone boson. For the (χ₂, ψ₁) sector the eigenvalues are λ₀ = 0 and λ₁ = m₂² − µ⁴/m₂². Both of them are real, and we shall take m₂⁴ to not be less than µ⁴ so that λ₁ cannot be negative. Additionally, the left- and right-eigenvectors can be constructed explicitly, with normalization coefficients given in (31), and are normalized to ⟨L_n|R_m⟩ = δ_(m,n). The Goldstone boson is thus properly normalized if one uses the left-right norm, with the two states in the (χ₂, ψ₁) sector forming a left-right orthonormal basis. Thus in the non-Hermitian case the standard Goldstone theorem associated with the spontaneous breakdown of a continuous symmetry continues to hold, but the norm of the Goldstone boson has to be the positive left-right norm (or equivalently the PT theory norm [13]) rather than the standard positive Hermitian theory Dirac norm for which the theorem was first proved [16][17][18][19]. However, something unusual occurs if we set µ² = m₂². Specifically, the eigenvalue λ₁ becomes zero, to thus now be degenerate with λ₀. The eigenvectors R₀ and R₁ collapse onto a common single R and L₀ and L₁ collapse onto a common single L, and the normalization coefficients given in (31) diverge. The (χ₂, ψ₁) sector thus becomes of non-diagonalizable Jordan-block form. In this limit one can take appropriately rescaled left- and right-eigenvectors L and R, and they obey the zero norm condition LR = 0. As such this represents a new extension of the Goldstone theorem, and even though the standard Goldstone theorem associated with the spontaneous breakdown of a continuous symmetry continues to hold, the norm of the Goldstone boson is now zero.
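The zero-norm statement can be made concrete with the canonical 2×2 Jordan block, used here as a toy stand-in for the µ⁴ = m₂⁴ limit of the (χ₂, ψ₁) sector rather than that sector's literal matrix:

import numpy as np

# Canonical Jordan block with the doubly degenerate eigenvalue 0: only one
# right-eigenvector R and one left-eigenvector L survive, and L.R = 0.
J = np.array([[0.0, 1.0],
              [0.0, 0.0]])
R = np.array([1.0, 0.0])        # J @ R = 0
L = np.array([0.0, 1.0])        # L @ J = 0
assert np.allclose(J @ R, 0) and np.allclose(L @ J, 0)
print("zero-norm overlap L.R =", L @ R)        # 0: the hallmark of a Jordan block

# The vector space is still complete: R' and L' below span the rest,
# with L'.R' = 0 but L.R' = 1 and L'.R = 1.
Rp = np.array([0.0, 1.0])
Lp = np.array([1.0, 0.0])
print("L'.R' =", Lp @ Rp, " L.R' =", L @ Rp, " L'.R =", Lp @ R)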
Since a zero norm state can leave no imprint in a detector, we are essentially able to evade the existence of a massless Goldstone boson, in the sense that while it would still exist it would not be observable. IV. COMPARISON WITH THE WORK OF ALEXANDRE, ELLIS, MILLINGTON AND SEYNAEVE If we do a functional variation of the action given in (1) we obtain the variation given in (35). With all variations held fixed at the surface, stationarity leads to the equations of motion (36) and (37), with these equations of motion being completely equivalent to (7). With these equations of motion one readily checks that the electric current (4) is conserved, just as it should be. There is however an immediate problem with these equations of motion, namely if we complex conjugate (36) we obtain not (37) but (38) instead. The reason why this problem occurs is because while (37) is associated with ∂I/∂φ₁ and ∂I/∂φ₂, (38) is associated with (∂I/∂φ₁*)* = ∂I*/∂φ₁ and (∂I/∂φ₂*)* = ∂I*/∂φ₂, and I is not equal to I* if I is not Hermitian. A similar concern holds for (7), as not one of its four separate equations is left invariant under complex conjugation. In order to get round this the authors of [13] propose that (37) not be valid, but rather that one should use (36) and (38) instead. In order to achieve this the authors of [13] propose that one add an additional surface term to (35) so that one no longer imposes stationarity with respect to δφ₁ and δφ₂, but only stationarity with respect to δφ₁* and δφ₂* alone. 11 If one does use (36) and (38), the electric current j^µ is no longer conserved (i.e. the surface term that is to be introduced must carry off some electric charge), but instead it is the current j′^µ given in (39) that is conserved in solutions to the equations of motion. As such, this j′^µ current is a non-Noether current that is not associated with a symmetry of the action I (unless the inclusion of the surface term then leads to one), and thus its spontaneous breakdown is somewhat different from the standard one envisaged in [16][17][18][19]. Nonetheless, as noted in [13], when the scalar fields acquire vacuum expectation values, the mass matrix associated with (36) and (38) still has a zero eigenvalue. With the authors of [13] showing that it is associated with the Ward identity for j′^µ, it can still be identified as a Goldstone boson. The work of [13] thus breaks the standard connection between Goldstone bosons and symmetries of the action. As such, the result of the authors of [13] is quite interesting as it provides possible new insight into the Goldstone theorem. However, the analysis somewhat obscures the issue as it suggests that the generation of Goldstone bosons in non-Hermitian theories is quite different from the generation of Goldstone bosons in Hermitian theories. It is thus of interest to ask whether one could show that one could obtain Goldstone bosons in a procedure that is common to both Hermitian and non-Hermitian theories. To this end we need to find a way to exclude (38) and validate (37), as it is (36) and (37) that we used in our paper in an approach that is completely conventional, one in which the surface term in (35) vanishes in the standard variational procedure way. To reconcile (36) and (37), or to reconcile the equations of motion in (7) with complex conjugation, it is instructive to make a particular similarity transformation on the fields, even though doing so initially appears to lead to another puzzle, the Hermiticity puzzle, which we discuss and resolve below.
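The argument that follows leans repeatedly on the fact that a similarity transformation leaves determinants, and hence eigenvalue spectra, invariant, while Hermiticity is not preserved. A short numerical check of that fact, with generic matrices and nothing model-specific:

import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))  # any 4x4 mass matrix
S = rng.standard_normal((4, 4))                                     # generic invertible map
Mp = S @ M @ np.linalg.inv(S)

# det(M - lambda I), and hence the eigenvalue set, is similarity invariant,
# even though the condition M == M† is not.
assert np.allclose(np.sort_complex(np.linalg.eigvals(M)),
                   np.sort_complex(np.linalg.eigvals(Mp)))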
It is more convenient to seek a reconciliation for (7) first, so from I(χ₁, χ₂, ψ₁, ψ₂) we identify canonical conjugates for the fields. With these conjugates we introduce [7] the exponential operator S(ψ₁) given in (40), and obtain the field transformations given in (41). Since these transformations preserve the equal-time commutation relations, they are fully permissible transformations that do not modify the content of the field theory. Applying (41) to I(χ₁, χ₂, ψ₁, ψ₂) we obtain the transformed action given in (43). Stationary variation with respect to χ₁, χ₂, ψ₁, and ψ₂ then replaces (7) by a new set of equations of motion, and now each one of the equations of motion is separately invariant under complex conjugation. Returning now to the original φ₂, φ₂* fields we obtain the action I′(φ₁, φ₂, φ₁*, φ₂*) given in (46), while the equations of motion become (47) and (48), and now there is no complex conjugation problem, with (48) being the complex conjugate of (47). 12 In addition we note that under the transformations given in (45) the equations given in (38) transform into (49). If we now switch the sign of φ₂*, (47) is unaffected, while (49) becomes (50). We recognize (50) as being (48). With (47) being unaffected by the switch in sign of φ₂*, the mass matrix based on (47) and (48) is the same as the mass matrix based on (47) and (50). However, since all we have done in going from (36), (37) and (38) is make similarity transformations that leave determinants invariant, the eigenvalues associated with (36) and (37) (i.e. with (9)) on the one hand and the eigenvalues associated with (36) and (38) on the other hand must be the same. And indeed this is exactly found to be the case, with all four of the eigenvalues given in [13] being precisely the ones given in our (11). One can thus obtain the same mass spectrum as that obtained in [13] using a completely conventional variational procedure. In addition, we note that with (47) and (50) the current j′^µ given in (39) now is conserved. In fact, under the transformations given in (45) the j^µ current given in (4) transforms into j′^µ. Thus all that is needed to bring the study of [13] into the conventional Goldstone framework (standard variational procedure, standard spontaneous breakdown of a symmetry of the action) is to first make a similarity transformation. Now the reader will immediately object to what we have done, since now the µ²(χ₁ψ₂ − χ₂ψ₁) term in (43) and the iµ²(φ₁*φ₂ − φ₂*φ₁) term in (46) are both invariant under complex conjugation. Then, with the actions in (43) and (46) seemingly being Hermitian, we are seemingly back to the standard Hermitian situation where the Goldstone theorem readily holds, and we have seemingly gained nothing new. However, it cannot actually be the case that the action in (43) could be Hermitian, since similarity transformations cannot change the eigenvalues of the mass matrix M given in (9), and as we have seen, for certain values of parameters the eigenvalues can be complex or M could even be Jordan block. We thus need to explain how, despite its appearance, a seemingly Hermitian action might not actually be Hermitian. The answer to this puzzle has been provided in [7], and we describe it below. However, before doing so we note that there are two other approaches that could also achieve a reconciliation. The first alternative involves starting with the fields χ₁, χ₂, ψ₁, ψ₂ as the fields that define the theory, and I(χ₁, χ₂, ψ₁, ψ₂) as the input action. In this case one immediately obtains the equations of motion given in (7). As they stand these equations are inconsistent if all four fields are Hermitian.
If we take χ₁ and χ₂ to be Hermitian, then these equations force ψ₁ and ψ₂ to be anti-Hermitian. And if ψ₁ and ψ₂ are taken to be anti-Hermitian, both the equations of motion given in (7) and the action are then invariant under a complex conjugation (i.e. Hermitian conjugation) in which ψ₁ and ψ₂ transform into −ψ₁ and −ψ₂. Moreover, in such a case the −iψ₁ and −iψ₂ fields that are generated through the similarity transformations given in (41) would then be Hermitian. Of course then the interaction term given in (6) would be Hermitian as well, and we again have a seemingly Hermitian theory. Now suppose we do take ψ₁ and ψ₂ to be anti-Hermitian. Then if we start with I(χ₁, χ₂, ψ₁, ψ₂) we cannot get back to the I(φ₁, φ₂, φ₁*, φ₂*) given in (1), since in the correspondence given in (5) φ₂* was recognized as the conjugate of a φ₂ = ψ₁ + iψ₂ field that was expanded in terms of Hermitian ψ₁ and ψ₂. If we now take φ₂ to still be defined as φ₂ = ψ₁ + iψ₂, the associated φ₂* would now be given by −(ψ₁ − iψ₂), and thus be equal to minus the previous ψ₁ − iψ₂ used in (5). With this definition a rewriting of I(χ₁, χ₂, ψ₁, ψ₂) in the (φ₁, φ₂, φ₁*, φ₂*) basis would yield the action given in (51), together with its associated equations of motion. Now complex conjugation can be consistently applied, with (53) being derivable from (50) by complex conjugation. And again it is j′^µ that is conserved. A second alternative approach is to reinterpret the meaning of the star operator used in φ₁* and φ₂*. Instead of taking it to denote Hermitian conjugation, we could instead take it to denote CPT conjugation, i.e. φ₁* = CPT φ₁ TPC, φ₂* = CPT φ₂ TPC. Now we had noted in (2) that in order to enforce CPT symmetry on I(φ₁, φ₂, φ₁*, φ₂*) we took φ₁ to be even and φ₂ to be odd under CPT, and we had noted that in general a scalar field should be CPT even (i.e. have the same CPT parity as the CPT-even fermionic ψ̄ψ [7]). However, if we apply the similarity transformation given in (41) to φ₂ = ψ₁ + iψ₂ to get −iφ₂, that would change the CPT parity. Thus while φ₂ has negative CPT parity it is similarity equivalent to a field that has the conventional positive CPT parity, with the transformed I′(φ₁, φ₂, φ₁*, φ₂*) and the resulting equations of motion now being CPT symmetric if φ₂ is taken to have positive CPT parity, viz. CPT φ₂ TPC = φ₂*, CPT φ₂* TPC = φ₂. (We leave φ₁ as given in (2), viz. CPT φ₁ TPC = φ₁*, CPT φ₁* TPC = φ₁.) The difficulty identified by the authors of [13] can thus be resolved by a judicious choice of which fields are Hermitian and which are anti-Hermitian, by a judicious choice of which fields are CPT even and which are CPT odd, or by similarity transformations that generate complex phases that affect both Hermiticity and CPT parity. However, in all of these resolutions we are led to theories that now appear to be Hermitian and yet for certain values of parameters could not be, and so we need to address this issue. V. RESOLUTION OF THE HERMITICITY PUZZLE In [7] the issues of the generality of CPT symmetry and the nature of Hermiticity were addressed.
In regard to Hermiticity it was shown that Hamiltonians that appear to be Hermitian need not be, since Hermiticity or self-adjointness is determined not by superficial inspection of the appearance of the Hamiltonian but by the construction of asymptotic boundary conditions, as they determine whether or not one could drop surface terms in an integration by parts. And even if one could drop surface terms we still may not get Hermiticity because of the presence of factors of i in H that could affect complex conjugation. In regard to CPT it was shown that if one imposes only two requirements, namely the time independence of inner products and invariance under the complex Lorentz group, it follows that the Hamiltonian must have an antilinear CPT symmetry. Since this analysis involves no Hermiticity requirement, the CPT theorem is thus extended to the non-Hermitian case. As noted above, the time independence of inner products is achieved if the theory has any antilinear symmetry, with the left-right V norm being the inner product one has to use. Complex Lorentz invariance then forces the antilinear symmetry to be CPT. In field theories one ordinarily constructs actions so that they are invariant under the real Lorentz group. However, the same analysis that shows that actions with spin zero Lagrangians are invariant under the real Lorentz group (the restricted Lorentz group) also shows that they are invariant under the complex one (the proper Lorentz group that includes PT transformations for coordinates and CPT transformations for spinors). Specifically, under an infinitesimal Lorentz transformation with parameter w_µν the change in the action I = ∫d⁴x L(x) is, since the metric η_µν is symmetric and w_µν is antisymmetric, given by the total divergence δI = 2w_µν ∫d⁴x ∂^ν[x^µ L(x)]. Since the change in the action is a total divergence, the familiar invariance of the action under real Lorentz transformations is secured. However, we note that nothing in this argument depended on w_µν being real, with the change in the action still being a total divergence even if w_µν is complex. The action I = ∫d⁴x L(x) is thus actually invariant under complex Lorentz transformations as well and not just under real ones, with complex Lorentz invariance thus being just as natural to physics as real Lorentz invariance. For our purposes here we note that the Lorentz invariant scalar field action I(φ₁, φ₂, φ₁*, φ₂*) given in (1) is thus invariant not just under real Lorentz transformations but under complex ones as well. Since in the above we constructed a time-independent inner product for this theory, the I(φ₁, φ₂, φ₁*, φ₂*) action thus must have CPT symmetry. And indeed we explicitly showed in (2) that this was in fact the case. Since theories can thus be CPT symmetric without needing to be Hermitian, it initially looks as though the two concepts are distinct. However, the issue of Hermiticity was addressed in [7], and the unexpected outcome of that study was that the only allowed Hamiltonians that one could construct that were CPT invariant would have exactly the same structure as (or be similarity equivalent to) the ones one constructs in Hermitian theories, namely presumed Hermitian combinations of fields and all coefficients real. 13 These are precisely the theories that one ordinarily refers to as Hermitian. However, this turns out to not necessarily be the case, since theories can appear to be Hermitian but not actually be so. To illustrate the above remarks it is instructive to consider some explicit examples, one involving behavior in time and the other involving behavior in space.
For behavior in time consider the neutral scalar field with action I_S = ∫d⁴x [∂_µφ∂^µφ − m²φ²]/2 and Hamiltonian H = ∫d³x [φ̇² + ∇φ·∇φ + m²φ²]/2. Solutions to the wave equation −φ̈ + ∇²φ − m²φ = 0 obey ω²(k) = k² + m², and thus the poles in the scalar field propagator are at ω(k) = ±[k² + m²]^(1/2). Turning to the fourth-order derivative Pais-Uhlenbeck two-oscillator theory with Hamiltonian H_PU(ω₁, ω₂), one cannot adjudge that Hamiltonian Hermitian just by superficial inspection. Rather, one has to construct its eigenstates first and look at their asymptotic behavior. In order to obtain eigenvectors for H_PU(ω₁, ω₂) that are normalizable, the authors of [21] made the similarity transformation y = e^(πp_z z/2) z e^(−πp_z z/2) = −iz, q = e^(πp_z z/2) p_z e^(−πp_z z/2) = ip_z on the operators of the theory, so that [y, q] = i. Under this same transformation H_PU(ω₁, ω₂) transforms into the H̄_PU(ω₁, ω₂) given in (59), where for notational simplicity we have replaced p_x by p, so that [x, p] = i. With the eigenvalue z of the operator z being replaced in ψ₀(z, x) by −iz (i.e. continued into the complex z plane), the eigenfunctions are now normalizable. 15 When acting on the eigenfunctions of H̄_PU(ω₁, ω₂) the y and q = −i∂_y operators are Hermitian (as are x and p = −i∂_x). However, as the presence of the factor i in the −iqx term indicates, H̄_PU(ω₁, ω₂) is not Hermitian. Since in general to establish Hermiticity one has to integrate by parts, drop surface terms and complex conjugate, we see that while we now can drop surface terms for H̄_PU(ω₁, ω₂) we do not recover the generic H_ij = H*_ji when we complex conjugate, even as we can now drop surface terms for the momentum operators when they act on the eigenstates of H̄_PU(ω₁, ω₂) and achieve Hermiticity for them. 16 When ω₁ and ω₂ are real and unequal, the eigenvalues of the Hamiltonian H̄_PU(ω₁, ω₂) are all real, and the eigenspectrum (two sets of harmonic oscillators) is complete. In that case H̄_PU(ω₁, ω₂) can actually be brought to a form in which it is Hermitian by a similarity transformation. Specifically, one introduces an operator Q and obtains the transformed H̄′_PU(ω₁, ω₂). With the Q similarity transformation not affecting the asymptotic behavior of the eigenstates of H̄_PU(ω₁, ω₂), and with y, q, x, and p thus all being Hermitian when acting on the eigenstates of H̄′_PU(ω₁, ω₂), the Hermiticity of H̄′_PU(ω₁, ω₂) in the conventional Dirac sense is established. We can thus regard H̄_PU(ω₁, ω₂) with real and unequal ω₁ and ω₂ as being Hermitian in disguise. Moreover, in addition we note that since Q becomes singular at ω₁ = ω₂, at ω₁ = ω₂ H̄_PU(ω₁, ω₂) cannot be diagonalized, to thus confirm that H_PU(ω) is Jordan block. 17 In general then we see that a Hamiltonian may not be Hermitian even though it may appear to be so, and may be (similarity equivalent to) Hermitian even when it does not appear to be so. And moreover, one cannot tell beforehand, as one needs to first solve the theory and see what its solutions look like. Other than possibly needing to continue into the complex plane in order to get convergence, when a Hamiltonian has all eigenvalues real and eigenspectrum complete it is always possible to similarity transform it into a form in which it is Hermitian in the standard Dirac sense. If a Hamiltonian obeys H = H†, then under a similarity transform that effects H′ = SHS⁻¹ we note that H′ need not obey H′ = H′†, the H_ij = H*_ji Hermiticity condition being a condition that is not preserved under a general similarity transformation. Thus if one starts with some general H′ that does not obey H′ = H′†, it might be similarity equivalent to a Hermitian H but one does not know a priori.
It only will be similarity equivalent to a Hermitian H if the eigenvalues of H′ are all real and the eigenspectrum is complete, and the necessary condition for that to be the case is that H′ possess an antilinear symmetry. 15 As noted in [6,7], the analog statement for the Pais-Uhlenbeck two-oscillator theory path integral is that the path integral measure has to be continued into the complex plane in order to get the path integration to converge. A similar situation pertains to the path integral associated with the relativistic neutral scalar field theory whose non-relativistic limit is the Pais-Uhlenbeck theory. 16 The use of the similarity transformations given in (58) parallels the use of (40) in Sec. IV. However, while using the similarity transformation of (40) was mainly a convenience, for H_PU(ω₁, ω₂) the similarity transformation of (58) is a necessity because of the need to construct normalizable wave functions. The presence of the factor i in (59) is thus related to the intrinsic structure of the Pais-Uhlenbeck theory. 17 The transformation with Q is the analog of the transformation of the spontaneously broken scalar field theory mass matrix given in (22), and the singularity in Q at ω₁ = ω₂ is the analog of that in (22) when A = B. However, unlike a Hermiticity condition, a commutation relation is preserved under a similarity transformation (even a commutation relation that involves an antilinear operator [7]), with antilinear operators being more versatile than Hermitian operators. So much so in fact that in [7] it was argued that one should use CPT symmetry as the guiding principle for constructing quantum theories rather than Hermiticity. 18 When we characterize an operator such as z, p_z, x, or p_x as being Hermitian we are only referring to representations of the [z, p_z] = i and [x, p_x] = i commutation relations, without any reference to a Hamiltonian that might contain these operators. A Hamiltonian can thus be built out of Hermitian operators and can have all real coefficients, and yet not be Hermitian itself. The equal-frequency and complex-frequency Pais-Uhlenbeck models are particularly instructive in this regard. In the equal-frequency case none of the z, p_z, x, or p_x operators themselves are Jordan block, only H_PU(ω) is. The spectra of eigenstates of the position and momentum operators are complete, and all are contained in the space on which H_PU(ω) acts. However, not all of these states are eigenstates of the Hamiltonian [22], with the one-particle sector of H_PU(ω) behaving just like the example given in (26) and (28). Moreover, in the complex H_PU(α, β) case all the eigenvalues of the position and momentum operators are real even though those of the Hamiltonian that is built out of them are not. As the equal-frequency and complex-frequency Pais-Uhlenbeck models show, one cannot tell whether a Hamiltonian might be Hermitian just by superficial inspection. One needs to solve the theory first and see what the eigenspectrum looks like. Thus one can have Hamiltonians that do not look Hermitian but are similarity equivalent to ones that are Hermitian, and one can have Hamiltonians that do look Hermitian but are not at all. As we see from these examples, whether or not an action is CPT symmetric is an intrinsic property of the unconstrained action itself prior to any stationary variation, but whether or not a Hamiltonian is Hermitian is a property of the stationary solution alone.
19 Hermiticity of a Hamiltonian cannot be assigned a priori, and can only be determined after the theory has been solved. However, the CPT properties of actions or fields can be assigned a priori (i.e. prior to a functional variation of the action, and thus as a property of every variational path and not just the stationary one), and thus that is how Hamiltonians and fields should be characterized. One cannot write down any CPT invariant theory that up to similarity transformations does not have the same form as a Hermitian theory, though whether any such CPT invariant Hamiltonian actually is similarity equivalent to a Hermitian one is only establishable by constructing the solutions to the theory, and cannot be determined ahead of time. Turning now to the study of [13], we note that it displays all of the features that we have just described. The interest of the authors of [13] was in exploring the status of the Goldstone theorem in non-Hermitian but PT-symmetric theories, and so they took as an example a relativistic field theory whose action was not Hermitian, i.e. not Hermitian by superficial inspection. However, by a similarity transformation it could be brought to the form given in (46), in which the action is Hermitian by superficial inspection (i.e. no factors of i, and operators that are presumed to be Hermitian). However, while it now appears to be Hermitian it could not be, since in the tree approximation that they studied the ensuing mass matrix was not Hermitian either. With the mass matrix having the three possible PT symmetry realizations (real and unequal eigenvalues, real and equal eigenvalues, eigenvalues in complex conjugate pairs) for various values of its parameters, the tree approximation to the model of [13] completely parallels the discussion of the three realizations of the Pais-Uhlenbeck two-oscillator model given in (54), (55) and (56), where the Hamiltonian looks to be Hermitian but is not. It is of interest to note that to establish that the Pais-Uhlenbeck two-oscillator model theory is not Hermitian we had to construct wave functions and examine their asymptotic behavior, while for the tree approximation to the model of [13] we only need to look at a finite-dimensional matrix. Thus we can start with a fully-fledged field theory such as that based on the action given in (1), (46) or (51) and not need to identify the region in the complex plane where the functional path integral might exist, or need to descend to the quantum mechanical limit and look at the asymptotic behavior of wave functions, in order to determine whether or not the theory is Hermitian. 20 In the broken symmetry case we only need look at the finite-dimensional mass matrix that we get in tree approximation. For parameters in the model of [13] that obey (2m₁²m₂² − m₂⁴ − 3µ⁴)² − 4µ⁴m₂⁴ > 0, the mass matrix can be brought to a Hermitian form by the similarity transformation presented in (22). Thus in this case the mass matrix is Hermitian in disguise. For this particular example the Goldstone theorem is the standard one, since if one can derive the Goldstone theorem in a Hermitian theory, it continues to hold if one makes a similarity transformation on it. 21 Whether or not the mass matrix whose eigenvalues are given in (11) actually can be transformed to a Hermitian matrix thus depends on the values of the parameters in the action.
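For orientation, a short sketch that classifies a given parameter choice using only the combinations quoted above: the discriminant (2m₁²m₂² − m₂⁴ − 3µ⁴)² − 4µ⁴m₂⁴ for the (χ₁, ψ₂) sector, and λ₀ = 0, λ₁ = m₂² − µ⁴/m₂² with the Jordan point at m₂⁴ = µ⁴ for the (χ₂, ψ₁) sector. The function and its sample values are illustrative, not taken from the paper.

def realization(m1sq, m2sq, musq):
    """Classify the two sectors from the quoted parameter combinations;
    m1sq, m2sq, musq stand for m_1^2, m_2^2 and mu^2."""
    disc = (2*m1sq*m2sq - m2sq**2 - 3*musq**2)**2 - 4*musq**2*m2sq**2
    s12 = ("real eigenvalues" if disc > 0
           else "complex conjugate pair" if disc < 0 else "Jordan block")
    s21 = ("Jordan block: zero-norm Goldstone boson" if musq**2 == m2sq**2
           else f"lambda_0 = 0 (Goldstone), lambda_1 = {m2sq - musq**2/m2sq}")
    return s12, s21

print(realization(m1sq=2.0, m2sq=1.0, musq=0.3))   # Hermitian in disguise
print(realization(m1sq=1.5, m2sq=1.0, musq=0.9))   # complex pair, Goldstone intact
print(realization(m1sq=2.0, m2sq=1.0, musq=1.0))   # zero-norm Goldstone boson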
However, as we have seen, no matter what the values of these parameters, and no matter whether the CPT-invariant mass matrix is realized with real eigenvalues, with complex pairs of eigenvalues, or in Jordan-block form, for any choice of the parameters one is able to obtain a Goldstone theorem. One can thus anticipate an Englert-Brout-Higgs mechanism for a local extension of the continuous symmetry that we have broken spontaneously, and we turn now to this issue. VI. SPONTANEOUSLY BROKEN NON-HERMITIAN THEORY WITH A CONTINUOUS LOCAL SYMMETRY Now that we have seen that we can consistently implement the Goldstone mechanism in a CPT-symmetric, non-Hermitian theory, it is natural to ask whether we can also implement the familiar Englert-Brout-Higgs mechanism developed in [24][25][26][27]. To this end we introduce a local gauge invariance and a gauge field A_µ, and with F_µν = ∂_µA_ν − ∂_νA_µ replace (1) and (3) by their locally gauge-invariant counterparts. With (2), the I(φ₁, φ₂, φ₁*, φ₂*, A_µ) action is CPT invariant since both i and A_µ are CPT odd (spin one fields have odd CPT [14]). We make the same decomposition of the φ₁ and φ₂ fields as in (5), and replace (6) accordingly. In the tree approximation minimum used above, in which (g/4)χ̄₁² = m₁² − µ⁴/m₂², ψ̄₂ = −iµ²χ̄₁/m₂², χ̄₂ = 0, ψ̄₁ = 0, we induce a mass term for A_µ of the form given in (65). However, before assessing the implications of (65) we recall that in Sec. IV we had to reconcile I(χ₁, χ₂, ψ₁, ψ₂) with the Hermiticity concern raised in [13]. The same is now true of I(χ₁, χ₂, ψ₁, ψ₂, A_µ). In Sec. IV we had identified three solutions for I(χ₁, χ₂, ψ₁, ψ₂), and all can be implemented for I(χ₁, χ₂, ψ₁, ψ₂, A_µ). Thus we can consider a judicious choice of which fields are Hermitian and which are anti-Hermitian, a judicious choice of which fields are CPT even and which are CPT odd, or can apply similarity transformations that generate complex phases that affect both Hermiticity and CPT parity. In regard to Hermiticity, if we take A_µ to be Hermitian (i.e. complex conjugate even), and as before take ψ₁ and ψ₂ to be anti-Hermitian (complex conjugate odd), then I(χ₁, χ₂, ψ₁, ψ₂, A_µ) will be invariant under complex conjugation, as will then be the equations of motion and the tree approximation minimum that follow from it, and (65) will hold. Also, as we had noted in Sec. V, even though I(χ₁, χ₂, ψ₁, ψ₂, A_µ) might now be invariant under complex conjugation, it does not follow that the scalar field mass matrix M given in (9) has to be Hermitian, and indeed it is not. If we transform ψ₁ and ψ₂ as in (41) but make no transformation on A_µ, we obtain a transformed action I′(χ₁, χ₂, ψ₁, ψ₂, A_µ). As constructed, I′(χ₁, χ₂, ψ₁, ψ₂, A_µ) is invariant under complex conjugation if ψ₁ and ψ₂ are even under complex conjugation. Now the tree approximation minimum is given by (g/4)χ̄₁² = m₁² − µ⁴/m₂², ψ̄₂ = µ²χ̄₁/m₂², χ̄₂ = 0, ψ̄₁ = 0, Ā_µ = 0, and we induce a mass term for A_µ of the form given in (68). The mass of the gauge boson is thus again given by (65). Finally, in regard to interpreting the star symbol as the CPT conjugate, since A_µ is real there is no change in the discussion presented in Sec. IV, and (65) continues to hold. As we can see from (65) and (68), the gauge boson does indeed acquire a non-zero mass unless m₁²m₂² = µ⁴ or m₂⁴ = µ⁴.
The first of these conditions is not of significance since if m₁²m₂² = µ⁴ it follows that χ̄₁ (and thus ψ̄₂) is zero, so that there is no symmetry breaking, and the gauge boson stays massless. 22 However, the condition m₂⁴ = µ⁴ is related to the symmetry breaking since it does not oblige χ̄₁ to vanish. Moreover, since the m₂⁴ = µ⁴ condition does not constrain the (χ₁, ψ₂) sector λ± eigenvalues given in (11) in any particular way (m₁² not being constrained by the m₂⁴ = µ⁴ condition), we see that regardless of whether or not m₂⁴ and µ⁴ are in fact equal to each other, we obtain the Englert-Brout-Higgs mechanism in the (χ₂, ψ₁) sector no matter how the antilinear symmetry is realized in the (χ₁, ψ₂) sector, be it with all eigenvalues real, with eigenvalues in a complex pair, or with the mass matrix being of non-diagonalizable Jordan-block form. In the (χ₂, ψ₁) sector both λ₀ and λ₁ as given in (11) are real, and they are not degenerate with each other as long as m₂⁴ ≠ µ⁴. However, something very interesting occurs if m₂⁴ = µ⁴. Then the (χ₂, ψ₁) sector becomes Jordan block and the Goldstone boson acquires zero norm. Since the Goldstone boson can then no longer be considered to be a normal positive norm particle, it cannot combine with the gauge boson to give the gauge boson a longitudinal component and make it massive. And as we see, and just as required by consistency, in that case the gauge boson stays massless. Thus in a non-Hermitian but CPT-symmetric theory it is possible to spontaneously break a continuous local symmetry and yet not obtain a massive gauge boson. VII. SUMMARY In the non-relativistic antilinear symmetry program one replaces Hermiticity of a Hamiltonian by antilinearity as the guiding principle for quantum mechanics, for both infinite-dimensional wave mechanics and either finite- or infinite-dimensional matrix mechanics. For infinite-dimensional relativistic quantum field theories whose actions are invariant under the complex Lorentz group the antilinear symmetry is uniquely prescribed to be CPT. Hamiltonians that have an antilinear symmetry can of course be Hermitian as well, with all energy eigenvalues then being real and all energy eigenvectors being complete. However, in general, antilinear symmetry permits two additional options for Hamiltonians that cannot be realized in Hermitian theories, namely energy eigenvalues could be real while energy eigenvectors could be incomplete (Jordan block), or energy eigenvectors could still be complete but energy eigenvalues could come in complex conjugate pairs. In the first case all Hilbert space inner products are positive definite, in the second (Jordan-block) case norms are zero, and in the third case the only norms are transition matrix elements and their values are not constrained to be positive or to even be real. 22 If m₁²m₂² − µ⁴ = 0, the unbroken (χ₁, ψ₂) sector mass matrix given in (64) is of the form (1/2m₂²)(µ²χ₁ − im₂²ψ₂)². It has two eigenvalues, λ_a = µ⁴/m₂² − m₂² and λ_b = 0. In the (χ₁, ψ₂) basis the right-eigenvector for λ_a is (µ², −im₂²), while the right-eigenvector for λ_b is (m₂², −iµ²). The fact that λ_b is zero is not of significance since it occurs in the absence of spontaneous symmetry breaking, and would thus not be maintained under radiative corrections.
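The eigenpairs quoted in footnote 22 can be checked directly. Expanding the quoted quadratic form (1/2m₂²)(µ²χ₁ − im₂²ψ₂)² as (1/2)ΦᵀMΦ in the (χ₁, ψ₂) basis gives the matrix below; the written-out M is our own expansion, while the eigenvalues and eigenvectors are the quoted ones.

import numpy as np

m2sq, musq = 1.0, 0.7           # m_2^2 and mu^2, with m_1^2 m_2^2 = mu^4 assumed
# (1/2 m_2^2)(mu^2 chi_1 - i m_2^2 psi_2)^2 = (1/2) Phi^T M Phi with:
M = (1.0/m2sq) * np.array([[musq**2,        -1j*musq*m2sq],
                           [-1j*musq*m2sq,  -m2sq**2     ]])

lam_a = musq**2/m2sq - m2sq             # quoted eigenvalues
lam_b = 0.0
v_a = np.array([musq, -1j*m2sq])        # quoted right-eigenvectors
v_b = np.array([m2sq, -1j*musq])
assert np.allclose(M @ v_a, lam_a * v_a)
assert np.allclose(M @ v_b, lam_b * v_b)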
Moreover, in the antilinear symmetry program Hamiltonians that look to be Hermitian by superficial inspection do not have to be, while Hamiltonians that do not look to be Hermitian by superficial inspection can actually be similarity equivalent to Hamiltonians that are Hermitian (viz. Hermitian in disguise). In applications of antilinear symmetry to relativistic systems it is of interest to see how many standard results that are obtained in Hermitian theories might still apply in the non-Hermitian case, and whether one could obtain new features that could not be realized in the Hermitian case. To address these issues Alexandre, Ellis, Millington and Seynaeve studied how spontaneously broken symmetry ideas and results translate to the non-Hermitian environment. With broken symmetry and the possible existence of massless Goldstone bosons being intrinsically relativistic concepts, they explored a CP T symmetric but non-Hermitian two complex scalar field relativistic quantum field theory with a continuous global symmetry. Their actual treatment of the problem was somewhat unusual in that they allowed for non-vanishing surface terms to contribute in the functional variation of the action, with this leading to a specific nonstandard set of equations of motion of the fields. Their reason for doing this was that the equations of motion obtained by the standard variational procedure with endpoints held fixed were not invariant under complex conjugation. However, they still found a broken symmetry solution with an explicit massless Goldstone boson. In the treatment of the same model that we provide here we use a conventional variational calculation in which fields are held fixed at the endpoints. However, to get round the complex conjugation difficulty we make a similarity transformation on the fields which then allows us to be able to maintain invariance of the equations of motion under complex conjugation (the similarity transformation itself being complex). However, on doing this we obtain an action that appears to be Hermitian, and if it indeed were to be Hermitian there would be nothing new to say about broken symmetry that had not already been said for Hermitian theories. However, while appearing to be Hermitian the theory in fact is not, and thus it does fall into the non-Hermitian but antilinearly symmetric category. In their analysis Alexandre, Ellis, Millington and Seynaeve studied the tree approximation to the equations of motion of the theory and found broken symmetry solutions. What is particularly noteworthy of their analysis is that even though they were dealing with a fully-fledged infinite-dimensional quantum field theory, in the tree approximation the mass matrix that was needed to determine whether there might be any massless Goldstone boson was only four dimensional (viz. the same number as the number of independent fields in the two complex scalar field model that they studied). As such, the mass matrix that they obtained is not Hermitian, and given the underlying antilinear CP T symmetry of the model and thus of the mass matrix, the mass matrix is immediately amenable to the full apparatus of the antilinear symmetry program as that apparatus holds equally for fields and matrices. Alexandre, Ellis, Millington and Seynaeve studied just one realization of the antilinear symmetry program, namely the one where all eigenvalues of the mass matrix are real and the set of all of its eigenvectors is complete. 
In our analysis we obtain the same mass matrix (which we must of course, since all we have done is make a similarity transformation on their model), and we show that in this particular realization the mass matrix can be brought to a Hermitian form by a similarity transformation, to thus be Hermitian in disguise. However, this same mass matrix admits of the two other realizations of antilinear symmetry as well, namely the non-diagonalizable Jordan-block case and the complex conjugate eigenvalue pair case. And in all of these cases we show that there is a massless Goldstone boson. In this regard the Jordan-block case is very interesting because it permits the Goldstone boson itself to be one of the zero norm states that are characteristic of Jordan-block matrices. That these cases can occur at all is because while the similarity transformed action that we use appears to be Hermitian it actually is not, something however that one cannot ascertain without first solving the theory. Finally, we extend the model to a local continuous symmetry by introducing a massless gauge boson, and find that the massless Goldstone boson can be incorporated into the massless gauge boson and make it massive by the Englert-Brout-Higgs mechanism in all realizations of the antilinear symmetry except one, namely the Jordan-block Goldstone mode case. In that case we find that since the Goldstone boson then has zero norm, it does not get incorporated into the gauge boson, with the gauge boson staying massless. In this case we have a spontaneously broken local gauge symmetry and yet do not get a massive gauge boson. This option cannot be obtained in the standard Hermitian case where all states have positive norm, to thus show how rich the non-Hermitian antilinear symmetry program can be. Appendix A. In general the fields used in the tree approximation to a quantum field theory are c-number matrix elements of q-number quantum fields. Given the left- and right-eigenstates introduced in Sec. III we can identify which particular states are involved in (8). Specifically, we can identify the c-number tree approximation fields as the matrix elements ⟨Ω_L|φ|Ω_R⟩ = ⟨Ω_R|V φ|Ω_R⟩, i.e. as c-number matrix elements of the q-number fields between the left and right vacua. In quantum field theory one introduces a generating functional via the Gell-Mann-Low adiabatic switching method. The discussion in the Hermitian case is standard and of ancient vintage, and following the convenient discussion in [28], here we adapt it to the non-Hermitian case. In the adiabatic switching procedure one introduces a quantum-mechanical Lagrangian density L₀ of interest, switches on a real local c-number source J(x) for some quantum field φ(x) at time t = −∞, and switches J(x) off at t = +∞. While the source is active the Lagrangian density of the theory is given by L_J = L₀ + J(x)φ(x). Before the source is switched on the system is in the ground state of the Hamiltonian H₀ associated with L₀, with right-eigenvector |Ω⁻_R⟩ and left-eigenvector ⟨Ω⁻_L| = ⟨Ω⁻_R|V. (Here V implements V H₀ V⁻¹ = H₀† and is independent of J.) And after the source is switched off the system is in the state with right-eigenvector |Ω⁺_R⟩ and left-eigenvector ⟨Ω⁺_L| = ⟨Ω⁺_R|V. (V again implements V H₀ V⁻¹ = H₀†.) While |Ω⁻_R⟩ and |Ω⁺_R⟩ are both eigenstates of H₀, they differ by a phase, a phase that is fixed by J(x) according to e^(iW(J)) = ⟨Ω⁺_L|Ω⁻_R⟩, as written in terms of the vacua when J is active.
This expression serves to define the functional $W(J)$, with $W(J)$ serving as the generator of the connected Green's functions of the $J=0$ theory, and with the $\Gamma^n_0(x_1,\dots,x_n)$ that appear in the expansion of its Legendre transform $\Gamma(\phi_C)$ being the one-particle-irreducible, $\phi_C=0$, Green's functions of the quantum field $\phi(x)$. Functional variation of $\Gamma(\phi_C)$ then yields $\delta\Gamma(\phi_C)/\delta\phi_C=-J(x)$, relating $\Gamma(\phi_C)$ back to the source $J$. On expanding in momentum space around the point where all external momenta vanish, we can write $\Gamma(\phi_C)$ in the form $\Gamma(\phi_C)=\int d^4x\,\big[-V(\phi_C)+\tfrac{1}{2}Z(\phi_C)\,\partial_\mu\phi_C\partial^\mu\phi_C+\dots\big]$. The quantity $V(\phi_C)$ is known as the effective potential, as introduced in [19,29] (a potential that is spacetime independent if $\phi_C$ is), while the $Z(\phi_C)$ term serves as the kinetic energy of $\phi_C$. The significance of $V(\phi_C)$ is that when $J$ is zero and $\phi_C$ is spacetime independent, we can write $V(\phi_C)$ as an energy density in a volume $\mathcal{V}$, where $|S_R\rangle$ and $|N_R\rangle$ are spontaneously broken and normal vacua in which $\langle S_L|\phi|S_R\rangle$ is nonzero and $\langle N_L|\phi|N_R\rangle$ is zero. The search for non-trivial tree approximation minima is then a search for states $|S_R\rangle$ in which $V(\phi_C)$ would be negative. In the non-Hermitian case the $V(\phi_C)$ associated with left and right vacua is then the needed effective potential. In reference to the Goldstone theorem, we note that in writing down Ward identities one begins with operator relations for time-ordered products of general fields and current operators of the generic form $\partial_\mu\big[T\big(j^\mu(x)A(0)\big)\big]=\delta(x^0)\big[j^0(x),A(0)\big]+T\big(\partial_\mu j^\mu(x)A(0)\big)$, where $A(0)$ is a product of fields at the origin of coordinates. We restrict to the case where $\partial_\mu j^\mu(x)=0$, and take matrix elements in the vacuum (normal or spontaneously broken), only unlike in the Hermitian case, in the non-Hermitian case we take matrix elements in the left and right vacua. Since there is only one four-momentum $p^\mu$ in the problem, in Fourier space we can set the matrix element proportional to $p^\mu F(p^2)$, where $F(p^2)$ is a scalar function. On introducing $Q(t)=\int d^3x\,j^0(x)$, we integrate both sides of (A11) with $\int d^4x$. Should the right-hand side of (A12) not vanish (i.e. $Q(t=0)|\Omega_R\rangle\neq 0$ or $\langle\Omega_L|Q(t=0)\neq 0$), there would then have to be a massless pole at $p^2=0$ on the left-hand side. This then is the Goldstone theorem, as adapted to the non-Hermitian case. As we see, by formulating non-Hermitian theories in terms of left- and right-eigenvectors, the extension of the discussion of spontaneously broken symmetry to the non-Hermitian case is straightforward. The specific structure of Ward identities such as that given in (A12) only depends on the symmetry behavior associated with the currents of interest. Since relations such as (A10) are operator identities, they hold independent of the states in which one calculates their matrix elements. In the non-Hermitian but CPT-symmetric situation, in order to look for a spontaneous breaking of the continuous global symmetry associated with the currents of interest one takes matrix elements of the relevant Ward identity in the $\langle S_L|$ and $|S_R\rangle$ states, and as discussed in [13], one looks to see whether the consistency of the Ward identity matrix elements in those states requires the existence of massless Goldstone bosons. In regard to the spontaneous breakdown of a continuous local symmetry in the non-Hermitian but CPT-symmetric case, the authors of [13] had left open the question of whether one could achieve the Englert-Brout-Higgs mechanism if one uses their non-standard variational procedure. Since we use a standard variational procedure and a standard Noether theorem approach, and continue to use the same $\langle S_L|$ and $|S_R\rangle$ states, we can readily extend our approach to the local symmetry case.
And we find that in all realizations of the antilinear symmetry we can achieve the Englert-Brout-Higgs mechanism just as in the standard Hermitian case, save only for the particular Jordan-block situation in which the Goldstone boson itself has zero norm, a case in which, despite the spontaneous symmetry breaking, the gauge boson stays massless.
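For reference, the chain of steps in the Goldstone argument above can be written compactly; this transcription is ours (the source's equation numbering such as (A10)-(A12) is not reproduced), assuming a conserved current $\partial_\mu j^\mu = 0$ and the left/right vacua defined earlier:

```latex
% Ward identity for a conserved current, evaluated between left and right vacua
\partial_\mu \langle \Omega_L | T\big(j^\mu(x) A(0)\big) | \Omega_R \rangle
  \;=\; \delta(x^0)\,\langle \Omega_L | \big[\,j^0(x),\, A(0)\,\big] | \Omega_R \rangle,
\qquad
\int d^4x\, e^{\,ip\cdot x}\,
  \langle \Omega_L | T\big(j^\mu(x) A(0)\big) | \Omega_R \rangle
  \;=\; p^\mu F(p^2).
% Integrating over all x: a nonvanishing charge matrix element
% <Omega_L|[Q(0), A(0)]|Omega_R> != 0 forces F(p^2) to have a pole at
% p^2 = 0, i.e. a massless Goldstone boson.
```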
16,314.2
2018-08-01T00:00:00.000
[ "Physics" ]
Quantum computing with Bianchi groups It has been shown that non-stabilizer eigenstates of permutation gates are appropriate for allowing $d$-dimensional universal quantum computing (uqc) based on minimal informationally complete POVMs. The relevant quantum gates may be built from subgroups of finite index of the modular group $\Gamma=PSL(2,\mathbb{Z})$ [M. Planat, Entropy 20, 16 (2018)] or more generally from subgroups of fundamental groups of $3$-manifolds [M. Planat, R. Aschheim, M.~M. Amaral and K. Irwin, arXiv 1802.04196 (quant-ph)]. In this paper, previous work is encompassed by the use of torsion-free subgroups of Bianchi groups for deriving the quantum gate generators of uqc. A special role is played by a chain of Bianchi congruence $n$-cusped links starting with Thurston's link. Introduction A Bianchi group $\Gamma_k = PSL(2,\mathcal{O}_k) < PSL(2,\mathbb{C})$ acts as a subset of orientation-preserving isometries of 3-dimensional hyperbolic space $\mathcal{H}^3$, with $\mathcal{O}_k$ the ring of integers of the imaginary quadratic field $\mathbb{Q}(\sqrt{-k})$. The quotient space 3-orbifold $PSL(2,\mathcal{O}_k)\backslash\mathcal{H}^3$ has a set of cusps in bijection with the class group of the field [5][6][7]. A torsion-free subgroup $\Gamma_k(l)$ of index $l$ of $\Gamma_k$ is the fundamental group $\pi_1$ of a 3-manifold defined by a link, such as the figure-eight knot [with $\Gamma_{-3}(12)$], the Whitehead link [with $\Gamma_{-1}(12)$] or the Borromean rings [with $\Gamma_{-1}(24)$]. The fundamental group of the knot complement in the three-sphere $S^3$, also called the knot group (or similarly the link group) [5,6], was used to construct appropriate d-dimensional fiducial states for universal quantum computing (uqc) [3] thanks to some of their degree-d coverings. In this paper, one starts by upgrading these models of uqc by using other torsion-free subgroups of Bianchi groups and their corresponding 3-manifolds, such as the Bergé manifold that comes from the Bergé link L6a2 [with $\Gamma_{-3}(24)$] or the so-called magic manifold that comes from the link L6a5 [with $\Gamma_{-7}(6)$]. The latter link is a small congruence link and belongs to a chain of eight links starting with Thurston's eight-cusped congruence link [with $\Gamma_{-3}$ and ideal $(5+\sqrt{-3})/2$] [8,9] and ending with the Whitehead link and then the figure-eight knot. In Sec. 2, one recalls the permutation-based model of universal quantum computing developed by the authors and its relationship to minimal informationally complete POVMs, or MICs [1][2][3]. In Sec. 3, important small-index torsion-free subgroups of Bianchi groups and the corresponding substructure of 3-manifolds are derived. In Sec. 4, one specializes in the connection of the aforementioned Bianchi subgroups to our version of uqc. One focuses on a remarkable chain of $n$-cusped links, $n = 8..1$, obtained thanks to $(\pm 1,1)$-slope Dehn fillings starting with Thurston's congruence link. Their possible role for uqc and the relevant 3-manifolds is discussed. In our approach [1,2], minimal informationally complete POVMs (MICs) are derived from appropriate fiducial states under the action of the (generalized) Pauli group. The fiducial states also allow one to perform universal quantum computation [10]. Minimal informationally complete POVMs with permutations A POVM is a collection of positive semi-definite operators $\{E_1,\dots,E_m\}$ that sum to the identity. In the measurement of a state $\rho$, the $i$-th outcome is obtained with a probability given by the Born rule $p(i)=\mathrm{tr}(\rho E_i)$.
For a minimal IC-POVM, one needs $d^2$ one-dimensional projectors $\Pi_i=|\psi_i\rangle\langle\psi_i|$, with $\Pi_i=dE_i$, such that the rank of the Gram matrix with elements $\mathrm{tr}(\Pi_i\Pi_j)$ is precisely $d^2$. A SIC-POVM (the S means symmetric) obeys the relation $|\langle\psi_i|\psi_j\rangle|^2=\frac{d\,\delta_{ij}+1}{d+1}$, which allows the explicit recovery of the density matrix as in [11, eq. (29)]. New MICs (i.e. ones whose Gram matrix has rank $d^2$) with Hermitian angles $|\langle\psi_i|\psi_j\rangle|_{i\neq j}\in A=\{a_1,\dots,a_l\}$ have been discovered [2]. A SIC is equiangular, with $|A|=1$ and $a_1=\frac{1}{\sqrt{d+1}}$. The fiducial states for SIC-POVMs are quite difficult to derive and seem to follow from algebraic number theory [12]. Except for d = 3, the MICs derived from permutation groups are not symmetric, and most of them can be recovered thanks to subgroups of index d of the modular group Γ [2, Table 1]. For instance, for d = 3, the action of a magic state of type (0, 1, ±1) results in the Hesse SIC shown in Fig. 1a, arising from the congruence subgroup Γ0(2) of Γ. For d = 4, the action of the two-qubit Pauli group on the magic/fiducial state of type (0, 1, −ω6, ω6 − 1), with ω6 = exp(2iπ/6), results in a MIC whose geometry of triple products of projectors Πi, arising from the congruence subgroup Γ0(3) of Γ, turns out to correspond to the commutation graph of Pauli operators, see Fig. 1b and [2, Fig. 2c]. For d = 5, the congruence subgroup 5A0 of Γ is used to get a MIC whose geometry consists of copies of the Petersen graph (see [2, Fig. 3c]). For d = 6, all five congruence subgroups Γ′, Γ(2), 3C0, Γ0(4) or Γ0(5) point out the geometry of Borromean rings (see [2]). The modular group Γ first served as a motivation for investigating the trefoil knot manifold 3_1 in relation to uqc and the corresponding MICs; then the uqc problem was put in the wider frame of the Poincaré conjecture, Thurston's geometrization conjecture and the related 3-manifolds. E.g. MICs may follow from hyperbolic or Seifert 3-manifolds, as shown in Tables 2 to 5 of [3]. A further step is obtained here by restricting our choice of 3-manifolds to low-degree coverings of Bianchi subgroups. This is a quite natural procedure for basing our uqc models on Bianchi groups Γk defined over the ring Ok of imaginary quadratic integers [13], since they are a natural generalization of the modular group Γ. Table 1 provides a short list of low-index torsion-free subgroups of Bianchi groups Γk, k ∈ {−1, −2, −3, −7, −15} [13], obtained from the software Magma [14], together with their identification as fundamental groups of 3-manifolds obtained from the software SnapPy [15]. In particular, one recovers the Whitehead link and the Borromean rings (k = −1 with index l = 12 and 24, respectively) as well as the figure-eight knot (k = −3 with index 12), whose relationship to uqc was explored in [3]. Other links of importance in our paper are the link L6a1, the Bergé link L6a2 and the magic link L6a5, whose connections to uqc are summarized in Table 2. According to [16], a d-fold covering of the fundamental group π1(M) of a 3-manifold M is uniquely determined by the conjugacy class of a subgroup of index d of π1. To recognize the d-fold covering from the conjugacy class and conversely, one makes use of the cardinality signature of subgroups of π1(M), denoted ηd(π1(M)), d = 1..dmax, and one identifies ηd(π1(M)) both in a particular d-covering of M (with SnapPy) and in the representative of a particular conjugacy class of subgroups of π1(M) (with Magma).
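As a concrete illustration of the Gram-matrix rank criterion stated above, the following NumPy sketch (ours, not the paper's code) builds the d = 3 Weyl-Heisenberg orbit of the normalized magic state (0, 1, −1)/√2 mentioned above and checks both the rank-d² MIC condition and the SIC equiangularity |⟨ψi|ψj⟩|² = 1/(d + 1):

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)              # cyclic shift (permutation) matrix
Z = np.diag([omega**k for k in range(d)])      # clock matrix

fiducial = np.array([0, 1, -1]) / np.sqrt(2)   # Hesse SIC fiducial for d = 3

# Orbit of the fiducial under the d^2 Weyl-Heisenberg displacements X^j Z^k
states = [np.linalg.matrix_power(X, j) @ np.linalg.matrix_power(Z, k) @ fiducial
          for j in range(d) for k in range(d)]
projectors = [np.outer(s, s.conj()) for s in states]

# MIC condition: the Gram matrix tr(Pi_i Pi_j) must have rank d^2
gram = np.array([[np.trace(P @ Q).real for Q in projectors] for P in projectors])
print("Gram rank:", np.linalg.matrix_rank(gram))            # expect 9

# SIC condition: |<psi_i|psi_j>|^2 = 1/(d+1) for all i != j
overlaps = [abs(np.vdot(states[i], states[j]))**2
            for i in range(d * d) for j in range(d * d) if i != j]
print("equiangular:", np.allclose(overlaps, 1 / (d + 1)))   # expect True
```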
A Bianchi factory for quantum computing First one provides a reminder about the concept of Dehn filling, which will be useful in Sect. 4.2. Let us start with a lens space L(p, q), that is, a 3-manifold obtained by gluing the boundaries of two solid tori together, so that the meridian of the first solid torus goes to a (p, q)-curve on the second solid torus [where a (p, q)-curve wraps around the longitude p times and around the meridian q times]. Then we generalize this concept to a knot exterior, i.e. the complement of an open solid torus knotted like the knot. One glues a solid torus so that its meridian curve goes to a (p, q)-curve on the torus boundary of the knot exterior, an operation called Dehn surgery [6, p. 275], [7, p. 259], [17]. According to Lickorish's theorem, every closed, orientable, connected 3-manifold is obtained by performing Dehn surgery on a link in the 3-sphere. One shows below how the Bianchi subgroups are related to uqc and the related MICs, and how useful Dehn filling is in our context. Universal quantum computing from the Bianchi factory In this subsection, one specializes on uqc based on the qutrit Hesse SIC (shown in Fig. 1a) and on the two-qubit geometry of the generalized quadrangle GQ(2, 2) (shown in Fig. 1b) obtained with the 'magic' link L6a5. The qutrit uqc follows from a link called 'L14n63788' with the Poincaré polyhedron shown in Fig. 2a, and the two-qubit uqc follows from a 3-manifold with the Poincaré polyhedron shown in Fig. 2b. The former case corresponds to the 3-fold irregular covering of the fundamental group π1(L6a5), whose 3-manifold has first homology Z⊕5, 5 cusps and volume ≈ 16.00. The latter case corresponds to the 4-fold irregular covering whose 3-manifold has first homology Z/2 ⊕ Z⊕4, 4 cusps and volume ≈ 21.34. For the other links investigated in Table 2 one proceeds in the same way. Congruence links in the Bianchi factory As announced earlier, there is an interesting sequence of links starting at Thurston's link and ending at the figure-eight knot that one obtains by applying (±1, 1)-slope Dehn fillings. The Dehn fillings of the last five links, called M_i, i = 5..1, were studied in [18]. Observe that M3 is the magic link L6a4, M4 is the 4-link L8n7 and M5 is the 5-link L10n113 of Table 2. The full sequence starts at Thurston's eight-cusped link and is pictured in Fig. 3. Applying a (1, 1)-slope Dehn filling on the figure-eight knot, the sequence terminates at the spherical manifold M0, which is the Poincaré homology sphere (also known as the Poincaré dodecahedral space) [5,6]. Many of the links of the sequence correspond to 'congruence' manifolds [19]. The 3-manifolds that we could identify as connected to MICs are in Table 2. This becomes a cumbersome task for links larger than the 6-link. For the link L8n7 the cardinality sequence is ηd = {63, 794, 23753, 280162, ···}, for the link L10n113 it is ηd = {31, 176, 1987, 7628, 11682, ···} and for the 6-link it is ηd = {63, 580, 12243, 94274, ···}. For the latter link, SnapPy is able to build the Dirichlet domain (not shown) of symmetry group D6, corresponding to the Hesse SIC, while this is not the case for the Hesse SICs attached to the 5-link. We expect that the full sequence should have a meaning in the context of uqc. Conclusion It is not yet known which vista a practical quantum computer will follow. The authors of [3] developed a view of universal quantum computing based on 3-manifolds.
In a nutshell, there exists a connection between the Poincaré conjecture (which states that every simply connected closed 3-manifold is homeomorphic to the 3-sphere) and the Bloch sphere S3 that houses the qubits. According to this approach, the dressing of qubits in S3 leads to 3-manifolds (which have been investigated in much detail by W. P. Thurston and led to a proof of the Poincaré conjecture), many of them corresponding to MICs (minimal informationally complete POVMs) and the related uqc (universal quantum computing). In [2], uqc was based on the modular group Γ, which corresponds to the trefoil knot manifold in [3]. In the present paper, the 3-manifolds under investigation derive from subgroups of Bianchi groups, a generalization of Γ. There seems to exist a nice connection between some congruence subgroups of Bianchi groups, through the chain (1), and quantum computing, a connection that the author has started to investigate. At a more practical level, 3-manifolds can also be seen as (three-dimensional) quasiparticles beyond anyons [22].
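The covering and Dehn-filling computations described in this record can be reproduced along the following lines. This is a plausible SnapPy session written by us (it assumes the `snappy` package; the expectation that one degree-3 cover of L6a5 has first homology Z⊕5 and volume ≈ 16.00, and that the (1, 1) filling of the figure-eight knot yields the Poincaré homology sphere, is taken from the text):

```python
import snappy

# Irregular degree-3 covers of the 'magic' link group pi_1(L6a5) (cf. Table 2)
M = snappy.Manifold("L6a5")
for N in M.covers(3):
    print(N.cover_info()["type"], N.homology(), N.num_cusps(), N.volume())

# One step of the (+/-1,1)-slope Dehn-filling chain: figure-eight knot -> M_0
F = snappy.Manifold("4_1")
F.dehn_fill((1, 1))                          # (1,1) filling on the single cusp
print(F.filled_triangulation().homology())  # trivial H_1 for a homology sphere
```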
2,840.6
2018-08-21T00:00:00.000
[ "Physics", "Computer Science" ]
Fabricating a PDA-Liposome Dual-Film Coated Hollow Mesoporous Silica Nanoplatform for Chemo-Photothermal Synergistic Antitumor Therapy In this study, we synthesized hollow mesoporous silica nanoparticles (HMSNs) coated with polydopamine (PDA) and a D-α-tocopheryl polyethylene glycol 1000 succinate (TPGS)-modified hybrid lipid membrane (denoted as HMSNs-PDA@liposome-TPGS) to load doxorubicin (DOX), which achieved the integration of chemotherapy and photothermal therapy (PTT). Dynamic light scattering (DLS), transmission electron microscopy (TEM), N2 adsorption/desorption, Fourier transform infrared spectrometry (FT-IR), and small-angle X-ray scattering (SAXS) were used to show the successful fabrication of the nanocarrier. Simultaneously, in vitro drug release experiments showed pH/NIR-laser-triggered DOX release profiles, which could enhance the synergistic therapeutic anticancer effect. Hemolysis tests, non-specific protein adsorption tests, and in vivo pharmacokinetics studies showed that the HMSNs-PDA@liposome-TPGS had a prolonged blood circulation time and greater hemocompatibility compared with HMSNs-PDA. Cellular uptake experiments demonstrated that HMSNs-PDA@liposome-TPGS had a high cellular uptake efficiency. In vitro and in vivo antitumor efficiency evaluations showed that the HMSNs-PDA@liposome-TPGS + NIR group had a desirable inhibitory activity on tumor growth. In conclusion, HMSNs-PDA@liposome-TPGS successfully achieved the synergistic combination of chemotherapy and photothermal therapy, and is expected to become one of the candidates for combined photothermal-chemotherapy antitumor strategies. Introduction Breast cancer is still the leading cause of cancer death in women worldwide [1]. The traditional treatment methods (including chemotherapy, surgery, and radiotherapy) are still the main options for the treatment of breast cancer. However, the results of these therapeutic methods are unsatisfactory, due to the drug resistance caused by long-term chemotherapy, the damage to normal tissues and organs caused by radiotherapy, and the high recurrence rate after surgery. Simultaneously, due to the complex, diverse, and heterogeneous characteristics of tumors, single treatments (radiotherapy, chemotherapy, etc.) cannot achieve the ideal effect of tumor suppression. To overcome the limited therapeutic effect of single chemotherapy, the synergistic treatment of tumors by combining multiple antitumor strategies has attracted increasing attention [2][3][4][5][6][7][8]. More importantly, nanotechnology can enable multimodal synergistic therapies by assembling various therapeutic elements into a nanoplatform, thereby forming multifunctional nanomaterials [9][10][11]. In this regard, various synergistic nanoplatforms have been proposed, such as chemo-photothermal therapy (chemo-PTT) [12,13], chemo-photodynamic therapy (chemo-PDT) [14], chemoimmunotherapy [15,16], and PTT/PDT [17]. TPGS, a derivative of natural vitamin E, has been widely used in tumor therapy in recent years. TPGS can act as an effective surfactant to emulsify hydrophobic molecules and stabilize nanoparticles. Furthermore, TPGS has been shown to improve the drug encapsulation efficiency, cellular uptake, and in vitro cytotoxicity toward cancer cells, and to contribute to the reversal of multidrug resistance (MDR) [59].
More importantly, the PEG chain of TPGS can prolong the retention time and systemic circulation time in the bloodstream after intravenous injection, enabling the drug carrier to better accumulate at the tumor site via the EPR effect. Therefore, TPGS-modified mixed lipids have been widely used as drug carriers to improve the blood circulation time and promote drug enrichment at tumor sites. Song et al. formulated TPGS-modified long-circulating liposomes to load ZgI (ziyuglycoside I). Compared with blank liposomes, the ZgI-TPGS liposomes exhibited a significantly longer mean residence time (MRT) and a significantly lower clearance (CL) rate [60]. Here, we constructed a hollow mesoporous silica nanodrug delivery system (DOX/HMSNs-PDA@liposome-TPGS) double-coated with polydopamine (PDA) and a hybrid lipid membrane (liposome-TPGS) for chemotherapy-photothermal synergistic therapy. As shown in Scheme 1, we coated a PDA shell and a hybrid lipid film on the surfaces of the HMSNs loaded with the anticancer drug doxorubicin (DOX); this structure not only effectively prevented DOX leakage under physiological circulation conditions (pH 7.4), but also exhibited sustained-release behavior in the tumor microenvironment (pH 5.0). Meanwhile, the coating of liposome-TPGS can prolong the circulation time of the carrier and improve its blood compatibility. Additionally, the PDA coating showed excellent photothermal conversion efficiency (η = 16.7%). Furthermore, the DOX/HMSNs-PDA@liposome-TPGS exhibited good chemo-PTT synergistic antitumor effects in in vitro and in vivo antitumor experiments. Our results all proved that the DOX/HMSNs-PDA@liposome-TPGS could be a promising nanoplatform for drug delivery and combined chemo-PTT therapy for cancer. Synthesis of HMSNs-PDA and HMSNs-PDA@Liposome-TPGS The HMSNs were prepared via a self-template etching method based on previous reports, with slight adjustments [61]. Here, 5 mL of F127 solution (5 mg/mL) and 7 mL of 2 M NaOH were added to a solution of 1 g of CTAB dissolved in 475 mL of water. The reaction mixture was then stirred while being heated to 80 °C (600 rpm). Then, TEOS (6 mL) was rapidly added to this solution and the reaction mixture was stirred for 1 h. Next, TEOS (5 mL) was slowly added to this suspension in a dropwise manner, and the reaction mixture was agitated for a further hour. The product was centrifuged, washed with water and absolute ethanol, and dried. The obtained powder was dispersed in PBS (pH 7.4) at a concentration of 0.5 mg/mL at 65 °C for 18 h under gentle stirring. The HMSNs were collected by centrifugation at 9000 rpm and washed with absolute ethanol three times. Then, the amino-modified HMSNs (denoted as HMSNs-NH2) were fabricated using the post-grafting method. In brief, 0.2365 g of HMSNs was dispersed in 23.65 mL of ethanol by sonication. Then, APTES (710 µL) was added dropwise and the mixture was stirred for 24 h to synthesize the HMSNs-NH2. Afterwards, 10 mg of HMSNs-NH2 was placed in 5 mL of HEPES (pH 7.4). Next, 2 mg of dopamine and 2.4 mg of ammonium persulfate were added and allowed to react for 12 h. The product was collected and denoted as HMSNs-PDA. The liposome-TPGS was synthesized using the thin-film hydration method [62].
Firstly, SPC, cholesterol, and TPGS (w/w/w, 8:1:1) were dissolved in 3 mL of chloroform and the organic solvent was removed by rotary evaporation under vacuum (−0.1 MPa) at 37 °C, after which a thin film remained on the bottom of the flask. The film was then hydrated with deionized water and sonicated to obtain the liposome-TPGS. Finally, HMSNs-PDA@liposome-TPGS was fabricated by mixing HMSNs-PDA and liposome-TPGS (w/w, 1:2), followed by sonication for 1 min. Transmission Electron Microscopy (TEM) The sample was prepared as a 2 mg/mL ethanol dispersion, added dropwise onto a carbon-coated copper grid, and left to stand for 1 min. It was then dried under an infrared lamp before being photographed. For the HMSNs-PDA@liposome-TPGS samples coated with the liposome-TPGS hybrid lipid film, we first let the sample sit for 1 min; the grid was then incubated with 2% phosphotungstic acid for 1 min and dried under infrared light for imaging. The morphology was observed by TEM (JSM-6510A, JEOL, Tokyo, Japan) at an acceleration voltage of 200 kV. N2 Adsorption/Desorption Approximately 0.1 g of finely ground sample was placed in a sample tube. The sample was then pretreated at the appropriate temperature. After the pretreatment, the sample was weighed again to obtain its exact mass. Finally, a nitrogen adsorption/desorption isotherm test was performed. The nitrogen adsorption and desorption capacity was measured on an SA3100 surface area and pore size analyzer (Beckman Coulter, Brea, CA, USA). Fourier Transform Infrared Spectroscopy (FT-IR) Appropriate amounts of the samples to be tested were prepared using the KBr pellet method. Fourier transform infrared spectroscopy (FT-IR) was performed using a spectrometer (Spectrum 1000, PerkinElmer, Waltham, MA, USA). For the test conditions, the wavenumber range was 4000–400 cm⁻¹ and the resolution was 4 cm⁻¹. Small-Angle X-ray Scattering (SAXS) The powder was placed in the cuvette and measured after the instrument had reached vacuum conditions. The test conditions were a scanning range of 0°–6° (2θ), a scanning step size of 0.02°, and a scanning speed of 0.6°/min. The SAXS study was carried out to investigate the state of the drug (crystalline/amorphous) in the HMSNs-PDA. Size Distribution and Zeta Potential (ζ) The samples were diluted and placed into the sample cell. The size and charge of the NPs were measured using a Zetasizer Nano ZS90 instrument (Malvern Instruments Ltd., Malvern, UK). Photothermal Conversion Property Test To determine the photothermal effects of the different samples, water, HMSNs-NH2, HMSNs-PDA, and HMSNs-PDA@liposome-TPGS at different concentrations (25, 50, 100, 200, and 400 µg/mL) were subjected to NIR irradiation at 2.0 W/cm² for 5 min (808 nm), and the HMSNs-PDA@liposome-TPGS (400 µg/mL) sample suspensions were irradiated at various power densities for 5 min. To investigate the photothermal stability of the HMSNs-PDA@liposome-TPGS, the sample suspensions were continuously irradiated with an NIR laser for 5 min (808 nm, 2 W/cm²) and then cooled naturally for 10 min without irradiation. The temperature changes of the HMSNs-PDA@liposome-TPGS over three on/off cycles of laser irradiation were recorded. Additionally, the photothermal conversion efficiency of the HMSNs-PDA@liposome-TPGS was calculated by referring to the relevant literature [63][64][65].
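The efficiency calculation cited here is usually the Roper-type energy balance, in which a lumped heat-transfer coefficient is extracted from the time constant of the natural-cooling curve. The sketch below is our illustration of that standard procedure, with synthetic numbers rather than the paper's data; the function name and inputs are our own assumptions.

```python
import numpy as np

def photothermal_efficiency(t_cool, T_cool, T_max, T_surr, m_g, cp, I, A808, Q_dis=0.0):
    """Energy-balance estimate of the photothermal conversion efficiency (eta).

    t_cool, T_cool : cooling-phase time points (s) and temperatures (deg C)
    T_max, T_surr  : plateau and ambient temperatures (deg C)
    m_g, cp        : mass (g) and specific heat (J g^-1 K^-1) of the suspension
    I              : laser power (W); A808: absorbance at 808 nm
    Q_dis          : baseline heating of solvent/container (W), if measured
    """
    theta = (T_cool - T_surr) / (T_max - T_surr)   # dimensionless driving force
    # Linear fit of t against -ln(theta) gives the thermal time constant tau_s
    tau_s = np.polyfit(-np.log(theta), t_cool, 1)[0]
    hS = m_g * cp / tau_s                          # lumped heat-transfer term (W/K)
    return (hS * (T_max - T_surr) - Q_dis) / (I * (1 - 10 ** (-A808)))

# Illustrative synthetic cooling curve only (not the paper's raw data):
t = np.linspace(10, 600, 60)
T = 25 + 14 * np.exp(-t / 180)
print(photothermal_efficiency(t, T, T_max=39.0, T_surr=25.0,
                              m_g=1.0, cp=4.2, I=2.0, A808=0.8))
```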
Drug Loading Capacity and Encapsulation Efficiency Here, 10 mg of HMSNs-PDA and 5 mg of DOX were dispersed in 3 mL of phosphate-buffered solution (PBS, pH 7.4) and stirred for 24 h in the dark; the precipitate was then collected and washed three times with PBS. The washing PBS was collected and a UV spectrophotometer was used to determine the concentration of DOX in the PBS at 480 nm. The precipitates were dried to obtain drug-loaded HMSNs-PDA (denoted as DOX/HMSNs-PDA). To synthesize the drug-loaded HMSNs-PDA@liposome-TPGS, the DOX/HMSNs-PDA was coated with a lipid membrane as described above, with the HMSNs-PDA replaced by DOX/HMSNs-PDA; all other operations remained unchanged. The drug loading (DL%) and encapsulation efficiency (EE%) were calculated as DL% = M drug in NPs/(M drug in NPs + M NPs) × 100 and EE% = M drug in NPs/M total drug × 100, where M drug in NPs is the mass of DOX loaded in the nanoparticles, M total drug is the initial mass of DOX, and M NPs is the dry weight of the different nanoparticles. In Vitro Drug Release Experiment The dialysis method was used to study the release profiles of the drug-loaded NPs. The DOX/HMSNs-PDA@liposome-TPGS (0.5 mg/mL) suspension was added to dialysis bags (MWCO = 10,000 Da) and placed in separate flasks containing 30 mL of PBS solution at different pH levels (pH 7.4, pH 5.0). The flasks were then shaken in a gas bath shaker (37 °C, 100 rpm). At predetermined time points, 3 mL of release medium was withdrawn and an equal amount of the corresponding blank release medium was supplemented. Meanwhile, the irradiation groups were exposed to NIR (808 nm, 2 W/cm²) for 5 min each hour. Finally, the amount of released DOX was determined at 480 nm using ultraviolet spectroscopy (UV-756PC, Sunny Hengping Instrument Co., Ltd., Shanghai, China). MTT Assay The 4T1 cells were cultured in 96-well plates at a density of 1 × 10⁴ cells/well. Different concentrations of blank carriers (HMSNs-NH2, HMSNs-PDA, and HMSNs-PDA@liposome-TPGS) and DOX-loaded carriers (DOX/HMSNs-NH2, DOX/HMSNs-PDA, and DOX/HMSNs-PDA@liposome-TPGS) were co-incubated with the cells for 24 h. The DOX/HMSNs-PDA@liposome-TPGS + NIR group was incubated for 4 h and irradiated for 5 min (808 nm, 2 W/cm²), then cultured for another 20 h. After incubation, the culture medium was removed and the cells were washed with PBS. Then, 100 µL of MTT solution (0.5 mg/mL) was added to each well and incubated for an additional 4 h. Finally, the MTT solution was discarded, and 200 µL of DMSO solution was added to each well and shaken for 10 min at 37 °C to fully dissolve the formazan crystals. The absorbance was measured at 490 nm using a microplate reader. Hemolysis Test Hemolysis caused by the NPs was evaluated using a hemolysis test. Samples (HMSNs-NH2, HMSNs-PDA, and HMSNs-PDA@liposome-TPGS dispersed in saline) covering a 0–800 µg/mL concentration gradient were incubated with an equal volume of 2% erythrocyte suspension for 3 h at 37 °C. The 2% erythrocyte suspensions mixed with saline or deionized water were regarded as the negative and positive control groups, respectively. The mixture was centrifuged at 2000 rpm to collect the supernatant. The absorbance of the supernatant was measured at 540 nm and the hemolysis ratio was calculated as Hemolysis (%) = (A S − A N)/(A P − A N) × 100, where A S is the absorbance of each sample, A N is the absorbance of the negative control, and A P is the absorbance of the positive control. Non-Specific Protein Adsorption Test Bovine serum albumin (BSA) was chosen as a model protein to evaluate the adsorption of NPs.
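As a quick numerical companion to the loading, hemolysis, and release calculations just described (the BSA adsorption protocol continues below), here are minimal helper functions written by us; the withdrawn-volume correction in the release bookkeeping is our assumption about how the 3 mL sampling replacement was handled, and the example numbers are illustrative only.

```python
def drug_loading(m_drug_in_nps, m_nps):
    """DL% = mass of loaded drug / total mass of drug-loaded nanoparticles."""
    return 100 * m_drug_in_nps / (m_drug_in_nps + m_nps)

def encapsulation_efficiency(m_drug_in_nps, m_total_drug):
    """EE% = mass of loaded drug / initial mass of drug."""
    return 100 * m_drug_in_nps / m_total_drug

def hemolysis_ratio(a_sample, a_negative, a_positive):
    """Hemolysis% from supernatant absorbances at 540 nm."""
    return 100 * (a_sample - a_negative) / (a_positive - a_negative)

def cumulative_release(concs, v_medium=30.0, v_sample=3.0, m_loaded=1.0):
    """Cumulative DOX release (%) with correction for the 3 mL withdrawn
    and replaced at each time point; concs are measured levels (mg/mL)."""
    released = []
    for n, c in enumerate(concs):
        m = v_medium * c + v_sample * sum(concs[:n])  # add back removed drug
        released.append(100 * m / m_loaded)
    return released

print(drug_loading(4.75, 10))  # ~32.2%, matching the DOX/HMSNs-PDA value quoted later
```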
HMSNs-NH2, HMSNs-PDA, and HMSNs-PDA@liposome-TPGS were each incubated with BSA solution (PBS, 0.5 mg/mL) for 6 h at 37 °C. After incubation, the mixture was centrifuged to obtain the supernatant. Then, 200 µL of supernatant was added to 2 mL of Coomassie brilliant blue dye, shaken for 30 s, and placed at room temperature for 3 min. The absorbance of the solution was measured at 595 nm, and the adsorption rate Q (%) was calculated as Q = (C 0 − C) × V/m, where C 0 and C are the initial and remaining concentrations of the BSA solution, V is the volume of the solution, and m is the mass of the nanoparticle sample. Each experiment was carried out in triplicate. Cellular Uptake Evaluation The 4T1 cells at the logarithmic growth stage were cultured in 12-well plates at a density of 1 × 10⁵ cells/well and incubated overnight in an incubator with 5% CO2 at 37 °C for adherence. DOX, DOX/HMSNs-PDA, DOX/HMSNs-PDA@liposome-TPGS, and DOX/HMSNs-PDA@liposome-TPGS + NIR were incubated with the cells for 4 h. For the DOX/HMSNs-PDA@liposome-TPGS + NIR group, after 2 h of incubation the sample was irradiated with an 808 nm laser at a power density of 2 W/cm² for 5 min and cultured for another 2 h. The medium was removed and the cells were washed with PBS. Next, the cells were fixed with 4% paraformaldehyde for 15 min and stained with 200 µL of DAPI for 10 min. Eventually, the samples were washed with PBS and observed under a confocal laser scanning microscope (CLSM). In Vivo Antitumor Effect Study and H&E Staining Analysis The tumor-bearing BALB/c mice were randomly divided into five groups: a saline group, DOX group, DOX/HMSNs-PDA group, DOX/HMSNs-PDA@liposome-TPGS group, and DOX/HMSNs-PDA@liposome-TPGS + NIR group (n = 5). Sample solutions at a 5.0 mg/kg equivalent dose of DOX were intravenously administered and the mice were weighed every other day. The DOX/HMSNs-PDA@liposome-TPGS + NIR group was irradiated for 5 min (808 nm, 2 W/cm²) 6 h after administration. At the end of the experiment, the mice were sacrificed and the heart, liver, spleen, lung, kidney, and tumor tissues were collected, weighed, and fixed with a 4% paraformaldehyde solution. After dehydration, paraffin embedding, and sectioning, the samples were stained with hematoxylin and eosin (H&E). The pathological changes of the various tissues and organs were observed under an optical microscope. Biodistribution Behavior In Vivo and Internal Long-Circulation Performance To investigate the distribution of the nanoformulations in mice, ICG was loaded into the nanoplatform and stirred in the dark for 24 h, then the product was collected by centrifugation. When the tumor volume reached about 200 mm³, 100 µL of ICG/HMSNs-PDA or ICG/HMSNs-PDA@liposome-TPGS (administered dose 50 mg/kg) was injected via the tail vein. A fluorescence imaging system (IVIS Lumina) was used to perform imaging at 3 h and 24 h. Then, the mice were sacrificed to observe the fluorescence of the heart, liver, spleen, lung, kidney, and tumor samples (excitation/emission wavelengths 720 nm/790 nm). In order to assess the long-circulation capacity of the carrier, SD rats were injected intravenously with the ICG-labeled nanocarriers. The SD rats were anesthetized and blood samples were collected from the orbit at predetermined time points (5 min, 30 min, 1 h, 2 h, 4 h, 6 h, 8 h, 12 h, 24 h, and 48 h). The blood samples were centrifuged for 10 min at 4000 rpm and the supernatant was taken for fluorescence detection using a microplate reader.
To quantify the relative percentage of NPs remaining in the blood circulation at each blood sampling time point, the relative fluorescence signal (the ratio of the fluorescence intensity of the blood sample at each time point to that of the blood sample at 5 min) was used to represent the systemic circulation capacity of the carrier. Statistical Analysis The experiments were conducted at least in triplicate. Statistical differences between two groups were analyzed by t-test. When comparing multiple groups of data, statistical differences between groups were tested by one-way ANOVA. The statistical significance levels were set at * p < 0.05, ** p < 0.01, *** p < 0.001, and **** p < 0.0001. Synthesis and Characterization of NPs The HMSNs-PDA and HMSNs-PDA@liposome-TPGS were synthesized as described in Section 2.2. The TEM images (Figure 1A–D) and particle size distribution curves (Figure 2A–C) of the different samples are shown here. The TEM images indicated that the bare HMSNs displayed a spherical shape and a hollow mesoporous structure. The hydrated particle size of the HMSNs was about 122.5 ± 14.93 nm (PDI 0.221) (Table 1). The outer shell was about 7 nm thick. After coating with PDA, the surface of the HMSNs-PDA became rough and the mesoporous channels were partially masked by a thin PDA film. After further coating with a lipid film, a lipid layer with a thickness of about 10 nm appeared on the surface of the HMSNs-PDA@liposome-TPGS. Meanwhile, the particle sizes of HMSNs-PDA and HMSNs-PDA@liposome-TPGS increased to 153.8 ± 25.18 nm (PDI 0.128) and 220 ± 16.3 nm (PDI 0.216), respectively. Since the TPGS embedded in the lipid membrane has a hydrophilic PEG end, it could form a hydration layer on the surface of the HMSNs-PDA, thereby making it difficult for the particles to aggregate. Therefore, the HMSNs-PDA@liposome-TPGS exhibited better dispersion. Similarly, the average surface charges of the different samples also changed significantly (Figure 2D). The surface charge of the bare HMSNs was −18.13 ± 3.17 mV, due to the large number of silanol groups on the silica surface. The surface charge of HMSNs-NH2 then changed from negative to positive after the surface amination step: the HMSNs-NH2 showed a positive potential of 14.23 ± 3.01 mV. When the HMSNs-NH2 was coated with PDA, the potential of HMSNs-PDA decreased to −8.99 ± 0.02 mV, due to the presence of the hydroxyl groups of PDA. After being coated with liposome-TPGS, the surface ζ potential of HMSNs-PDA@liposome-TPGS (−13.22 ± 3.09 mV) was close to the lipid membrane potential (−14.75 ± 1.06 mV). These results indicated that the PDA layer and liposome-TPGS hybrid lipid film were successfully wrapped on the surface of the HMSNs.
In addition, the N2 adsorption–desorption isotherms with the corresponding BJH pore size distributions were examined and are shown in Figure 2E,F. The specific surface area, pore volume, and most probable pore size values of the different HMSNs samples are also given in Table 2. The HMSNs displayed a type IV isotherm with an H2 hysteresis loop, which exhibited an obvious mesoporous feature. After coating with PDA, the specific surface area, pore volume, and pore diameter of HMSNs-PDA were significantly decreased due to the blocking by the PDA layer. After loading the drug, the relevant parameters of DOX/HMSNs-PDA decreased further. These results confirmed that DOX occupied the mesoporous channels and that the surface was successfully modified by the PDA film. Next, the surface characterization of the prepared nanoparticles was also evaluated based on the FT-IR spectra (Figure 2G). Table 2. BET surface area (S BET (m²/g)), pore diameter (W BJH (nm)), and pore volume (V t (cm³/g)) values of samples. Moreover, the pore orders of the HMSNs, HMSNs-NH2, and HMSNs-PDA were characterized by SAXS. In Figure 2H, it can be seen that the SAXS curves of the three carriers do not show a maximum peak, which suggests that the channels of the three carriers were disordered. An XRD study was carried out to investigate the state of the drug (crystalline/amorphous) in the HMSNs-PDA (Figure 2I). The DOX and physical mixture groups showed sharp and intense crystalline diffraction peaks at 2θ values of 12.94°, 14.80°, 15.56°, 18.40°, 22.46°, and 25.02°. Meanwhile, no crystal diffraction peaks were observed in the pattern of HMSNs-PDA. Notably, the crystalline diffraction peaks of DOX were also not observed in the pattern of DOX/HMSNs-PDA. This phenomenon showed that the DOX was altered from a crystalline to an amorphous form during the preparation process. This was due to the adsorption of the drug into the pores: the nanoscale pore size limited the long-range ordered structure associated with drug crystallization, thereby inhibiting crystallization and making the drug exist in an amorphous state [66]. Photothermal Conversion Property Test of NPs As shown in Figure 3A, the temperature increase (ΔT) rose gradually to 6.5, 7.3, 7.7, 10.3, and 14.1 °C under 5 min of irradiation (808 nm, 2.0 W/cm²) as the NP concentration increased from 25 to 400 µg/mL. Meanwhile, the carrier exhibited a power-dependent temperature increase (Figure 3C). Moreover, the photothermal curves of water and the different carriers under NIR irradiation (808 nm, 2.0 W/cm²) are exhibited in Figure 3B.
After 5 min of irradiation at a power density of 2.0 W/cm², the temperature increases (ΔT) observed for the HMSNs-PDA@liposome-TPGS and HMSNs-PDA were 13.8 °C and 13.9 °C, respectively. Conversely, no significant temperature fluctuations were detected in the HMSNs-NH2 suspension or in water. This showed that the PDA-coated HMSNs had a good photothermal effect, and that the coating of the liposome-TPGS did not affect the photothermal conversion performance of the HMSNs-PDA. To assess the photostability of the HMSNs-PDA@liposome-TPGS nanocarrier, we tracked the temperature changes over three cycles of laser on/off operation. As shown in Figure 3D, we observed no significant decrease in the temperature fluctuations during repeated photothermal heating (300 s)–natural cooling cycles. This indicated that the HMSNs-PDA@liposome-TPGS nanocarrier was stable throughout the photothermal conversion process. In addition, based on the obtained data (Figure 3E,F), we calculated the photothermal conversion efficiency (η) to be 16.7%. Drug Loading and In Vitro Release The DL values of DOX/HMSNs-NH2, DOX/HMSNs-PDA, and DOX/HMSNs-PDA@liposome-TPGS were 29.96%, 32.20%, and 27.83%, respectively. Relatively low DOX release levels from DOX/HMSNs-NH2 (28.92%), DOX/HMSNs-PDA (20.74%), and DOX/HMSNs-PDA@liposome-TPGS (17.45%) were observed under physiological conditions (pH 7.4) (Figure 3G). However, under acidic conditions (pH 5.0), chosen to simulate the endo-lysosomal environment of cancer cells, the release of DOX from the three carriers increased significantly (90.84%, 83.36%, and 71.64%, respectively) (Figure 3H). This pH-responsive drug release behavior was caused by two factors. On the one hand, the solubility of DOX increased with decreasing pH [67]. On the other hand, the interaction between the DOX and the mesoporous silica became weaker under acidic conditions, which also provided more favorable diffusion conditions for the DOX [68]. In addition, compared with DOX/HMSNs-NH2 and DOX/HMSNs-PDA, the drug release rate of DOX/HMSNs-PDA@liposome-TPGS decreased slightly under both pH conditions. The cumulative release rates of DOX after 48 h were 21.35% and 69.82% at pH 7.4 and 5.0, respectively. This highlighted that the HMSNs coated with liposome-TPGS showed a slower drug release rate compared to the bare HMSNs. The sustained release could be attributed to the PDA and liposome film coatings of the NPs, which prevented the premature release of DOX from the inner pores. After NIR laser irradiation, the cumulative release of DOX reached 82% under acidic conditions (pH 5.0), which was higher than the 70% observed without laser irradiation at pH 5.0, and also higher than the 33.5% observed upon NIR laser irradiation under neutral conditions (Figure 3I). This NIR-responsive drug release property was mainly attributed to the fact that the heat generated by the PDA under NIR laser irradiation destroyed the interaction between the DOX and the HMSNs. Therefore, the HMSNs coated with PDA and a hybrid lipid membrane can not only reduce the release of DOX in normal cells but also endow the nanoplatform with pH/NIR-responsive drug delivery capability, thereby achieving specific drug release at tumor sites. Cell Toxicity Test An MTT assay was carried out to evaluate the cell viability and study the chemo-photothermal therapeutic effect of DOX/HMSNs-PDA@liposome-TPGS.
As can be seen from Figure 4A, the HMSNs-NH2 treatment group showed obvious cytotoxicity at a high concentration (120 µg/mL), with a cell survival rate of only 68.28%. However, the survival rate of the 4T1 cells remained above 80% after incubation with HMSNs-PDA or HMSNs-PDA@liposome-TPGS for 24 h over a concentration range of 15–120 µg/mL, which clearly demonstrated negligible toxicity toward tissues and cells. This showed that the HMSNs double-coated by PDA and a lipid membrane had good biocompatibility. Upon drug loading, the 4T1 cells were incubated with the various formulations at a series of DOX concentrations. As shown in Figure 4B, all formulation groups exhibited concentration-dependent killing effects. Since the free DOX group existed in solution form, it could quickly cross the cell membrane and enter the nucleus to exert its effect, so the cell survival rate was only 19.78% at a dosage of 20 µg/mL. However, there was a time delay in the release of DOX from the HMSNs-NH2 and HMSNs-PDA groups, so the cell viability levels of these two groups (45.54%, 63.07%) were higher than that of the free DOX group, and the cell survival rate of the DOX/HMSNs-PDA group was further increased because the pores were covered by the polydopamine coating. Nevertheless, the cell viability of the DOX/HMSNs-PDA@liposome-TPGS group decreased to 39.45%, which may be related to the regulation of apoptosis by TPGS through the mitochondrial pathway [69,70]. It is worth noting that the cytotoxicity of the DOX/HMSNs-PDA@liposome-TPGS increased dramatically after NIR laser irradiation, with a cell viability of 24.30% at a DOX concentration of 20 µg/mL. The above results can be quantitatively illustrated in terms of the IC50 values obtained using the GraphPad Prism software; these IC50 values are listed in Table 3. As anticipated, the IC50 value of the DOX/HMSNs-PDA@liposome-TPGS decreased rapidly (by approximately 40%) with NIR irradiation compared to the group without it. This outcome can be attributed to the outstanding photothermal effect and the enhanced drug release triggered by mild hyperthermia. In conclusion, the DOX/HMSNs-PDA@liposome-TPGS has good tumor cell growth inhibition ability with the assistance of near-infrared light. Hemolysis Test The hemocompatibility of nanoparticles must be considered for intravenous drug delivery systems. Therefore, we evaluated the hemocompatibility of HMSNs-PDA@liposome-TPGS using rat RBCs. As shown in Figure 4C,D, the hemolytic activity of the three NPs was displayed in a dose-dependent manner within the tested concentration range (50–800 µg/mL). Among them, the hemolysis of the HMSNs-NH2 was the most significant, reaching 16.14 ± 2.65% at a concentration of 200 µg/mL. Meanwhile, complete hemolysis was observed once the concentration of HMSNs-NH2 exceeded 400 µg/mL. As reported in other studies, the hemolytic activity of bare HMSNs-NH2 is attributed to the interaction of the surface residual silanol (Si-OH) with quaternary ammonium ions in the erythrocyte membrane [71]. After wrapping with the PDA and liposome-TPGS layers, the HMSNs-PDA@liposome-TPGS-treated samples did not cause any visible hemolysis at any tested concentration (50–800 µg/mL): the hemolysis rate was only about 10% even at concentrations up to 800 µg/mL.
Therefore, the enhanced hemocompatibility of the HMSNs-PDA@liposome-TPGS resulted from the shielding of the silanol groups by the PDA and liposome-TPGS layers. Non-Specific Protein Adsorption Test When carrier materials enter the body, non-specific proteins will adsorb on the surface of the material, resulting in the failure of the nanocarrier function [72,73]. Therefore, carrier materials should have a low protein adsorption rate to ensure the reliability of nanocarrier transport in vivo. Here, we chose bovine serum albumin (BSA) as a model protein to investigate the non-specific protein adsorption of HMSNs-NH2, HMSNs-PDA, and HMSNs-PDA@liposome-TPGS, quantified by the BSA standard curves. The results are shown in Figure 4E. The protein adsorption capacities of HMSNs-NH2, HMSNs-PDA, and HMSNs-PDA@liposome-TPGS were 180.5 µg/mg, 208.0 µg/mg, and 146.0 µg/mg, respectively. The strong protein adsorption of the HMSNs-NH2 and HMSNs-PDA was attributed to the hydrogen bonds formed between the amino groups on the mesoporous silica surface and the proteins, and to the bonding between the benzene rings contained in the PDA layer and aromatic amino acids, respectively. However, the addition of a lipid film shielded the influence of the amino groups and the PDA layer, resulting in a decrease in the adsorption capacity. Therefore, the coating of liposome-TPGS is beneficial for improving the biocompatibility of the carrier and reducing the possibility of its clearance by immune cells. Cellular Uptake Evaluation To observe the uptake of free DOX and DOX/HMSNs-PDA@liposome-TPGS nanoparticles by cells, confocal laser scanning microscopy (CLSM) was utilized. The images in the first row of Figure 4F show that after 4 h of incubation, the red fluorescence of the free DOX was primarily found in the nucleus. This can be attributed to the fact that free small DOX molecules are able to easily penetrate biological membranes through passive diffusion. However, endocytosis is generally considered one of the main entry mechanisms for various drug nanocarriers and is slower than diffusion, so the DOX/HMSNs-NH2 and DOX/HMSNs-PDA treatment groups showed only weak red fluorescence. In comparison, the red fluorescence of the DOX/HMSNs-PDA@liposome-TPGS treatment group was significantly enhanced. This was because the lipid membrane has good fluidity and the affinity between the HMSNs and the cell membrane increased after encapsulation by the lipid membrane, resulting in enhanced uptake into the cells [74]. After exposure to the NIR laser, there was a notable increase in DOX fluorescence compared to the group not exposed to NIR illumination. This showed that the temperature elevation has a positive impact on drug release when near-infrared light irradiation is utilized, which is consistent with the in vitro release results. In addition, it was also observed that the co-localization of DOX and the nucleus was more significant. This may be attributed to an increase in the permeability of the cell membrane with increasing temperature, providing superior conditions for the NPs to penetrate the cell membrane [75]. Taken together, this indicated that DOX/HMSNs-PDA@liposome-TPGS could be internalized and that NIR could accelerate drug release by enhancing cell membrane penetrability and sensitivity, synergistically exerting photothermal-chemotherapy effects to kill tumor cells. The average fluorescence intensity of DOX in each group, obtained by Fiji (ImageJ) analysis, is shown in Figure 4G.
The DOX uptake of the laser irradiation group was 2.60 times that of the non-irradiated group, indicating that light irradiation can promote the intracellular drug release of the carrier. Pearson's coefficient (Pearson's R) is widely used to analyze the degree of correlation between two variables; the closer the value is to 1, the higher the positive correlation between the two. Therefore, the colocalization of the nucleus and DOX was evaluated using the Pearson coefficient as an index. As shown in Figure 4H, the Pearson coefficient of the DOX/HMSNs-PDA@liposome-TPGS + NIR group was 0.76, which was 1.52 times that of the non-irradiated group. This showed that light irradiation favors the entry of DOX into the nucleus, where it can combine with deoxynucleotides to exert its medicinal effect. In Vivo Antitumor Effect Study and H&E Staining Analysis The 4T1 cells were implanted into the subcutaneous tissue of BALB/c mice. After tumor formation, the mice were peritumorally injected with solutions of saline, DOX, DOX/HMSNs-PDA, and DOX/HMSNs-PDA@liposome-TPGS, respectively. As shown in Figure 5A,B, the body weight of each mouse group did not change significantly throughout the course of the two-week experiment. In contrast, there was a significant difference in tumor volume among the experimental groups. The DOX/HMSNs-PDA@liposome-TPGS group had a better antitumor effect than the DOX/HMSNs-PDA group, which might be because the encapsulation of the lipid membrane prolonged the in vivo circulation time and increased the uptake of the nanodrug delivery system. It is worth noting that the DOX/HMSNs-PDA@liposome-TPGS + NIR group exhibited the most superior antitumor ability due to the excellent light-to-heat conversion effect of PDA, and there was almost no change in tumor volume. At the end of the pharmacodynamic experiments, the 4T1 tumor-bearing mice were sacrificed, and the tumor tissues were removed, weighed, and photographed (Figure 5C,D). The results showed a consistent trend between the growth of the tumor mass and the volume. The tumor mass of the DOX/HMSNs-PDA@liposome-TPGS + NIR group was about 1/5 that of the saline group. In addition, the tumor inhibition rate of the DOX/HMSNs-PDA@liposome-TPGS + NIR group reached 87.39%. This indicated that the DOX/HMSNs-PDA@liposome-TPGS achieved effective accumulation in the tumor site and a prominent retention effect in the blood, showing an outstanding chemo-PTT therapeutic effect. Moreover, the H&E staining of tumor slices indicated that DOX/HMSNs-PDA@liposome-TPGS + NIR caused more extensive apoptosis and necrosis than the other treatments. Meanwhile, no discernible histological damage was found when the major organs were H&E-stained (Figure 5E). These results confirmed the superior biocompatibility of the nanoplatform. Biodistribution Behavior In Vivo and Internal Long-Circulation Performance As shown in Figure 5F, the relative fluorescence signal of the ICG/HMSNs-PDA@liposome-TPGS was 48.7% at 24 h post-injection, whereas only 31.4% of the ICG/HMSNs-PDA was retained in the blood circulation. The relative fluorescence intensity of the ICG/HMSNs-PDA@liposome-TPGS group was 1.55 times that of the ICG/HMSNs-PDA group. Meanwhile, to investigate the distribution of the preparation in mice, the mice were sacrificed at 3 h and 24 h after the administration of the ICG-labeled carriers, and the heart, liver, spleen, lung, kidney, and tumor tissues were collected for tissue imaging (Figure 5G).
Compared with ICG/HMSNs-PDA, ICG/HMSNs-PDA@liposome-TPGS was more enriched in the tumor site, and a large number of carriers still remained at the tumor site after 24 h. Furthermore, the fluorescence intensity of the ICG/HMSNs-PDA@liposome-TPGS group was 1.73 times that of the ICG/HMSNs-PDA group. These results showed that the PEG hydrophilic chain in TPGS can effectively decrease plasma protein adsorption and prolong the circulation time of nanoparticles in vivo, thereby enriching them at tumor sites. Conclusions We developed a multifunctional nanoplatform, DOX/HMSNs-PDA@liposome-TPGS, for the controlled and precise delivery of the antitumor drug DOX, thereby achieving a chemo-photothermal synergistic antitumor effect. In this design, the HMSNs exhibited high drug loading (27.83%) due to their uniform mesoporous pore and cavity structures. After loading the DOX, the DOX/HMSNs-PDA@liposome-TPGS exhibited the expected pH/NIR-responsive drug release performance. A PDA shell and a hybrid lipid membrane were then used as gatekeepers capping the surface of the nanoparticles, achieving excellent photothermal properties and a prolonged blood circulation time. The in vitro and in vivo antitumor experiments showed that DOX/HMSNs-PDA@liposome-TPGS not only had good biocompatibility but also exhibited excellent tumor suppression under NIR irradiation through synergistic chemotherapy-PTT effects. In brief, the hollow mesoporous silica nanodrug delivery system with a dual coating of polydopamine and a hybrid lipid membrane was prepared to fully combine the advantages of chemotherapy and PTT, and also to prolong the blood circulation time and promote the effective enrichment of chemotherapeutic drugs at the tumor site. Therefore, this is a promising nanoplatform for synergistic chemo-PTT antitumor effects. Institutional Review Board Statement: The animal study was undertaken following review and approval by the Committee on the Ethics of Animal Experiments of Shenyang Pharmaceutical University (CSXK(Liao)2020-0001).
8,684.2
2023-04-01T00:00:00.000
[ "Biology", "Engineering", "Materials Science" ]
On The Solutions of Some Difference Equations Systems and Analytical Properties Abstract. In this study, we investigated the global asymptotic behavior of the solutions of a second-order difference equation system. Under the given conditions, we obtained some asymptotic results for the positive equilibrium of the system. We have also worked on q-rapidly varying functions; such functions form the class of q-Karamata functions. We applied q-Karamata functions to linear q-difference equations and thereby obtained information about the asymptotic behavior of their solutions. In addition, we studied an initial-boundary value problem for the q-difference equation. Introduction Difference equations (ordinary and q-difference) have been of interest to mathematicians and physicists since ancient times. They arise when differential equations are shaped into mathematical models of many practical phenomena. Difference equations can be easily algorithmized and are very well suited to solution on a computer. Studies on q-difference equations appeared already at the beginning of the last century in intensive works especially by Adams [1], Atkinson [2], Bairamov [3,4], Jackson [5], Elaydi [6], Guseinov [7,8], Carmichael, Trjitzinsky and others. Unfortunately, from the thirties up to the beginning of the eighties, only insignificant interest in the area was observed. Since the eighties, an intensive and somewhat surprising interest in the subject has reappeared in many areas of mathematics and applications, including mainly the new difference calculus and orthogonal polynomials, q-combinatorics, q-arithmetics, q-integrable systems and variational q-calculus. Preliminaries We have investigated the global asymptotic behavior of the solutions of the difference equation system (1), where N ≥ 2 is a fixed integer and q > 1 is a fixed real number. For equation (2) one seeks the solution for real or complex parameters, where the real-valued limit condition meets the conditions below. If a solution to the problem (1), (3) is considered, then taking the boundary conditions (2) into account one obtains (5). This system, together with the initial conditions (3), can be rewritten; thus the initial-boundary value problem (1)-(3) becomes equivalent to the initial value problem (8), (9) [9]. Let (10) be a non-trivial solution of the equation, where α is a complex constant. Substituting (10) into (8), one obtains (11). Equation (11) is equivalent to the boundary value problem (12), (13). If the solution of the problem (12), (13) is non-trivial, the corresponding vector provides a solution of equation (11). Asymptotics of regular solutions of q-difference equations For $0 < q < 1$, we define the q-derivative of a real-valued function $u$ as $D_q u(x) = \frac{u(x)-u(qx)}{(1-q)x}$ (14). The higher-order q-derivatives are given by $D_q^n u = D_q(D_q^{n-1}u)$ (15). Below we describe the asymptotics of solutions of q-difference equations. A q-difference equation for a sequence (y(1), y(2), y(3), ...)
of smooth functions of q has the form (16), where the coefficients are smooth functions. In the usual analytic theory of q-difference equations, q is a complex number inside or outside the unit disk. With this in mind, ε-difference equations are obtained from q-difference equations by a substitution in which ε is a small nonnegative real number that plays the role of Planck's constant [10]. The characteristic polynomial of the q-difference equation (16) can then be written down. We will say that equation (16) is regular if its discriminant, which is a polynomial in the coefficients, does not vanish for any v ∈ S^1. Let λ_1, ..., λ_d denote the roots of the characteristic polynomial, which we call the eigenvalues of eq. (16). It turns out that eq. (16) is regular iff the eigenvalues λ_1, ..., λ_d never collide and never vanish, for every v ∈ S^1. Moreover, it follows by the implicit function theorem that the roots are smooth functions of v ∈ S^1. Let S^1 be partitioned into a finite union of closed arcs (with nonoverlapping interiors) such that the magnitude ordering of the eigenvalues does not change in each arc. In other words, for each arc I_p there is a permutation σ_p of the set {1, ..., d} ordering the eigenvalue magnitudes. The following definition introduces a locally fundamental set of solutions of q-difference equations. Fix a partition of I as above. A set {ψ_1, ..., ψ_d} is a locally fundamental set of solutions of eq. (16) iff for every solution ψ, for every p ∈ P, there exist smooth functions c_m^p expressing ψ in terms of ψ_1, ..., ψ_d for all (k, q) with q^k ∈ I_p. Theorem 1 ([11]). Assume that eq. (16) is regular. Then there exists a locally fundamental set of solutions {ψ_1, ..., ψ_d} such that for every p and every k with q^k ∈ I_p we have ψ_m(k, ·) = exp(Φ_m(·, ·)), for some smooth functions Φ_m with a uniform asymptotic expansion. We find this last one by a continuous limit (or "confluence") of the coefficients when q → 1, with the stated constraints. We will see further in which precise direction one can make the functional equation (1) itself confluent to a differential equation; it admits 0, 1, and ∞ as its only singularities, and these singularities are regular. The Frobenius method provides fundamental systems of solutions at 0 and at ∞. The analytic continuation of a fundamental system of solutions in the neighborhood of a singularity again provides a fundamental system of solutions, and it can therefore be expressed in terms of another fundamental system by a matrix of constants. To determine the behavior of the solutions we give the following lemmas. Lemma 2. For a suitable operator defined as stated, equation (6) can be written in operator form. Proof. This follows from the q-derivative formula (17), where φ_{m,s} ∈ C^∞(I) for all s, with leading term φ_{m,0}(x) given by an integral over e^{2πit}, where we have chosen a branch for the logarithm of λ_m. Definition 1 ([12]). We say that a formal series is a solution to (8) iff (18) holds. It is easy to see that if a formal series is a formal solution to (8), then its leading term φ_0 satisfies the equation in which λ(x) is an eigenvalue of (8). The basic hypergeometric series (19) is a convergent series, and it is the solution of a rational linear q-difference equation of order two. It is a q-analogue (and a deformation) of the classical Euler-Gauss hypergeometric series. Lemma 1. Define the function f : (0, ∞) → R as stated. Then f is strictly increasing on one subinterval and strictly decreasing on the other. Proof. The statement follows from the stated equality for f′(t).
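The definitions labeled (14) and (15) above were lost in extraction; as a minimal reconstruction, and assuming the standard Jackson q-derivative that the surrounding text appears to describe, they would read:

```latex
% A sketch of the presumed definitions (14)-(15): the Jackson q-derivative
% and its iterates for 0 < q < 1. This is an assumption, not the authors' text.
\[
  (D_q u)(x) \;=\; \frac{u(qx) - u(x)}{(q-1)\,x}, \qquad x \neq 0,
\]
\[
  D_q^{\,n} u \;=\; D_q\!\left(D_q^{\,n-1} u\right), \qquad n \geq 1, \qquad D_q^{\,0} u = u .
\]
```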
1,538.6
2018-01-01T00:00:00.000
[ "Mathematics" ]
Graph Convolutional Neural Networks for Body Force Prediction Many scientific and engineering processes produce spatially unstructured data. However, most data-driven models require a feature matrix that enforces both a set number and order of features for each sample, so they cannot easily be constructed for an unstructured dataset. Therefore, a graph-based data-driven model to perform inference on fields defined on an unstructured mesh, using a Graph Convolutional Neural Network (GCNN), is presented. The ability of the method to predict global properties from spatially irregular measurements with high accuracy is demonstrated by predicting the drag force associated with laminar flow around airfoils from scattered velocity measurements. The network can infer from field samples at different resolutions, and is invariant to the order in which the measurements within each sample are presented. The GCNN method, using inductive convolutional layers and adaptive pooling, is able to predict this quantity with a validation $R^{2}$ above 0.98 and a Normalized Mean Squared Error below 0.01, without relying on spatial structure. Introduction Due to recent advances in data-driven methods and the proliferation of scientific data, there has been a significant amount of attention towards data-driven inference to model or predict system properties. This is particularly relevant in fluid mechanics, where large amounts of data are needed to understand potentially complex, multiscale flow phenomena. The success of Deep Learning (DL) in computer vision has inspired its application in studying physical phenomena. Physics-informed Neural Networks are used to learn the physics behind, and solutions to, high-dimensional Partial Differential Equations (PDEs) [1,2][3-5]. Deep Neural Networks (DNNs) have been of particularly high interest in surrogate modeling and predicting complex transport phenomena [6,7]. Wiewel et al. used latent-space learning to efficiently simulate the temporal evolution of the pressure field [8]. Farimani et al. applied conditional Generative Adversarial Networks (cGAN) to solve the physics of transport phenomena without knowledge of the governing equations [9]. Machine learning has also seen success in generating flow fields based on data collected from experiments and on noisy data from numerical simulations [10][11][12][13]. For example, Particle Image Velocimetry (PIV), where velocity fields are generated by tracking the movement of tracer particles, has been introduced as a non-intrusive technique for analyzing flow behavior and measuring the forces interacting with an immersed object [14,15]. The analysis of PIV data using machine learning has allowed for more efficient flow field reconstruction and prediction. Rabault et al. used a Convolutional Neural Network (CNN) architecture to surrogate PIV by cross-correlating point locations between two frames, thereby predicting the flow velocity field [16]. Morimoto et al. applied a CNN to artificial PIV data to develop a method for reconstructing flow fields from snapshots with missing regions [17]. Various machine learning algorithms have recently been applied to structured flow field data to facilitate predictions based on immersed flows around streamlined objects, such as airfoils. For example, the leading-edge suction parameter (LESP) and angle of attack (AoA) are of high importance for discrete vortex methods to be effective. Hou et al.
used a combination of convolutional neural networks and recurrent neural networks to predict these parameters based on time-dependent surface pressure measurements [18]. In related work, Provost et al. used the same method to optimize the number of sensors necessary for LESP and AoA prediction [19]. Zhang et al. applied convolutional neural networks to image representations of various airfoils and their surrounding flow to predict the lift coefficient. The airfoils are immersed in different flow conditions, and the parameters of the flow (e.g., Mach number) are encoded as pixel intensities [20]. Viquerat and Hachem used an optimized CNN to estimate the drag coefficient of several arbitrary 2D geometries in laminar flow. A large training sample of random 2D shapes, along with their drag forces computed by an immersed mesh method, was used to increase the prediction accuracy for realistic geometries such as NACA airfoils [21]. Yilmaz and German used a CNN to predict airfoil performance directly from the geometry of the airfoil, replacing cumbersome surrogate methods that required manual parameterizations of the airfoil using shape functions [22]. Guo et al. trained a deep CNN to make fast but less accurate visual approximations of the steady-state flow around 2D objects, which improves the design process by expediting the generation of alternatives [23]. Miyanawala and Jaiman used a CNN to predict aerodynamic coefficients for several bluff body shapes at low Reynolds numbers. They used structured data of an encoded distance function to predict unsteady fluid forces [24]. Bhatnagar et al. also used a signed distance function, as well as a limited range of both Reynolds numbers and angles of attack, to predict flow field velocities and pressure for several airfoils [25]. These methods, however, are limited in their ability to generalize to unstructured data. As traditional machine learning methods require the creation of a feature matrix with both a specific size and order of input samples, they cannot be applied to unstructured data. Flow field data, however, can be highly unstructured due to the use of irregular meshes to define curved or complex geometries. Recent interest in manipulating unstructured data has led to the development of both mesh-free inference methods for point cloud representations and reduced-order models based on graph-based representations of fluid data [26]. Several works have used graph theory-based methods to identify coherent structures within turbulent flow [27,28]. Hadjighasem et al. examined the generalized eigenvalue problem of the graph Laplacian to develop a heuristic for determining the locations of coherent structures [29]. This work was extended to extract coherent structures from the vortical behavior of the flow by Meena et al., where a graph is constructed to represent the mutual interaction of individual vortex elements, and larger vortex communities are identified with network theory-based community detection algorithms [30]. To perform mesh-free inference, Trask et al. introduced the idea of GMLS-Nets. GMLS-Nets parameterize the generalized moving least-squares functional regression technique for application on mesh-free, unstructured data, abstracting the GMLS operator to perform convolution on point clouds. They demonstrated success in both uncovering operators governing the dynamics of PDEs and predicting body forces associated with flow around a cylinder based on point measurements [31].
In this paper, we present a method for data-driven prediction from flow fields defined on irregular and unstructured meshes, using a Graph Convolutional Neural Network (GCNN) framework. GCNNs have been applied to problems dealing with unordered data points where specific relationships between the points encode important information, and thus are often used in applications such as natural language processing, traffic forecasting, and material property prediction [32][33][34][35][36][37][38][39]. Graph Neural Networks have also previously been used to model spatiotemporal phenomena and to surrogate physics simulators [40,41]. Our approach exploits the irregular mesh that the velocity field is defined on to regress from unordered data across both varying resolutions and non-uniform spatial distributions. In section 2, we first present our methodology of using a graph representation of the unstructured mesh. The learning algorithm, based on an inductive graph convolution method, is then discussed, and finally the structure of the implemented network is explained. In section 3, the data used in our study and the results of the corresponding experiments are discussed and compared to other traditional methods. A conclusion of the work and suggestions for some possible future directions are provided in section 4. Figure 1: a) The velocity information defined on the unstructured mesh is represented as a graph, where the mesh nodes are vertices of the graph, and the connectivity of the mesh is taken as the edges of the graph. A 2×N matrix of node features contains the velocity in each dimension, and an N×N adjacency matrix encodes the connectivity of the mesh. b) The graph convolution operation. (left) The graph before a convolution operation is performed on the center node (red). (right) During graph convolution, the information in each of the rings of N-order neighbors, where N ≤ k, is aggregated to the center node. In this application, k = 2. c) The architecture of the Graph Convolutional Neural Network. 'GC' refers to the graph convolution operation in b), 'TKP' refers to Top-K Pooling. The feature map output of each top-K pooling layer undergoes both mean pooling and max pooling, and the outputs of each operation are concatenated together. The concatenated output from each layer is added together and passed to fully connected layers for regression. Graph Representation The key idea behind our approach is to use a graph representation to describe the connectivity of unstructured data points. In order to resolve flow fields around complex geometries in computational fluid dynamics, an irregular mesh around the immersed object is created. Next, numerical methods are applied to calculate the flow field data. Any mesh structure around the immersed object in the flow field can be represented as a graph, considering mesh nodes as vertices and using edges to connect the neighbors. Therefore, one can construct a graph representation of the unstructured mesh with different complexities or numbers of nodes around arbitrary objects. Specifically, we define an undirected graph G = (V, E), with V vertices describing the nodes of the mesh, and E edges representing the connectivity of the mesh. The flow field data is defined on mesh nodes, resulting in a feature matrix that contains the input features for the V graph vertices. The edge connections are encoded as the adjacency matrix, a binary V × V matrix indicating whether any given pair of vertices is connected.
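To make the graph construction concrete, the following is a minimal sketch, assuming PyTorch Geometric; the function name `mesh_to_graph` and the array layouts are illustrative rather than taken from the paper:

```python
import torch
from torch_geometric.data import Data

def mesh_to_graph(node_uv, edge_list, drag):
    """Wrap one unstructured CFD sample as a graph.

    node_uv:   (N, 2) array of (u, v) velocity components at each mesh node
    edge_list: (2, E) array; column k connects nodes edge_list[0, k], edge_list[1, k]
    drag:      scalar regression target for the whole graph
    """
    x = torch.as_tensor(node_uv, dtype=torch.float)
    edge_index = torch.as_tensor(edge_list, dtype=torch.long)
    # Store both directions so message passing treats the mesh as undirected.
    edge_index = torch.cat([edge_index, edge_index.flip(0)], dim=1)
    y = torch.tensor([drag], dtype=torch.float)
    return Data(x=x, edge_index=edge_index, y=y)
```

Note that this stores the connectivity as a 2×E edge list rather than a dense N×N adjacency matrix, which is the sparse encoding the paper describes in its Data section.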
Graph Convolution Graph Convolutional Neural Networks (GCNNs) are the generalization of Convolutional Neural Networks (CNNs) for operation on graphs. GCNNs, like CNNs, are able to extract multi-scale spatial features through the use of shared weights and localized filters [42]. However, as discussed earlier, traditional CNNs are unable to work with unstructured data. GCNNs bypass this limitation by defining the convolution operation based on the structure of the graph. By propagating information through each node's local neighborhood as defined by the adjacency matrix, GCNNs are invariant towards the order in which the nodes are specified in the feature matrix. GCNNs are often used for tasks such as node classification, link prediction [43], and graph classification. GCNNs can be described in terms of a general framework for learning on graph-structured data, called Message Passing Neural Networks (MPNNs) [44]. MPNNs develop hidden state embeddings h_v for each node v during the training process. Supervised training of a graph neural network aims to learn a state embedding from the features defined on the nodes and edges of the graph that gives the best possible mapping to the output. The training process consists of two phases: the message passing phase, where hidden states aggregate information from their surrounding nodes, and the readout phase, where a feature vector for the graph is computed from the hidden states. The message passing process is parameterized by two functions, the message function M_t and the node update function U_t, while the readout function is given by R. R, M_t, and U_t are all learned differentiable functions that are updated during the training process. Defining e_vw as the edge connecting node v to node w and N(v) as the neighborhood of node v, the message passing phase can be formalized as $m_v^{t+1} = \sum_{w \in N(v)} M_t(h_v^t, h_w^t, e_{vw})$, $h_v^{t+1} = U_t(h_v^t, m_v^{t+1})$. Based on h_v, the readout phase computes a feature vector for the whole graph as $\hat{y} = R(\{h_v^T \mid v \in G\})$. A standard framework used is the Laplacian-based GCNN, detailed in [42], where M_t and U_t take the form $M_t(h_v^t, h_w^t) = (\deg(v)\deg(w))^{-1/2}\,\tilde{A}_{vw}\,h_w^t$ and $U_t(h_v^t, m_v^{t+1}) = \sigma(W^t m_v^{t+1})$, where $\tilde{A}_{vw}$ is the adjacency matrix describing the connectivity of the graph, assuming that each node is connected to itself, deg(v) is the number of nodes connected to node v, W^t is a learned weight matrix, and σ is a non-linearity [44]. The GraphSAGE method, introduced by [45], is in close relation to the Laplacian-based GCN layers described in the message passing formalization above. The version of GraphSAGE implemented in this paper is the inductive variant of this message passing network, with specific modifications to improve the accuracy and efficiency of the model. GraphSAGE acts in an inductive manner, operating on each node rather than on the entire graph, as it uses each node's local neighborhood in order to learn a function that can generate appropriate node embeddings. This method first samples a fixed number of nodes from the k nearest neighbors of each node and then applies an aggregation operator to transfer information to the node itself (Algorithm 1). The aggregator can be a weighted averaging operation with trainable parameters. This inductive framework is especially helpful in the case of large graphs, where low-dimensional embeddings of the nodes are more important [45]. In this regression application, the readout phase consists of a fully connected neural network that predicts a single global value for each graph based on the hidden state embeddings.
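As an illustration of the mean-aggregator message passing described above, here is a minimal PyTorch sketch of one GraphSAGE-style layer; the class name and dimensions are illustrative, and PyTorch Geometric's `SAGEConv` provides an optimized equivalent:

```python
import torch
import torch.nn as nn

class MeanSAGELayer(nn.Module):
    """GraphSAGE-style layer with a mean aggregator:
    h_v <- ReLU(W * [h_v ; mean_{u in N(v)} h_u])."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, edge_index):
        src, dst = edge_index                       # directed edges u -> v; both
                                                    # directions stored for a mesh
        agg = torch.zeros_like(h)
        agg.index_add_(0, dst, h[src])              # sum of neighbor features per node
        deg = torch.zeros(h.size(0), 1, device=h.device)
        deg.index_add_(0, dst, torch.ones(dst.size(0), 1, device=h.device))
        agg = agg / deg.clamp(min=1.0)              # mean; isolated nodes stay zero
        return torch.relu(self.lin(torch.cat([h, agg], dim=1)))
```

Because the aggregation only depends on each node's local neighborhood, the layer applies unchanged to graphs with different numbers of nodes, which is what makes the method inductive.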
Algorithm 1: GraphSAGE embedding generation algorithm, reproduced from [45]. Input: Graph G = (V, E); input features {x_v, ∀v ∈ V}; depth K; weight matrices W^k, ∀k ∈ {1, ..., K}; non-linearity σ; neighborhood function N: v → 2^V. Output: vector representations z_v, ∀v ∈ V.
1: h_v^0 ← x_v, ∀v ∈ V
2: for k = 1 ... K do
3:   for v ∈ V do
4:     h_{N(v)}^k ← AGGREGATE_k({h_u^{k-1}, ∀u ∈ N(v)})
5:     h_v^k ← σ(W^k · CONCAT(h_v^{k-1}, h_{N(v)}^k))
6:   h_v^k ← h_v^k / ||h_v^k||_2, ∀v ∈ V
7: z_v ← h_v^K, ∀v ∈ V
Flow field meshing usually results in a relatively large graph around the objects in comparison to other common applications such as molecular graphs. Hence, we use a node-level embedding graph convolution operator that is based on the average aggregator in the GraphSAGE framework. Since the mesh has a specific edge connection pattern in which each node is only connected to a few neighbors, the sampling operator is not used. In our convolution operation, the features in the k nearest rings in the neighborhood of each node are transferred to the center node by a trainable aggregation operation (Figure 1b, line 4 in Algorithm 1). Here, we use the PyTorch Geometric [46] library to load the data and implement the graph convolutional layers and pooling. Network Architecture For this problem, we implement a Graph Convolutional Neural Network using GraphSAGE convolutional layers and top-K pooling steps, similar to the approach described in [47]. Different from [47], we use an inductive convolution as opposed to a transductive convolution. Inductive methods are capable of generalizing to graphs with different structures, here allowing for prediction on meshes with varying resolutions. The inputs of the network are graphs with node-level velocity features, while the output is the value of the predicted drag force. Specifically, we use two GraphSAGE layers, each followed by a top-K pooling layer. Top-K pooling is a downsampling method that reduces the size of the layers by selecting the most important features. In a top-K pooling layer, a learned score is assigned to each node, and the nodes with the K highest scores are selected to be passed to the next layer [48]. The output of each pooling layer is pooled twice, once using global mean pooling and once using global max pooling, and the outputs of the two operations are concatenated together. While the input size to the network can vary between samples, as they have various numbers of nodes, the output size should be the same; by pooling along the dimension of vertices, the global pooling operations produce the same output size. The pooled, concatenated vectors are added together as a "skip connection", to reinforce the information contained in the sparse convolved feature maps. The output from this step is then fed to a fully connected network with three hidden layers that predicts the drag force (Figure 1c). The training details of the GCNN are provided in Table 1, and the parameters are defined in [46,49] for interested readers.
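A compact sketch of the described architecture in PyTorch Geometric follows; the hidden sizes, pooling ratio, and layer names are assumptions for illustration, with the actual training details given in Table 1 of the paper:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv, TopKPooling
from torch_geometric.nn import global_mean_pool as gap, global_max_pool as gmp

class DragGCNN(torch.nn.Module):
    """Two SAGEConv + TopKPooling blocks, mean/max readouts, summed skip connection."""
    def __init__(self, hidden=64):
        super().__init__()
        self.conv1 = SAGEConv(2, hidden)            # input: (u, v) velocity per node
        self.pool1 = TopKPooling(hidden, ratio=0.8)
        self.conv2 = SAGEConv(hidden, hidden)
        self.pool2 = TopKPooling(hidden, ratio=0.8)
        self.mlp = torch.nn.Sequential(             # three hidden layers, as in the text
            torch.nn.Linear(2 * hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden // 2), torch.nn.ReLU(),
            torch.nn.Linear(hidden // 2, hidden // 4), torch.nn.ReLU(),
            torch.nn.Linear(hidden // 4, 1))

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x, edge_index, _, batch, _, _ = self.pool1(x, edge_index, batch=batch)
        x1 = torch.cat([gap(x, batch), gmp(x, batch)], dim=1)   # readout 1
        x = F.relu(self.conv2(x, edge_index))
        x, edge_index, _, batch, _, _ = self.pool2(x, edge_index, batch=batch)
        x2 = torch.cat([gap(x, batch), gmp(x, batch)], dim=1)   # readout 2
        return self.mlp(x1 + x2).squeeze(-1)        # sum of readouts = skip connection
```

Summing the two pooled readouts before the fully connected network is one way to realize the "skip connection" the text describes, since the earlier, less convolved embedding is added directly to the final one.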
Data In order to test the performance of our method, we aim to predict the drag force on the airfoils directly from the unstructured flow field velocity. To generate the airfoils, coordinate files are extracted from the UIUC airfoil database, which contains the cartesian coordinates outlining the shape of each airfoil [50]. Incomplete or non-meshable samples are then removed from the dataset. In addition, the geometries are normalized to have a unit chord length. Next, each airfoil coordinate file is imported into the open-source mesh generator GMSH [51]. Meshes are created to reflect the variation in the density of information contained in the domain, with a finer mesh in the area close to the airfoil, which resolves the complex boundary layer effects, and a coarser mesh further from the airfoil, where the flow is minimally affected by the presence of the airfoil. To compute the velocities at mesh nodes around each object and the corresponding drag force, we perform CFD simulations using the FEniCS [52] package. FEniCS supports the DOLFIN PDE solver, which is used to solve the incompressible Navier-Stokes equations with an Incremental Pressure Correction Scheme (IPCS) method [53]. The boundary conditions for these CFD simulations are a uniform velocity input of 1.5 m/s at the inlet (left), a far-field pressure condition at the outlet (right), and slip conditions at the top and bottom interfaces (Figure 2). The viscosity and the density of the flow are 0.001 Pa·s and 1 kg m⁻³, respectively. Selected airfoil samples with their velocity magnitude field, along with their drag force, are provided in Table 2. There is a positive correlation between the thickness of the airfoil and the corresponding drag force. The drag on an airfoil A is calculated as $F_D = \oint_{A} (\sigma \cdot \mathbf{n}) \cdot \mathbf{e}_x \, \mathrm{d}A$, where σ is the Cauchy stress tensor, e_x is the horizontal unit vector, and n is the unit vector normal to the airfoil surface. A grid convergence study is used to choose a specific mesh size that fully resolves the boundary layer effect while minimizing the computational time required. The resulting meshes contain 900-1500 mesh points and 3000-4000 edges, generated in effectively random spatial positions surrounding the airfoil. The incompressible Navier-Stokes equations are given by $\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu \nabla^{2}\mathbf{u}$ together with the continuity constraint $\nabla\cdot\mathbf{u} = 0$ (Equation 6). To predict the velocity field at the next time-step (u^{n+1}) from an existing time-step (u^{n}) while enforcing mass conservation, an Incremental Pressure Correction Scheme (IPCS) is used to iteratively solve Equation 6. A detailed description of the IPCS method can be found in [53].
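For readers reproducing the setup, below is a minimal sketch of assembling the drag integral in legacy FEniCS/DOLFIN; the boundary-marker id `AIRFOIL` and the variable names are assumptions, and the sign convention depends on the orientation of the facet normal on the airfoil surface:

```python
from dolfin import (FacetNormal, Constant, Identity, Measure,
                    assemble, dot, sym, grad)

AIRFOIL = 5                       # illustrative boundary-marker id
mu = Constant(0.001)              # dynamic viscosity [Pa*s], as in the text

def drag_force(u, p, mesh, boundary_markers):
    """Integrate (sigma . n) . e_x over the airfoil surface."""
    n = FacetNormal(mesh)         # outward normal of the fluid domain
    e_x = Constant((1.0, 0.0))    # horizontal unit vector
    sigma = 2 * mu * sym(grad(u)) - p * Identity(2)   # Cauchy stress tensor
    ds_body = Measure("ds", domain=mesh, subdomain_data=boundary_markers)
    # Minus sign: the traction the fluid exerts on the body acts along -n.
    return assemble(-dot(dot(sigma, n), e_x) * ds_body(AIRFOIL))
```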
The CFD outputs of each sample are then processed to generate the matrix of node-level horizontal and vertical velocities as well as the adjacency matrix. However, storing an N × N adjacency matrix is memory-intensive. To bypass this issue, we instead store a matrix of dimension 2 × E, where E is the number of edges, compactly encoding the adjacency matrix. This compact representation only stores the two nodes that each edge connects in the graph. This representation is specifically helpful where the adjacency matrix is sparse, which is true for the graphs in our dataset, as there are over 1200 nodes in each graph and each node is only connected to five other nodes on average. We apply our approach to two different sets of data. The first study is intended to examine only the geometry of the airfoils; accordingly, the dataset consists of 1550 different airfoils. The second dataset, however, covers not only different geometries but also various angles of attack: 21 angles of attack are considered for 522 airfoils (10962 samples in total). Angles of attack are varied in the range of −10° to 10° with an increment of 1°. Given the relatively low velocity in the domain and the small angles of attack, the flow regime is laminar, and no significant flow separation occurs. Experiments Before implementing our method on the airfoil dataset, we perform a study of the relationship between the airfoil geometry and its drag force while other parameters are held constant. Without considering the effect of the angle of attack, we show the drag force to have a positive correlation with the thickness of the airfoil (Figure 3a). However, this correlation, on its own, cannot explain a sufficient portion of the variance in the drag force. In order to determine the geometric features influencing the magnitude of the drag force, we conduct a principal component analysis on the geometry of the airfoils and label the samples by their drag forces. Using Principal Component Analysis (PCA), a linear dimensionality reduction technique, we extract components that can best describe the data variance in an unsupervised manner. While PCA is not generally interpretable, here we can observe the correlation of the main components with the drag labels (Figure 3b). For the first experiment, we use the aforementioned GCNN architecture to predict the drag force for the dataset of airfoils at zero angle of attack. 80% of the samples are randomly selected for training and the remainder are used as a test set. Two complementary metrics, the mean squared error (MSE) of drag prediction and the coefficient of determination (R²), are used to quantitatively evaluate the performance. Figure 4a shows the evolution of the loss metric as training progresses. The use of skip-connections in the architecture improves the model's accuracy. Using node-level velocities as the input to the network, the graph convolutional neural network is expected to detect the most important features from the flow field data to accurately estimate the drag force. To illustrate the node embeddings produced by the convolution network, we analyze the values at the input to the first fully connected layer in the trained network, which is the averaged output of the convolution layers.
To do so, we perform a principal component analysis on the features to detect the two most important components that determine the drag force. It is worth emphasizing that there is no geometrical feature directly encoded in the input of the network. The smooth transition of drag values across the two components and the depiction of the geometry of the samples indicate that the network could learn meaningful geometrical features from flow field data. Here, the first two principal components can explain more than 90% of the variance in the dataset. The first component encodes a measure of airfoil thickness and the second component is an approximate measure of how quickly the airfoil tapers (Figure 4b). The network is also implemented on the second dataset, containing airfoils that vary in geometry and angle of attack, adding complexity to the prediction task. A comparison of the ground truth values of the drag forces from the CFD results and the predictions from the network qualitatively shows the high accuracy of the model for both datasets (Figure 5). In addition to GCNNs, other machine learning and neural network algorithms can be applied to the flow field velocity data to solve the regression problem of drag prediction. Notice that the graph size and node order are not a matter of importance for the GCNN, as we pass the adjacency and feature matrices directly to the model. However, non-graph based methods require a specified input size, as well as an identical node order between samples. Since the model perceives each input element as a different feature, it cannot be trained unless the order of the node elements is consistent between samples. In order to benchmark the GCNN's performance against traditional, structured machine learning methods, we construct a node ordering that is consistent across samples. Traditional machine learning methods require the construction of a feature matrix, where the information described by a single feature (i.e., a single column of the feature matrix) must be consistent from sample to sample. In this formulation, feature i in sample x represents the same information as feature i in sample y. The node ordering is provided by the mesh generation software based on the order in which the nodes are generated, and is the same for each individual sample. Therefore, since the spatial density of the nodes in each sample is similar, this creates a matrix where node i in sample x is relatively close in space to node i as it appears in all other samples. To create a matrix with the same number of features for each sample, the closest 1000 nodes to the center of the airfoil are taken as the feature vector, as there are at least 1000 node measurements in each individual sample. To quantify the spatial similarity of nodes of the same index in this dataset, the distances between nodes of the same index are computed. After comparing the distance from node i in sample x with node i across all other samples, 98% of these distances are smaller than 5 × 10⁻³ L, where L is the length of the domain. This indicates that the position of an arbitrary node is approximately consistent across samples.
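The fixed-size feature construction used for the non-graph baselines can be sketched as follows; the `k=1000` cutoff matches the text, while the airfoil-center variable and function name are assumptions for illustration:

```python
import numpy as np

def baseline_features(node_xy, node_uv, center, k=1000):
    """Build an order-consistent feature vector for non-graph models.

    node_xy: (N, 2) node coordinates, in mesh-generator order
    node_uv: (N, 2) velocity components at the nodes
    center:  (2,) approximate airfoil center
    Returns the velocities at the k nodes closest to the center, flattened.
    """
    d = np.linalg.norm(node_xy - center, axis=1)
    nearest = np.argpartition(d, k - 1)[:k]   # indices of the k closest nodes
    nearest = np.sort(nearest)                # restore mesh-generator ordering
    return node_uv[nearest].reshape(-1)       # shape (2k,)
```

Sorting the selected indices preserves the generator-defined node order that makes feature i comparable across samples, which is the property the preceding paragraph relies on.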
To test the performance of non-graph based methods, we select a variety of the most commonly used methods for performing prediction. Specifically, we compare the performance of Gradient Boosted Random Forest regression, a fully connected neural network, and a two-dimensional convolutional neural network in predicting the drag force based on a matrix of node features that adheres to the previously defined structure. Some basic details of these models are provided in Table 3. A comparison shows that the GCNN approach outperforms the non-graph based methods (Figure 6). Conclusion We have introduced a novel approach based on graph convolutional neural networks for data-driven prediction using unstructured field information. This method is able to take advantage of the properties of convolution, such as automatic feature detection and parameter sharing, while being applied to unstructured data. Flow field properties are usually measured at sparsely scattered points, leading to unstructured data that are incompatible with traditional machine learning algorithms, as those can only be applied to structured data. To evaluate the proposed model, the drag forces of two-dimensional airfoils are estimated based on the horizontal and vertical components of the flow velocities, measured on the nodes of the irregular mesh around the airfoils. The result of this experiment demonstrates the capability of this approach for global property prediction based on flow field data in similar scenarios. Our model can potentially be extended to experimental cases where access to certain flow information is not readily available. With the currently implemented framework, only velocity information is used to calculate the drag. For instance, the required velocity information can be determined experimentally by analyzing the motion of a sparse set of tracer particles in the flow. By formalizing the edge connections between tracer observations as a connection from each measurement to the k nearest measurements, one can extend the framework of the Graph Convolutional Neural Network to predict body forces. The proposed idea of graph representation of flow field data can further be used for prediction or classification of other field properties, whether they are global, such as the drag force, or locally defined on the field. The algorithm can also be used for optimizing desired properties in design and control applications. Funding Sources The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. This work is supported by the start-up fund provided by CMU Mechanical Engineering and funding from Sandia National Laboratories. Figure 2: a) A sample airfoil from the dataset, at a 10° angle of attack. The flow velocity past the airfoil is U∞ = 1.5 m/s. b) A schematic of the domain used to generate the data, where c refers to the chord length of the airfoil. The airfoil is placed in a domain with a constant horizontal inflow velocity of U∞ = 1.5 m/s, and a pressure-based boundary condition is used at the outlet of the domain. A free-slip boundary condition is used at the walls of the domain. Figure 3: a) Drag force plotted against the corresponding thickness of 1500 airfoils at a 0° angle of attack. b) Principal component analysis on the coordinates of 1500 UIUC airfoils at zero angle of attack, with samples colored by the drag force of the corresponding airfoil.
Figure 4: a) The training process of the model. Skip connections increase the accuracy of the model by reinforcing the information at the output of the final convolutional layer with the embedding created earlier in the model. NMSE: Normalized Mean Square Error. b) Principal component analysis on the node features of 1500 airfoils at zero angle of attack, after the graph convolution process. Samples are colored by their drag force. Specific airfoils at the extremes of either principal component are visualized. Figure 5: A comparison of graph convolutional neural network predictions with the ground truth drag force. The first dataset contains 1500 samples of different airfoils at zero angle of attack. a) Data from the training set, b) Data from the test set. The second dataset consists of 5000 samples from 500 airfoils at angles of attack ranging from −10° to 10°, at intervals of 1°. c) Data from the training set, d) Data from the test set. Figure 6: A comparison of the performance of different prediction algorithms. The dataset consists of 1500 airfoils at zero angle of attack (GB: gradient boosted random forest, MLP: multilayer perceptron, CNN: convolutional neural network, GCNN: graph convolutional neural network). Table 2: The velocity field and drag force for four different airfoil samples from the dataset. Table 3: Details of the trained models.
6,563.2
2020-12-03T00:00:00.000
[ "Computer Science" ]
Hepatitis B Reactivation Rate and Fate Among Multiple Myeloma Patients Receiving Regimens Containing Lenalidomide and/or Bortezomib Objective: Reactivation of the hepatitis B virus (HBV) refers to an increase in HBV replication in a patient with inactive or resolved HBV. In this retrospective study, our aim is to present and compare HBV reactivation in multiple myeloma (MM) patients who received lenalidomide and/or bortezomib at any time during treatment, evaluate the factors associated with reactivation, and demonstrate the outcome of patients. Materials and Methods: We evaluated 178 MM patients who received lenalidomide (n=102) and/or bortezomib (n=174) during their treatment schedules. The HBsAg, anti-HBc, anti-HBs, HBeAg, and anti-HBe were detected by chemiluminescence by ARCHITECT lab analyzers using commercially available kits (Abbott, USA). HBV-DNA titers were determined by quantitative PCR. The results were evaluated by IBM SPSS Statistics for Windows, Version 20.0 (IBM Corp., Armonk, NY, USA). Results: HBV reactivation was diagnosed in 6 patients (3%) after bortezomib and in 8 patients (8%) after bortezomib and lenalidomide. Three of the patients in each group had HBsAg+, HBeAg+, AntiHBeAg-, AntiHBc-, and AntiHBS+ status, whereas 5 patients in the bortezomib- and lenalidomide-treated group and 3 patients in the bortezomib-treated group had HBsAg-, HBeAg-, AntiHBeAg-, AntiHBc-, and AntiHBS+ status prior to treatment. There were no statistical differences observed between HBV reactivation in the bortezomib-treated or bortezomib- and lenalidomide-treated groups in terms of age at diagnosis, sex, International Staging System subtype, frequency of extramedullary disease, dialysis requirement, or receiving of autologous stem cell transplantation. In patients who received antiviral prophylaxis, a higher incidence of HBV reactivation was detected in HBsAg-positive patients compared to HBsAg-negative patients (4/4, 100% vs. 2/7, 29%; p=0.045). The 3-year and 5-year overall survival rates were similar in patients with or without HBV reactivation (83% vs. 84%, 73% vs. 74%, p=0.84). Conclusion: Close follow-up is recommended for not only HBsAg-positive but also HBsAg-negative patients. Introduction The hepatitis B virus (HBV) represents a serious health concern worldwide. HBV is intermediately endemic in Turkey, where seropositivity of the hepatitis B surface antigen (HBsAg) has been reported to range between 2% and 7% [1,2]. When there is an increase in HBV replication in a patient with inactive or resolved HBV, this is referred to as reactivation of HBV. Commonly, it occurs in HBsAg-positive cancer patients; HBsAg-negative patients with positive anti-hepatitis B core antibody (anti-HBc) and/or anti-hepatitis B surface antibody (anti-HBs) also carry an increased risk [3,4,5,6]. Cytotoxic chemotherapy, monoclonal antibody treatments, and bone marrow transplantation have been demonstrated as risk factors for HBV reactivation [7,8,9,10]. HBV infection may result in severe hepatic dysfunction and fulminant hepatitis [11,12]. In current treatment guidelines, a prophylactic nucleoside analogue is recommended to be continued for at least 6 months after discontinuation of immunosuppressive therapy [13,14]. Multiple myeloma (MM) is characterized by malignant proliferation of plasma cells. Bortezomib, a proteasome inhibitor that disrupts the cell-signaling pathways, has shown antimyeloma activity and has been recommended as a standard treatment in patients with newly diagnosed and relapsed MM [15]. 
Lenalidomide is a potent oral immunomodulatory drug with direct tumoricidal, anti-angiogenic, and immunostimulatory effects [16]. Both bortezomib and lenalidomide show remarkable activity in MM patients with manageable toxicity profiles. There are several case reports and studies on MM showing HBV reactivation under bortezomib treatment [17,18,19], but the literature is scarce regarding HBV reactivation after lenalidomide treatment. In this retrospective study, our aim is to present and compare HBV reactivation in our MM patients who received lenalidomide and/or bortezomib at any time during treatment, evaluate the factors associated with reactivation, and demonstrate the outcome of patients. Materials and Methods We retrospectively included 178 MM patients who were diagnosed between 2002 and 2015 at the Ankara University Faculty of Medicine's Department of Hematology. Informed consent was obtained from all participants. International Staging System (ISS) scores, hemoglobin and lymphocyte counts, extramedullary involvement, and plasma cell percentage in bone marrow were recorded at the initiation of chemotherapy. The patients' data were analyzed via electronic medical records. All patients received lenalidomide and/or bortezomib during their treatment schedules, whether for induction, relapse, or post-induction maintenance. Hepatitis B surface antigen (HBsAg), hepatitis B core antibody (anti-HBc), hepatitis B surface antibody (anti-HBs), hepatitis B e-antigen (HBeAg), and hepatitis B e-antibody (anti-HBe) were detected by chemiluminescence on ARCHITECT lab analyzers using commercially available kits (Abbott, USA) before each line of chemotherapy. HBV DNA titers were determined by quantitative PCR. Patients with active hepatitis B prior to chemotherapy were excluded from the study. If a patient was HBsAg-positive before chemotherapy, or HBsAg-negative but positive for anti-HBc, HBeAg, and/or anti-HBe, a prophylactic antiviral drug was administered during and for at least 6 months after chemotherapy. Hepatitis B serologies were closely monitored in patients who were HBsAg-negative but seropositive for anti-HBc and/or anti-HBs, both before autologous peripheral stem cell transplantation and if liver enzyme abnormality occurred, to determine reactivation. Reactivation was defined as 1) loss of anti-HBs and reappearance of HBsAg in HBsAg-negative and/or anti-HBs-positive patients and 2) an increase of the HBV DNA level by at least a factor of 10, or an absolute HBV DNA count reaching 1×10⁹ copies/mL. Antiviral treatment was initiated as soon as reactivation was detected. None of the patients had received hepatitis B vaccinations. Statistical Analysis The results were evaluated by IBM SPSS Statistics for Windows, Version 20.0 (IBM Corp., Armonk, NY, USA). All numerical values are given as medians with distribution ranges. We used the Pearson chi-square test or the Fisher exact test to compare categorical variables. The Kaplan-Meier method was used for survival curves. In evaluating the results, p<0.05 was considered statistically significant. Results Among all subjects, HBsAg was positive in 4 patients (2%) at diagnosis. Among HBsAg-positive patients, 3 patients had HBV DNA levels of >1000 IU/mL. For prophylaxis, patients received either 100 mg of lamivudine (n=2) or 245 mg of tenofovir (n=2) daily, which was continued for 6 months after termination of treatment for MM, except in 1 patient who died of infection in the second month of chemotherapy.
Among HBsAg-negative patients who were positive for anti-HBc, anti-HBe, or HBeAg (n=7), 6 patients received 100 mg/daily lamivudine, and 1 patient had entecavir at 0.5 mg/daily for prophylaxis, which was prolonged for 6 months after treatment of MM. All HBsAg-negative patients had HBV DNA levels of <500 IU/mL. No significant differences were observed in sex, age at diagnosis, ISS stage, subtype, frequency of extramedullary disease, or dialysis requirements between HBsAg-positive and HBsAg-negative patients. Hepatitis B reactivation was observed in 14 patients (8%). The patients' HBV and prophylaxis statuses at diagnosis are summarized in Table 2. The median time from diagnosis to hepatitis B reactivation was 32 months (range: 2-78). Of 174 bortezomib-treated patients, 6 had HBV reactivation (3%). HBV reactivation was detected in 8 of the 98 patients who received lenalidomide and bortezomib (8%). Reactivation developed in 4 patients (100%) who were HBsAg-seropositive at diagnosis, while 10 patients (6%) were initially HBsAg-negative. HBsAg-positive patients who received prophylaxis had a significantly higher incidence of HBV reactivation (Table 1). Details of patients with HBV reactivation are given in Tables 3 and 4. Patient number 5 in Table 4 had HBV reactivation under lamivudine prophylaxis and died because of bacterial infection following 2 months of chemotherapy. Chemotherapies were suspended until liver function tests and HBV DNA levels had decreased. Baseline characteristics, including MM subtype, extramedullary disease, median age, sex, ISS, incidence of herpes infection, and auto-HSCT, did not differ between the bortezomib- and lenalidomide-treated vs. bortezomib-treated groups that had HBV reactivation. Lenalidomide treatment was interrupted in 4 (50%) of the patients due to progression of disease. Except for 1 patient, all patients underwent autologous stem cell transplantation (ASCT), and 1 patient who received a second ASCT for secondary refractory disease had progression to cirrhosis following high-dose melphalan. After treatment with tenofovir, HBV DNA titers decreased in all patients and became undetectable in 4 of the 8 patients. Of the patients treated with only bortezomib, all received dexamethasone, and 4 of 6 underwent ASCT. Progression of disease after bortezomib was detected in 2 patients. Among these 6 patients, 4 were treated with tenofovir (2 achieved HBV DNA negativity), and the other 2 were treated with lamivudine. The response could not be evaluated for patient number 5, because she died of infection within 2 months of the initiation of chemotherapy (Tables 3 and 4). Discussion Generally, HBV reactivation has been documented in HBsAg-positive cancer patients [20]. In one study, the rate of HBsAg seropositivity in MM cases was higher than in patients with acute leukemia [21]. Antiviral prophylaxis is the critical step in managing HBsAg-positive patients undergoing systemic chemotherapy [13,22]. Clinical studies showed a reduction of the HBV activation rate, the severity of hepatitis, and mortality with prophylaxis [23,24]. The American Gastroenterological Association suggests antiviral drugs with high barriers to resistance, rather than lamivudine, for at least 6 months in high-risk patients [14]. Previously, in our experience, because HBV reactivation in a lamivudine-untreated group occurred 12 months after the individual's chemotherapy had been discontinued, lamivudine prophylaxis was maintained for a year following discontinuation of any chemotherapy [25,26].
The choice of lamivudine or a shorter duration of prophylaxis might have caused the HBV reactivation that occurred in all HBsAg-positive patients who received prophylaxis in this cohort. One patient with HBV reactivation died under lamivudine prophylaxis within 2 months of chemotherapy. Recent data have shown HBV reactivation in HBsAg-negative lymphoma patients who received rituximab plus steroid combination chemotherapy [3,4,27]. Lee et al. [28] demonstrated HBV reactivation in 5.2% of 230 MM patients. All of these patients had HBsAg-negative/anti-HBc-positive serology. Similarly, we found that the incidence of HBV reactivation in HBsAg-negative patients was 6%. The preferred prophylaxis was lamivudine in HBsAg-negative patients. This is the first study of HBV reactivation with the recently developed agents lenalidomide and bortezomib in MM, and we observed an incidence of HBV reactivation of 8%. HBV reactivation after bortezomib was described in previous case reports [17,18,19]. Mya et al. [29] found an incidence of HBV reactivation of 5.5% in 273 MM patients after bortezomib and dexamethasone salvage therapy; one of the HBV reactivation cases was initially HBsAg-negative. Li et al. [30] conducted one of the largest retrospective studies of HBV reactivation in patients who received regimens containing bortezomib. HBV reactivation was observed in 6 HBsAg-positive and 2 HBsAg-negative cases from a total of 139 patients. Overall survival and progression-free survival were shorter in HBsAg-positive MM patients compared to HBsAg-negative patients (p<0.01) [30]. We did not detect any survival advantage in HBsAg-negative patients in our study. Bortezomib dysregulates the cell-mediated immunity that plays an important role in the suppression of varicella zoster virus reactivation [31]. HBV is another DNA virus that remains dormant in human hosts. Bortezomib may promote HBV reactivation by altering the number and functions of CD8 T cells and CD56 NK cells [29]. In addition, MM itself causes immunodeficiency that involves various parts of the immune system, including B, dendritic, T, and NK cells. Since the patients in our study were heavily pretreated, and there was no control group assigned for patients not receiving either bortezomib or lenalidomide, it is not clear whether the HBV reactivation was driven by bortezomib and/or lenalidomide. Multiple lines of treatment may cause severe immunosuppression that results in an increased risk of HBV reactivation [33]. Auto-HSCT was shown to be a risk factor for HBV reactivation in several reports. Uhm et al. [34] retrospectively analyzed changes in HBV serology prior to and following auto-HSCT and concluded that 6 of 129 HBsAg-negative MM patients became HBsAg-positive, possibly related to dysfunction of humoral immunity. Lee et al. [28] determined auto-HSCT to be an independent risk factor (p=0.025) for HBV reactivation and suggested that regular monitoring should be considered in patients who underwent auto-HSCT [28]. However, we did not find a significant correlation between HBV reactivation and auto-HSCT. HBV reactivation may be variable, ranging from mild clinical findings to hepatic failure. Development of fatal hepatitis following HBV reactivation was reported in CD20-positive lymphoma patients who received rituximab and steroid combination treatment [7,27]. Yoshida et al. [35] described HBV reactivation in 2 HBsAg-seronegative MM patients resulting in liver damage.
Similarly, one of our heavily pretreated patients with HBV reactivation had liver damage progressing to cirrhosis following a second ASCT treatment. Conclusion We found that the incidence of HBV reactivation was notable in patients who received lenalidomide- and/or bortezomib-based chemotherapy. Most of the patients were heavily pretreated, which might have caused immune deficiencies. HBV reactivation was diagnosed in both HBsAg-positive and HBsAg-negative patients.
2,889.6
2019-07-31T00:00:00.000
[ "Medicine", "Biology" ]
LINC00858 promotes colon cancer progression through activation of STAT3/5 signaling by recruiting transcription factor RAD21 to upregulate PCNP The purpose of our investigation is to explore the putative molecular mechanisms underpinning LINC00858 involvement in colon cancer. The expression of LINC00858 in TCGA data was identified using the GEPIA website. Cancerous colon tissues were clinically collected. The expression of LINC00858, RAD21, and PCNP in colon tissues or cells was determined using RT-qPCR. The interactions among LINC00858, RAD21, and the PCNP promoter region were determined by means of RNA pull-down, RIP, and ChIP assays. Cell proliferative, apoptotic, invasive, and migratory capabilities were evaluated. Western blot was conducted to determine RAD21, PCNP, phosphorylated (p)-STAT3, STAT3, p-STAT5, STAT5, and apoptosis-related proteins. A nude mouse model of colon cancer was constructed and the tumorigenesis of colon cancer cells was observed. LINC00858 was upregulated in cancerous tissues and cells. LINC00858 recruited the transcription factor RAD21. Overexpression of LINC00858 promoted the binding of RAD21 to the PCNP promoter region, which increased the expression of PCNP. Silencing of RAD21 or PCNP reversed the promoting effect of LINC00858 on disease initiation and development. PCNP silencing inhibited the proliferative ability and promoted the apoptotic ability of cancerous cells via STAT3/5 inhibition, which was reversed by colivelin-activated STAT3. In vivo experiments further verified that LINC00858 enhanced the tumorigenicity of colon cancer cells in vivo by regulating the RAD21/PCNP/STAT3/5 axis. This indicates the promoting role of LINC00858 in colon cancer progression through activating the PCNP-mediated STAT3/5 pathway by recruiting RAD21. INTRODUCTION Colon cancer is recognized as the third most frequently occurring cancer, with high incidence and mortality [1]. Dietary pattern is regarded as an important risk factor for the development of colon cancer [2]. Approximately 20% of all patients with colon cancer are diagnosed at a metastatic stage [3], and unfortunately, the mainstay of therapy, surgery in combination with subsequent adjuvant chemotherapy, presents significant side effects and risks [4]. Thus, the discovery of novel molecular targets for colon cancer therapy is of high clinical importance. Long non-coding RNAs (lncRNAs) are considered to be mRNA-like non-protein-coding RNAs transcribed throughout eukaryotic genomes that possess the capability to regulate gene expression [5]. As reported previously, LINC00858 is upregulated in colon cancer, where it can enhance proliferative and invasive abilities and metastatic potential [6]. RAD21 is identified as a part of the cohesin complex, which is crucial for chromosome segregation as well as error-free DNA repair [7]. Cohesin RAD21 haploinsufficiency was found to affect a variety of tumor-initiating events, and RAD21 was found to be a crucial transcriptional regulator of pivotal genes in colorectal cancer [8]. PEST-containing nuclear protein (PCNP), a novel nuclear protein, is involved in cell proliferative capability and tumorigenic ability [9]. Notably, PCNP was discovered as a differentially expressed gene associated with lymph node involvement in colon cancer [10]. Overexpressed PCNP is shown to upregulate the signal transducer and activator of transcription (STAT)3/5 pathway and inhibit lung adenocarcinoma cell apoptosis [11].
STAT proteins are regarded as latent transcription factors residing in the cytoplasm of a variety of cells [12]. In colon carcinoma, STAT3 and STAT5 were found to be abnormally expressed, and the STAT3/STAT5 expression ratio was suggested as a potential independent prognostic marker [13]. Considering the aforementioned findings together, we formulated and investigated the hypothesis that the lncRNA LINC00858 may affect the progression of colon cancer through the PCNP- and RAD21-regulated STAT3/5 pathway. RESULTS LINC00858 was expressed highly in colon cancer, and its silencing suppressed the proliferative, migratory, and invasive potentials of colon cancer cells while inducing their apoptosis Using the web-based bioinformatics tool GEPIA, LINC00858 was found to be highly expressed in colon cancer (Fig. 1A). This finding was verified by RT-qPCR results (Fig. 1B) showing that colon cancer tissues presented obviously higher LINC00858 expression than adjacent tissues. Moreover, we observed higher LINC00858 expression in the four colon cancer cell lines used (HCT166, SW480, Caco2, and SW620) relative to that in the NCM460 normal colon cell line. SW480 and HCT166 cells were selected for subsequent experiments due to their higher LINC00858 expression among the colon cancer cell lines (Fig. 1C). To investigate the effects of LINC00858 on cancer cell function, we knocked down LINC00858 in SW480 and HCT166 cells. RT-qPCR results showed that, relative to cells used as blank control and sh-NC-treated cells, lower expression of LINC00858 was identified in the cells following treatment with sh-LINC00858#1 or sh-LINC00858#2 (Fig. 1D). As the effects of sh-LINC00858#1 knockdown were more robust, this sequence was used in the following experiments. As identified by CCK-8 assay and EdU staining, relative to the cells treated with sh-NC, the proliferative ability of the SW480 and HCT166 cells after sh-LINC00858 transfection was lower (Fig. 1E, F). TUNEL staining results revealed that sh-LINC00858 promoted the apoptosis of SW480 and HCT166 cells relative to sh-NC treatment (Fig. 1G). As depicted by Western blot assay, compared with SW480 and HCT166 cells given sh-NC treatment, the cleaved caspase-3/total caspase-3 and cleaved PARP/total PARP levels were increased by sh-LINC00858 (Fig. 1H, and Supplementary Fig. 1A). In addition, as detected by Transwell assay, the migratory and invasive abilities of SW480 and HCT166 cells were reduced by sh-LINC00858 treatment (Fig. 1I). In sum, these results indicated an upregulation of LINC00858 expression in colon cancer, and its knockdown can reduce the proliferative, migratory, and invasive abilities of colon cancer cells while increasing cancer cell apoptosis. LINC00858 recruited RAD21 in colon cancer cells To elucidate the mechanism of LINC00858 in colon cancer, LncMAP was used to predict the possible binding proteins of LINC00858 (Fig. 2A). Among those, RAD21 has been documented as highly expressed in multiple malignant tumors, including colon cancer, endometrial cancer, and prostate cancer, and is shown to promote cancer occurrence and development (Fig. 2B) [14][15][16]. Fig. 1 LINC00858 is highly expressed in colon cancer and silencing of LINC00858 inhibits proliferation, migration, and invasion of colon cancer cells. A LINC00858 expression in colon cancer determined in silico using GEPIA. Red boxes represent CC samples, and the gray boxes represent normal samples. B Expression of LINC00858 in cancer tissues and normal adjacent tissues of patients with colon cancer (n = 50) quantified by RT-qPCR, *p < 0.05 vs. normal adjacent tissues. C LINC00858 expression in a human normal colon cell line (NCM460) and colon cancer cell lines (HCT166, SW480, Caco2 and SW620) quantified by RT-qPCR. *p < 0.05, **p < 0.01 vs. NCM460 cell line. D The expression of LINC00858 in SW480 and HCT166 cells quantified by RT-qPCR. E CCK-8 assay for measuring the viability of SW480 and HCT166 cells in each group. F EdU staining to determine the proliferation of SW480 and HCT166 cells in each group. G TUNEL staining to detect the apoptosis of SW480 and HCT166 cells in each group. H Western blot assay to detect the expression of the apoptosis-related proteins cleaved caspase-3/total caspase-3 and cleaved PARP/total PARP in SW480 and HCT166 cells from each group. I Transwell assay to assess the migration and invasion abilities of SW480 and HCT166 cells in each group. *p < 0.05. Statistical comparisons between adjacent tissues and cancer tissues were conducted using the paired t test, and other two-group data were compared using the unpaired t test. Data from multiple groups were compared using one-way ANOVA with Tukey's post hoc tests. Repeated-measures ANOVA with Bonferroni's post hoc test was used to compare data obtained at different time points. All cell experiments were repeated three times. As shown by Western blot assay, the overexpression or knockdown of LINC00858 in SW480 and HCT166 cells did not affect the expression of RAD21 (Fig. 2C, and Supplementary Fig. 1B), suggesting that LINC00858 does not directly regulate the expression of RAD21 to participate in colon cancer development. To examine whether LINC00858 interacts with RAD21, we assessed the intracellular localization of LINC00858 and RAD21 by RNA-FISH. The results showed that LINC00858 and RAD21 were co-located in the nucleus (Fig. 2D). RIP and RNA pull-down assays detected that LINC00858 could bind to the RAD21 protein (Fig. 2E, F). These results reflected that LINC00858 can bind to RAD21 in colon cancer cells. LINC00858 promoted PCNP transcription by recruiting RAD21 RAD21 is an important transcription factor, which regulates the expression of downstream genes by regulating their transcriptional activities. Therefore, we speculated that although LINC00858 does not affect the expression of RAD21, it might regulate the expression of downstream proteins by affecting its transcriptional activity. Thus, we predicted the downstream target genes of the transcription factor RAD21. We found 51 downstream target genes via the LncMAP website and 24,282 target genes of RAD21 via the hTFtarget website, and obtained 48 candidate genes by considering the intersection of the target genes derived from the two sources (Fig. 3A). To further screen the candidate genes, we constructed a co-expression network of the genes using GeneMANIA (Fig. 3B), and selected 16 candidate genes with high gene evaluation scores (Supplementary Table 1). Next, expression heatmaps of the 16 candidate genes in colon cancer included in the TCGA dataset were constructed using UALCAN (Fig. 3C). The analysis showed that, as compared with normal colon tissues, colon cancer tissues showed high expression of PCNP (Fig. 3D). Concurring results were obtained by analyzing the PCNP protein level in cancerous tissues using Western blot analysis (Fig. 3E). In addition, as compared with the normal colon cell line, PCNP expression was elevated in the HCT166 and SW480 cells (Fig. 3F).
Notably, the overexpression of LINC00858 upregulated the PCNP level in HCT166 and SW480 cells, while LINC00858 knockdown had the opposite effect (Fig. 3G, and Supplementary Fig. 1C). Based on these findings, LINC00858 appears to promote the transcription of PCNP by recruiting RAD21. To confirm this notion, we used a RAD21-specific antibody to obtain the chromatin complex from SW480 cells and detected the enrichment of RAD21 in the promoter region of PCNP. ChIP assay revealed the enrichment of RAD21 in the PCNP promoter region; LINC00858 overexpression increased this binding, while knockdown of LINC00858 produced the opposite result (Fig. 3H). These data verified that LINC00858 is able to recruit RAD21 to promote PCNP transcription.

LINC00858 promoted the malignant phenotype of colon cancer cells and inhibited apoptosis by regulating the RAD21/PCNP axis

Next, we explored the involvement of the LINC00858/RAD21/PCNP axis in colon cancer pathogenesis. Relative to oe-NC + si-NC treatment, the expression of LINC00858 and PCNP in HCT166 and SW480 cells was higher after oe-LINC00858 + si-NC treatment, while RAD21 expression showed no obvious difference. In comparison to the oe-LINC00858 + si-NC treatment, LINC00858 expression exhibited no obvious difference, whereas PCNP and RAD21 expression was reduced in the HCT166 and SW480 cells after oe-LINC00858 + si-RAD21 treatment. The oe-LINC00858 + si-PCNP treatment presented no obvious influence on LINC00858 and RAD21 expression in HCT166 and SW480 cells, but PCNP expression was lower when compared with the oe-LINC00858 + si-NC treatment (Fig. 4A-D, and Supplementary Fig. 1D). Moreover, as suggested by CCK-8 and flow cytometry assays, relative to cells following oe-NC + si-NC transfection, the proliferative capability of HCT166 and SW480 cells following oe-LINC00858 + si-NC treatment was greater, while the apoptotic rate was decreased. As compared with the oe-LINC00858 + si-NC treatment, the proliferative rate was lower in the cells after oe-LINC00858 + si-RAD21 and oe-LINC00858 + si-PCNP treatments, while the apoptotic rate was higher (Fig. 4E, F). Western blot assay data showed that the expression of cleaved caspase-3/total caspase-3 and cleaved PARP/total PARP was lower in the HCT166 and SW480 cells treated with oe-LINC00858 + si-NC relative to the oe-NC + si-NC treatment. However, the opposing results were noted in the oe-LINC00858 + si-RAD21-treated and oe-LINC00858 + si-PCNP-treated cells relative to those treated with oe-LINC00858 + si-NC (Fig. 4G, and Supplementary Fig. 1E). As shown by the Transwell assay results, the oe-LINC00858 + si-NC treatment enhanced the invasive and migratory abilities relative to oe-NC + si-NC treatment, while the oe-LINC00858 + si-RAD21 and oe-LINC00858 + si-PCNP treatments weakened these abilities (Fig. 4H). These results suggested that the overexpression of LINC00858 promotes the expression of PCNP by recruiting RAD21, thereby promoting a malignant phenotype of colon cancer cells and reducing their apoptosis.

PCNP activated the STAT3/5 signaling pathway to promote the malignant phenotype of colon cancer cells and inhibit apoptosis

To investigate whether the PCNP-mediated STAT3/5 signaling pathway is involved in the pathogenesis of colon cancer, we transfected HCT166 and SW480 cells with sh-NC or sh-PCNP, followed by treatment with DMSO or 1 μM colivelin (a STAT3 activator) for 24 h.
As shown by the results of RT-qPCR and Western blot assays, relative to cells treated with sh-NC + DMSO, the expression of PCNP and the phosphorylation levels of STAT3 and STAT5 were lower in those treated with sh-PCNP + DMSO. There was no obvious difference in the expression of PCNP between the cells after sh-PCNP + DMSO treatment and sh-PCNP + colivelin treatment, while the phosphorylation levels of STAT3 and STAT5 were enhanced by sh-PCNP + colivelin treatment relative to the sh-PCNP + DMSO treatment (Fig. 5A, B, and Supplementary Fig. 1F). In addition, as compared with sh-NC + DMSO treatment, the sh-PCNP + DMSO treatment reduced the proliferative, migratory, and invasive capabilities of HCT166 and SW480 cells, with a higher cell apoptotic rate observed. Relative to the cells following sh-PCNP + DMSO treatment, those following sh-PCNP + colivelin treatment displayed increased proliferative, migratory, and invasive abilities, with a lower apoptotic rate (Fig. 5C-F). Western blot assay results indicated that the expression of cleaved caspase-3/total caspase-3 and cleaved PARP/total PARP in the cells was increased after sh-PCNP + DMSO treatment relative to sh-NC + DMSO treatment. However, the expression of cleaved caspase-3/total caspase-3 and cleaved PARP/total PARP was lower in the cells treated with sh-PCNP + colivelin as compared with those treated with sh-PCNP + DMSO (Fig. 5G, and Supplementary Fig. 1G). These results suggested that silencing PCNP inhibits the STAT3/5 signaling pathway to reduce the malignant cell phenotype, an effect that can be reversed by using colivelin to activate STAT3.

LINC00858 promoted the tumorigenicity of colon cancer cells in mice by regulating the RAD21-PCNP-STAT3/5 axis

To study the effects of LINC00858 on the tumorigenicity of colon cancer cells through regulation of the RAD21/PCNP/STAT3/5 axis in vivo, we injected SW480 cells transduced with adenovirus carrying oe-NC + sh-NC, oe-LINC00858 + sh-NC, oe-LINC00858 + sh-RAD21, or oe-LINC00858 + sh-PCNP into nude mice. As displayed by RT-qPCR, compared with the oe-NC + sh-NC treatment, the expression of LINC00858 in tumor tissues was higher after oe-LINC00858 + sh-NC treatment. Relative to the oe-LINC00858 + sh-NC treatment, LINC00858 expression showed no obvious difference after oe-LINC00858 + sh-RAD21 and oe-LINC00858 + sh-PCNP treatments (Fig. 6A). Western blot assay results revealed that, relative to oe-NC + sh-NC treatment, there was no obvious change in the expression of RAD21 in the tumor tissues after oe-LINC00858 + sh-NC treatment, while the expression of PCNP and the phosphorylation levels of STAT3 and STAT5 were increased in the tumor tissues following oe-LINC00858 + sh-NC treatment. Relative to the oe-LINC00858 + sh-NC treatment, the expression levels of RAD21 and PCNP and the phosphorylation levels of STAT3 and STAT5 in the tumor tissues were lower after oe-LINC00858 + sh-RAD21 treatment. In the tumor tissues following oe-LINC00858 + sh-PCNP treatment, RAD21 expression was not altered, while PCNP expression and the phosphorylation levels of STAT3 and STAT5 were decreased relative to those following oe-LINC00858 + sh-NC treatment (Fig. 6B, and Supplementary Fig. 1H). Additionally, relative to the oe-NC + sh-NC treatment, the volume and weight of the tumors were higher after the oe-LINC00858 + sh-NC treatment. When compared with the oe-LINC00858 + sh-NC treatment, the volume and weight of the tumors after oe-LINC00858 + sh-RAD21 and oe-LINC00858 + sh-PCNP treatments were lower (Fig. 6C, D).
These results indicated that LINC00858 promotes the tumorigenicity of colon cancer cells in vivo through regulation of the RAD21/PCNP/STAT3/5 axis.

DISCUSSION

Colon cancer is a leading cause of cancer-related death on a global scale and has a poor survival rate of less than 10%. In this study, we set out to establish whether LINC00858 affects the progression of colon cancer and revealed that LINC00858 can recruit RAD21 to regulate the PCNP-mediated STAT3/5 signaling pathway, thereby promoting the development of colon cancer. Initially, LINC00858 was found to be upregulated in colon cancer, and its silencing was observed to inhibit the proliferative, migratory, and invasive capabilities of colon cancer cells. Consistently, accruing evidence has documented the oncogenic role of LINC00858 in colon cancer. LINC00858 reportedly regulates HNF4α and WNK2 to produce a tumor-promoting function in colon cancer [6]. In addition, LINC00858 was found to regulate the microRNA-25-3p/SMAD7 axis, thereby contributing to the progression of TP53-wild-type colorectal cancer [18]. Moreover, a previous study reported that LINC00858 could mediate PAK2 by sponging miR-4766-5p, which facilitated colorectal cancer development [19]. These reports concur with our results demonstrating the promoting function of LINC00858 in colon cancer development. Furthermore, we demonstrated that LINC00858 could upregulate PCNP transcription in colon cancer by recruiting the transcription factor RAD21. It is noteworthy that a regulatory relationship between LINC00858 and RAD21, and that between RAD21 and PCNP, have rarely been reported. In the current study, bioinformatic analysis in combination with RNA-FISH verified RAD21 as a target of LINC00858 in colon cancer. It has been previously reported that PCNP is a differential gene implicated in lymph node involvement in colon cancer [10]. Additionally, PCNP has been revealed to facilitate the progression of ovarian cancer through elevation of β-catenin nuclear accumulation and induction of epithelial-to-mesenchymal transition [20]. Of note, cohesin RAD21 haploinsufficiency regulates multiple initiating events in colorectal cancer, and RAD21 serves as a key transcriptional modulator for important genes in colorectal cancer [8]. The upregulation of RAD21 is also found to predict poor survival of non-small-cell lung cancer patients [21]. Another important finding of this study was that increased PCNP expression contributed to activation of the STAT3/5 pathway, which promoted the progression of colon cancer. The regulatory relationship between PCNP and STAT3/STAT5 has not been previously reported, although upregulated STAT3 and STAT5 were detected in adenocarcinoma cells overexpressing PCNP and were found to account for enhanced cell proliferative, migratory, and invasive abilities [22]. Others have shown that overexpressed PCNP could result in activation of the STAT3/5 signaling pathway, thereby suppressing lung adenocarcinoma cell apoptosis [11]. The possible participation of STAT3 and STAT5 in colon cancer has been highlighted, and STAT3 and STAT5 have been associated with colon cancer survival [23].
Moreover, the IL6/JAK/STAT3 pathway has been found to upregulate CXCL1, CXCL2, CXCL3, and CXCL11 in patients with colon cancer, which was associated with prognosis [1]. Overall, the results obtained in the current study demonstrated that LINC00858 affects the progression of colon cancer and revealed that LINC00858 is able to recruit RAD21 to upregulate PCNP, which contributes to STAT3/5 signaling activation, thereby promoting the development of colon cancer (Fig. 7). This finding bears the potential to unravel a novel direction for treating colon cancer.

MATERIALS AND METHODS

Clinical sample collection

Tumor tissues (50 cases) and corresponding pathologically confirmed adjacent non-cancerous tissues (50 cases) were harvested from patients with colon cancer (aged 39-77 years, with an average age of 56.9 ± 8.3 years) who underwent tumor resection at The Affiliated Huai'an No.1 People's Hospital of Nanjing Medical University from January 2018 to January 2019. The following inclusion criteria were adopted: colon cancer confirmed by clinical, imaging, and pathological examination [24]; and subjects who had received standard diagnosis and treatment procedures and were treated without any preoperative radiotherapy or chemotherapy. Patients not diagnosed and treated according to the standard procedures, or with a history of other malignant tumors, were excluded from this study. All collected tissue samples were sliced into small pieces, placed in cryopreservation tubes in liquid nitrogen, and then stored in a −80°C refrigerator.

Cell treatment

A human normal colon cell line (NCM460; Ningbo Mingzhou Biological Technology Co., Ltd., Ningbo, Zhejiang, China) and four colon cancer cell lines (HCT166, SW480, Caco2, and SW620; American Type Culture Collection, ATCC, Manassas, VA, USA) were cultured at 37°C with 5% CO2 in Dulbecco's modified Eagle medium (DMEM) (190040, Gibco, Carlsbad, CA, USA) containing 10% fetal bovine serum (FBS), 100 U/mL penicillin sodium, and 100 μg/mL streptomycin sulfate. The cells were kept in a humidified atmosphere, and the medium was changed every 2-3 days depending on the cell growth situation. The cells were used upon reaching the logarithmic growth stage.

RNA quantification

An RNA extraction kit (10296010, Invitrogen) was utilized for total RNA extraction. After verification of RNA purity and integrity, reverse transcription of RNA into complementary DNA (cDNA) was conducted using a PrimeScript RT kit (RR014A, Baoriyi Biotechnology, Beijing, China). The primers used, designed and synthesized by Takara (Dalian, China), are listed in Supplementary Table 2. RT-qPCR was carried out using a PCR kit (KR011A1, Tiangen Biochemical Technology Co., Ltd., Beijing, China), with GAPDH utilized as the internal reference; relative gene expression levels were measured on the basis of the 2^-ΔΔCt method.

Transwell assay

Transwell chambers (Corning Glass Works, Corning, NY, USA) with 8-μm pores were employed for the migration and invasion tests. For migration detection, the prepared cell suspension was added to the apical Transwell chamber for 12 h; the migrating cells that attached to the surface of the membrane were fixed in 4% paraformaldehyde and stained with 0.1% crystal violet for 5 min. For invasion detection, the Transwell chambers were coated with Matrigel (BD Biosciences, Franklin Lakes, NJ, USA), otherwise following the same procedures mentioned above. After that, the numbers of migrating and invading cells were statistically analyzed in five randomly selected visual fields after microscopic observation.
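For illustration, the 2^-ΔΔCt relative quantification mentioned above can be computed as follows; this is a generic sketch with hypothetical Ct values, not code from the study:

```python
# Illustrative sketch of the 2^-ΔΔCt method; Ct values below are hypothetical.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ΔΔCt: target gene normalized to the internal reference (here GAPDH),
    expressed relative to the control sample."""
    delta_ct = ct_target - ct_ref                  # normalize to reference gene
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # same for the control sample
    return 2.0 ** -(delta_ct - delta_ct_ctrl)

# Example: LINC00858 Ct values in a treated sample vs. a control sample (made up).
fold_change = relative_expression(ct_target=24.1, ct_ref=18.0,
                                  ct_target_ctrl=26.3, ct_ref_ctrl=18.1)
print(f"Relative LINC00858 expression: {fold_change:.2f}-fold")
```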
TUNEL staining

Cell apoptosis was detected using In Situ Cell Death Detection Kits (11684795910, Roche, Basel, Switzerland). The cells in the logarithmic phase of growth from each group were seeded in a 6-well plate at a density of 1 × 10^6 cells/mL. The cell suspension was coated on coverslips, and 4% paraformaldehyde was added for 1 h to fix the cells. Then, the fixed cells and tissue sections were permeabilized with 0.1% Triton X-100 at 4°C for 3 min and incubated with 50 μL TUNEL solution in darkness at 37°C for 1 h. The cells were sealed with anti-fading reagent and observed under an inverted microscope (CX23, Olympus). The number of TUNEL-positive cells was calculated from five randomly selected high-power visual fields.

Flow cytometry

After 24 h of transfection, the cells were digested with ethylenediaminetetraacetic acid (EDTA)-free trypsin (27250018, Thermo Fisher Scientific Inc.) and centrifuged at 3000 rpm for 30 min, and the supernatant was discarded. The Annexin-V-FITC apoptosis detection kit (Sigma-Aldrich Chemical Company, St Louis, MO, USA) was used according to the manufacturer's instructions. HEPES buffer, Annexin-V-FITC, and PI (50:1:2) were mixed into the Annexin-V-FITC/PI staining solution. Next, 100 μL of the dye solution was added to the cells, followed by mixing and incubation at room temperature for 15 min and the addition of 1 mL HEPES buffer solution (Thermo Fisher Scientific Inc.). Cell apoptosis was detected using a flow cytometer at 488 nm.

RNA-FISH

Locked nucleic acid-modified oligonucleotide probes (Vedbaek, Denmark) targeting LINC00858 were used for RNA-FISH. We used an immunofluorescence assay to detect RAD21 protein in the colon cancer cell line SW480. For co-localization detection, a RAD21 antibody (ab217678, Abcam, 1:100) was used to detect RAD21 protein by the immunofluorescence technique, followed by observation with confocal microscopy (CX23, Olympus) (RAD21 protein shown as red spots). 4′,6-diamidino-2-phenylindole (DAPI) was used to label nuclear DNA (shown in blue). RNA signals were detected by incubation with biotinylated anti-DIG antibodies and amplified by SABC-FITC (LINC00858 shown as green spots).

RNA pull-down assay

Labeled RNA probes were constructed using RNA probe Mix (Roche) and T7 RNA polymerase (Promega, Madison, WI, USA), cleaned with RNase-free DNase (Promega), and purified with an RNeasy Mini Kit (Qiagen, Hilden, Germany). Five micrograms of LINC00858 probe was heated at 95°C for 5 min, placed on ice for 5 min, and then placed at room temperature for 20 min to form a secondary structure. Next, the folded RNA was mixed with the cell extracts for 2 h. Subsequently, 50 μL of rinsed streptavidin agarose beads (Invitrogen) was added to each binding reaction and incubated for 1.5 h. The beads were washed, treated with ribonuclease, and dissolved in sodium dodecyl sulfate buffer. The recovered RAD21 protein was detected by Western blot assay.

RIP assay

The binding of LINC00858 to RAD21 protein was detected using a RIP kit (Millipore, Billerica, MA, USA). HCT166 and SW480 cancer cells were washed with precooled phosphate-buffered saline (PBS) and the supernatant was discarded. The cells were lysed with an equal volume of lysate in an ice bath for 5 min and centrifuged at 12,000 × g at 4°C for 10 min, and the supernatant was extracted. One part of the cell extract was used as Input, and the other part was incubated with antibody for coprecipitation.
The specific steps were as follows: 50 μL of magnetic beads was taken for each coprecipitation reaction system and resuspended in 100 μL RIP wash buffer, and 5 μg of antibody was added to incubate the beads for binding according to the experimental groups. After washing, the magnetic bead-antibody complex was resuspended in 900 μL RIP wash buffer and incubated overnight at 4°C with 100 μL of cell extract. The sample was placed on a magnetic stand to collect the bead-protein complex. The samples and Input were digested with proteinase K to extract RNA, which was used to detect the expression of LINC00858 by subsequent PCR. The antibody used in RIP was anti-RAD21 (ab217678, Abcam), which was mixed for 30 min at room temperature, and IgG (ab172730, 1:1000, Abcam) was used as the negative control.

ChIP

The cells were fixed with formaldehyde for 10 min to produce DNA-protein crosslinking. The cells were disrupted using an ultrasonic breaker (set at 10 s each time with an interval of 10 s) to break the chromatin into fragments. After that, IgG (ab172730, 1:1000, Abcam) and the target protein-specific antibody anti-RAD21 (ab217678, 2 μL per 500 μg of extract, Abcam) were incubated overnight at 4°C for full binding. The DNA-protein complex was precipitated by Protein Agarose/Sepharose, and the supernatant was discarded after centrifugation for 5 min at 12,000 × g. The nonspecific complex was de-crosslinked overnight at 65°C. The DNA fragments were extracted and purified using phenol/chloroform extraction. Specific primers for the PCNP promoter region (F 5ʹ-AAGATCCCAGCGTTTCCAGG-3ʹ; R 5ʹ-TGATGTCRRATCGAGTAGCCGC-3ʹ) were used, and RT-qPCR was applied in order to explore the binding of RAD21 to the PCNP promoter.

Tumorigenesis in nude mice

Healthy 6-8-week-old female nude mice (Vital River Laboratory Animal Technology Co., Ltd., Beijing, China) were reared in a specific-pathogen-free animal laboratory with the environment controlled at 60-65% humidity and 22-25°C with a 12 h light/dark cycle, with free access to food and water. The mice were acclimatized for one week and their health status was confirmed prior to the experiment. SW480 cells (5 × 10^6 cells in 0.1 mL PBS) transduced with the adenovirus vectors carrying oe-NC + sh-NC, oe-LINC00858 + sh-NC, oe-LINC00858 + sh-RAD21, or oe-LINC00858 + sh-PCNP were injected subcutaneously into the back of the mice (n = 6 mice per group). After further feeding for five weeks, tumor growth was observed and photographed every seven days to draw the growth curve. The tumor volume was calculated as (a × b^2)/2, where a is the longest diameter of the tumor and b is the shortest diameter of the tumor. After 35 days, the nude mice were euthanized with CO2, the tumor body was dissected, and the weight of the transplanted tumor was determined. The expression level of LINC00858 was determined by RT-qPCR, and the expression levels of RAD21, PCNP, STAT3, and STAT5 and the phosphorylation levels of STAT3 and STAT5 were determined by Western blot assay.

Fig. 7 Schematic diagram of the mechanism by which LINC00858 affects colon cancer. LINC00858 can recruit the transcription factor RAD21, which in turn binds to the PCNP promoter to promote PCNP transcription and expression, thereby activating the STAT3/5 signaling pathway, promoting the proliferation, migration, and invasion of colon cancer cells, inhibiting their apoptosis, and ultimately promoting colon cancer progression.

Statistical analysis

All the data in this study were analyzed utilizing the SPSS 21.0 statistical software (SPSS, IBM Corp., Armonk, NY, USA). The measured data were expressed as mean ± standard deviation. The comparisons between cancer and adjacent control tissues were performed using the paired t test, and other two-group data were compared using the unpaired t test. Data from multiple groups were compared using one-way analysis of variance (ANOVA) combined with Tukey's post hoc tests. Repeated measures ANOVA with Bonferroni's post hoc test was used to compare the data obtained at different time points. p < 0.05 indicated a statistically significant difference.

DATA AVAILABILITY

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Sound Event Detection Using Derivative Features in Deep Neural Networks

We propose using derivative features for sound event detection based on deep neural networks. As input to the networks, we used log-mel-filterbank and its first and second derivative features for each frame of the audio signal. Two deep neural networks were used to evaluate the effectiveness of these derivative features. Specifically, a convolutional recurrent neural network (CRNN) was constructed by combining a convolutional neural network and a recurrent neural network (RNN) followed by a feed-forward neural network (FNN) acting as a classification layer. In addition, a mean-teacher model based on an attention CRNN was used. Both models had an average pooling layer at the output so that weakly labeled and unlabeled audio data may be used during model training. Under the various training conditions, depending on the neural network architecture and training set, the use of derivative features resulted in a consistent performance improvement. Experiments on audio data from the Detection and Classification of Acoustic Scenes and Events (DCASE) 2018 and 2019 challenges indicated that a maximum relative improvement of 16.9% was obtained in terms of the F-score.

Introduction

Humans can obtain information about their surroundings from nearby sounds. Accordingly, sound signal analysis, whereby information may be automatically extracted from audio data, has attracted considerable attention. The Detection and Classification of Acoustic Scenes and Events (DCASE) 2013-2020 challenges have greatly contributed to increasing the interest in this area, and several competition tasks have been defined. Among these tasks, sound event detection (SED) is aimed at identifying both the existence and occurrence times of the various sounds in our daily lives [1]. It has several applications, such as surveillance [2,3], urban sound analysis [4], information retrieval from multimedia content [5], health care monitoring [6], and bird call detection [7]. Recently, deep neural networks (DNNs) have demonstrated superior performance to that of conventional machine learning techniques in image classification [8], speech recognition [9], and machine translation [10]. In [11][12][13], it was demonstrated that feed-forward neural networks (FNNs) outperformed the traditional Gaussian mixture model and support vector machines in SED. Therefore, current studies on SED primarily focus on DNN-based approaches. Owing to their fixed interlayer connections, FNNs (which are the basic architecture of DNNs) cannot effectively handle signal distortions in image classification. The same phenomenon may occur in SED, which generally uses a two-dimensional time-frequency spectrogram as input to the FNN. Moreover, FNNs have limitations in modeling the long-term time-correlation of the sound signal samples. Accordingly, FNNs are not widely used in SED.

The short-time Fourier transform (STFT) was computed using a window length of 1024 samples (64 ms) with an overlap of 360 samples (41.5 ms) [19]. Sixty-four bands of the mel-scale filterbank outputs from 0 to 16 kHz were obtained using the STFT and then were log-transformed to produce the 64-dimensional LMFB for each 41.5 ms frame. The feature extraction process generated 240 frames with 64 dimensions for the 10 s clips used for training and testing. After the LMFB was computed, it was normalized by subtracting its mean and dividing by its standard deviation over the entire training data. Subsequently, it was used as input to the CRNN.

Derivative Features

In speech recognition, which is similar to SED in that time-series signals are involved, the first and second derivative features are calculated from the static feature. In this study, we consider the LMFB extracted in Figure 1 as the static feature and compute the derivative features of the LMFB to use them for training the CRNN. The derivative features are computed as follows:

d_t = \frac{\sum_{k=1}^{K} k\,(o_{t+k} - o_{t-k})}{2\sum_{k=1}^{K} k^{2}} (1)

where d_t is the derivative feature at time t, o_t is the static feature, and K is the number of frames preceding and following the t-th frame. When computing the second derivative feature, the computed first derivative feature is considered as the static feature in (1).
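As an illustration of this pipeline, the following Python sketch computes the LMFB and the derivative features of Eq. (1); the use of librosa, the file path, and the per-clip normalization are assumptions for demonstration (the paper normalizes over the entire training set):

```python
import numpy as np
import librosa

def delta(feat: np.ndarray, K: int = 2) -> np.ndarray:
    """Derivative feature per Eq. (1): d_t = sum_k k*(o_{t+k} - o_{t-k}) / (2*sum_k k^2)."""
    T = feat.shape[0]
    padded = np.pad(feat, ((K, K), (0, 0)), mode="edge")  # replicate edge frames
    denom = 2.0 * sum(k * k for k in range(1, K + 1))
    d = np.zeros_like(feat)
    for k in range(1, K + 1):
        d += k * (padded[K + k:K + k + T] - padded[K - k:K - k + T])
    return d / denom

y, sr = librosa.load("clip.wav", sr=None)            # 10 s audio clip (path assumed)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=1024 - 360, n_mels=64)
lmfb = np.log(mel.T + 1e-10)                         # (frames, 64) static feature
lmfb = (lmfb - lmfb.mean()) / lmfb.std()             # normalization (per clip here)
feats = np.stack([lmfb, delta(lmfb), delta(delta(lmfb))])  # 3 input feature maps
```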
Network Architecture

We used two types of deep neural networks to evaluate the effectiveness of the derivative features in SED: a basic CRNN and a mean-teacher model using an attention-based CRNN. As these are considered representative deep neural networks for SED, they may be used to confirm the usefulness of the derivative features.

Basic CRNN

The architecture of the basic CRNN is shown in Figure 2. It is similar to other CRNNs commonly used for SED [11], but the output of the network is fed to a global average pooling (GAP) layer for training using weakly labeled and unlabeled data. The GAP is used to compute the clip-level output for each class by time-averaging the frame-level sigmoid output of the classification layer. Three convolution blocks (ConvBlocks) consisting of two-dimensional CNNs, one bidirectional GRU layer, and one classification layer implemented as an FNN are cascaded in series. The GAP layer averages the frame-level output of the classification layer over the 240 frames corresponding to the 10 s audio clip. The GAP layer is not used for strongly labeled data.

The 64-dimensional log-mel filterbank and its first and second derivative values are used as input to the basic CRNN. They are independently constructed as (240 × 64)-dimensional feature maps. Although the number of weights increases with the additional feature maps due to the first and second derivative features at the input layer of the CRNN, this increase is relatively small compared with the total number of weights of the basic CRNN: the proposed model has about 127,000 weights, compared to 126,000 without the derivative features at the input layer.

In ConvBlock, a 3 × 3 convolutional filter is applied to the context window of the input feature map, and batch normalization is used to normalize the filter output to zero mean and unit variance. A rectified linear unit (ReLU) activation function is applied after batch normalization. Non-overlapping 1 × 4 max pooling is applied only in the frequency domain to reduce the dimensionality of the data and to improve frequency invariance. We preserve the time dimension to make use of the time-correlation information of the sound signal, which is exploited in the following GRU layer. Dropout is used after max pooling to reduce overfitting during training.

The output of the last ConvBlock is used as input to the bidirectional GRU, which has 64 units in each direction and feeds its output into the classification layer, which has 10 units corresponding to the sound classes. These units have a sigmoid activation function, the output of which denotes the posterior probability of the classes for each frame of the sound signal.
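The description above maps directly onto a compact implementation. The following is a minimal PyTorch sketch of the basic CRNN under stated assumptions: the number of convolutional filters and the dropout rate are not fully specified in the text and are chosen here for illustration only.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """3x3 conv -> batch norm -> ReLU -> non-overlapping 1x4 max pool (frequency only) -> dropout."""
    def __init__(self, in_ch: int, out_ch: int, p_drop: float = 0.3):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 4)),   # keep the time axis, pool frequency
            nn.Dropout(p_drop),
        )
    def forward(self, x):
        return self.block(x)

class BasicCRNN(nn.Module):
    """Input: (batch, 3, 240, 64) = [LMFB, delta, delta-delta] feature maps."""
    def __init__(self, n_classes: int = 10, n_filters: int = 32):
        super().__init__()
        self.cnn = nn.Sequential(ConvBlock(3, n_filters),
                                 ConvBlock(n_filters, n_filters),
                                 ConvBlock(n_filters, n_filters))  # 64 -> 16 -> 4 -> 1 bins
        self.gru = nn.GRU(n_filters, 64, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * 64, n_classes)
    def forward(self, x):
        x = self.cnn(x)                        # (B, C, 240, 1)
        x = x.squeeze(-1).transpose(1, 2)      # (B, 240, C)
        x, _ = self.gru(x)
        frame_probs = torch.sigmoid(self.classifier(x))   # frame-level posteriors
        clip_probs = frame_probs.mean(dim=1)              # GAP over time for weak labels
        return frame_probs, clip_probs
```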
Mean-Teacher Model

The mean-teacher model in this study is similar to that used as the baseline recognizer for SED in the DCASE 2019 challenge [1]. The architecture of the mean-teacher model is shown in Figure 3. It consists of two CRNNs: the student model on the left and the teacher model on the right. The student model updates its parameters by calculating the classification and consistency costs and by back-propagating the errors using gradient descent. The classification cost is calculated by comparing the output of the student model with the ground truth table using the weakly labeled and strongly labeled audio data, as shown in Figure 3. The consistency cost is calculated by comparing the output of the student model with that of the teacher model using the unlabeled, weakly labeled, and strongly labeled data. The teacher model does not update its parameters by back-propagation, but uses the exponential moving average of the student model's weights [21]. For the test, the teacher model generally produces more correct output and is used for prediction.

An attention-based CRNN is used for the mean-teacher model. The mechanism is similar to that in [19]. A gated linear unit (GLU) is used in ConvBlock, and an attention layer at the output. The GLU is shown in more detail in Figure 4; it contains a screening module that passes input signals related to the sound event of interest and blocks all other signals. The attention layer computes the clip-level output y(c) for each class c as a weighted average of the frame-level predictions p_t(c) with attention weights a_t(c):

y(c) = \frac{\sum_{t=1}^{T} a_t(c)\, p_t(c)}{\sum_{t=1}^{T} a_t(c)}

where T is the total number of time frames in the 10 s audio clip.
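A minimal sketch of the mean-teacher training step described above is given below; it assumes student and teacher networks with the same interface as the BasicCRNN sketch (frame-level and clip-level outputs). The EMA decay value and the use of MSE for the consistency cost follow the convention of [21] and the DCASE baseline, but the exact settings here are assumptions.

```python
import torch
import torch.nn.functional as F

def update_teacher(student, teacher, alpha: float = 0.999):
    """EMA update of the teacher weights from the student (no back-propagation)."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

def training_loss(student, teacher, x, weak_labels=None, strong_labels=None):
    """Classification cost (where labels exist) plus consistency cost (all data)."""
    s_frame, s_clip = student(x)
    with torch.no_grad():
        t_frame, t_clip = teacher(x)
    loss = F.mse_loss(s_frame, t_frame) + F.mse_loss(s_clip, t_clip)  # consistency
    if weak_labels is not None:        # clip-level labels
        loss = loss + F.binary_cross_entropy(s_clip, weak_labels)
    if strong_labels is not None:      # frame-level labels
        loss = loss + F.binary_cross_entropy(s_frame, strong_labels)
    return loss
```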
Database

In this study, we used the training and test data of the DCASE 2018 and 2019 challenges. The training set is a combination of the training data from both challenges and consists of weakly labeled, strongly labeled, and unlabeled data, as presented in Table 1. The weak label provides sound event information at the clip level without timing information at the frame level. Unlabeled data do not have any label information, but label information at the clip level can be obtained by prediction from the CRNN trained using the weakly labeled data. The length of each clip is 10 s, and the numbers of clips for the weakly labeled, strongly labeled, and unlabeled data are 1578, 2045, and 14,412, respectively. There are 10 different sound types, usually domestic or household. The details of the testing data are shown in Table 2. The data are divided into the DCASE 2018 and DCASE 2019 test sets, which contain 208 and 1168 clips, respectively. The test data contain frame-level label information for the evaluation.

Evaluation Metrics

The CRNN computes the posterior probability for each class in every time frame and identifies a sound event when this probability exceeds 0.5. To improve reliability, median filtering is applied to the probabilities across the frames before the final decision. The performance of the CRNN is measured by the F-score and error rate (ER) using an event-based analysis [23], which compares the output of the CRNN with the ground truth table when the output indicates that an event has occurred. The initial decision comprises three different types: true positive (TP), false positive (FP), and false negative (FN). A TP indicates that the period of a detected sound event overlaps with that from the ground truth table. In the decision, a 200 ms onset collar and an offset collar of 200 ms or 20% of the event length are allowed. An FP implies that there is no corresponding overlap period in the ground truth table, although the CRNN output indicates an event. An FN implies that there is an event period in the ground truth table, but the CRNN does not produce the corresponding output. The F-score (F) is computed based on the initial three decisions and is the harmonic average of the precision (P) and recall (R):

P = \frac{TP}{TP + FP}, \quad R = \frac{TP}{TP + FN}, \quad F = \frac{2PR}{P + R}.

The error rate is computed as

ER = \frac{S + D + I}{N},

where N is the total number of sound events active in the ground truth table. Sound events with correct temporal positions but incorrect class labels are counted as substitutions (S), whereas insertions (I) are sound events present in the system output but not in the reference, and deletions (D) are the sound events present in the reference but not in the output [23].
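As a small worked example (a sketch; the function and variable names are ours, not from [23]), these event-based scores can be computed from the decision counts as follows:

```python
def event_based_metrics(tp, fp, fn, s, d, i, n_ref):
    """Event-based F-score and error rate from the counts defined above."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    error_rate = (s + d + i) / n_ref   # ER = (S + D + I) / N
    return f_score, error_rate

# Hypothetical counts for one evaluation run:
f, er = event_based_metrics(tp=120, fp=40, fn=60, s=10, d=50, i=30, n_ref=180)
print(f"F-score = {f:.3f}, ER = {er:.3f}")
```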
Experimental Results

To train the basic CRNN, various combinations of weakly labeled, strongly labeled, and unlabeled data were considered. Specifically, four combinations, [weakly + unlabeled], [weakly + unlabeled + strongly], [strongly], and [weakly + strongly], were chosen. Binary cross-entropy was used as the loss function, and the Adam optimizer with a learning rate of 0.001 was used to train the basic CRNN. We applied early stopping with a minimum of five epochs and a patience of 15 epochs. These hyperparameters are based on the baseline systems announced at the DCASE 2018 and 2019 challenges.

The classification results on the DCASE 2018 test set are shown in Table 3, where "Single channel" implies that only the static log-mel filterbank was used as the input of the basic CRNN, and "Three channels" implies that the derivative features (first and second) were also used as input. It can be seen that [weakly + unlabeled + strongly] yields the best performance because it has the largest amount of training data. However, the performance is not satisfactory considering that the unlabeled training data constitute 80% of all the training data. This implies that the basic CRNN cannot efficiently use the unlabeled data to update its parameters. Furthermore, using the derivative features results in a consistent performance improvement in terms of the F-score for all combinations of the training data. In the [weakly + unlabeled + strongly] combination, which yields the best results, a relative improvement of 7.2% in the F-score is observed. The average relative improvement over all combinations is 11.6%. However, the performance improvement is not sufficiently large to manifest itself in the ER.

We compared the performance of our system with the DCASE 2018 baseline system [24], which uses the same training and test data as the [weakly + unlabeled] combination in Table 3. Although the baseline system employs a similar CRNN architecture to ours, it showed 14.06% in F-score and 1.54 in ER, that is, a better F-score and a worse ER than the [weakly + unlabeled] combination. Slightly different hyperparameters and differences in the learning process may be the main reasons. However, by using the derivative features, we obtained better results than the baseline system in both F-score and ER, as shown in the first row of the table.

In Table 4, the classification results on the DCASE 2019 test set are shown, with a similar trend to that in Table 3. The performance improvement from the derivative features is rather diminished on the DCASE 2019 test set: on average, a 5.3% relative improvement in the F-score is attained, and [weakly + unlabeled + strongly] yields a 6% improvement.

The F-score learning curve during the training of the basic CRNN when only the weakly labeled data are used is shown in Figure 5. Twenty percent of the weakly labeled data were used as validation data. At epoch 45, the optimal performance is attained by early stopping. Further training only increases the F-score of the training data, possibly resulting in overfitting.

The mean-teacher model was trained with two different numbers of epochs (Tables 5 and 6, respectively), and the best model on the validation data was accordingly selected for the evaluation on the test data. The Adam optimizer was used for training with a learning rate of 0.0001, and median filtering of length 5 was applied to the output of the classification layer. The SED results of the mean-teacher model are shown in Table 5. It can be seen that a significant performance improvement is obtained by the mean-teacher model (cf. Table 3). Furthermore, the addition of the derivatives to the static feature resulted in a 3% relative improvement in the F-score on the DCASE 2018 test set. On the DCASE 2019 test set, a 4.4% relative improvement was attained. In addition, a 2.5% relative improvement was observed when the strongly labeled training data set was used as the test data, to demonstrate the effect of the derivative features in case the difference between the testing data and training data is small. It can be concluded that a consistent performance improvement was attained by using the derivative features in the mean-teacher model regardless of the test set; however, the improvement was rather diminished compared with the basic CRNN.

We also compared the performance of our system with the baseline system of the DCASE 2019 challenge [1]. Although the baseline system uses the same audio data and a similar mean-teacher model, it showed an F-score of 23.7%, which is worse than our single-channel result of 25.95%. As mentioned in the comparison with the DCASE 2018 baseline in Table 3, the performance difference seems to come from some details of the implementation. However, by using the derivative features, we could further increase the F-score of our system to 27.36%, which is better than the result of the DCASE 2019 baseline system, as expected.

The SED results when the number of training epochs was increased to 200 are shown in Table 6. For the DCASE 2018 test set, a 5% relative improvement was attained by using the derivative features. The results for the DCASE 2019 test set indicate a 7.5% improvement. When the strongly labeled training data were used as test data, a 2.9% relative improvement was attained. It can be concluded that an increase in the number of training epochs resulted in an increase in the relative improvement from the derivative features.

Conclusions

Recently, among the approaches for SED, CRNNs have been widely used and have exhibited better performance than other neural networks. In this study, we proposed the use of the first and second delta features of the log-mel filterbank to improve the performance of state-of-the-art CRNNs. We used two types of CRNNs, a basic CRNN and a mean-teacher model based on an attention-based CRNN. We also used various combinations of weakly labeled, strongly labeled, and unlabeled data to train the CRNNs to confirm the effect of the derivative features on SED. Regarding the basic CRNN, a performance improvement was always attained by using the derivative features in the various combinations of the training data.
On the DCASE 2018 test set, an 11.6% average relative improvement in the F-score was obtained, and a 5.3% improvement was obtained on the DCASE 2019 test set. Regarding the mean-teacher model, a consistent performance improvement was observed as the number of epochs and the test set type were changed. When 200 epochs were used for training, a 5% relative improvement on the DCASE 2018 test set and a 7.5% improvement on the DCASE 2019 test set were observed. In this study, we used the derivative features of the log-mel filterbank to improve SED. In the experiments using various combinations of training and test data, we observed a consistent performance improvement in state-of-the-art CRNNs, which, however, was not as significant as in speech recognition. Nevertheless, the results appear to be sufficient to indicate the importance of the derivative features in SED.

Conflicts of Interest: The authors declare no conflicts of interest.
An Old Brain with New Tricks Scientists continue to look to Einstein's brain in hopes of discovering the biology of intelligence. At the time of Einstein's death, statistical data on individuals of normal intelligence was limited (Herskovitz, 2000). However, our bank of statistical data on individuals in the “normal intelligence” range has since grown; a comparison of this new data to Einstein's brain may provide us with a greater context for understanding the relationship between the brain and intelligence. New Information about Albert Einstein's brain asks new questions about the famous mathematician's brain from a form and function point of view. In-depth studies of cortical surfaces, weight, and sulcal patterns serve as the foundation for this research, but Dr. Dean Falk at Florida State University has made a new discovery by closely examining Einstein's cortical sulci. Using existing neuroanatomy and functional neuroimaging studies, Falk identifies two significant differences on Einstein's brain that shed light on both his superior intelligence and his developmental struggles. Falk closely analyzed photographs of Einstein's brain that were taken within the first few hours of his death. These calibrated photographs, along with direct caliper measurements were the primary materials Falk used to reanalyze Einstein's brain. Two identifications, a long, unnamed sulcus (u), and a knob (K), are the emphasis of Falk's research because they are both unusual and uncommon (see Figure ​Figure11). Figure 1 Photographs of Einstein's brain that were taken in 1955, adapted from Witelson et al. (1999b) with identifications added here. (A) Dorsal view, (B) left lateral view, (C) right lateral view. Sulci: angular (a2), anterior occipital (a3), ascending limb ... The u located in the post-central gyrus suggests greater development in areas of the brain that correspond to the face and tongue (Falk, 2009). As the post-central gyrus has been shown to correlate with auditory “what” and “where” pathways, this unnamed sulcus might similarly correspond to Einstein's musical ability and aural discrimination (Alain et al., 2008). One remarkable feature of Einstein's brain is a K, an omega-shaped curve in the perirolandic region of the right hemisphere. Falk suggests this may be so pronounced in Einstein because of his extensive musical training on the violin. Gromko et al. (2009) reported that children with good aural discrimination skills had similar number recall accuracy. It is not a surprise that Einstein was both gifted in music and a genius in mathematics; the pronounced K may be the testament to his superior abilities in these subjects. In addition to the u and the K, Falk discovered noticeably wider post-central gyri at their lateral ends. This subtle, but important variation is suggested to partially account for Einstein's known language difficulties. Another significant component of Einstein's brain was an unusual symmetry particularly with the post-central inferior sulcus (pti). Violin players often develop ambidexterity (e.g., equal use of both hands) in order to successfully play their instrument, and this may have been the case with Einstein; the symmetry seen in his brain reinforces the long-time argument of Einstein's ambidexterity. Falk's analysis of Einstein's sulci is a new piece of the puzzle that addresses key features that separate Einstein from the mean. One of Falk's most groundbreaking findings is the presence of continuous precentral superior and inferior sulci (pcs and pci). 
These are present on both right and left hemispheres of Einstein's brain; symmetry not found in 98% of the 50 hemispheres scored by Ono et al. (1990:43; as cited in Falk, 2009). Falk's analysis of Einstein's brain sheds light on the relationship between his biology and his success. The most notable differences seen in his brain appear to have more to do with his training in the violin as a child, and less to do with his genetics. Interestingly, recent findings suggest that early musical training may indeed increase one's aptitude for math; Einstein is testament to this idea. These findings help us not only understand the genius of Albert Einstein, but also lend to understanding individual variation in mental aptitude and the effects of development on neuroanatomy and function. This study is similar to existing studies of patient populations (e.g., mental illness, developmental disorders, autism, and learning disorders) where functional neuroimaging researchers are employing voxel based morphometry to quantify individual variation in neuroanatomy related to illness-related deficits in cognition, affect, and sociality.
Ensemble Approach for Detection of Depression Using EEG Features

Depression is a public health issue that severely affects one's well-being and can cause negative social and economic effects to society. To raise awareness of these problems, this research aims at determining whether the long-lasting effects of depression can be detected from electroencephalographic (EEG) signals. The article contains an accuracy comparison for SVM, LDA, NB, kNN, and D3 binary classifiers, which were trained using linear (relative band power, alpha power variability, spectral asymmetry index) and nonlinear (Higuchi fractal dimension, Lempel–Ziv complexity, detrended fluctuation analysis) EEG features. The age- and gender-matched dataset consisted of 10 healthy subjects and 10 subjects diagnosed with depression at some point in their lifetime. Most of the proposed feature selection and classifier combinations achieved accuracy in the range of 80% to 95%, and all the models were evaluated using 10-fold cross-validation. The results showed that the EEG features mentioned above, used in classifying ongoing depression, also work for classifying the long-lasting effects of depression.

Introduction

Depression is a major public health problem, creating a significant burden throughout the world. The World Health Organization (WHO) has predicted depression to be one of the most common causes of work disability [1]. In terms of disability-adjusted life-years of illness, depression ranks first in many European countries [2,3]. The largest aggregate study of the prevalence of mental disorders in the European population shows that clinically significant depression has been experienced by an average of 6.9% of the population in a 12-month period [2]. Depression is a mental disorder characterised by a pathologically low mood with a negative, pessimistic assessment of oneself, one's position in the surrounding reality, and one's future. Depression causes emotional, psychological, and physical suffering, which leads to a decrease in the patient's quality of life; impaired family, work, and social adaptation; and often disability. However, the worst consequence of depression is the increased risk of committing suicide. Currently, the most common way to diagnose depression is an interview conducted by a medical professional. In many cases, the interview is accompanied by a clinical questionnaire assessed by a medical doctor, such as the Hamilton Depression Rating Scale (HAM-D), the self-reported Emotional State Questionnaire (EST-Q) [4], or the Mini-Mental State Examination (MMSE) [5], to establish the diagnostic criteria. Other questionnaires, such as the Beck Depression Inventory (BDI) [6] and the Hamilton Depression Rating Scale (HDRS) [7], are also used for screening purposes. Besides subjective clinical questionnaires, the brain activity of patients can be monitored objectively by applying various imaging modalities such as computed tomography (CT), functional magnetic resonance imaging (fMRI), and electroencephalogram (EEG). Of these techniques, EEG stands out as the simplest and most cost-effective. Hence, detecting mental states and disorders by using various EEG feature representations, such as methods based on fast Fourier transform (FFT), discrete wavelet transform (DWT), power spectral analysis (PSA), and others [8][9][10][11][12][13], is an actively researched field showing promising results.
Various advanced machine learning algorithms have been utilised to analyse different modalities of such data and to introduce automated assessment of depression [13][14][15][16][17][18][19]. This paper reports the classification results obtained by using various linear and nonlinear features and provides a general insight into the feature calculation. The main contribution of the paper is the feature selection and the best-performing feature combinations. This article also describes several classifier configurations that improve the classification accuracy. Related Work According to de Aguiar Neto et al. [20], absolute and relative band powers and various other linear and nonlinear features described in this section have been recognised as promising biomarkers for characterizing a depressed brain. The absolute band power (ABP) and relative band power (RBP) of EEG signals have been analysed with separate three-way multivariate analyses of variance (MANOVA), which showed that the RBP was greater in depressed patients than in controls at all electrode locations, with increased ABP at some of the electrode locations [21]. The use of alpha power variability (APV) and relative gamma power (RGP) was proposed by Bachmann et al. [8]. While APV indicates the power and frequency variations in the alpha band, RGP characterises the high-frequency components. The differences between the depressed and control groups appeared statistically significant in a number of EEG channels, leading to a linear regression classification accuracy of 81%. The spectral asymmetry index (SASI) indicates the relative asymmetry between higher and lower frequency bands. According to Hinrikus et al. [22], SASI values differed significantly in all channels between healthy and depressed patients. Single EEG channel analysis has already shown positive results in the detection of depression [8,23]. The nonlinear Higuchi's fractal dimension (HFD) calculates the fractal dimension of a signal in the time domain [24]. Bachmann et al. [25] applied the HFD method to EEG signals and evaluated it using Student's t-test for two-tailed distributions with two-sample unequal variance, to determine whether a statistical difference existed between depressed and healthy subjects. The alterations were statistically significant in all the EEG channels; HFD indicated 94% of the subjects in the depressive group as depressive, while it indicated 76% of the subjects in the control group as non-depressive. The nonlinear Lempel-Ziv complexity (LZC), introduced by Lempel and Ziv [26], measures the complexity of a signal and has been successfully used on EEG signals for the detection of different mental states [27,28]. EEG data from severe Alzheimer's disease patients showed a loss of complexity over a wide range of time scales, indicating a destruction of nonlinear structures in brain dynamics [29][30][31]. Detrended fluctuation analysis (DFA) [32], which indicates long-time correlations of the signal, was applied to evaluate EEG signals and revealed a statistically significant difference between healthy and depressive subjects [33]. In addition, linear discriminant analysis (LDA) reached a classification accuracy of 70.6%, and by combining DFA and the SASI, classification accuracy increased to 91.2% [23]. A comprehensive study by Bachmann et al. [8] showed the diagnostic potential of linear (SASI, APV, RGP) and nonlinear (HFD, DFA, LZC) features to classify depression.
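Since Higuchi's fractal dimension recurs throughout this line of work, a minimal NumPy sketch of the published algorithm (Higuchi [24]) may help make it concrete. This is an illustrative implementation, not the authors' code; the k_max value and input segment are whatever the caller supplies.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Estimate Higuchi's fractal dimension of a 1-D signal.

    For each scale k, average curve lengths L(k) are computed over
    time offsets m; the FD is the slope of log L(k) vs. log(1/k).
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    log_k, log_L = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)                 # sub-sampled time indices
            if idx.size < 2:
                continue
            dist = np.sum(np.abs(np.diff(x[idx])))   # curve length at offset m
            norm = (n - 1) / ((idx.size - 1) * k)    # Higuchi normalisation
            lengths.append(dist * norm / k)
        log_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(lengths)))
    # FD is the least-squares slope of log L(k) against log(1/k)
    slope, _ = np.polyfit(log_k, log_L, 1)
    return slope
```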
Single-channel classification with logistic regression achieved an accuracy of 81% using the APV or RGP measures. The combination of two linear measures, the SASI and RGP, reached an accuracy of 88%, and by combining linear and nonlinear measures, a classification accuracy of 92% was achieved [8]. EEG Recording Procedure The Cadwell Easy II EEG (Kennewick, WA, USA) measurement equipment was used for EEG recordings with 18 channels (reference Cz), which were placed on the subject's head according to the international 10-20 electrode placement system, as shown in Figure 1. During the recordings, the subjects were lying in a relaxed position with their eyes closed. EEG signals within the frequency band of 3-48 Hz were used for further processing. The sample rate was kept at 400 Hz for the linear methods, while signals downsampled to 200 Hz were used for the nonlinear methods, due to their high computational load. The 20 min-long EEG recording was segmented into 10 s segments, and an experienced EEG specialist marked the first 30 artefact-free segments (5 min in total) by visual inspection, for the subsequent feature calculation. The gathering of questionnaires and EEG recordings was carried out by Tallinn University of Technology (TalTech), in accordance with the Declaration of Helsinki, and the process was formally approved by the Tallinn Medical Research Ethics Committee. All participants signed a written informed consent. The dataset itself was provided to the authors by Tallinn University of Technology under a legal agreement for research purposes. (Information about obtaining the dataset can be requested by contacting M. Bachmann at <EMAIL_ADDRESS>.) Dataset The recorded dataset consisted of the EEG signals from 20 subjects, who were selected for further analyses from 55 subjects who regularly visited the occupational health doctor. The dataset consisted of 14 females and 6 males within the age range of 24-60 years. Half of the selected subjects had been diagnosed with depression at some point in their lives (referred to as depressed subjects for simplicity), while the healthy control group had never had a depression diagnosis. In addition, the healthy control group was chosen considering their low HAM-D and EST-Q scores, to ensure they did not exhibit any signs of depression or other mental disorders (see Table 1). All subjects were gender matched, and the subject age for the healthy controls was chosen to be as close as possible to the age of the depressed subjects. Hamilton Depression Rating Scale The HAM-D is the most widely used clinician-administered depression assessment scale. Although the rating scale has been criticised for use in clinical practice, in this study it was used as additional information for selecting healthy subjects. In situations where more than one healthy subject was a candidate match for a depressive subject, the one with the lowest HAM-D score was chosen. The mean HAM-D score among the healthy subjects was 3.1, where scores of 0-7 indicate no depression; the mean score of 9.3 for the depressive subjects corresponds to mild depression. Emotional State Questionnaire The Emotional State Questionnaire (EST-Q) [34] was originally compiled for use by the lecturers of the psychiatric clinic of the University of Tartu in Estonia. The self-assessed questionnaire consists of 28 statements assessing the major depressive and anxiety disorders and their associated symptoms during the last month.
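The preprocessing described above (3-48 Hz band, 400 Hz sampling, 10 s segmentation) can be sketched as follows. The paper does not state its filter design, so the 4th-order zero-phase Butterworth filter here is an assumption; only the band limits, sample rate, and segment length come from the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 400             # sampling rate for the linear features (Hz)
BAND = (3.0, 48.0)   # pass band reported in the paper (Hz)
SEG_SEC = 10         # segment length (s)

def preprocess(eeg, fs=FS):
    """Band-pass filter a (channels x samples) EEG array and cut it
    into non-overlapping 10 s segments.

    Returns an array of shape (n_segments, channels, segment_samples).
    The Butterworth filter order is an assumed implementation detail.
    """
    b, a = butter(4, [BAND[0] / (fs / 2), BAND[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=-1)   # zero-phase filtering
    seg_len = SEG_SEC * fs
    n_seg = filtered.shape[-1] // seg_len
    return np.stack([filtered[:, i * seg_len:(i + 1) * seg_len]
                     for i in range(n_seg)])
```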
The questionnaire consists of 3 basic scales and 3 additional scales. Major scales include the depression (DEP), general anxiety (AUR), and panic-agoraphobia (PAF) subscales. Additional subscales include social anxiety (SAR), asthenia (AST), and insomnia, which was not used. The scale's total score can be used as an overall indicator of the severity of emotional symptoms. The EST-Q was used in the current study for selecting healthy subjects. The subscale values of all the selected subjects were below the threshold for the given condition, except for 2 healthy subjects, whose asthenia subscale was greater than 6. The threshold values can be found in Table 1; if a scale value is greater than the listed threshold, then the subject has the given condition. Features EEG brain signals are nonlinear by nature and linked to particular brain activity, which can be analysed through various linear and nonlinear signal-processing methods. Alpha Power Variability The alpha band signal (8-12 Hz) was obtained by a band-pass filter. Next, the APV was calculated for the artefact-free 10 s segments in three steps. First, the alpha band signal power in a time window T of N = 4000 samples was calculated as $W_T = \frac{1}{N}\sum_{r=1}^{N} V^2(r)$, where V(r) is the amplitude of the alpha band signal at sample r and N is the number of samples in the time window T. Afterwards, the APV was calculated as $APV = \sigma / W_0$, where $W_0$ is the value of the alpha band power averaged over 5 min and σ is the standard deviation of the power over those segments. Spectral Asymmetry Index The SASI evaluates the power at higher and lower frequencies and was calculated as the relative difference between the higher and the lower EEG frequency band powers. The balance of the powers characterises the EEG spectral asymmetry [22]. The powers in the lower and higher frequency bands were calculated as $W_l = \sum_{f \in B_l} P(f)$ and $W_h = \sum_{f \in B_h} P(f)$, where the band limits of $B_l$ and $B_h$ are positioned relative to $F_c$, the central frequency of the EEG spectrum maximum in the alpha band, which was calculated for each person individually. The SASI in channel m for a subject n was then calculated as the normalised difference $SASI_{n,m} = (W_h - W_l)/(W_h + W_l)$. Nonlinear Features Nonlinear methods are used to capture the chaotic behaviour in EEG signals, which occurs due to the underlying physiological activity in the brain [35]. To describe the brain activity of the subjects, we used the Higuchi fractal dimension (HFD), Lempel-Ziv complexity (LZC), and detrended fluctuation analysis (DFA). Higuchi Fractal Dimension The fractal dimension provides a measure of the complexity of time series such as the EEG. The values of the HFD for each electrode were calculated according to Higuchi [24] with the parameter k_max = 8. Lempel-Ziv Complexity The complexity of the signal can be quantified by the LZC [36], describing the spatiotemporal activity patterns in high-dimensional nonlinear systems. This can reveal the regularity and randomness in EEG signals. For the LZC calculation, each signal segment was converted into a binary sequence as $s(n) = 1$ if $x(n) \ge m$ and $s(n) = 0$ otherwise, where x(n) is the signal segment, n is the segment's sample index from 1 to N (the segment length), and m is the threshold value. The binary sequence s(n) was scanned from left to right, counting the number of different patterns. The complexity value c(n) was increased every time a new pattern was encountered. LZC values were calculated as $LZC = c(N)/b(N)$, where $b(N) = N/\log_2 N$ is the upper bound of c(n), used to normalise the LZC values and avoid variations due to segment length.
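The binarisation-and-parsing procedure above can be sketched in a few lines. The median threshold and the greedy dictionary-style pattern counting below are common implementation choices that the text does not pin down, so treat this as one plausible reading rather than the authors' exact scheme.

```python
import numpy as np

def lempel_ziv_complexity(x, threshold=None):
    """Normalised Lempel-Ziv complexity of a 1-D signal segment.

    The segment is binarised against a threshold m (median here, an
    assumed choice), distinct patterns c(N) are counted in a greedy
    left-to-right parse, and the count is normalised by
    b(N) = N / log2(N).
    """
    x = np.asarray(x, dtype=float)
    if threshold is None:
        threshold = np.median(x)              # assumed binarisation threshold
    s = "".join("1" if v >= threshold else "0" for v in x)
    n = len(s)
    patterns, i, k = set(), 0, 1
    while i + k <= n:
        word = s[i:i + k]
        if word in patterns:
            k += 1                            # extend the current word
        else:
            patterns.add(word)                # new pattern encountered
            i += k
            k = 1
    return len(patterns) / (n / np.log2(n))
```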
Detrended Fluctuation Analysis DFA is applied to evaluate the presence and persistence of long-range correlations in time in EEG signals. It has been discovered that the resting EEG of healthy subjects exhibits persistent long-range correlation over time [33]. DFA was calculated in the time domain according to the steps described by Peng et al. [32]. All methods were evaluated using 10-fold cross-validation; in addition, to keep the training data as balanced as possible, each fold had an equal number of healthy and depressed subjects. In the case of predictions for the weighted and boosted ensembles, the training set in each fold underwent an additional nine-iteration procedure (see Figure 2), to obtain prediction results for all samples in the training fold. Afterwards, the weights W of the classifier votes were fitted according to the results on the training set. Similarly, AdaBoost used predicted class results from the training set to calculate weights for each of the classifiers in the ensemble. Feature Selection It is known that cognitive disorders can introduce observable changes in measured EEG recordings. Depending on the feature calculations used, each brain region might show a statistically significant difference when compared to the brains of cognitively normal patients. Therefore, to select the most relevant electrode locations, we used feature subset selection methods that were applied in a preprocessing step before the machine learning algorithms. In particular, we used the F-test, which is widely used for showing a statistical significance between two classes, and ReliefF, which is a rank-based feature selector. Univariate Feature Ranking Using F-Tests The univariate feature ranking algorithm helps to understand the significance of each feature by examining the importance of each predictor individually using an F-test. Each F-test tests the hypothesis that the response values grouped by predictor variable values are drawn from populations with the same mean, against the alternative hypothesis that the population means are different [37]. ReliefF The base algorithm Relief, created by Kira and Rendell [38], is an inductive learning system that was initially developed for classifying binary problems using discrete and numerical features. The algorithm penalises predictors that give different values to neighbours of the same class and rewards predictors that give different values to neighbours of different classes. ReliefF, an extended version of the Relief algorithm, was developed by Kononenko et al. [39], who proposed the L1 distance for finding near-hit and near-miss instances. Machine Learning Algorithms The supervised learning algorithms used in this study have been widely used in various EEG classification tasks, according to survey papers published by Lakshmi et al. [40] and other articles that describe the use of the following algorithms for binary classification: • Support vector machine (SVM) [41] with the radial basis function (RBF) kernel; • Linear discriminant analysis (LDA) [42] with the diagonal covariance matrix for each class; • Naive Bayes (NB) [43]; • K-nearest neighbours (kNN) [44] with 4 neighbours; • Decision tree (D3) [43]. In addition to individually evaluating the results for the listed classifiers and feature types, an ensemble approach was also implemented, where classifiers trained on all 9 feature types vote to predict the class label.
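The classifier bank and the balanced 10-fold evaluation just described can be sketched with scikit-learn as follows. This is a hedged sketch: hyper-parameters not stated in the paper are left at library defaults, and scikit-learn's LDA estimates a full covariance matrix, whereas the paper's diagonal-covariance variant would need a custom estimator.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

def classifier_bank():
    """The five binary classifiers named in the paper."""
    return {
        "SVM": SVC(kernel="rbf"),
        "LDA": LinearDiscriminantAnalysis(),   # full covariance, see note above
        "NB": GaussianNB(),
        "kNN": KNeighborsClassifier(n_neighbors=4),
        "D3": DecisionTreeClassifier(),
    }

def cv_accuracies(X, y, n_splits=10):
    """Stratified 10-fold CV keeps each fold balanced between the
    healthy and depressed classes, mirroring the protocol above."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    acc = {name: [] for name in classifier_bank()}
    for train, test in skf.split(X, y):
        for name, clf in classifier_bank().items():
            clf.fit(X[train], y[train])
            acc[name].append(accuracy_score(y[test], clf.predict(X[test])))
    return {name: float(np.mean(v)) for name, v in acc.items()}
```

With 20 subjects, each stratified test fold holds one healthy and one depressed subject, which is what the balanced-fold requirement amounts to here.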
Ensemble Methods The implemented ensemble [45] votes were weighted according to majority voting, where all weights are equal, and weighted voting, where weights are set according to the classifier test set accuracy obtained by the procedure shown in Figure 2. The ensemble assigns a label to a given sample by computing, for each class c, the weighted vote sum $g_c = \sum_{i=1}^{m} w_i d_{i,c}$, where m indicates the number of classifiers, $w_i$ is the classifier weight, and $d_{i,c}$ is the classifier's vote (1 if classifier i assigns the sample to class c, and 0 otherwise). The class label is decided as $\hat{y} = \arg\max_c g_c$. As a third ensemble method, we chose adaptive boosting (AdaBoost) [46], to see if it was possible to find a better weight combination in comparison to majority and weighted voting. The aim of AdaBoost is to convert a set of weak classifiers into a strong classifier. Results and Discussion The baseline accuracy was established by individually evaluating all the feature types. In Table 2, the results for the classifiers reached acceptable accuracy, with the HFD and LZC reaching above 80% for at least one of the classifiers. For other feature types, selecting all electrodes of a feature type did not guarantee the best classification results. For some of the feature types, only a few electrodes provided statistically relevant information, and the remaining electrodes could be considered irrelevant or redundant [8]. A brute-force approach could be used to check all feature combinations to find which feature sets perform better than others, but this would be a time-consuming process. Therefore, the most relevant features were determined according to the feature ranking provided by the F-tests and the ReliefF algorithm. The selected-feature evaluation started with the most relevant feature, and in each iteration, the next feature in descending order of relevance was added to the feature set used in classification. The ranking of the features was provided by the feature-selection algorithms. Each iteration underwent 10-fold cross-validation. The optimal feature set was selected according to the highest root-mean-square (RMS) value calculated from the accuracies of all five classifiers for each feature type. Figure 3 shows an example of feature selection according to the described procedure, where electrodes {O2, O1} were selected as the best option for the B_rbp feature type, as the highest RMS value occurred at the O1 electrode. Similarly, the procedure was repeated for all feature types to obtain the best-performing features shown in Table 3. As a limitation, all proposed feature combinations had to be classified; the procedure can therefore be computationally heavy when EEGs with more electrodes or larger datasets are used. Comparing the baseline results (Table 2) with the selected-feature classification results from Tables 4 and 5, it can be observed that, on average, the features selected based on the F-test ranking outperformed the baseline results, and ReliefF had the best overall classification results. In addition, to reduce the effect of subject order in the dataset, the reported classification results represent the mean results of 100 iterations in which the subject locations in the training and testing sets were randomised. A more robust solution can be achieved using an ensemble approach where many weak classifiers contribute to the predicted class by voting. Each result shown in Table 6 is the result of combining nine classifiers of the same type. The features used for each feature type were selected according to Table 3.
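The weighted vote rule above reduces to a few lines for binary labels. A minimal sketch, assuming 0/1 class labels and one weight per classifier (equal weights recover plain majority voting; accuracy-derived weights give the weighted ensemble):

```python
import numpy as np

def weighted_vote(predictions, weights=None):
    """Combine binary votes from m classifiers.

    `predictions` is an (m, n_samples) array of 0/1 labels;
    `weights` holds one weight per classifier. With equal weights
    this is majority voting.
    """
    predictions = np.asarray(predictions)
    m = predictions.shape[0]
    w = np.ones(m) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    # weighted fraction of classifiers voting for class 1
    score = w @ predictions
    return (score >= 0.5).astype(int)

# Example: three classifiers, weights proportional to training accuracy
votes = np.array([[1, 0, 1],
                  [1, 1, 0],
                  [0, 1, 1]])
print(weighted_vote(votes, weights=[0.9, 0.8, 0.6]))
```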
The ensemble approach further improved the results when the F-test and ReliefF feature selection algorithms were used. On average, the ReliefF classification results outperformed ensembles whose features were selected according to F-tests. The use of AdaBoost for classifier weight selection did not, in most cases, significantly improve the results compared to the majority and weighted voting ensembles. Due to the nature of AdaBoost, during the weight calculation process the algorithm can reach optimal weights using only a few of the classifiers and ignore the rest, which can hinder the robustness of the ensemble. Instead of focusing on the classification of feature types individually, combined features were also evaluated. Table 7 clearly shows the benefit of feature selection when compared to using all 162 features. For the most part, the classification results for features selected based on F-tests and ReliefF were higher than the baseline results obtained by selecting all features. In addition, feature selection from all features gave promising results, especially while using only the top-ranked features based on ReliefF. The features used in Table 7 (last row) were selected according to the same procedure used for the feature types. Conclusions This study showed the results for linear (RBP, APV, SASI) and nonlinear (HFD, LZC, DFA) EEG features in various combinations for classification of the long-lasting effects of depression. The described feature types and classification methods (RBF SVM, LDA, NB, kNN, D3) were used to classify 20 age- and gender-matched subjects. The 10 healthy and 10 previously depressed subjects were classified with 82.55% accuracy with the HFD using D3, and with 80.70% with the LZC using the RBF SVM binary classifier. The results improved when algorithms such as univariate feature ranking using F-tests and ReliefF were used, which raised the classification accuracy up to 91.5%. In addition, the ensemble setup with majority voting reached 93.30% using the NB classifier. The results also suggest that the electrodes A_rbp.O1, A_rbp.O2, and B_rbp.O2, selected from all available features according to ReliefF, were sufficient to classify the subjects with 80-95% accuracy. The best combination, which achieved consistently high accuracy among all classifiers, was an ensemble using ReliefF-selected features with equally weighted predictions for all feature types. The study shows that EEG features used in classifying patients with depression at the time of the recording can also be used to measure and classify the long-lasting effects of depression. The obtained results give reasonable justification for further gathering of EEG data according to the currently used protocol to measure the long-lasting effects of depression. As future work, we are planning to raise funding for a large-scale study and to further test the proposed approach with the aim of using it in assisted diagnostics.
5,066.6
2021-03-07T00:00:00.000
[ "Medicine", "Computer Science" ]
The headings of the Psalms in Aquila, Theodotion and Symmachus In Codex Ambrosianus of the Syro-Hexapla, marginal readings related to the headings of some of the Psalms occur. The importance of these variants for the history of the Greek and Syriac Psalm headings warrants further discussion. To this end, this paper undertakes a comparative study of the marginal notes that accompany the headings of the Psalms in the Syro-Hexapla. These notes do not occur for all headings, and only rarely do variants from all three witnesses occur (as is the case for Ps 7). These variants are compared to the readings of the headings of the Septuagint (LXX) and the Syro-Hexapla. Three matters are investigated, namely the rendering of the technical term לַמְנַצֵּחַ in the Three (Aquila, Theodotion and Symmachus), references to the name of David, and some instances where the LXX has a substantial plus in comparison to the Masoretic Text (MT), such as in Psalms 98 (97), 104 (103), 43 (42) and 56 (55). Research to date demonstrates that Field did not use these notes in Codex Ambrosianus to their full extent. As far as the three elements under investigation are concerned, this paper demonstrates that the Three frequently differ from the LXX in their rendering of certain aspects of the headings. In some instances, the Three reflect a rendering much closer to the Hebrew. In others, they contain a rendering that is dependent on the Hebrew, but which displays a lack of understanding of especially some of the technical terms in the Hebrew. Contribution: The research sheds new light on the variants recorded in the margin of Codex Ambrosianus and their value for the text-critical study of the headings of the Psalms in the MT and the Septuagint. Textual criticism is one of the core disciplines for the study of the text of the Hebrew Bible and its translation and transmission in different ancient languages. Introduction In Codex Ambrosianus of the Syro-Hexapla, marginal readings related to the headings of some of the Psalms occur (Ceriani 1874). The importance of these variants for the history of the Greek and Syriac Psalm headings warrants further discussion. In his study of the Syro-Hexaplaric Psalter, Robert Hiebert (1989:260-261) discusses the marginal notes in the different manuscripts he studied for his edition of this psalter. He compares them with the readings noted by Frederick Field (1875) and makes some corrections and additions. Hiebert did not make a comprehensive study of these notes, but his additions and corrections remain very valuable for the study of the Syro-Hexapla. This paper will undertake a comparative study of the headings of these Psalms, with attention to the notes referring to Aquila, Theodotion and Symmachus. These notes do not occur for all headings, and only rarely do variants from all three witnesses occur (such as in Ps 7). The following Psalms have variants: 3-15, 17-23, 28, 29, 33, 35, 37-41, 43-58, 60-69, 75-80, 82-84, 86, 87, 91, 97, 99, 101-103, 107, 110, 111, 119-122, 126, 130, 131, 138-141 and 144. These variants will be compared to the readings of the headings of the LXX and the Syro-Hexapla. For the purpose of this paper, only three matters will be considered, namely the rendering of the technical term לַמְנַצֵּחַ in the Three, the references to the name of David, and some instances where the LXX has a substantial plus in comparison to the MT, such as in Psalms 98 (97), 104 (103), 43 (42) and 56 (55). Introductory remarks on the Psalter in the Syro-Hexapla It is impossible to discuss the Syro-Hexapla in detail here. Ignacio Carbajosa Pérez presents a good survey of important issues (2016; see also Hiebert 2001). Hiebert (2005) discusses the Syro-Hexaplaric Psalter as well (see also Hiebert 1989:5-14).
Hiebert discusses whether the Psalms in the Syro-Hexapla can be considered a witness to the hexaplaric tradition dating back to Origen. Although he finds more hexaplaric influence in the text than Rahlfs (1979), he states that it cannot be regarded as a primary witness to that recension (Hiebert 1989:235, 247). Hiebert (1989) discusses the evidence for this claim in detail in Chapter III. In a later publication, he discusses the Syro-Hexapla and other later Syriac translations of the Psalms (Hiebert 2017). Norton (1995:194; see also Fraenkel 2000:317) states that the Syro-Hexapla is the most complete witness to the Origenic recension and agrees with Rahlfs when he states that it is not faithful to that recension. Jenkins (1998:86) discusses the marginal notes in the Syro-Hexaplaric Psalms (and Job) with a view to the possibility of a Tetrapla in addition to a Hexapla. His view is that the text of the Syro-Hexapla is based on the Tetrapla and that this is the reason why it differs from the hexaplaric evidence in, for example, the Gallicanum. Fraenkel (2002:309-310) is reluctant to enter the debate about the Hexapla and Tetrapla. As the notes in the margin of the Syro-Hexaplaric Psalter are not in the first instance related to the recension of Origen, the possibility that the Syro-Hexapla represents a different version of the Origenic recension is not that important for the discussion of the notes to the headings. As far as the origin of the marginal notes is concerned, they were probably added by Paul of Tella (Hiebert 1989:261). With regard to the marginal notes in the Syro-Hexaplaric Psalter, Fraenkel (2000:317) notes that because the text of the Syro-Hexapla is not Origenic, it raises questions about the hexaplaric fragments in the marginal notes. This is complicated by the lack of Greek notes in some of the hexaplaric manuscripts. Manuscript E is closer to the tradition of A, B and C, but presents its own unique character in many headings. Manuscript F has quite a number of unique variants (Van Rooy 2005:126). The variants in these manuscripts will be discussed only in instances where the marginal notes agree with any of the variants in the specific heading. The headings of the Psalms in the LXX are paramount in the study of the headings in the Syro-Hexapla. It is impossible to discuss the headings of the Psalms in detail, but in the examples discussed below, the headings in the LXX must be taken into consideration. In this regard, three publications of Albert Pietersma deserve specific attention.
Where necessary, they will be used in the discussion below (Pietersma 1980, 2013a, 2013b). Manuscript F of the Syro-Hexapla is intriguing in this regard. Some of the variants are significant, because they demonstrate that Manuscript F is unique in many respects. This manuscript is discussed in more detail by Van Rooy (2005), and the remarks below are summarised from that contribution. With regard to ܒܫܘܠܡܐ in SyrPs, the variant that appears frequently in manuscripts H and J (ܠܫܘܠܡܐ) has already been noted. ܠܫܘܠܡܐ also occurs in Manuscript F in Psalms 9, 11 (10) and 19. David in the Three In 1980, Pietersma published a seminal article on David in the LXX. In this article, he draws a number of significant conclusions. He demonstrates that the phrase לְדָוִד is consistently rendered as τῷ Δαυιδ in the Old Greek and that this is frequently changed to τοῦ Δαυιδ in the course of the transmission of the text. This phrase is also frequently added to psalm titles, with the result that more psalms are ascribed to David. It is interesting to consider this situation in the Syro-Hexapla as well as in the notes to the Three in Codex Ambrosianus. As far as the Syro-Hexapla is concerned, one might expect that τῷ Δαυιδ would have been rendered as ܠܕܘܝܕ, and τοῦ Δαυιδ as ܕܕܘܝܕ. These two forms are indeed encountered in the Syro-Hexapla: together they appear 80 times. Of these instances, the form ܕܕܘܝܕ appears in only 11 instances. Field neglects to note that the two words of the heading of Psalm 43 (42) are in the same order in the Syro-Hexapla as in Codex Sinaiticus. As far as the references to the Three in the margin of Codex Ambrosianus are concerned, in many cases no notes appear, and notes to all three appear in only a very few instances. The name David occurs in 39 notes to Aquila, seven with the preposition and 32 with the relative. In Symmachus, the preposition occurs once and the relative 21 times. In Theodotion, only two prepositions and one relative occur, because of the paucity of notes to Theodotion in the margin. Field does not note that in the margin to the heading of Psalm 19 (18), the reading of Aquila and Symmachus has the name with the relative particle, in contrast with the preposition in the Syro-Hexapla (Hiebert 1989:265-266). The LXX has the dative, with no variant in any of the witnesses (see also Pietersma 1980:215). In this regard, the Syro-Hexapla agrees with the LXX. There is no good reason for Aquila and Symmachus to have the relative particle, which indicates that they were not consistent in rendering the Hebrew name of David with the preposition. Only a few of these notes are truly significant. In Psalm 58 (57), Aquila omits the reference to David in the note, disagreeing with the MT, LXX and Syro-Hexapla. In Psalm 108 (107), the Syro-Hexapla has the relative, and the Three the preposition. In this instance, the Three agree with the LXX (see also Pietersma 1980:215). In Psalm 132 (131), the Syro-Hexapla follows some LXX witnesses in adding the reference to David. This reference is explicitly omitted in the margin by Aquila and Symmachus. Field does not provide much additional information in this regard. When one considers the Psalms where the LXX adds David to the heading, Field frequently provides a reference to Origen stating that the Psalm is without a heading in the Hebrew. In most of these instances, there are no references to the Three at all.
In the case of Psalm 33 (32), Field notes that Origen includes a note stating that this Psalm is without a heading in the Hebrew and the Three (Field 1875:137). In most of the other instances, Field maintains that Origen states that the Psalm is without a heading in the Hebrew, without any reference to the Three. It would seem that in the cases where Origen has a reference stating that the Hebrew does not have a heading, the marginal notes in Codex Ambrosianus give a reference to the Three only infrequently, as they would normally agree with the Hebrew in this regard. However, in a number of instances, Field does not indicate the variant readings of Aquila and Symmachus. A selection of major variants In his article published in 1980, Pietersma deals extensively with those Psalms containing extra-MT Davidic ascriptions in the Old Greek. For the purpose of this paper, it would have been ideal to discuss those 13 headings in detail. Unfortunately, only two of them have marginal notes in the Syro-Hexapla, that is, 98 (97) and 104 (103), while only one other has a reference to the Three in Field (43 [42]). In the case of Psalm 98 (97), the Hebrew heading is merely מִזְמוֹר, while the majority of the manuscripts of the LXX add τῷ Δαυιδ. A number of Lucianic manuscripts switch the two elements, and a further few add that the Psalm does not have a heading in the Hebrew. Codex Ambrosianus contains a note giving the heading without the addition of the name of David in Symmachus. For this Psalm, this is the only reference to the Three in Field. In the case of Psalm 104 (103), the MT does not have a heading; the LXX has τῷ Δαυιδ, which agrees with 11QPsa, and Codex Alexandrinus contains the genitive. Some witnesses to the LXX have further additions, and the Syro-Hexapla also has a long addition. The first part agrees with the additions in some manuscripts of the LXX (about the creation of the world), with the following further addition: ܡܛܠܕܗܠܝܢܠܟܘܢܥܒܕܬ ('because of what she did for you'). The note in Codex Ambrosianus only contains the heading ܕܕܘܝܕ for Aquila, agreeing with the reference in Field. Can this be an indication that Aquila used a Hebrew manuscript related to the scroll from Qumran? Pietersma (1980:225) thinks it is possible, and it is indeed possible. When one considers a selection of other Psalms, it is clear that the references to the Three in the margin of Codex Ambrosianus frequently agree with the MT in instances where the Greek has a plus. For example, the short plus to the heading of Psalm 29 (28), ἐξοδίου σκηνῆς ('at the festival of the tabernacle'), appears in the Syro-Hexapla (ܡܦܩܢܐܕܡܫܩܢܐ, 'at the departure of the tabernacle'), but is omitted by Aquila, as noted in the margin of Codex Ambrosianus. The codex contains a note to the expression in the Syro-Hexapla, stating that it refers to the 8th day of the feast of tabernacles (see also Pietersma 2013a:192). Pietersma discusses this text in some detail, stating that the addition in the LXX occurred during the transmission of the text in Greek (see also Pietersma 2013a:193-194). Similar examples occur in Psalms 38 (37) and 44 (43). In Psalm 38 (37), the LXX adds 'about the Sabbath'. Pietersma is also of the opinion that this addition is an exegetical one, added during the transmission in Greek (see also Pietersma 2013a:199-200, 2013b). The Syro-Hexapla also contains this addition.
The marginal note states that according to Aquila, the heading should only be 'a Psalm of David', and it uses the relative and not the preposition as in the Syro-Hexapla. The remark about this in Field ascribes this reading to the Syro-Hexapla, marked with an asterisk in Codex Ambrosianus. The remark is actually the reading of Aquila according to the marginal note. The asterisk in Codex Ambrosianus relates only to 'of the Sabbath'. Field's remark is confusing. In Psalm 44 (43), the LXX adds 'a psalm' at the end of the heading. This addition appears neither in Codex Sinaiticus nor in some of the other witnesses to the LXX. The addition is also present in the Syro-Hexapla, although it is omitted in Manuscript F. It is also omitted by Symmachus. Field has a similar reading for Aquila, but that reading is not in the marginal note in Codex Ambrosianus. It is quite clear that Aquila understood the reference to the dove, but connected the word אֵלֶם to its root meaning of 'speechless'. His rendering contains the same text as the Hebrew but presents a different interpretation. Following this, he connected the Hebrew רְחֹקִים to David and not to the oak trees as in the Hebrew. Although the rendering of Aquila differs from the sense of the Hebrew, it is clearly a rendering dependent on the Hebrew. The reading for Symmachus in the margin is as follows: ܕܙܟܘܬܐܚܠܦܝܘܢܐܟܪܡܢܫܪܒܬܗܪܚܝܩܗܘܐܕܘܝܕܗܘܡܟܝܟܬܪܥܝܬܐܘܕܐܠ ܡܘܡܐܡܬܝܕܐܚܕܘܗܝ ('Of victory. On the dove. When David was far from his family, he was humble of intelligence and flawless. When they caught him'.) In this rendering, it is evident that the reference to the dove was understood, but the reference to the distant oak trees was not; as a result, the phrase was connected to David. In the last part, the reference to Gath is omitted. It seems that this rendering is dependent on a faulty interpretation of the Hebrew. As far as the word ܪܚܝܩ is concerned, the marginal note seems to read the first consonant as a ܕ, not a ܪ. This must be either an error or a misreading of the punctuation. Field (1875:181, Note 2) quotes the Syriac as having a ܕ. In a previous note about the Cyrus, Field gives in brackets a variant reading of his Codex C, with a ܪ (Field 1875:181, Note 1). Conclusions This paper dealt with three issues relating to the marginal notes in Codex Ambrosianus, namely the rendering of the technical term לַמְנַצֵּחַ in the Three (Aquila, Theodotion and Symmachus), references to the name of David, and some instances where the LXX has a substantial plus in comparison to the MT. Much additional research can be done on all the notes in this codex. The research to date demonstrates that Field did not use these notes to their full extent. As far as the three elements under investigation are concerned, it has been demonstrated that the Three frequently differ from the LXX in their rendering of certain aspects of the headings. In some instances, the Three reflect a rendering much closer to the Hebrew. In others, they contain a rendering dependent on the Hebrew, but which displays a lack of understanding of especially some of the technical terms in the Hebrew.
4,046
2022-04-29T00:00:00.000
[ "Linguistics" ]
Isolation of multiple electrocardiogram artifacts using independent vector analysis Electrocardiogram (ECG) signals are normally contaminated by various physiological and nonphysiological artifacts. Among these artifacts, baseline wandering, electrode movement and muscle artifacts are particularly difficult to remove. Independent component analysis (ICA) is a well-known technique of blind source separation (BSS) and is extensively used in the literature for ECG artifact elimination. In this article, independent vector analysis (IVA) is used for artifact removal from ECG data. This technique takes advantage of both canonical correlation analysis (CCA) and ICA due to its utilization of second-order and higher-order statistics for un-mixing of the recorded mixed data. The utilization of the recorded signals along with their delayed versions makes the IVA-based technique more practical. The proposed technique is evaluated on real and simulated ECG signals, and the results show that it outperforms the CCA and ICA because it removes the artifacts while altering the ECG signals minimally. INTRODUCTION An electrocardiogram (ECG) is an important tool to measure the electrical activity generated by the sinoatrial (SA) node, which causes the upper heart chambers (atria) to contract. ECG is an effective tool for investigating heart-related problems like arrhythmia and is widely adopted in a number of practical applications. ECG signals have been utilized for automatic detection of myocardial infarction. Kumar, Pachori & Acharya (2017) investigated ECG signals for detection and characterization of coronary artery disease. Similarly, a heart failure detection technique based on ECG signals has been presented, and the extraction of fetal ECG from maternal ECG is achieved in Su & Wu (2017). Qingxue & Zhou (2018) developed a person identification technique based on ECG signal processing. Moreover, ECG-based silent myocardial infarction as well as the long-term risk of heart failure is diagnosed in Qureshi et al. (2018). Meanwhile, modern, efficient ECG data recording and analysis systems have also been designed, even in wireless scenarios (Tao et al., 2018; Elgendi et al., 2018; Han et al., 2018; Tanguay et al., 2018; Orphanidou, 2018). However, the recorded ECG signals are normally affected by different types of electro-physiological and non-electro-physiological artifacts. Artifact-affected ECG cannot be adopted in sensitive applications. Hence, efficient removal of artifacts is necessary for ECG signal analysis in various applications. Removal of these artifacts before further processing makes the design of ECG instruments simpler and produces accurate results. In the literature it is well known that, among ECG artifacts, baseline wandering (BW), electrode movement (EM), and muscle artifacts (MAs) are the most challenging to separate from the recorded ECG signals (Hesar & Mohebbi, 2017; Varanini et al., 2016; Zarzoso & Nandi, 2001). BW is normally generated through body movements, breathing, and loose sensor contacts. EM is the result of variations of electrode positions over the human body surface, and MA is caused by contraction of the muscles near the electrode (Limaye & Deshmukh, 2016). The main challenges associated with the removal of these artifacts are their unpredictable amplitudes and variable frequency ranges (Hegde, Deekshit & Satyanarayana, 2012).
Related work Numerous studies have contributed to artifact removal from ECG signals, using algorithms like the extended Kalman filter (Hesar & Mohebbi, 2017), least mean squares (LMS) (Rahman, Shaik & Reddy, 2009) and the Wiener filter (Chang & Liu, 2011). ECG signal denoising and classification schemes based on projected and dynamic features are presented in Chen et al. (2017). High-density muscle noise removal from the recorded ECG signal is performed in Wang et al. (2020) using the independent vector analysis (IVA) technique. Separation of the fetal and maternal ECG signals is carried out in Sugumar & Vanathi (2016) through the IVA technique. Successive local filtering based denoising is discussed in Mourad (2022). Deep learning based ECG de-noising techniques are proposed in Rahhal et al. (2016) and Rasti-Meymandi & Ghaffari (2022). The segmented beat classification and de-noising method discussed in Agostinelli et al. (2016) proposes a filtering technique to suppress the noise, followed by the detection of the QRS complex from the ECG signals using the MIT-BIH Noise Stress Test Database. Time-series clustering techniques, used for ECG classification and artifact removal in Rodrigues, Belo & Gamboa (2017), extract the features that best characterize the signal over time and group its samples into individual clusters through an agglomerative clustering approach. Moreover, the blind source separation (BSS) technique called independent component analysis (ICA) is also used for fetal ECG extraction and artifact removal in Varanini et al. (2016), Sameni et al. (2007), and Jafari & Chambers (2005). ECG signal classification and de-noising are also performed using ICA in Uddin & Alam (2009), Sameni, Jutten & Shamsollahi (2008), Vayá et al. (2007), and Rieta et al. (2004). In all these applications, mixed data is first recorded through electrodes and then processed using ICA algorithms for un-mixing and further classification. The IVA technique has already been used for gradient noise removal from electroencephalogram signals in Acharjee et al. (2015). Adaptive filtering techniques and ICA are used for ECG artifact removal; however, in Zarzoso & Nandi (2001), it is shown that ICA outperforms the adaptive filtering techniques for ECG artifact removal. Moreover, ICA is recommended by various researchers for artifact removal, but some inefficiencies of ICA are also reported in Urrestarazu et al. (2004) and Shackman et al. (2009). In the literature, canonical correlation analysis (CCA) is used as an alternative to ICA (De Clercq et al., 2006), but it has not yet been used for ECG artifact removal. CCA utilizes the original signals as well as delayed versions of the signals. It is based on second-order statistics (SOS) and extracts maximally auto-correlated and mutually un-correlated signals (De Clercq et al., 2006). From Mowla et al. (2015), it is known that CCA is an efficient and practically usable technique as compared to ICA. Moreover, ICA utilizes higher-order statistics (HOS) to exploit statistical independence, while CCA is based on SOS to recover statistically un-correlated sources. It is clear from statistical theory that un-correlatedness is a weaker condition than independence. A recently developed BSS technique called independent vector analysis (IVA) combines the advantages of both ICA and CCA in a single framework (Anderson et al., 2014). IVA processes the original and time-delayed versions of the signals (just like CCA) while utilizing the HOS (like ICA).
IVA assumes that the source signals in one data set are independent of each other and that at least one source is dependent on a source in another data set. Moreover, from Anderson et al. (2014) it is known that IVA performs well as compared to ICA and CCA. Mohammed, Hassan & Ferikoglu (2021) in particular mentioned that the ICA algorithm gives more accurate results than the extended Kalman filter in reducing baseline wandering and electrode movement artifacts. It is also important to mention that in low-frequency applications ICA gives more accurate results (Mohammed, Hassan & Ferikoglu, 2021; Villena et al., 2018). Contribution Based on this discussion, an IVA-based technique is proposed in this article for ECG artifact removal. This is the first article that proposes the IVA-based technique for ECG artifact removal. The IVA-based technique produces clearer and more visible ECG signals that might help medical specialists to observe some very low amplitude electrophysiological effects of the heart. In this article, the performance of three IVA algorithms, called IVA-L, IVA-G, and IVA-GGD, is investigated for ECG artifact removal. The IVA-L algorithm utilizes the HOS and assumes a Laplacian distribution for the source component vectors (Kim et al., 2007). The IVA-G algorithm exploits linear dependencies without taking into account the HOS; it assumes a Gaussian distribution for the mixing sources (Anderson, Adali & Li, 2012). The IVA-GGD algorithm utilizes both the SOS and HOS while assuming a multivariate generalized Gaussian distribution for the underlying sources (Anderson et al., 2014). It is also important to mention that all the data is taken from the MIT-BIH Noise Stress Test Database for ECG and artifact signals (Moody, Muldrow & Mark, 1984). The MIT-BIH Noise Stress Test Database is freely available for further research on ECG signal processing. In addition, the main contributions of this research are as follows: • The recently developed BSS technique, the IVA, is used to separate artifacts from ECG signals. • The three most challenging ECG artifacts, BW, MA and EM, are removed from the recorded ECG signals. • The performance of the ICA, CCA and IVA is analyzed for artifact removal utilizing real and simulated ECG signals. • Three variants of the IVA, namely IVA-L, IVA-G, and IVA-GGD, are investigated to study their performance for ECG artifact removal. The rest of the article is organized such that Section 2 presents details of the ECG data; both the realistic simulated and real ECG signals, along with the ECG artifacts, are discussed. The system model is given in Section 3, while the proposed algorithm using IVA algorithms is discussed in Section 4. A simulation study of the simulated and real signals is carried out in Section 5, with concluding remarks in Section 6. ECG DATA AND ARTIFACTS Realistic simulated and real ECG data are considered for simulations. Real data is taken from the MIT-BIH database (Moody, Muldrow & Mark, 1984). The acquired signals in the MIT-BIH Noise Stress Test Database are digitized using uni-polar ADCs with 11-bit resolution. This database is open source for further research. The MIT-BIH database contains the ECG signals and their artifacts. The artifacts considered in this work are as follows: Muscle artifact: Muscle artifacts are the result of muscle contraction, having low amplitudes and a large frequency range from 0 to 10 kHz.
Baseline wandering: Baseline wandering originates due to body movements, breathing, and loose sensor contact. Body movements cause unpredictable large-amplitude and low-frequency artifacts. Breathing also causes low-frequency drifting between 0.15 and 0.3 Hz. Electrode movement: Electrode movement is generated when the electrode position shifts away from the skin contact, changing the electrode and skin impedance and causing potential variations in the recorded ECG signal. Other artifacts: Other ECG artifacts include power line interference, device noise, electro-surgical noise, quantization noise, aliasing, etc. The time-domain real ECG, BW, EM, and MA signals are demonstrated in Fig. 1 with 2,000 data samples of each signal as a first data set. The frequency domain representation is shown in Fig. 2. It shows that most of the frequencies of the ECG and the artifacts lie below 50 Hz. From Fig. 2 it is observed that the frequencies of the ECG signal and the artifacts all overlap with each other. Hence, to cleanly extract these ECG signals, efficient BSS techniques are required. As already discussed, IVA is the more efficient BSS technique as compared to ICA and CCA (Moody, Muldrow & Mark, 1984). Based on this discussion, it is recommended to utilize IVA for ECG signal de-noising. Moreover, the recorded mixed ECG data is shown in Fig. 3 for a single data set with L = 2,000 samples. The measurement is taken in the presence of additive white Gaussian noise (AWGN) with a signal-to-noise ratio (SNR) of 20 dB. Figure 3 basically contains mixture signals of all the individual source signals. The source signals are ECG, BW, EM and MA. The mixing process is performed in MATLAB such that the source signal matrix S of size 4 × 2,000 is multiplied with a randomly generated mixing matrix A of size 4 × 4. The mixed data is recorded in matrix X, where X is of size 4 × 2,000. It must be noted that in the case of ICA a single data set as shown above is utilized, while in the case of IVA multiple copies of the source signals are recorded and processed for un-mixing, i.e., multiple mixing matrices are observed and un-mixing is performed jointly. This is the main advantage of the IVA algorithm: it un-mixes the recorded signals and their delayed versions at the same time. The realistic simulated ECG signals are generated in MATLAB version R2016a (MathWorks, Inc., Natick, MA, USA). SYSTEM MODEL This section presents the ECG signals and artifacts in the IVA data model. K independent sources, i.e., the ECG and artifacts, are considered, and all sources contain L samples in each of D data sets. The data acquired using the ECG electrodes is expressed as $\mathbf{X}^{d} = \mathbf{A}^{d}\mathbf{S}^{d}$, for $d = 1, \ldots, D$. The matrices $\mathbf{S}^{d}$ contain the source data vectors $s_1^d, s_2^d, \ldots, s_K^d$, where each vector has length L. All vectors are real-valued random vectors with zero mean. The mixing matrices $\mathbf{A}^{d}$ are also real with random values for the D data sets. Hence, the IVA algorithm is responsible for estimating these unknown matrices while utilizing the mixed data. The source data is represented by $(\mathbf{S}^{1})^{T}, (\mathbf{S}^{2})^{T}, \ldots, (\mathbf{S}^{D})^{T}$ in the D data sets. After the estimation of $\mathbf{A}^{d}$ through the IVA algorithm, the resultant source signals, as given in Qamar et al. (2022), are expressed as $\mathbf{Y}^{d} = \mathbf{W}^{d}\mathbf{X}^{d}$, where $\mathbf{W}^{d}$ is the inverse of $\mathbf{A}^{d}$ and is called the un-mixing matrix estimated for the D data sets. The estimated source data vectors are $y_1^d, y_2^d, \ldots, y_K^d$. PROPOSED IVA-BASED ECG ARTIFACTS SEPARATION Multi-channel ECG signals are recorded in the presence of various artifacts, i.e., BW, EM, and MA, as well as noise.
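The mixing procedure just described can be sketched in NumPy. The Gaussian-distributed random mixing matrices and the way the noise is scaled to the target SNR are assumptions; the paper only states that mixing matrices are randomly generated in MATLAB and that AWGN at a given SNR is added.

```python
import numpy as np

rng = np.random.default_rng(0)

K, L, D = 4, 2000, 4   # sources, samples, data sets, as in the paper

def mix_datasets(S, snr_db=20.0):
    """Mix each K x L source matrix in the list S with a random K x K
    matrix A^d and add white Gaussian noise at the given SNR.

    Returns (X_list, A_list) so the true mixing matrices can later be
    used for performance evaluation.
    """
    X_list, A_list = [], []
    for Sd in S:
        k = Sd.shape[0]
        A = rng.standard_normal((k, k))        # random mixing matrix A^d
        X = A @ Sd                             # noiseless mixtures
        sig_pow = np.mean(X ** 2)
        noise_pow = sig_pow / (10 ** (snr_db / 10))
        X = X + rng.standard_normal(X.shape) * np.sqrt(noise_pow)
        X_list.append(X)
        A_list.append(A)
    return X_list, A_list
```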
The number of ECG and artifact signals is denoted by K; each signal has data block length L, with D data sets. The recorded mixed data contains the D data sets $(\mathbf{X}^{1})^{T}, (\mathbf{X}^{2})^{T}, \ldots, (\mathbf{X}^{D})^{T}$, as shown in Fig. 3 for an SNR of 20 dB. Since the artifact signals have frequencies that overlap with the original ECG signal, as illustrated in Fig. 2, the role of the BSS algorithms is to estimate the source signals from the recorded mixed signals. The BSS algorithms know nothing except the independence and non-Gaussianity of the source signals. The estimated sources of each BSS algorithm have scaling and order ambiguities. The scaling issue can be easily resolved by considering the source signals to have unit variance and scaling the un-mixing vectors to extract unit-variance sources. The arbitrary order of the estimated signals in each data set can be corrected using the permutation matrix, which is common to all data sets (Anderson, Adali & Li, 2012). The IVA algorithms separate the mixed recorded signals as a first data set and their delayed versions as the other data sets. This separation is performed through the minimization of the IVA cost function $\mathcal{J}_{IVA} = \sum_{k=1}^{K}\left[\sum_{d=1}^{D} H[y_k^d] - I[\mathbf{y}_k]\right] - \sum_{d=1}^{D}\log\left|\det\mathbf{W}^{d}\right| - C$, where $I[\mathbf{y}_k]$ represents the mutual information within the k-th SCV, H is the entropy, $\mathbf{W}^{d}$ is the un-mixing matrix of the d-th data set, and C is a constant term equivalent to $H[\mathbf{X}^{1}, \mathbf{X}^{2}, \ldots, \mathbf{X}^{D}]$, depending only on the recorded mixed data. The IVA algorithms minimize this cost function and thereby maximize the mutual information within each SCV. ICA is a well-known blind source separation technique used for linearly mixed signals, utilizing the statistical independence of the source signals (Uddin et al., 2015). CCA considers the mixed recorded signals as well as their delayed versions by exploiting the SOS. The IVA combines the advantages of CCA and ICA by exploiting the SOS and HOS. Moreover, numerous variants of IVA algorithms, such as IVA-GGD (Anderson et al., 2014), IVA-L (Kim et al., 2007) and IVA-G (Anderson, Adali & Li, 2012), exist in the literature and their effectiveness has already been demonstrated. Motivated by this, this research implemented various versions of the IVA algorithms to verify their validity for ECG artifact removal. All these algorithms utilize the IVA cost function given above to estimate the un-mixing matrices. In the case of complex-valued data, the IVA-G algorithm includes the pseudo-covariance matrix in the cost function. This algorithm also ignores the HOS and sample-to-sample dependency. The IVA-L utilizes the HOS for un-mixing while ignoring the sample-to-sample dependency and SOS. The matrix gradient approach is used in the implementation of the IVA-L algorithm. The IVA-GGD algorithm utilizes the HOS and SOS for source signal estimation, considering a multivariate generalized Gaussian prior. This algorithm also ignores sample-to-sample dependency. Moreover, processing of the original as well as the delayed versions makes the IVA algorithms more practical compared to the ICA technique. Based on these advantages, various variants of the IVA algorithms are implemented in this article and their performance is tested for ECG artifact removal. SIMULATION RESULTS In this section, the simulation results of the proposed IVA-based technique for ECG artifact removal from the recorded mixed signals are presented. The IVA algorithms considered for the simulations are IVA-GGD, IVA-L and IVA-G. The performance of these algorithms is evaluated for various SNRs ranging from 0 to 20 dB. Results are compiled using Monte Carlo simulation.
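The construction of the extra data sets from delayed versions of one recording reduces to simple time shifting. A minimal sketch follows; the one-sample lag step is an assumption, since the paper does not state which delays it used.

```python
import numpy as np

def delayed_datasets(X, n_datasets=4, delay=1):
    """Build D data sets from one (channels x samples) recording: the
    original mixtures plus time-shifted copies, which is how the
    IVA/CCA-style methods here exploit sample-to-sample structure.

    All copies are truncated to a common length so they stay aligned.
    """
    X = np.asarray(X)
    L = X.shape[1]
    keep = L - (n_datasets - 1) * delay    # common length after shifting
    return [X[:, d * delay: d * delay + keep] for d in range(n_datasets)]
```

The first entry of the returned list is the recording itself; each subsequent entry is the same channels shifted by one more lag, which is what allows second-order temporal statistics to contribute to the separation.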
The ECG artifacts considered for simulation are baseline wandering (BW), electrode movement (EM), and muscle artifacts (MA). Real and simulated ECG signals are utilized in the simulations. The real ECG signals are downloaded from the MIT-BIH database and the simulated ECG signals are generated in MATLAB. The number of source signals considered is K = 4, the number of data sets is D = 4, and the length L of the processing data blocks in each data set ranges from 50 to 2,000 samples. Moreover, to evaluate the effectiveness of the proposed IVA technique for ECG artifact removal, the following performance evaluation criteria are used. The corresponding root mean square error (CRMSE) used in Chen et al. (2017) is expressed as $\mathrm{CRMSE} = \sqrt{\frac{1}{L}\sum_{l=1}^{L}\left(s_{ECG}^{d}(l) - y_{ECG}^{d}(l)\right)^{2}}$, where $s_{ECG}^{d}$ and $y_{ECG}^{d}$ represent the original simulated ECG and the reconstructed ECG signals, respectively, in data set d. The common inter-symbol interference ($\mathrm{ISI_{com}}$) (Anderson, Adali & Li, 2012) is also utilized as a performance measure. With the combined gain matrix $\mathbf{G} = \sum_{d=1}^{D}\left|\mathbf{W}^{d}\mathbf{A}^{d}\right|$, it is expressed as $\mathrm{ISI_{com}} = \frac{1}{2K(K-1)}\left[\sum_{i=1}^{K}\left(\sum_{j=1}^{K}\frac{g_{ij}}{\max_{k} g_{ik}} - 1\right) + \sum_{j=1}^{K}\left(\sum_{i=1}^{K}\frac{g_{ij}}{\max_{k} g_{kj}} - 1\right)\right]$. The $\mathrm{ISI_{com}}$ is normalized so that its maximum value is one and its minimum value is zero, where a zero value corresponds to ideal separation performance. The quantity $U(\mathbf{W}^{d}\mathbf{A}^{d})$ is utilized as another evaluation criterion; ideal separation corresponds to a zero value of $U(\mathbf{W}^{d}\mathbf{A}^{d})$. Table 1 reports the $\mathrm{ISI_{com}}$ performance on the real ECG for all three IVA algorithms, i.e., IVA-GGD, IVA-L, and IVA-G, at an SNR of 20 dB; the algorithms are evaluated for input data block lengths ranging from 50 to 2,000 samples in each data set. Table 2 reports the $\mathrm{ISI_{com}}$ results on the real ECG for all three IVA algorithms at an input data block length of 2,000 samples in each data set and SNRs ranging from 0 to 20 dB. First, the effectiveness of the IVA-based technique in comparison with the ICA and CCA techniques is demonstrated. The results of the three techniques are obtained using the Fast-ICA algorithm (Uddin, Ahmad & Iqbal, 2017) for ICA, the GMCA algorithm (Li et al., 2009) for CCA, and the IVA-G algorithm for IVA. Simulations are performed at an SNR of 20 dB. The performance evaluation criterion used is the CRMSE. In the case of the ICA algorithm, the number of data sets is one. Performance evaluation is carried out for different values of L, ranging from 100 to 2,000 samples in a single data set. The results of the ICA, CCA and IVA algorithms are demonstrated in Fig. 4. The simulation results clearly show that the IVA outperforms the ICA and CCA algorithms. These results also verify that the IVA algorithm is less sensitive to the processing data block lengths. The performance improvement at a block length of L = 100 is around 85% for the IVA technique and 15% for the CCA technique as compared with the ICA technique. Similarly, we demonstrate the $\mathrm{ISI_{com}}$ performance of all these algorithms under the same conditions as in the above simulations. The results are demonstrated in Fig. 5. This figure also shows the effective performance of the IVA algorithm. The extracted ECG signals for these three algorithms are also demonstrated in Fig. 6. It shows that the IVA algorithm outperforms the other algorithms and is also less sensitive to AWGN noise. Second, the quality of the ECG signals separated from the various artifacts using the IVA algorithms is evaluated using the $\mathrm{ISI_{com}}$. Here, the simulated ECG signal corrupted by various artifacts, i.e., BW, MA, and EM, is considered. Linearly mixed instantaneous signals are generated using randomly generated mixing matrices in MATLAB.
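The joint ISI measure above maps to a short function once the estimated un-mixing matrices and the true mixing matrices are available. This is a sketch of the metric as written here, not code from the paper:

```python
import numpy as np

def joint_isi(W_list, A_list):
    """Common inter-symbol interference over D data sets, computed
    from the combined gain matrix G = sum_d |W^d A^d|.

    Normalised to [0, 1]; 0 corresponds to ideal separation
    (G is then a scaled permutation matrix).
    """
    K = W_list[0].shape[0]
    G = sum(np.abs(W @ A) for W, A in zip(W_list, A_list))
    row = np.sum(G / G.max(axis=1, keepdims=True), axis=1) - 1.0
    col = np.sum(G / G.max(axis=0, keepdims=True), axis=0) - 1.0
    return float((row.sum() + col.sum()) / (2.0 * K * (K - 1)))
```

As a sanity check, passing W_list = [inv(A) for A in A_list] returns 0, since each product is then the identity.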
The mixed recorded signals are shown in Fig. 7 for a single data set. The mixing process of Figs. 3 and 7 is the same; the difference is that Fig. 3 contains the realistic ECG signal whereas Fig. 7 contains the simulated ECG signals. The three IVA algorithms are applied to the simulated ECG signals for artifact removal. The reliability of the ECG signals for all three algorithms is evaluated for different values of the SNR. The simulations are performed over four recorded data sets independently. In each run, the pure ECG signal is extracted and the artifacts are separated from the recorded mixed signals. The ISI_com performance of the IVA algorithms is evaluated for different numbers of iterations. Results are shown in Fig. 8 for a 20 dB SNR and a block length of 1,000 samples; all the algorithms show similar performance at steady state. Furthermore, the performance of the IVA algorithms is also evaluated for different input data block lengths in the different data sets. Simulation results are shown in Fig. 9 at 20 dB SNR. These results show that the IVA-L algorithm is more sensitive to the length of the processing data blocks: at a block length of 100 samples in each data set, the performance improvements of IVA-G and IVA-GGD over IVA-L are 18% and 19%, respectively. To further investigate the IVA algorithms, we evaluate their U(W_d A_d) performance at an SNR of 20 dB for different numbers of iterations. Results are given in Fig. 10. They show that IVA-L converges faster than the IVA-G and IVA-GGD algorithms: IVA-L converges in approximately 10 iterations, the other two in approximately 25 iterations, while all three reach the same steady-state results. In the third part of the simulations, we demonstrate the practical performance of the IVA algorithms for real ECG artifact removal. The ECG artifacts considered in this part are BW, EM and MA. Removal of these artifacts is a challenging task due to their variable amplitudes and frequencies. The IVA algorithms considered in this section are IVA-L, IVA-G and IVA-GGD. The separated signals of the IVA algorithms are shown in Fig. 11 for 20 dB SNR. The results show that all three algorithms perform well for ECG artifact removal. Moreover, the error signals, defined as the difference between the real and separated ECG signals, are shown in Fig. 12. The very low amplitudes of the error signals show the effectiveness of the IVA algorithms. The performance of all three algorithms is also evaluated at 20 dB SNR for five real ECG signals; results for ECG1, ECG2, ECG3, ECG4 and ECG5 are given in Table 3. This table shows approximately the same performance of each algorithm across all five ECG signals. The ISI_com performance for ECG2 is also shown in Fig. 14 to illustrate the performance improvement for increasing lengths of the processing data blocks. The results of the other ECG signals could also be included as figures; they are restricted to ECG2 only to avoid unnecessarily lengthening the article. Furthermore, the reconstructed ECG2 signal is shown in Fig. 15 for all three algorithms.
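For reference, the evaluation criteria used throughout the comparisons above can be sketched as follows; this is our hedged reading of the CRMSE and of the Amari-index form of ISI_com, not the authors' implementation.

```python
import numpy as np

def crmse(s_ecg, y_ecg):
    """Root mean square error between original and reconstructed ECG."""
    return np.sqrt(np.mean((s_ecg - y_ecg) ** 2))

def isi_common(W, A):
    """Amari-index form of the common ISI for joint BSS across D data sets;
    returns a value in [0, 1], 0 corresponding to ideal separation."""
    D, K, _ = W.shape
    G = np.mean(np.abs(np.einsum('dij,djk->dik', W, A)), axis=0)
    row = np.sum(G.sum(axis=1) / G.max(axis=1) - 1.0)
    col = np.sum(G.sum(axis=0) / G.max(axis=0) - 1.0)
    return (row + col) / (2.0 * K * (K - 1))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3, 3))
print(isi_common(np.linalg.inv(A), A))   # perfect un-mixing -> 0.0
```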
DISCUSSION AND CONCLUSION The ECG artifact removal problem is investigated in this article. Both realistic simulated and real ECG signals are utilized in the simulations. The artifacts considered are baseline wandering, electrode movement and muscle artifacts. Removal of these artifacts is difficult due to their variable amplitudes and frequencies. The IVA technique is compared in this article with the CCA and ICA techniques and is shown to outperform both. We further investigated the IVA technique for ECG artifact removal: for comparison purposes, we considered three IVA algorithms to obtain cleaner ECG signals in the presence of various artifacts. In addition, we utilized different evaluation criteria to confirm the performance of the proposed technique. As a concluding remark, we can say that the IVA algorithms are less sensitive to the input data block lengths and input SNRs than the ICA technique. Thus, IVA proves to be an efficient and more practical technique for ECG de-noising. ADDITIONAL INFORMATION AND DECLARATIONS Funding The authors received no funding for this work.
Drastic influence of micro-alloying on Frank loop nature in Ni and Ni-based model alloys ABSTRACT Nickel and its alloys are f.c.c. model materials used to investigate the elementary mechanisms of radiation damage and solute effects. This paper focuses on the drastic influence of micro-alloying (0.4 wt.% Ti or Cr) on the nature of defects after ion irradiation. Ultra-high-purity materials are used to avoid impurity effects. For the first time, (i) large stable intrinsic Frank loops are identified in nickel while extrinsic Frank loops are observed in the alloys; (ii) an eradication mechanism of intrinsic Frank loops is clearly identified; (iii) the morphology of Frank loops is shown to be characteristic of their nature. GRAPHICAL ABSTRACT Impact statement For the first time, a drastic influence of micro-alloying on the defect nature is shown in irradiated Ni systems. A minor addition of solutes significantly modifies the elementary mechanisms of radiation damage. Introduction Advanced austenitic steels are candidate materials for current and future nuclear systems. Their main limitation is their macroscopic volume extension under irradiation, the so-called void swelling [1,2]. Empirically, the fine-tuning of major elements (Ni, Cr) and the addition of some minor solutes (Ti, etc.) have efficiently improved the swelling resistance [3]. To predict the structural evolution of these face-centered cubic (fcc) materials under irradiation, it is essential to understand the solute effects on the elementary mechanisms of radiation damage. To summarize the literature, the irradiated microstructure depends closely on the solutes and the type of irradiation, and it is strongly affected by impurities. Furthermore, the nature of the defects formed in nickel has not been determined unambiguously. It has to be stressed that the existence of grown stable vacancy loops in nickel has not been proved and, to the best of our knowledge, the influence of solutes on the loop nature is unavailable in the literature. This paper focuses on a microstructural analysis of the early stage of ion irradiation in Ni and Ni-based model alloys. Ion irradiation is used in our study to reveal the fundamentals of the damage mechanisms. As micro-alloying (0.4 wt.% Ti) is effective in reducing swelling [25], two alloys (Ni-0.4wt.%Ti and Ni-0.4wt.%Cr) are chosen and compared with Ni to understand the solute effect. As impurities affect the microstructure, ultra-high-purity materials were prepared. In this paper, the effects of micro-alloying with Cr or Ti on the nature of irradiation-induced defects are analyzed in detail and discussed. Materials and methods The ultra-high-purity nickel (Ni) and the Ni-0.4wt.%Ti (Ni-Ti for short), with measured impurity (O, C, N, S) contents in mass ppm of (3, 8, 2, 2) for Ni and (14, 2, 1, 4) for Ni-Ti, were manufactured by cold crucible induction melting at the Ecole des Mines de Saint Etienne (EMSE) in France. The high-purity Ni-0.4wt.%Cr (Ni-Cr for short) was also prepared by induction melting at the Service de Recherches Métallurgiques Physiques (SRMP), using pure chromium from J. Braconnot & Cie (> 99.996 wt.%) and pure nickel from the Goodfellow company (> 99.99 wt.%). The raw materials were cut into 400 μm thick slices and then mechanically polished down to 50-80 μm. Disks of 3 mm diameter were punched out and annealed at 1273 K for 2 h in a vacuum of 10^−7 mbar, followed by air-cooling. Annealed samples were perforated by twin-jet electro-polishing in a methanol-nitric acid solution for Transmission Electron Microscopy (TEM) observations.
A low density of dislocations (< 10^10 m^−2) was measured by TEM in the initial state, before irradiation. The irradiation was performed on the JANNuS-Saclay platform at CEA-Saclay. Thin foils were irradiated by 5 MeV Ni2+ ions at 450 ± 20 °C using a rastered beam. The temperature was monitored by four thermocouples in contact with the samples. The ion flux was 2.1 ± 0.2 × 10^11 ions·cm^−2·s^−1 and the fluence was 2.3 ± 0.3 × 10^15 ions·cm^−2. The damage profile was calculated with the Stopping Range of Ions in Matter (SRIM) 2013 code [26] using the Kinchin-Pease option with a displacement threshold energy of 40 eV. The final dose varied from 0.7 dpa near the surface to 2.2 dpa near the damage peak at a depth of 1.5 μm. After irradiation, TEM samples were lifted out far from the thin zones using the Focused Ion Beam (FIB) of an FEI Helios NanoLab 650 dual-beam Scanning Electron Microscope (SEM). The lifted-out samples were then electro-polished using the flash polishing technique to remove the FIB-induced defects [27]. The characterization of irradiation defects was performed using a 200 kV FEI TECNAI G2 TEM. Bright-Field (BF) mode and Weak-Beam Dark-Field (WBDF) mode were employed to optimize the defect contrast. All TEM micrographs were taken for s_g > 0. Dislocation loops were characterized under different two-beam conditions along several zone axes in order to apply the invisibility criterion for the analysis of the Burgers vector. The inside-outside method [28][29][30] was used to determine the nature of the dislocation loops. Results In all materials, the microstructure is dominated by Frank loops and perfect loops. In FIB samples, loops are present from the irradiated surface down to a depth of 1.5 μm. In pure Ni, all Frank loops are segmented. Very few voids are observed in the Ni samples. In the alloys, both loops and voids are present in FIB samples and the majority of Frank loops are not segmented. In both Ni and the alloys, inner loops may be present within Frank loops. In Ni samples, no stacking fault is observed inside the inner loops, whereas one is always observed in the alloys (Figure 1 and Figure 2). The different morphology of Frank loops in Ni and in the alloys raises questions about their nature. The determination of the nature of the Frank loops was therefore carried out. It is important to stress that, in each material, for the tens of characterized loops, the same nature was identified for almost all the defects, independently of their Burgers vector and geometric position (depth). In the following, the results for each material are presented for a representative loop, for which detailed characterizations are given. To demonstrate unambiguously the nature of a dislocation loop, (1) the Burgers vector, (2) the loop plane and (3) the inside-outside behavior in ±g must be fully analyzed. For a pure edge loop, such as a Frank loop, (1) and (3) are sufficient because the Burgers vector is perpendicular to the loop plane. The analysis of the Burgers vector is carried out with the invisibility criterion [26]. It means that, for a given two-beam condition with the diffraction vector g, a dislocation loop with Burgers vector b will be invisible or show a residual contrast if |g · b| = 0. Figure 1(a-j) presents the TEM micrographs of a dislocation loop in Ni with ten different g along different zone axes. The zone axes and diffraction vectors were carefully indexed by comparing the Kikuchi lines and diffraction patterns with the reference schematic maps [28].
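Before turning to the specific loops, note that the invisibility criterion lends itself to a simple numerical check; in the sketch below the barred indices of the diffraction vectors are assumptions chosen for illustration.

```python
import numpy as np

# Invisibility check |g . b| = 0 for the four a0/3<111> Frank-loop variants.
variants = {f'a0/3{v}': np.array(v) / 3.0 for v in
            [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]}
g_vectors = [np.array(g) for g in [(2, 0, -2), (2, -2, 0), (0, 2, 0), (1, 1, 1)]]

for g in g_vectors:
    report = {name: ('invisible' if abs(g @ b) < 1e-12 else 'visible')
              for name, b in variants.items()}
    print(g, report)
```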
Since the loop is invisible for g = [202̄] (Figure 1(c)) and g = [22̄0] (Figure 1(h)), it is a Frank loop with a Burgers vector of b = ±a0/3 [111]. The visibility of the loop for the other g (Figure 1(k)) is in agreement with this Burgers vector. The inner small loops show the same visibility as the outer Frank loop for all the g analyzed; thus, their Burgers vector is also ±a0/3 [111]. Figure 2(a) presents two Frank loops, in Ni-Cr and Ni-Ti, respectively. From their visibility (Figure 2(b)), both the outer Frank loop and the inner one have the same Burgers vector, b = ±a0/3 [111], in Ni-Cr and ±a0/3 [111] in Ni-Ti. The sign of the Burgers vector depends on the nature of the loop. Figure 3 shows ±g pairs of the three Frank loops of Figure 1 and Figure 2. The inside-outside behavior of these loops is summarized in Table 1. In Ni, using the FS/RH convention [31] and taking the sense of the dislocation line as clockwise, the outside contrast of the outer loop for g = [020] gives (g · b) · s_g > 0. Thus, the Burgers vector of the outer Frank loop is a0/3 [111]. Within the FS/RH convention, this Burgers vector points in the direction opposite to the loop plane normal [111]; the outer Frank loop is therefore intrinsic (vacancy-type). On the contrary, the inner loops show a contrast opposite to that of the outer loop. The Burgers vector of the inner loops is therefore a0/3 [1̄1̄1̄], corresponding to an extrinsic Frank loop (interstitial-type). No fault contrast is observed inside those inner loops, in accordance with a perfect lattice. By the same analysis, the outer and inner loops are both of interstitial type in Ni-Cr and Ni-Ti. As a contrast of the stacking fault is observed inside the inner loops, double-layer extrinsic Frank loops are produced. Possible structures of these defects are given in Figure 3(p-r). Among all the studied loops, no Shockley partial was observed. To sum up, intrinsic segmented single-layer Frank loops are observed in irradiated Ni, while extrinsic non-segmented (single- and double-layer) Frank loops are found in Ni-Cr and Ni-Ti. Only one intrinsic segmented Frank loop was detected, in Ni-Cr. These results contrast with the literature [17,21,24], which reports, in irradiated Ni, mostly extrinsic loops and sometimes small metastable vacancy loops. The different loop nature between Ni and its alloys, together with the contrast behavior in Ni, has to be understood. Discussion The drastic difference in defect nature between Ni and micro-alloyed Ni is related to the behaviors of vacancies and interstitials. In Ni, vacancies form intrinsic Frank loops while, in the alloys, they form voids. Interstitials in Ni agglomerate as Frank loops nucleated in the middle of the intrinsic Frank loops, eradicating the stacking fault. The growth of the inner extrinsic Frank loops will lead to the total eradication of the intrinsic Frank loops. Considering that loops in Ni are vacancy-type and partially eradicated, interstitials may be depleted in the damage production zone by migrating far away. Contrarily, interstitials in the alloys agglomerate as single- or multi-layer Frank loops (two Frank loops in our case). By molecular dynamics (MD) calculations, it was shown [13] that the low migration energy of interstitials and interstitial clusters (I-clusters) in Ni provokes their straight long-distance migration from the damage production region to the surface and deeper zones.
On the contrary, as extrinsic Frank loops are observed in the alloys, interstitials may be trapped within the damage production region. Solutes like Fe and Co can increase the migration energy barrier of I-clusters, resulting in a short-distance, complex migration trajectory [12]. Thus, interstitial atoms, I-clusters and interstitial loops remain in the irradiated zones, leading to the growth of interstitial loops and the nucleation of a multi-layer structure, as observed in our case. Such complex Frank loops have been observed in irradiated nickel and alloys [22,32]. As the solute content of our alloys is very low, a minor addition of Ti or Cr drastically modifies the migration of interstitials. Meanwhile, the nucleation of vacancy defects occurs both in Ni (vacancy loops) and in the alloys (stable voids). Due to the much higher migration energy of vacancies compared with interstitials, the vacancy diffusion path is quite short [12]. Thus, germs of vacancy loops created in cascades may absorb nearby free vacancies and grow in the irradiated regions. It is interesting to ask why vacancies agglomerate into different forms in Ni and in the alloys. In the literature, calculations in Ni demonstrate that, if the surface energy of voids is reduced (by the presence of impurities like oxygen), voids may become the stable form instead of dislocation loops when the number of vacancies in the defect exceeds a critical value [13]. This critical value also depends on the stacking fault energy (SFE). In our case, we calculated the average number of vacancies present in the loops in Ni (∼20 nm in diameter, ≈6 × 10^3 vacancies) and in the voids in the alloys (∼10 nm in diameter, ≈3 × 10^4 vacancies). In both cases, we found values lower than the critical value reported in [13], which explains our observations in Ni. Meanwhile, in the alloys, voids are the stable form, so they must be energetically favored by the addition of Ti and Cr. This suggests an influence of a very small amount of solutes on the SFE and/or the surface energy of voids. It has to be mentioned that the observation of stable grown vacancy Frank loops at different depths in our irradiated Ni samples seems to contrast with previous studies [17,21,24], where vacancy loops were only reported either near other dislocations or as metastable. This difference may be understood by considering the type of irradiation (the issue of vacancy loop nucleation with electrons and light ions) and the presence of impurities (oxygen and nitrogen), which are believed to affect the microstructure [13,22]. Finally, it is worth noting that the morphology of the Frank loops is found to depend drastically on their nature: segmented for intrinsic and non-segmented for extrinsic loops. Even the only intrinsic Frank loop observed in the alloys is segmented. These segmented intrinsic Frank loops are similar to those obtained after quenching in nickel [33] and in quenched aluminium [34,35], which are assumed to be intrinsic.
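The vacancy counts quoted above follow from simple geometry: a single-layer Frank loop contains one {111} plane of vacancies, while a void contains a sphere of empty lattice sites. The sketch below, assuming the literature lattice parameter of Ni (a0 ≈ 0.352 nm), reproduces the loop value and the order of magnitude of the void value; the residual gap for the void presumably reflects different rounding or geometric assumptions.

```python
import numpy as np

a0 = 0.352                                  # fcc Ni lattice parameter, nm
n_111 = 4.0 / (np.sqrt(3.0) * a0**2)        # atoms per unit area of a {111} plane
n_vol = 4.0 / a0**3                         # atoms per unit volume (fcc: 4 per cell)

d_loop, d_void = 20.0, 10.0                 # diameters quoted above, nm
N_loop = np.pi * (d_loop / 2.0) ** 2 * n_111            # one {111} vacancy layer
N_void = (4.0 / 3.0) * np.pi * (d_void / 2.0) ** 3 * n_vol

print(f'Frank loop, d = {d_loop} nm: ~{N_loop:.1e} vacancies')   # ~5.9e3
print(f'void, d = {d_void} nm: ~{N_void:.1e} vacancies')         # ~4.8e4
```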
Conclusions A microstructural analysis was conducted on ultra-high-purity Ni and micro-alloyed Ni (with Ti or Cr) irradiated by self-ions at high temperature. A drastic influence of micro-alloying on the nature of radiation-induced Frank loops is observed. In Ni, vacancies form intrinsic Frank loops. Interstitials also agglomerate as Frank loops, but only inside existing intrinsic Frank loops, and eradicate the stacking fault. In the alloys, vacancies form voids while interstitials agglomerate as single- or multilayer extrinsic Frank loops. For the first time, (i) the real impact of micro-alloying on the Frank loop nature and the fine microstructure is shown; (ii) large stable intrinsic Frank loops are identified in nickel; (iii) an eradication mechanism of intrinsic Frank loops by inner extrinsic Frank loops is clearly shown; (iv) the morphology of Frank loops is shown to be a characteristic feature of their nature. These observations of fundamental properties of radiation-induced defects introduce new considerations into theoretical calculations, which will contribute significantly to a better understanding of the elementary mechanisms of radiation damage and solute effects in f.c.c. structures.
Finite-Dimensional Characterization of Optimal Control Laws Over an Infinite Horizon for Nonlinear Systems
Mario Sassano, Senior Member, IEEE, and Thulasi Mylvaganam, Senior Member, IEEE
Abstract-Infinite-horizon optimal control problems for nonlinear systems are considered. Due to the nonlinear and intrinsically infinite-dimensional nature of the task, solving such optimal control problems is challenging. In this article, an exact finite-dimensional characterization of the optimal solution over the entire horizon is proposed. This is obtained via the (static) minimization of a suitably defined function of (projected) trajectories of the underlying Hamiltonian dynamics on a hypersphere of fixed radius. The result is achieved in the spirit of the so-called shooting methods by introducing, via simultaneous forward/backward propagation, an intermediate shooting point much closer to the origin, regardless of the actual initial state. A modified strategy allows one to determine an arbitrarily accurate approximate solution by means of standard gradient-descent algorithms over compact domains. Finally, to further increase the robustness of the control law, a receding-horizon architecture is envisioned by designing a sequence of shrinking hyperspheres. These aspects are illustrated by means of a benchmark numerical simulation.
Index Terms-Hamiltonian systems, nonlinear systems, optimal control, stability of nonlinear (NL) systems.
I. INTRODUCTION
Infinite-horizon optimal control problems for nonlinear systems are considered herein. The class of problems involves determining feedback stabilizing control policies with the property that a certain cost functional is minimized along the trajectories of the resulting closed-loop system (see, for instance, [1]). Such problems are, in general, challenging nonconvex, infinite-dimensional optimization problems. The optimal policy can be determined by relying either on Pontryagin's minimum principle (PMP) [2] or on the dynamic programming (DP) method [3], [4]. A combination of the two methods may also be viable [5], [6], [7], [8]. In practice, both approaches possess specific drawbacks. The former implies that the optimal trajectory satisfies a certain ordinary differential equation, defined on an extended state-space with the initial condition partially unknown. The latter, on the other hand, relies on the solution of a partial differential equation (PDE), namely the Hamilton-Jacobi-Bellman (HJB) equation, for which closed-form solutions are rarely feasible to obtain and for which numerical solutions also pose a significant challenge due to the computational complexity associated with solving nonlinear PDEs numerically. Various approaches have been explored to overcome the computational hurdles associated with solving both finite- and infinite-horizon optimal control problems, encompassing numerical methods [9], approaches based on the notion of viscosity solution [10], [11], or regional methods [12], to mention just a few. For the special case in which the dynamics are linear and the running cost is quadratic in the input and state, namely the well-known linear quadratic regulator (LQR), solutions are readily found via the algebraic Riccati equation (ARE). Consequently, methods based on state-dependent Riccati equations (SDREs) have emerged for nonlinear problems, where essentially an ARE is solved pointwise along the trajectory of the system. While the approach is relatively simple from a computational point of view, the resulting control strategies are, in general, suboptimal and characterized by a quality of approximation that is not easily quantifiable a priori (see, e.g., [13] and references therein). In [14] and [15], the solution of a single algebraic matrix equation is used to construct a dynamic control law, which satisfies, by construction, a partial differential inequality in some nonempty neighborhood including an equilibrium of the system. Consequently, the approach solves (exactly) a modified optimal control problem and yields an approximate (local) solution of the original optimal control problem, where the level of approximation can be explicitly quantified. Alternative approaches to obtain approximate solutions analytically can be found, for instance, in [16]. Therein, the HJB PDE is considered and, recognizing the relevance of a certain associated Hamiltonian system (which possesses a stable invariant manifold characterizing the solution of the HJB PDE), two approaches for obtaining approximate solutions are proposed. One approach relies on viewing the control input as a Hamiltonian perturbation and uses geometric tools to approximate the stable manifold associated with the aforementioned Hamiltonian system. Methods to compute invariant manifolds of dynamical systems are also relevant in this regard (see, e.g., [17], [18]). The second approach relies on expressing the dynamics of a system in terms of a linear and a nonlinear component and presents a sequence that provides an
approximation of the flow of the Hamiltonian system on the stable manifold. A different approach, still utilizing the relationship between optimal control problems and their associated Hamiltonian systems, is taken in [19]. Therein, solutions for the LQR problem with input constraints are found via a numerical procedure, which exploits the property that the optimal solution is associated with a specific trajectory of the associated Hamiltonian system. Approximate solutions to the constrained optimal control problems are then found via the generation of a "lookup table" created based on backward integration of the Hamiltonian dynamics from initial conditions in a neighborhood of the origin. Similarly to the approaches presented in [16] and [19], we utilize the Hamiltonian system associated with an optimal control problem to characterize exact and approximate solutions of the problem. Differently from the results in [16] and [19], however, the approach proposed herein ultimately requires only the solution of an (unconstrained) static optimization problem. It does not rely on approximating the stable invariant manifold of the Hamiltonian system and does not require any sequential process or lookup table. The Hamiltonian dynamics are in fact employed to design a certain finite-dimensional cost function that attains its minimal value at the intersecting point of the optimal trajectory and a hypersphere of fixed radius.
To be more specific, in this article we consider a class of infinite-horizon optimal control problems characterized by the property that the underlying value function satisfies a certain differentiability condition. This condition is implied (locally) by equivalent properties of the data of the problem and, hence, is easily verifiable a priori (see [20]). Such differentiability conditions are instrumental in showing that a solution to the HJB PDE also satisfies the so-called Hamilton condition (sensitivity relations, see, e.g., [21]) associated with PMP. Namely, the optimal process is associated with a certain trajectory of the Hamiltonian system defined on the extended state/costate space, obtained via a Hamiltonian lifting (see, e.g., [16]). We utilize the above property to provide a finite-dimensional characterization of the solution of the problem via the formulation of a static minimization problem, defined in terms of trajectories of the underlying Hamiltonian system. More precisely, the solution of the optimal control problem is determined by first computing the intersection of the corresponding trajectory of the Hamiltonian system with a sufficiently small hypersphere around the origin. Consequently, the optimal process is determined by forward and backward propagating the flow of the Hamiltonian dynamics from this intermediate point, which is a minimizer of the aforementioned static cost function, involving the initial condition of the underlying plant. It is worth observing that, differently from existing methods that aim at providing a finite-dimensional characterization of the optimal policy, herein the results do not rely on any quantization arguments with respect to time or space. The proposed conditions instead provide an exact finite-dimensional description of the entire optimal trajectory, namely over the infinite horizon. In practice, the static minimization problem cannot, in general, be solved by using standard gradient-descent algorithms unless explicit expressions for the flow of the Hamiltonian dynamics and its sensitivity are available. Such expressions are rarely
available except for specially structured classes of systems, such as for linear systems. This limitation is, therefore, circumvented through the introduction of a hybrid architecture that comprises state variables whose evolution captures the propagation of the flow of the Hamiltonian dynamics and of its sensitivity to the initial condition. Finally, a receding-horizon implementation of the architecture is suggested by suitably generating a sequence of hyperspheres of decreasing radii. Building on the property that such a strategy provides not only a suboptimal policy over a restricted time interval but ideally the entire (infinite-horizon) optimal policy, the novel algorithm allows for particularly large moving windows without hindering the accuracy of the overall scheme. This is an interesting difference with respect to existing receding-horizon strategies that are based on the computation of a (relatively small) portion of the optimal strategy at each sampling instant.
The remainder of the article is organized as follows. The infinite-horizon optimal control problem and some preliminaries are recalled in Section II. A finite-dimensional characterization of optimal control laws is provided in Section III. Two results are provided therein: one is the characterization of the stable (which yields the optimal control law) and the unstable invariant submanifolds of the underlying Hamiltonian system via a static minimization problem. Through this problem, the set containing the intersections between the invariant manifolds and a hypersphere of fixed radius is determined. Having determined these intersection points, the optimal control input can be readily obtained by backward/forward integration of the flow of the Hamiltonian system. A companion result yields a characterization of trajectories "close" to the stable and unstable submanifolds via a modified (saturated) static minimization problem. The latter lends itself efficiently to an algorithmic interpretation, which is provided in Section IV. Note that while the stable and unstable manifolds of the Hamiltonian system are crucial in the characterization of the optimal control laws, the proposed approach does not require determining the submanifolds themselves. Namely, the relationship between the Hamiltonian system, its stable/unstable invariant manifolds, and the optimal control law is exploited to determine the specific trajectory of the Hamiltonian system associated with the optimal control input (for a given initial condition) of an infinite-horizon optimal control problem. Finally, the results presented in this article are demonstrated by means of a benchmark example, before some concluding remarks are provided in Sections V and VI, respectively.
II. PROBLEM FORMULATION AND PRELIMINARIES
Consider a dynamical system with state x : R → R^n and control input u : R → R^m. In this article, we consider the infinite-horizon optimal control problem recalled below.
Problem 1: Determine, if it exists, a continuous function u : R_{≥0} → R^m minimizing the cost functional

Q_{x0} : J(x0, u) := ∫_0^∞ ( q(x(t)) + (1/2) u(t)^⊤ u(t) ) dt    (1a)

subject to the constraints

ẋ = f(x) + g(x)u,  x(0) = x0,    (1b)

for any x0 ∈ X, where X is a nonempty neighborhood of the origin and where the positive definite function q : R^n → R_{>0}, the vector field f : R^n → R^n, f(0) = 0, and the vector fields g_i : R^n → R^n, i = 1, ..., m, columns of g, are assumed to be sufficiently smooth functions in X (the smoothness assumption on the data of the problem is a condition verifiable a priori; see, e.g., [20], [22]). •
The first term of (1a) quantifies a running cost on the state variable, while the second term represents a penalty on the control effort. The overall integral in (1a) is the cost functional to be minimized via the selection of the control input u, whereas the constraints (1b) represent the dynamics dictating the behavior of the system and the initial condition of the state of the system. Problem 1 is associated with a so-called value function V : X → R_{>0}, defined as V(x0) := min_u Q_{x0}, for any initial condition x0 ∈ X (see, for instance, [21]).
Remark 1: It has been shown in [20] that V locally inherits certain properties from the problem data. In particular, provided the requirements of Problem 1 hold, there exists a neighborhood X of the origin such that V ∈ C²(X). •
Let A := ∇f|_{x=0} and B := g(0) describe the linearized dynamics, δẋ = A δx + B u, of (1b) around the origin and let the quadratic approximation of the running cost be q(x) ≈ (1/2) x^⊤ Q x, with Q := ∇²q(x)|_{x=0}. The following assumption is instrumental for guaranteeing the existence of a solution to Problem 1, at least locally around the origin, as well as for ensuring (local) asymptotic stability of the origin for the closed-loop optimal system.
Assumption 1: The pairs (A, B) and (A, Q) are stabilizable and observable, respectively. •
Note that, as a consequence of Assumption 1, the linearized dynamics together with the quadratic approximation of the running cost admit the optimal solution u = −B^⊤ P x, where P ∈ R^{n×n} denotes the (unique) positive definite solution of the ARE

0 = A^⊤ P + P A + Q − P B B^⊤ P.    (2)

It can be demonstrated by means of the DP approach (see, e.g., [23]) that the solution to Problem 1 is given by the feedback control law

u(x) = −g(x)^⊤ ∇V(x)    (3)

for any x ∈ X, with X a neighborhood of the origin. In particular, (3) is derived from the HJB PDE

0 = ∇V(x)^⊤ f(x) + q(x) − (1/2) ∇V(x)^⊤ g(x) g(x)^⊤ ∇V(x)    (4)

for any x ∈ X. Moreover, such a function V coincides with the value function V, which possesses the properties mentioned in Remark 1. The DP approach provides sufficient conditions for optimality: a function V satisfying the HJB PDE (4) yields, via (3), a solution to the optimal control problem Q_{x0} in (1) in terms of a feedback control.
Since DP provides the optimal solution in terms of a feedback policy, the DP approach yields (locally) information regarding the optimal solution and the minimal cost for any initial condition, hence slightly generalizing the requirements of Problem 1. However, to take advantage of these insights, a solution of the HJB PDE (4) is required. Typically, closed-form solutions of the HJB PDE are not available, and numerical approaches to solve the PDE are cursed with high computational demands. Thus, alternative design approaches based on PMP and the Hamilton conditions (see [6], [7], [21] for insightful discussions on the relation between PMP and DP) are often preferred in applications. The Hamilton conditions are trajectory-based and, therefore, computed only for specific initial states. Moreover, these requirements provide, in general, only necessary conditions of optimality. This alternative approach is summarized in the remainder of this section.
Let H : R^n × R^n → R denote the (minimized) Hamiltonian function associated with the optimal control problem (1), i.e.,

H(x, λ) = λ^⊤ f(x) + q(x) − (1/2) λ^⊤ g(x) g(x)^⊤ λ,    (6)

where X ⊆ R^n is as implied by the data of the problem for x0 ∈ X (see Remark 1). The optimal process x*, i.e., the trajectory of (1b) in closed loop with the optimal control, evolves according to the dynamics (Hamilton conditions)

ẋ = ∇_λ H(x, λ)^⊤,  λ̇ = −∇_x H(x, λ)^⊤.    (7)

Let ϕ_H(t; x0, λ0) denote the flow of the Hamiltonian dynamics (7), which is assumed to be complete, at time t and from the initial condition (x0, λ0), and let π_x ∘ ϕ_H and π_λ ∘ ϕ_H denote the projections of the flow on the x components and on the λ components, respectively, of the state/costate space. Since the conditions in Assumption 1 imply that the system (1b) is locally exponentially stabilizable and detectable from the (virtual) output y = q(x), it follows, by [24, Sec. 6], that the Hamiltonian dynamics (7) possesses a hyperbolic equilibrium point at (x, λ) = (0, 0), with n-dimensional global stable, N_s, and unstable, N_u, submanifolds through the origin that are invariant for the system (7). The interested reader is referred to the work in [25] for similar results in the case of nonlinear H∞ control problems, in which the arising Hamiltonian dynamics are structurally identical to those in (7). In addition, recalling that the stable and unstable submanifolds of a Hamiltonian system are Lagrangian (see, e.g., [25, Lemma 1]), one has that such submanifolds are locally described as the graphs of closed one-forms, namely N_s = graph(∇V_s) and N_u = graph(∇V_u), respectively, for some generating functions V_s : R^n → R and V_u : R^n → R that constitute smooth solutions to the HJB (4). Finally, note that the stable (unstable, respectively) invariant submanifold is tangent at the origin to the n-dimensional subspace W_s (W_u, respectively) described by the eigenspace of the linearized Hamiltonian system associated with the eigenvalues with negative (positive, respectively) real parts of the Hamiltonian matrix

[ A  −BB^⊤ ; −Q  −A^⊤ ].

As a consequence, the origin of the state-space is a locally asymptotically stable equilibrium point of the (forward-time) closed-loop dynamics ẋ = f(x) − g(x)g(x)^⊤ ∇V_s(x), x(0) = x0, and of the (backward-time) dynamics ẋ = −(f(x) − g(x)g(x)^⊤ ∇V_u(x)), x(0) = x0. Moreover, the optimal costate λ* : R → R^n in (7) satisfies λ*(t) = ∇V_s(x*(t)) for any t ≥ 0 (and, provided the problem satisfies the aforementioned conditions, under which the DP method provides necessary and sufficient conditions of optimality, V = V_s).
III. FINITE-DIMENSIONAL CHARACTERIZATION OF OPTIMAL CONTROL LAWS
To provide a concise statement of the following result, which yields a finite-dimensional, static characterization of the solution to Problem 1, consider a vector s = [s1 s2 ... s_{2n−1}] ∈ R^{2n−1} and a scalar positive constant ε ∈ R_{>0}, and define the vector-valued function α : R^{2n−1} → R^{2n} according to

α(s) := ε [ cos s1, sin s1 cos s2, sin s1 sin s2 cos s3, ..., sin s1 ⋯ sin s_{2n−2} cos s_{2n−1}, sin s1 ⋯ sin s_{2n−1} ]^⊤.    (10)

The mapping α describes the Cartesian coordinates of points on the hypersphere of radius ε in R^{2n}, namely the set

S_ε := { z ∈ R^{2n} : ‖z‖ = ε }.

Casting the problem in terms of the angular coordinates s, rather than the original (x, λ) coordinates, is central to the construction of a static, unconstrained optimization problem characterizing the solution to Problem 1. Nonetheless, the specific selection of the parameterization in (10) may be replaced by alternative descriptions.
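A minimal sketch of one such parameterization is given below; the ordering of the angles is an assumption, since any description covering S_ε would do.

```python
import numpy as np

def alpha(s, eps):
    """Hyperspherical coordinates on the sphere of radius eps in R^(len(s)+1);
    one possible ordering of the angles consistent with (10)."""
    m = s.size + 1
    sines = np.concatenate(([1.0], np.cumprod(np.sin(s))))  # 1, sin s1, sin s1 sin s2, ...
    z = np.empty(m)
    z[:-1] = eps * sines[:-1] * np.cos(s)
    z[-1] = eps * sines[-1]
    return z

s = np.array([0.3, 1.1, 2.0])            # n = 2, hence 2n - 1 = 3 angles
print(np.linalg.norm(alpha(s, 0.1)))     # 0.1: the point lies on S_eps
```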
In the following, it is assumed that, whenever s consists of a set of (potentially infinitely many) values s = {s_i}_{i∈I⊆R}, the application of the function α(·) to the set s denotes in turn the set α(s) := {α(s_i)}_{i∈I}. Finally, let φ(x0, λ0) define the locus of points along a specific trajectory of the Hamiltonian dynamics (7), namely

φ(x0, λ0) := { ϕ_H(t; x0, λ0) : t ∈ R }.

Recall that V_s : R^n → R and V_u : R^n → R locally describe the generating functions for the stable and unstable invariant submanifolds of (7), respectively. Consider the function

C_{x0}(s, τ1, τ2) := ‖π_x ∘ ϕ_H(−τ1; α(s)) − x0‖² + ‖ϕ_H(τ2; α(s))‖².    (11)

The first term of (11) quantifies the distance between the initial condition x0 and the trajectory of the Hamiltonian dynamics (7) starting from the point α(s) and propagated in backward or forward time (for τ1 positive or negative, respectively). The second term, instead, quantifies the distance between the origin and the trajectory of the Hamiltonian dynamics (7) starting from the point α(s) and propagated in forward or backward time (for τ2 positive or negative, respectively). The following result shows that the Hamiltonian system (7) has the property that the minimum of C_{x0} can be obtained by two trajectories only, namely those associated with its stable and unstable submanifolds and passing through x0. Moreover, such trajectories can be parameterized by the intersection point α(s) ∈ S_ε, together with the times τ1 and τ2. As a consequence, the solution to Problem 1 can be equivalently characterized by the static, unconstrained minimization of (11) with respect to (s, τ1, τ2). The above discussion suggests that the solution to Problem 1 can be constructed in two steps.
First, the minimizers of C_{x0} (associated with the generating functions V_s and V_u) are computed (see (13)). This requires detecting the intersections, described by the point α(s) for some s, of two sets: 1) the loci of points of the trajectories of (7) that both converge to the origin and have a projection on the state space that contains x0; 2) the surface of S_ε. These intersections are identified via the static minimization of C_{x0}.
Second, once the intersection α(s*) corresponding to the stable submanifold N_s has been obtained, the optimal control law is u*(t) = −g(x(t))^⊤ λ(t), with (x(t), λ(t)) the trajectory of (7) corresponding to the initial condition (x(0), λ(0)) obtained by backward propagation from α(s*).
By inspecting the latter step (i.e., a standard forward propagation of certain initial conditions), it is evident that the former (i.e., the characterization and the static minimization of C_{x0}) constitutes the crucial aspect of the construction of the optimal control law. Hence, in what follows attention is focused mainly on this first step. The above (intuitive) discussion is formalized in the following statements.
Theorem 1: Let x0 ∈ X ⊆ R^n be given and consider the infinite-horizon optimal control problem Q_{x0}. Suppose that V_s and V_u are C²(X). Fix a sufficiently small ε > 0 and suppose that Assumption 1 holds. Then

α(s*) = ( φ(x0, ∇V_s(x0)) ∪ φ(x0, ∇V_u(x0)) ) ∩ S_ε,    (12)

where s* ∈ [0, 2π)^{2n−1}, the minimization being carried out over Ξ := [0, 2π)^{2n−1} × R̄ × R̄, is such that there exists τ1 ∈ R with the property that π_x ∘ ϕ_H(−τ1; α(s*)) = x0 and

lim_{τ2 → τ̄2} ‖ϕ_H(τ2; α(s*))‖² = 0,    (13)

with τ̄2 equal to either −∞ or +∞. •
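The statement can be illustrated numerically. The sketch below evaluates the two terms of (11) for a scalar LQR instance, computing the flow of (7) by ODE integration; the closed-form expressions for p, s* and τ1 exploit the linear structure and all numerical values are illustrative choices, not the benchmark of Section V.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar LQR instance: f(x) = a*x, g(x) = b, q(x) = 0.5*q0*x^2.
a, b, q0, x0, eps = -0.5, 1.0, 1.0, 1.0, 0.1

def ham_flow(z0, t_end):
    """Flow of the Hamilton conditions (7) from z0 = (x, lam), by integration."""
    if abs(t_end) < 1e-12:
        return np.asarray(z0, dtype=float)
    rhs = lambda t, z: [a * z[0] - b**2 * z[1], -(q0 * z[0] + a * z[1])]
    return solve_ivp(rhs, (0.0, t_end), z0, max_step=1e-2).y[:, -1]

def cost(s, tau1, tau2):
    """The two terms of (11) for n = 1, with alpha(s) = eps*(cos s, sin s)."""
    z = eps * np.array([np.cos(s), np.sin(s)])
    term1 = (ham_flow(z, -tau1)[0] - x0) ** 2     # x-projection vs. x0
    term2 = np.sum(ham_flow(z, tau2) ** 2)        # distance to the origin
    return term1 + term2

# Stable manifold: lam = p*x, with p the stabilizing solution of the ARE (2).
p = (a + np.sqrt(a**2 + b**2 * q0)) / b**2
s_star = np.arctan2(p, 1.0)                       # angle of the ray (1, p)
tau1 = np.log(x0 / (eps * np.cos(s_star))) / (b**2 * p - a)
print(cost(s_star, tau1, 8.0))                    # ~0, as in Theorem 1
print(cost(s_star + 0.4, tau1, 8.0))              # strictly positive off N_s
```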
Remark 2: Relying on the definition of the extended real line R̄, the intuition behind Theorem 1 can be captured by abusing the notation (i.e., letting a minimizer belong to R̄): the values s* are those that belong to the set

{ s ∈ [0, 2π)^{2n−1} : α(s) ∈ ( φ(x0, ∇V_s(x0)) ∪ φ(x0, ∇V_u(x0)) ) ∩ S_ε }. •

Proof: The claim is demonstrated by showing that the inclusions

( φ(x0, ∇V_s(x0)) ∪ φ(x0, ∇V_u(x0)) ) ∩ S_ε ⊆ α(s*)    (14)

and

α(s*) ⊆ ( φ(x0, ∇V_s(x0)) ∪ φ(x0, ∇V_u(x0)) ) ∩ S_ε    (15)

hold simultaneously. Consider first the inclusion (14), which is verified provided the cost function C_{x0} attains its infimum value, relative to s in Ξ, at the intersection between the trajectory identified by φ(x0, ∇V_s(x0)) (similarly for V_u) and the ball of radius ε. The initial value problem defined by the dynamics (7) and a generic initial condition (x0, λ0) ∈ R^n × R^n admits a unique solution locally around the origin of the state-costate space, which is continuous with respect to time and such that the limit for t that tends to +∞ (−∞, respectively) is equal to zero for any initial condition in the stable N_s (unstable N_u, respectively) invariant submanifold of (7). Thus, it follows that the intersection of φ(x0, ∇V_s(x0)) (φ(x0, ∇V_u(x0)), respectively) with S_ε, for sufficiently small ε, contains a single point z_s ∈ R^{2n} (z_u ∈ R^{2n}, respectively) such that the infimum with respect to τ2 in R̄ of the second term of C_{x0} in (11), i.e., the limit (13), can be zero. Moreover, by definition of φ(·, ·), z_s is such that also the first term of C_{x0} is zero for some τ1 ∈ R, since z_s ∈ φ(x0, ∇V_s(x0)). Therefore, since C_{x0} is clearly nonnegative and since z_s satisfies ‖z_s‖ = ε by definition of S_ε, it follows that z_s ∈ α(s*). An identical reasoning can be carried out for z_u, hence showing (14).
To prove the converse inclusion (15), note that the function ϕ_H(·; z0) is continuous in R̄ and ϕ_H(±∞; z0) = lim_{t→±∞} ϕ_H(t; z0). Thus, it must be shown that there does not exist any point z ∈ R^{2n}, with norm equal to ε, other than z_s and z_u, at which C_{x0}(z, τ1, τ2) = 0 for some τ1 ∈ R̄ and τ2 ∈ R̄. To this end, suppose initially that there exists such a point z that does not belong to N_s ∪ N_u. Then, clearly inf_{τ2∈R̄} ‖ϕ_H(τ2; z)‖² is strictly greater than zero, since the trajectory ensuing from z tends to the origin neither in forward nor in backward time. If instead z ∈ N_s \ φ(x0, ∇V_s(x0)) (an identical discussion can be carried out for the case of the unstable invariant submanifold), then inf_{τ1∈R̄} ‖π_x ∘ ϕ_H(−τ1; z) − x0‖² is strictly greater than zero by definition of the set φ(x0, ∇V_s(x0)) and of the distance between such a set and the locus of points defined by ϕ_H(t; z) for any t ∈ R. The proof is concluded by recalling that φ(x0, ∇V_s(x0)) (and similarly for V_u(x0)) contains a single point with norm equal to ε, namely z_s (z_u, equivalently).
The proposed results seemingly share common ingredients with strategies that aim at a trajectory-based characterization of certain invariant manifolds for nonlinear systems, as well as with the class of the so-called shooting methods, which possess a long history in the literature. Therefore, it is worth stressing the particular features of the method discussed herein, which significantly distinguish it from the two above-mentioned frameworks. First, differently from the former, the objective here is not to approximate the entire manifold but rather a single trajectory with certain properties. Differently from the latter, instead of considering only the forward propagation from a prescribed boundary condition, the desired trajectory is computed by determining its intersection with a hypersphere centered at the origin. In fact, the key idea consists of the simultaneous forward and backward propagation of trajectories from points lying on the surface of the hypersphere. As a consequence, one obtains a computationally amenable method that hinges upon the unconstrained minimization of a certain function (11). Furthermore, since the radius of the hypersphere can be selected arbitrarily small, differently from standard shooting methods over relatively long horizons, the knowledge of the linearized solution becomes a valid initial guess for the nonlinear iterations (see the more detailed discussion in Remark 5). Finally, it is worth mentioning that, as a side-effect of relying on a completely unconstrained minimization (namely, without even restricting the sign of the time variables τ1 and τ2 in (11)), the minimum of the equivalent cost function is obtained also at a trajectory that belongs to the unstable invariant manifold.
Remark 3: Theorem 1 establishes that the (static) minimization of the function C : Ξ → R_{≥0}, parameterized with respect to the initial state x0 ∈ R^n, characterizes the intersection points of the trajectories associated with the solutions V_s(x0) and V_u(x0) of the HJB PDE, lying on the stable and unstable Lagrangian submanifolds of (7), respectively, with the hypersphere of radius ε in the state/costate space. •
Remark 4: As noted in the proof of Theorem 1, for a given initial state x0 there are (only) two initial conditions λ0 such that the resulting trajectories of the Hamiltonian system (7) converge to the origin in either forward or backward time. The set α(s*) contains the two intersection points between these trajectories and the ball of radius ε. Equation (12) entails that the knowledge of the intersection point corresponding to the trajectory belonging to the stable submanifold, i.e., z_s, permits the computation of the optimal initial condition for the costate variable, λ0 = ∇V_s(x0), by integrating backward in time
the Hamiltonian dynamics (7). The relation ϕ_H(−τ1; α(s*)) ⊆ graph(∇_x V_s) ∪ graph(∇_x V_u) suggests that, by considering a certain number of suitably selected initial conditions x_{0,i} and by determining minimum points of C_{x_{0,i}}, it is possible to envision a strategy to reconstruct the stable and unstable invariant submanifolds, which would in turn yield the gradient of the value function V_s without explicitly solving the HJB PDE (4). The objective of this article is, however, different: rather than determining the invariant manifolds, the objective is to determine (or approximate) the trajectory of the Hamiltonian system corresponding to the optimal process, and thereby construct the optimal control law, for a specific initial condition of (1b). •
Remark 5: The intuition of forward propagating the trajectories of the underlying Hamiltonian dynamics to determine the solution to an optimal control problem has been explored in the literature before, especially in the case of finite-horizon problems; see, e.g., [26] and references therein. Due to the intrinsic instability of the Hamiltonian dynamics, the so-called shooting methods necessarily require an accurate initial guess of the costate initial condition λ(0), or a (short) bounded time interval, even in the case of infinite-horizon problems. A novel feature of the method proposed herein is that the optimal state/costate trajectory is not approximated by considering only the forward propagation from certain initial conditions (as is common in the literature), but by characterizing the intersection of such a trajectory with a hypersphere, namely by forward and backward propagating the trajectory from an intermediate point that belongs to the surface of the hypersphere. Provided the radius of the hypersphere is selected sufficiently small, the intersection to be determined is (much) closer to the origin than x0. Thus, a valid initial guess for the intermediate point on the boundary of the hypersphere may, indeed, be suggested by the linearized solution, i.e., by letting s(0) ∈ R^{2n−1} be such that

α(s(0)) := ε (x0, P x0) / ‖(x0, P x0)‖.

Such a strategy is employed and illustrated in the numerical simulations of Section V. •
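A minimal sketch of this linearized initialization is given below, with the stabilizing solution of (2) computed by SciPy's continuous-time ARE solver (with unit control weight); the matrices are placeholders, not the benchmark of Section V.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized initial guess for the intermediate shooting point: project the
# point (x0, P x0) of the linearized stable subspace onto S_eps.
A = np.array([[0.0, 1.0], [0.5, -0.2]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
P = solve_continuous_are(A, B, Q, np.eye(1))   # stabilizing solution of (2)

x0 = np.array([1.0, -0.5])
eps = 1e-2
z = np.concatenate([x0, P @ x0])               # (x0, P x0) in state/costate space
z_guess = eps * z / np.linalg.norm(z)          # alpha(s(0)): initial point on S_eps
```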
Fig. 1. Graphical illustration of the statement of Theorem 1. The solid black circles constitute the set α(s*). The solid black line depicts the forward/backward flow of the optimal set along the Hamiltonian dynamics, which is such that C_{x0}(s*, τ1*, τ2*) = 0. Any other trajectory of the Hamiltonian dynamics that intersects the set S_ε (depicted by the gray lines) is such that either the first or the second term of C_{x0}(·, τ1, τ2) is strictly positive for any τ1 ∈ R and τ2 ∈ R.
Remark 6: The main objective of this article, namely to provide a finite-dimensional characterization of the solution in infinite-horizon optimal control problems, is similar in spirit to that of the well-known model predictive control (MPC) strategies; see, e.g., [27] and references therein. However, a few relevant differences should be highlighted. In MPC the finite-dimensional characterization is obtained by considering a time quantization over a finite, typically short, time interval around the current value of the state: this allows one to pose a static optimization task with respect to the (finite) samples of the underlying control law. However, in continuous-time nonlinear systems, since portions of trajectories that are optimal over a finite horizon are not, in general, restrictions of the optimal control law over the entire horizon, such a strategy inevitably introduces a residual approximation error even in the nominal case of perfect knowledge of the plant and absence of noise or disturbances. Theorem 1 instead provides a finite-dimensional exact characterization of the optimal control law over the entire infinite horizon. Therefore, even in a receding-horizon implementation of the constructions proposed here, potentially in the presence of disturbances, the proposed strategy allows one to employ moving windows of length significantly larger than in alternative receding-horizon strategies. This feature is illustrated via a comparative study in Section V. •
The intuition behind Theorem 1 is illustrated for the scalar case, namely with n = 1, in Fig. 1, where the black circles constitute the set α(s*), namely the intersections of φ(x0, ∇V_s(x0)) and φ(x0, ∇V_u(x0)), which in the scalar case coincide with N_s and N_u, respectively, with the set S_ε. The solid black line represents the forward/backward flow of the optimal set along the Hamiltonian dynamics, which is such that C_{x0}(s*, τ1*, τ2*) = 0. As shown in the proof of Theorem 1, any other trajectory of the Hamiltonian dynamics that intersects the set S_ε (represented by the gray lines) is such that either the first (for α(s_1)) or the second (for α(s_2)) term of C_{x0}(·, τ1, τ2) is strictly positive for any τ1 ∈ R and τ2 ∈ R.
Remark 7: The discussion in the proof of Theorem 1 suggests two further implications on the values τ1 and τ2. First, it is evident that the infimum of C_{x0} is obtained by considering the limit of τ2 ∈ R̄ to ±∞, as implied by (13). Moreover, one has that τ1 τ2 > 0, i.e., the infimum is obtained by simultaneous forward/backward evaluation of Hamiltonian trajectories. •
Remark 7 motivates the result below, which allows one to approximate the solution of Q_{x0} with an arbitrary degree of accuracy. Let V and W be two nonempty sets and define the Hausdorff distance between V and W, denoted d_H(V, W), as

d_H(V, W) := max{ sup_{v∈V} inf_{w∈W} ‖v − w‖, sup_{w∈W} inf_{v∈V} ‖v − w‖ },

i.e., the largest among all the distances between the points in one set and the closest point in the other set.
Theorem 2: Let x0 ∈ X ⊆ R^n be given and consider the infinite-horizon optimal control problem Q_{x0}. Suppose that V_s and V_u are C²(X). Fix ε > 0 sufficiently small and suppose that Assumption 1 holds. Define

C^{ε′}_{x0}(s, τ1, τ2) := ‖π_x ∘ ϕ_H(−τ1; α(s)) − x0‖² + ( max{ ‖ϕ_H(τ2; α(s))‖ − ε′, 0 } )²,    (18)

where ε′ ∈ R_{>0} and ε′ < ε. Then, for any μ > 0 there exist θ1, θ2 ∈ R_{>0} such that

d_H( α(ŝ), α(s*) ) ≤ μ,

where (ŝ, τ̂1, τ̂2) := arg min_{(s,τ1,τ2)∈Θ} C^{ε′}_{x0}(s, τ1, τ2) and Θ := [0, 2π)^{2n−1} × [−θ1, θ1] × [−θ2, θ2]. •
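For finite point sets, the Hausdorff distance reduces to a max-min computation, as in the short sketch below.

```python
import numpy as np

def hausdorff(V, W):
    """Hausdorff distance between two finite point sets (one point per row)."""
    D = np.linalg.norm(V[:, None, :] - W[None, :, :], axis=-1)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

V = np.array([[0.0, 0.0], [1.0, 0.0]])
W = np.array([[0.0, 0.1]])
print(hausdorff(V, W))   # ~1.005: dominated by the point (1, 0) far from W
```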
Proof: First, note that, by continuity of the norm, the function C^{ε′}_{x0} converges pointwise, for any (s, τ1, τ2), to the function C_{x0} defined in (11) as ε′ tends to zero. Then, let C_{x0,θ} : Θ → R_{>0} be the restriction of the function C_{x0} in (11) to the set Θ. Given μ > 0, select θ > 0 and μ_θ < μ such that, letting (s^θ, τ1^θ, τ2^θ) := arg min_{(s,τ1,τ2)∈Θ} C_{x0,θ}(s, τ1, τ2), one has d_H(α(s^θ), α(s*)) ≤ μ_θ; such selections exist by continuity of the function C_{x0} with respect to its arguments. Therefore, consider a subsequence of indices {ε′_i}_{i∈N} such that lim_{i→∞} ε′_i = 0⁺ and define the corresponding sequence of functions {C^{ε′_i}_{x0}}_{i∈N} with the underlying arguments restricted to the compact domain Θ. By the discussion above, the sequence {C^{ε′_i}_{x0}}_{i∈N} is equi-continuous, each element being a continuous function; hence, [28, Th. 7.10] implies also epi-convergence of {C^{ε′_i}_{x0}}_{i∈N} to C_{x0,θ}. Furthermore, one has that the restrictions of {C^{ε′_i}_{x0}}_{i∈N} to Θ are level-bounded, following from compactness of the corresponding supports, proper and lower semicontinuous for any i ∈ N. Therefore, by relying on [28, Th. 7.33], it follows that the minimizers of C^{ε′_i}_{x0} converge to the minimizers of C_{x0,θ}, and the claim is shown by the definition of the Hausdorff distance and by the inequality (20).
Remark 8: Similarly to the interpretation in Remark 3 following the statement of Theorem 1, the rationale behind the formal claim of Theorem 2 can be further explained as follows. The arguments that attain the minimum of the function C^{ε′}_{x0} characterize (via the intersection points represented in spherical coordinates ŝ) the set of all the trajectories of the Hamiltonian dynamics (7) that enter (in finite time) or are tangent to the set S_{ε′} := {(x, λ) ∈ R^n × R^n : ‖(x, λ)‖ = ε′} and whose projection on the state component intersects x0. The fundamental difference with respect to the characterization of the trajectories as in the statement of Theorem 1 lies in the fact that the minimum of C^{ε′}_{x0} is achieved also at some |τ_i| < ∞, namely for certain (ŝ, τ̂1, τ̂2) belonging to the compact set Θ. These observations are illustrated for the scalar case in Fig. 2. •
Fig. 2. Trajectories of the Hamiltonian dynamics (7) that satisfy the initial condition x(0) = x0 and that (in finite time) enter (depicted by the gray, dashed line) or are tangent (depicted by the gray, solid lines) to the set S_{ε′}.
IV. ALGORITHMS FOR MINIMIZING THE HAMILTONIAN TRAJECTORY-BASED COST
The main objective of this section lies in presenting an algorithm that achieves the minimization of the trajectory-based (finite-dimensional) cost functions defined in (11) and especially (18). After providing a general statement outlining the design of such an algorithm, the discussion is focused on showing how to circumvent specific implementation issues arising when employing standard minimization techniques. In the following statement, we introduce a continuous-time dynamical system whose (asymptotically stable) equilibrium corresponds to a minimizer of the static cost function (18). This dynamical system constitutes a central component of the algorithm presented in the following sections. To provide a concise statement of the result, consider the partial derivatives of the function (18) with respect to its arguments, provided in (22), shown at the bottom of the next page, and let B denote the ball of radius one.
Proposition 1: Let x0 ∈ X ⊆ R^n be given. Fix ε > 0 and ε′ > 0 sufficiently small and any θ1, θ2 ∈ R_{>0}. Suppose in addition that Assumption 1 holds. Let γ > 0 and consider the gradient dynamics

η̇ = −γ ∇C^{ε′}_{x0}(η)^⊤,    (23)

where η := (s, τ1, τ2), for t ≥ 0 and η(0) ∈ arg min_{(s,τ1,τ2)∈Θ} C^{ε′}_{x0}(s, τ1, τ2) + μB. Then η(t) converges exponentially to a minimizer of C^{ε′}_{x0}. •
Proof: First, note that the function C^{ε′}_{x0} restricted to Θ is lower semicontinuous, since it is, in fact, continuous, proper, and level-bounded. It follows by
It follows by [28, Th. 1.9] that the set of minimizers (ŝ, τ̂_1, τ̂_2) := arg min_{(s, τ_1, τ_2) ∈ Θ} C^{ε_ℓ}_{x_0} is nonempty and compact; hence, the set α(ŝ) is compact. Moreover, by definition of this set (namely, the points in Θ at which C^{ε_ℓ}_{x_0}(s, τ_1, τ_2) = 0) and of (local) minimum, it follows that the vector field of (23) is well-defined in a neighborhood of this set. The remainder of the proof then follows directly from the result of Theorem 2 and standard Lyapunov stability arguments. To this end, note first that the function C^{ε_ℓ}_{x_0} is (locally) positive-definite with respect to the set (ŝ, τ̂_1, τ̂_2) and for any η ∈ (ŝ, τ̂_1, τ̂_2) + μ_2 B, for some μ_2 ∈ R_{>0}; hence, it can be employed as a candidate Lyapunov function. Moreover, (locally) the dynamics (23) are such that the time derivative of C^{ε_ℓ}_{x_0} along solutions is strictly negative away from the minimizing set. The latter implies that, with μ := min{μ_1, μ_2} and by continuity of α(·), ξ(t) converges exponentially to ŝ.

Remark 9: The intuition behind the statement of Proposition 1 suggests that each connected component of the set α(ŝ) is locally exponentially stable for the dynamics (23). It is worth observing that, for a fixed value of the parameter ε_ℓ > 0, while clearly α(s*) ⊂ α(ŝ), there may instead be connected components of α(ŝ) that possess empty intersection with α(s*) (see Fig. 2 compared with the exact case depicted in Fig. 1 for a graphical intuition in the case of scalar systems). Furthermore, note that, by combining the claims of Proposition 1 with those of Theorem 2, one has that the trajectories of system (23) recover in practice the actual solution s* provided ε_ℓ is selected arbitrarily small and θ_i arbitrarily large.

Remark 10: The result of Proposition 1 entails that a single static minimization allows one to compute an essentially open-loop approximation of the entire optimal control policy. Note that the strategy ensures that the resulting trajectory enters the hypersphere S_{ε_ℓ} in finite time. Within the hypersphere, one could, for instance, implement the solution of the linearized problem, which will induce a residual error of the order of O(ε_ℓ^2) on the feedback policy and of the order of O(ε_ℓ^3) on the cost of the optimal solution, namely on the value function. Thus, the resulting strategy (for the entire time horizon) can be made arbitrarily accurate through the selection of ε_ℓ. Toward the end of this section, we also provide an algorithm (see Algorithm 1), in which the static minimization problem is solved iteratively, yielding a receding-horizon architecture. The receding-horizon architecture has the additional benefit that it can be implemented online and yields an overall strategy with favorable robustness properties.

By inspecting (22), the dynamics (23) cannot be easily implemented, since the underlying vector field depends on knowledge of the flow ϕ_H(t; ·) and of the sensitivity, namely the Jacobian matrix of the flow with respect to the initial condition, whose closed-form expressions are typically not available for nonlinear Hamiltonian dynamics (7). This computational issue can be circumvented via a hybrid implementation. In the following statement, standard notation is used to represent the hybrid time domain: (t; k), with t ∈ R_{≥0} and k ∈ N, denotes the continuous time parameter t along with the index k counting the discontinuous jumps (for reasons of space the interested reader is referred to, e.g., [29] for further details on hybrid systems).
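Before turning to the hybrid implementation, the following is a minimal sketch of the gradient flow (23) in its Euler-discretized form. It assumes a generic differentiable cost C standing in for (18) and approximates the gradient by central finite differences; in the paper, the gradient is instead assembled from the flow sensitivities, as formalized next in Proposition 2.

```python
import numpy as np

def gradient_flow_minimize(C, eta0, gamma=0.1, dt=1e-2, steps=5000, h=1e-6):
    """Forward-Euler discretization of d(eta)/dt = -gamma * grad C(eta).

    C    : callable R^m -> R, a stand-in for the trajectory-based cost (18)
    eta0 : initial guess, e.g., (s, tau1, tau2) stacked into one vector
    The gradient is approximated by central finite differences here.
    """
    eta = np.asarray(eta0, dtype=float)
    for _ in range(steps):
        grad = np.empty_like(eta)
        for k in range(eta.size):
            e = np.zeros_like(eta); e[k] = h
            grad[k] = (C(eta + e) - C(eta - e)) / (2 * h)  # central difference
        eta = eta - dt * gamma * grad  # one Euler step of the flow (23)
    return eta

# toy usage with a quadratic stand-in for the cost
if __name__ == "__main__":
    C = lambda v: float(np.sum((v - np.array([1.0, -2.0, 3.0])) ** 2))
    print(gradient_flow_minimize(C, np.zeros(3)))  # converges near (1, -2, 3)
```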
Proposition 2: Suppose that the assumptions of Proposition 1 hold, and consider a hybrid system with flow dynamics (25) and jump dynamics (26), with the maps specified in (29), shown at the bottom of the next page. Then there exists τ_M > 0 with the property that, along the trajectories of (25), (26), the set α(ŝ) is locally exponentially stable.

Proof: To begin with, let t_i denote the continuous time at which the ith jump occurs. Note that the flow dynamics (25f) is such that τ increases at the same rate as the underlying continuous-time component of the hybrid time domain until it reaches τ ≥ τ_M, at which point a jump occurs and its value is "reset" to zero, namely it behaves as a timer variable. Thus, regardless of the initial condition of the state of the hybrid system, the hybrid arcs exhibit a nonempty continuous time interval with the property that t_{k+1} − t_k = τ_M for any k > 0, followed by (periodic) jumps. Note also that η remains constant during flows. Dynamics (25b) and (26b) are such that, for any ρ ≤ τ_M and any k > 0, the backward flow is generated, so that χ_b(t_{k+1}, k) = ϕ_H(−σ_1(t_k, k), α(ξ(t_k, k))). Similarly, (25c) and (26c) are such that χ_f(t_{k+1}, k) = ϕ_H(σ_2(t_k, k), α(ξ(t_k, k))), for any k > 0. Furthermore, by comparing the structure of (22) and of (29), it is clear that the rationale behind the dynamics (25d) and (25e) consists precisely in yielding the Jacobian matrix of the (unknown) flow of the Hamiltonian dynamics (7) with respect to the initial condition. To show this claim, let Ψ : R → R^{2n×2n} denote the sensitivity of the flow ϕ_H(t, z_0) with respect to z_0, namely Ψ(t) = ∇_{z_0} ϕ_H(t, z_0), where z_0 ∈ R^{2n} denotes the initial condition of the flow. Recalling that, by definition, the flow satisfies the fixed-point condition ϕ_H(t, z_0) = z_0 + ∫_0^t F_H(ϕ_H(s, z_0)) ds, with F_H denoting the vector field of the Hamiltonian dynamics (7), it follows that the sensitivity satisfies the variational relation Ψ̇(t) = ∇_z F_H(ϕ_H(t, z_0)) Ψ(t), Ψ(0) = I. Thus, it follows from (30) (noting also that σ_1 and σ_2 are constant during flows, namely for any t ∈ (t_k, t_{k+1}]) that the dynamics (25d) and (26d) generate Ψ_b, which is precisely the sensitivity [as given in (32)] of the flow ϕ_H(−σ_1, α(η)). Similarly, it can be shown that (25e) and (26e) generate Ψ_f, which is the sensitivity of the flow ϕ_H(σ_2, α(η)). As a consequence of the preceding discussion, one has that h(η, χ, Ψ)|_{(t_k, k)} = C_{x_0}(η) [with C_{x_0} as defined in (18)] and that the corresponding gradient coincides with (22). That is, the dynamics (25) and (26) are such that the sequence η(t_{k+1}, k + 1), for k ∈ Z_{>0}, realizes the discretization, via Euler's method, of the continuous dynamics (23); denote this discretization by (33). Recalling the main result of Proposition 1, namely that the set α(ŝ) is locally exponentially stable for the continuous dynamics (23), it follows that, for sufficiently small τ_M, the set α(ŝ) is locally exponentially stable for the discretized dynamics (33).

Remark 11: The intuition behind the formal statement of Proposition 2 can be summarized as follows. The state τ of the hybrid system (25)-(26) essentially represents a "sampling time", and the system is such that at jumps χ and Ψ are "initialized" in a manner that ensures that, via the flow dynamics, χ_b and χ_f determine the backward and forward flows of the Hamiltonian system, whereas Ψ_b and Ψ_f determine the sensitivities of the backward and forward flows, respectively. Namely, it is possible to periodically determine the flows and their sensitivities, for which analytical expressions are, in general, not available, as required to integrate the continuous dynamics (23) of Proposition 1. At jumps, this knowledge is then used to implement a discretized version of (23).
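The computation carried out by the states (χ, Ψ) during flows can be reproduced with a standard ODE solver: integrate the vector field together with its variational equation, so that Ψ(t) = ∇_{z_0}ϕ(t, z_0). The sketch below assumes a pendulum-like Hamiltonian vector field as a stand-in for (7); backward flows are obtained by integrating over a negative time interval.

```python
import numpy as np
from scipy.integrate import solve_ivp

def flow_and_sensitivity(F, dF, z0, t_final):
    """Integrate z' = F(z) jointly with the variational equation
    Psi' = dF(z) @ Psi, Psi(0) = I, so that Psi(t) = d phi(t, z0) / d z0.
    A negative t_final yields the backward flow and its sensitivity."""
    n = z0.size
    def rhs(t, y):
        z, Psi = y[:n], y[n:].reshape(n, n)
        return np.concatenate([F(z), (dF(z) @ Psi).ravel()])
    y0 = np.concatenate([z0, np.eye(n).ravel()])
    sol = solve_ivp(rhs, (0.0, t_final), y0, rtol=1e-9, atol=1e-9)
    zT = sol.y[:n, -1]
    PsiT = sol.y[n:, -1].reshape(n, n)
    return zT, PsiT

# stand-in Hamiltonian vector field (H = p^2/2 - cos q): F(q, p) = (p, -sin q)
F  = lambda z: np.array([z[1], -np.sin(z[0])])
dF = lambda z: np.array([[0.0, 1.0], [-np.cos(z[0]), 0.0]])
zT, PsiT = flow_and_sensitivity(F, dF, np.array([0.3, 0.0]), 2.0)
print(zT, PsiT)
```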
The previous discussions and formal statements are finally summarized in the following algorithm, which essentially outlines a receding-horizon implementation of the results of Propositions 1 and 2, in which the hyperspheres defined by ε and ε_ℓ are allowed to shrink.

Algorithm 1: Receding-horizon implementation.
(2) Initialize the hybrid system (25), (26), with ξ_c such that α(ξ_c) = ε[x_c, x_c P] / ||[x_c, x_c P]|| as in (16), with P denoting the positive definite solution of the ARE (2).
(3) Integrate the hybrid system (25), (26) on [0, ντ_M] and let σ̄_i = σ_i(ντ_M, ν) and τ_κ = τ_{κ−1} + min{τ_rh, σ̄_1 + σ̄_2}.
(4) Implement the control input u° in the system (1b) for τ_κ − τ_{κ−1} seconds.

A few comments about the practical implementation of the algorithm above are necessary before considering a benchmark numerical simulation. To begin with, it is worth observing that the iterations may be terminated whenever x_c (which defines, via ξ_c in step (2), the intersection of the trajectory with the outer hypersphere of radius ε) is sufficiently small. In addition, in order to obtain that τ_κ = τ_{κ−1} + τ_rh for all κ ∈ N, namely such that the time instants at which the control law is updated coincide with the periodic pattern induced by the desired value τ_rh, one should select ε_ℓ sufficiently small and ν sufficiently large. This aspect is further discussed in the numerical simulations below.

V. BENCHMARK EXAMPLE

To illustrate and validate the theory discussed in the previous sections, the following benchmark example for infinite-horizon optimal control problems in the presence of nonlinear dynamics is considered. In particular, the problem is borrowed from [30] and [31], where a detailed comparative analysis is carried out among several alternative techniques. Toward this end, consider the nonlinear system described by the equations

ẋ_1 = x_2, ẋ_2 = x_1^3 + u    (37)

with x(t) ∈ R^2 and u(t) ∈ R, which exhibits a cubic nonlinearity in the state variables, together with a cost function of the form (1a), with q(x) = x^T x. As implied by [20] (see also Remark 1), the problem admits a locally C^2 value function. The graphs in Figs. 3 and 4 depict the outcome corresponding to the strategy obtained by implementing Algorithm 1 to solve the infinite-horizon optimal control problem defined by (37) and (1a) in the nominal setting, i.e., in the absence of any noise or disturbances. The results are obtained by selecting the configurable parameters of Algorithm 1 as τ_M = 0.01 s, τ_rh = 1 s, ν = 6 × 10^5, c_1 = 0.5, c_2 = 0.1, τ_{1,0} = τ_{2,0} = 1, and γ = 0.1.
The dashed vertical lines in Fig. 3, which are uniformly distributed over time, show that the selection of c_2, hence in turn of ε at each iteration, induces the property that τ_κ − τ_{κ−1} = τ_rh = 1 s for all κ. Moreover, it is worth observing that, by relying on the discussion of Remark 6, which highlights that at each execution of steps (2)-(4) of Algorithm 1 the entire (infinite-horizon) optimal solution is characterized provided ε_ℓ is sufficiently small and ν is large, Algorithm 1 is capable of determining a solution with a cost lower than all the alternative techniques considered in the comparison provided in [31]. More precisely, J(u°) = 1.4828, even with a relatively long moving window, namely with one second between two consecutive updates of the approximate policy. The proposed strategy (and its performance) is explicitly compared with control laws designed on the basis of linear and nonlinear MPC strategies (see, e.g., [32]). In particular, within the framework of a linear (parameter-varying) MPC architecture, the dynamics (37) are linearized around the current value of the state, which is assumed constant during the prediction phase, and subsequently discretized via Euler's method with respect to the sampling time δT ∈ R_{>0}. To obtain a finite-dimensional characterization of the optimal control task, as is common in this setting, it is assumed that the control action is piecewise constant during the evolution of the prediction model. This choice allows one to pose the underlying optimal control problem, restricted to the moving window of length δT N_u, with N_u ∈ N denoting the number of prediction steps, as a static quadratic optimization task (a minimal sketch of this baseline is given at the end of this section). Fig. 5 shows the time histories of the state variable x_1(t) of the system (37) for several selections of the parameters δT and N_u. In particular, the cost of the choice (δT, N_u) = (0.2, 5) (solid black line) is equal to 1.8389. It is worth observing that, for the above selection of parameters, the optimization is performed over a receding window of 1 s, whereas the control action is computed every 0.2 s, i.e., at a significantly higher rate than the 1 s of the control design method we propose. Furthermore, (δT, N_u) = (0.2, 10) (solid dark gray line) induces a cost of 2.9949, (δT, N_u) = (0.5, 4) (solid light gray line) induces a cost of 2.1528, and, finally, (δT, N_u) = (1, 2) (dash-dotted gray line) results in an unbounded trajectory of the closed-loop system. A similar comparison is performed also with respect to a nonlinear MPC scheme, with δT = 0.2 and N_u = 5, in which the optimization problem at each step is solved via the fmincon command in Matlab. A quadratic virtual weight on the terminal state of the moving window, described by x(t + N_u|t)^T S x(t + N_u|t) with S = 10^2 I, is required to induce a bounded evolution of the resulting closed-loop system. The corresponding trajectory is described by the dashed gray line in Fig. 5, with associated cost equal to 1.8922. The objective of the second numerical simulation is instead to assess the behavior of Algorithm 1 in the presence of an exogenous disturbance signal affecting the dynamics (37). In particular, the disturbance is unmatched with the control input, since it affects (linearly) the dynamics of the first state, namely ẋ_1 = x_2 + w, with w(t) defined as w(t) = 0.5 for t ∈ [0.4, 0.8], w(t) = −0.5
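The linear (parameter-varying) MPC baseline described above can be summarized in a few lines. The sketch below assumes an unconstrained problem with a unit input weight (the input weight is not stated explicitly in the excerpt), so the static quadratic program admits a closed-form solution; the function name lpv_mpc_step is ours.

```python
import numpy as np

def lpv_mpc_step(x, dT=0.2, Nu=5):
    """One step of the linear (parameter-varying) MPC baseline: linearize
    x1' = x2, x2' = x1**3 + u at the current state, discretize by Euler with
    sampling time dT, assume piecewise-constant inputs over Nu steps, and
    solve the resulting unconstrained quadratic program in closed form."""
    A = np.eye(2) + dT * np.array([[0.0, 1.0], [3.0 * x[0] ** 2, 0.0]])
    B = dT * np.array([[0.0], [1.0]])
    # stacked prediction: X = Phi x + Gam U
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(Nu)])
    Gam = np.zeros((2 * Nu, Nu))
    for r in range(Nu):
        for c in range(r + 1):
            Gam[2*r:2*r+2, c:c+1] = np.linalg.matrix_power(A, r - c) @ B
    H = Gam.T @ Gam + np.eye(Nu)   # Hessian of min ||Phi x + Gam U||^2 + ||U||^2
    f = Gam.T @ (Phi @ x)          # linear term of the QP
    U = np.linalg.solve(H, -f)     # closed-form minimizer
    return float(U[0])             # apply the first input only

# closed-loop rollout from x(0) = (1, 0) against the true nonlinear dynamics
x, dT = np.array([1.0, 0.0]), 0.2
for _ in range(50):
    u = lpv_mpc_step(x, dT=dT, Nu=5)
    x = x + dT * np.array([x[1], x[0] ** 3 + u])
print(x)
```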
VI. CONCLUSION

Infinite-horizon optimal control problems for nonlinear systems have been studied. Their intrinsically infinite-dimensional nature renders this class of control problems particularly challenging to solve. The contribution of this article has been twofold. On one hand, we have provided a finite-dimensional characterization of the solution to such problems in terms of the set in which a certain function, which involves the trajectories of the associated Hamiltonian system and their sensitivity with respect to the initial condition, attains its minimum value. On the other hand, we have shown that a suitably adapted hybrid implementation of a standard gradient-descent algorithm permits the minimization of such functions without requiring explicit knowledge of the flow of the Hamiltonian system, which is seldom available in practice. This result is well suited to an algorithmic interpretation, which can be implemented in a receding-horizon fashion. The efficacy of the resulting algorithm is demonstrated by means of a benchmark infinite-horizon nonlinear optimal control problem.

Fig. 2. Graphical illustration of the statement of Theorem 2. The solid segments on the set S_{ε_ℓ} indicate the arguments which attain the minimum of C^{ε_ℓ}_{x_0}. These constitute intermediate points of all trajectories of the Hamiltonian system (7) that satisfy the initial condition x(0) = x_0 and that (in finite time) enter (depicted by the gray, dashed line) or are tangent (depicted by the gray, solid lines) to the set S_{ε_ℓ}.

Fig. 5. Time histories of the state x_1(t) of the system (37) with x(0) = (1, 0), in closed loop with MPC controllers for different values of the configuration parameters δT (sampling time) and N_u (length of the receding horizon).
Does the Exhaustion of Resources Drive Land Use Changes? Evidence from the Influence of Coal Resources Exhaustion on Coal Resources-Based Industrial Land Use Changes

Analyzing the spatial-temporal changes of resources-based industrial land is essential to the transformation and development of resources-exhausted cities. In this paper, we studied coal resources-based industrial land use changes and their driving factors in a typical coal resources-exhausted city, Anyuan District of Pingxiang City. The changes between coal resources-based industrial land and other land-use types were analyzed. Logistic regression models were applied to identify the main driving factors and quantify their contributions to coal resources-based industrial land-use changes during the two periods of 2003-2008 and 2008-2013. The results show that coal resources-based industrial land declined by 34.37% during the period 2003-2013 as coal resources were being exhausted. Altitude, distance to roads, distance to town, population density change, fixed-asset investment per area change, and GDP per capita change drove coal resources-based industrial land-use changes. However, the patterns of the driving effects differed, and even the same factors had different influences on coal resources-based industrial land-use changes during the two periods. The changes in the driving factors can be seen as responses to socioeconomic transformation and development in the city, which is experiencing the exhaustion of coal resources. As a result of the comprehensive effects of these driving factors, coal resources-based industrial land use has changed in complex ways.

Introduction

Nowadays, resources-exhausted cities, whose economies once relied on the development of natural resources such as minerals and forests, are facing the phenomenon of resource exhaustion. The cumulative extraction of resources in these cities has reached more than 70% of recoverable reserves, and resource exploitation has come to its end stage [1]. In recent years, resource extraction and initial processing enterprises in these cities have gradually closed, causing dramatic changes in industrial land. On the one hand, enormous areas of abandoned industrial land bring environmental problems [2,3]. On the other hand, there is an urgent need for construction land in the economic transformation and sustainable development of these cities [4]. However, few studies have explored coal resources-based industrial land-use changes and their driving forces.
In recent decades, dramatic land-use changes have occurred around the world, especially in rapidly developing regions [5,6]. As an important component of global environmental change, land use and cover change (LUCC) has become a hot topic in sustainable development studies [7,8]. Increasing numbers of studies have investigated land-use changes and their driving forces [9-11], including comprehensive land-use changes [12,13], construction land-use changes [14-16], farmland changes [17-19], and ecological land changes [20,21]. Generally speaking, researchers believe that natural environment changes, industrialization, and urbanization are the main driving forces in the conversion of farmland to nonagricultural use [22-24]. Socioeconomic developments and land-use policies drive urban and industrial land expansion to create future urban land-use spaces [25,26]. On the other hand, wetland and forest areas have increased and the environment has improved as ecological restoration projects have been implemented [27]. Most of these studies are focused on large-scale or mesoscale land-use changes and their driving forces [28-30]. At different spatial scales, the interactions between land-use change and its associated factors are also different [31-33]. An in-depth study of small-scale land-use changes can further reveal the micro-processes and internal mechanisms of regional spatial-temporal changes in land use [34]. Moreover, as a special and important land-use type in resources-exhausted cities, resources-based industrial land-use changes are related to the phases of coal resources-based cities, which include the two distinct phases of prosperity and exhaustion. Analyzing the spatial-temporal changes of resources-based industrial land is essential to provide important information for its reasonable use and reclamation, and is beneficial to the transformation and development of these cities. Therefore, it is necessary to expand the research to coal resources-based industrial land-use changes and their driving forces at a small scale.

Pingxiang is a typical coal resources-exhausted city located in Jiangxi Province, China. Anyuan District in Pingxiang was selected in this study to explore coal resources-based industrial land-use changes and their driving forces. Using high-resolution images from 2003, 2008 and 2013, changes in coal resources-based industrial land areas and their directions of flow were considered to determine spatiotemporal characteristics. The possible driving forces and their impacts on coal resources-based industrial land-use changes were spatially identified and quantified using logistic regression models.

Overview of the Study Region

Anyuan District is situated in the hilly and mountainous area of central China, lying between 113°45'-113°59' E and 27°33'-27°44' N (Figure 1). It is the political, economic, and cultural center of Pingxiang in Jiangxi Province. The landform is plain in the northwest part of the study area and mountainous in the southeast. The east-west width is about 22.75 km, and the north-south length is about 20.33 km, covering an area of approximately 212.58 km². The area is located in a humid subtropical monsoon climate zone and is characterized by four distinct seasons, with an average annual precipitation of 1630 mm and a mean annual temperature of 17.2 °C.
The annual rainfall is higher than the annual evaporation. Anyuan District is among the earliest recorded coal mining areas in China, with mining traceable back to the Tang Dynasty. In the past, it was called the "Coal Capital" south of the Yangtze River because of its abundant coal resources. Coal resources-based industrial land is the main component of industrial land. As they are located in hilly and mountainous areas, coal mines are widely distributed at relatively high altitudes. The exploitation of coal resources has greatly contributed to the modernization of the region, as well as the whole country. As a result of high-intensity coal mining over a long period, coal resources in Anyuan District have become nearly exhausted since 2000, resulting in a reduction of coal mining and coal processing activities. The coal economy is collapsing. In March 2008, the State Development and Reform Commission identified Pingxiang as a resources-exhausted city. As the most important coal production area of Pingxiang, Anyuan District is experiencing dilemmas in both its socioeconomic and ecological development.

Data Sources and Processing

SPOT images from 2003 and aerial photographs from 2008 and 2013, obtained from the Pingxiang Municipal Bureau of Land and Resources, were chosen as the base images. The resolution of the SPOT images is 2.5 m, and the resolution of the aerial photography images is 1 m. The images from 2003, 2008 and 2013 were geo-referenced and imported into a Gauss-Kruger coordinate system. An accuracy of less than 0.5 pixel root mean square error was achieved using image rectification in ERDAS Imagine 9.2 software. The land-use categories were divided into six kinds, coal resources-based industrial land, general construction land, farmland, forestland, water, and other use types, by the object-based classification method. In the study, coal resources-based industrial land refers to land used for coal mining, coal product processing, and coal storage. The land-use data of the corresponding periods and high-resolution images from Google Earth were used as reference data. At the same time, the coal resources-based industrial land and its changes were revised according to the investigation of the mine management bureau and the village cadres of the area. The accuracy of the comprehensive evaluation of results was more than 86% [35]. All images were converted into a grid map with 2.5 m resolution to reduce errors in counting the data. To obtain gains and losses in coal resources-based industrial land during the periods of 2003-2008 and 2008-2013, the land-use maps for 2003, 2008 and 2013 were overlaid using ArcGIS 10.3 software (Figure 2), as sketched below. Slope and altitude data were extracted from ASTER GDEM data (30 m resolution) obtained from the Geospatial Data Cloud, Chinese Academy of Sciences (http://www.gscloud.cn). The data for distances to roads and towns were calculated using ArcGIS. The socioeconomic data, including urbanization rates, financial revenues, population, and fixed-asset investment, were collected from Anyuan District's yearbooks for 2004-2014. These data were assigned to the sub-districts of Anyuan District's vector layer and converted to raster layers with a resolution of 30 m [36]. It is important to note that we take the changes of the data from 2003-2008 and from 2008-2013 into consideration; therefore, the study is actually based on cross-sectional data for two periods.
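As a sketch of the overlay step, the land-use change matrix can be computed directly from two co-registered integer-coded rasters; the snippet below is a minimal stand-in for the ArcGIS overlay, assuming the classified maps have been exported as numpy arrays.

```python
import numpy as np

def transition_matrix(lu_t1, lu_t2, n_classes):
    """Cross-tabulate two co-registered land-use rasters (integer class codes)
    into an n_classes x n_classes change matrix; entry [i, j] counts the
    pixels that changed from class i at t1 to class j at t2.  Multiply by the
    pixel area to express the matrix in hm^2."""
    flat = lu_t1.ravel() * n_classes + lu_t2.ravel()
    counts = np.bincount(flat, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

# toy example with 3 classes on a 4x4 grid
t1 = np.random.randint(0, 3, (4, 4))
t2 = np.random.randint(0, 3, (4, 4))
print(transition_matrix(t1, t2, 3))
```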
Driving Factors Analysis of Industrial Land-Use Change

At any point on the earth's surface, land-use change can be divided into two types: changed or not changed. Regression models are popular in the quantitative analysis of the driving factors of land-use change. However, the traditional linear regression model is not applicable for solving this problem with a dichotomous dependent variable [37-39]. In this situation, the logistic regression model has the advantage of addressing this problem properly, as it is a nonlinear model for binary or multivariate analysis [40]. Therefore, logistic regression models are often used to analyze the driving forces in land-use changes [36,41,42]. In this study, coal resources-based industrial land-use change was considered the dependent variable (Y) with a binary value (i.e., changed or not changed), linked to a series of possible driving factors (X), with the following logistic regression equation [43]:

P = Λ(Xβ) = exp(Xβ) / (1 + exp(Xβ)),

where Λ(·) represents the standard logistic cumulative distribution function, P is the occurrence probability of coal resources-based industrial land-use change, and β is a vector of parameters to be estimated. While the coefficients of the logistic model only provide the direction and significance of the explanatory variables, marginal effects, which show how much the event probability changes when a given explanatory variable is changed by one unit, usually need to be computed. When one of the driving factors X_k is a continuous variable, its partial effect on coal resources-based industrial land-use change is obtained from the partial derivative:

∂P/∂X_k = Λ(Xβ)[1 − Λ(Xβ)] β_k.

An explanatory variable will only pass the significance test if the associated p-value of the z-statistic is less than the given significance level (0.05 in our study). In addition, the socioeconomic data, including urbanization rates, financial revenues, population, and fixed-asset investment, are based on Anyuan District's yearbooks for 2004-2014, which are measured at an aggregation level. However, the dependent variable is measured at a detailed spatial unit. The different measurement levels of the dependent variable and the explanatory variables may lead to a bias in standard errors. Therefore, we use cluster-adjusted standard errors to alleviate this problem [44].
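For illustration, the estimation described above, a logit with cluster-adjusted standard errors at the sub-district level and average marginal effects, can be reproduced in Python with statsmodels (the paper itself uses Stata 12); all variable names and the toy data below are ours.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# toy sample: y = 1 if the pixel changed, X = standardized driving factors,
# 'town' identifies the sub-district each sampling point belongs to (the
# aggregation level of the socioeconomic variables)
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "altitude": rng.normal(size=n),
    "dist_road": rng.normal(size=n),
    "pop_change": rng.normal(size=n),
    "town": rng.integers(0, 10, size=n),
})
df["y"] = (rng.random(n) < 1 / (1 + np.exp(-(0.5 * df["altitude"]
          + 0.8 * df["dist_road"])))).astype(int)

X = sm.add_constant(df[["altitude", "dist_road", "pop_change"]])
model = sm.Logit(df["y"], X).fit(
    cov_type="cluster", cov_kwds={"groups": df["town"]})  # clustered SEs
print(model.summary())
print(model.get_margeff().summary())  # average marginal effects dP/dX_k
```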
Selection of Driving Factors

Previous studies have shown that land-use change is a complex process that is affected by the comprehensive effects of natural environment changes and human activity [45-47]. As a special land-use type, coal resources-based industrial land-use change is also affected by changes in the natural environment and human activity. After investigating the status of coal resources-based industrial land-use changes, physical geography conditions, location conditions, and socioeconomic conditions in Anyuan District, and combining them with principles of driving-factor selection such as representativeness, comprehensiveness, and data availability, 9 possible driving factors were selected. However, financial revenue per area, which had a strong collinear relationship with other driving factors, was excluded. Thus, 8 independent variables were finally selected as possible driving factors (Table 1). Among these driving factors, slope and altitude represent natural driving factors; distances to roads and towns represent location driving factors; and population density change, fixed-asset investment per area change, urbanization rate change, and GDP per capita change represent socioeconomic driving factors. Changes in socioeconomic factors were calculated as the difference between the two periods of 2003-2008 and 2008-2013. The socioeconomic data were assigned to the towns of Anyuan District as attribute data in vector layers using ArcGIS 10.3. Finally, maps of the driving factors were converted to 30 m resolution grid maps to facilitate sampling.

Spatial Sampling

To avoid spatial autocorrelation, stratified random sampling was used by creating a random point tool in ArcGIS, and n observations were selected randomly throughout the study area. For every observation, the data for the dependent and independent variables were extracted from the coal resources-based industrial land-use change and driving factor layers, and 200 sampling points were finally retained. For each sampling point, 1 was assigned if changes between coal resources-based industrial land and other land-use types occurred during the two periods of 2003-2008 and 2008-2013; otherwise, 0 was assigned. Using the same observations, the data were extracted from the driving factor maps as the independent variables. To avoid influence on the constant term, the numbers of observations with values of 0 and 1 were equal in every model [48,49]. Since the independent variables were selected from natural conditions, socioeconomic conditions, location, and so on, the sampled data have different dimensions and dimensional units. To eliminate the influence of dimension, the sampled data were standardized using the standard-scores standardization method with the following expression [36]:

X'_{i,j} = (X_{i,j} − X̄_i) / s_i,

where X'_{i,j} stands for the standardization result; X_{i,j} is the raw value of the ith possible driving factor at the jth sampling point; X̄_i is the mean of the ith possible driving factor over all sampling points; and s_i is the corresponding standard deviation. Once all the data were ready, Stata 12 was used to construct the logistic regression models and analyze the driving forces of coal resources-based industrial land-use change.
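A minimal sketch of the balanced sampling and standard-score standardization described above follows; the excerpt writes out only the mean, so division by the per-factor standard deviation is assumed, and the synthetic data are purely illustrative.

```python
import numpy as np

def standardize(X):
    """Standard-score each column: X'[i, j] = (X[i, j] - mean_j) / std_j."""
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=0)

def balanced_sample(y, n_per_class, rng):
    """Draw equal numbers of changed (y=1) and unchanged (y=0) observations,
    as done in the paper to avoid biasing the constant term."""
    idx0 = rng.choice(np.flatnonzero(y == 0), n_per_class, replace=False)
    idx1 = rng.choice(np.flatnonzero(y == 1), n_per_class, replace=False)
    return np.concatenate([idx0, idx1])

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 1000)                 # illustrative change labels
X = rng.normal(loc=5.0, scale=2.0, size=(1000, 8))  # 8 driving factors
idx = balanced_sample(y, 100, rng)           # 200 points in total
Xs = standardize(X[idx])
print(Xs.mean(axis=0).round(6), Xs.std(axis=0).round(6))  # ~0 and ~1
```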
Coal Resources-Based Industrial Land-Use Change

Figure 3 shows the area for each type of land use in Anyuan District from 2003 to 2013. To analyze coal resources-based industrial land-use changes, the land-use transitions between coal resources-based industrial land and non-coal resources-based industrial land during 2003-2013 were calculated using a land-use change matrix. Table 2 shows the results of the land-use change matrix for 2003-2013 in Anyuan District. During the period 2003-2013, the areas of farmland, forest, and coal resources-based industrial land decreased. The areas of general construction land and other land types increased, and the area of water remained mostly steady over the same period. Anyuan District is the urban area of Pingxiang City, where residential space is the main construction land. Although coal resources-based industrial land accounts for a small proportion of construction land, it accounts for a large proportion of industrial land. The total area of coal resources-based industrial land was 743.99 hm² in 2003, 498.55 hm² in 2008, and 488.30 hm² in 2013, which represents a dramatic decrease during 2003 to 2008. Of the total reduced coal resources-based industrial land, 38.15% was converted to general construction land (area: 179.60 hm²), 32.37% was converted to forest (area: 152.36 hm²), and 26.98% was converted to other land-use types (area: 127.03 hm²). On the other hand, some general construction land, forest, and farmland were converted to coal resources-based industrial land. General construction land increased by 25.64%, from 5556.01 hm² to 6980.61 hm², whereas 109.70 hm² of general construction land was converted to coal resources-based industrial land. Farmland and forest were reduced by 29.31% (from 4975.91 hm² to 3517.61 hm²) and 2.07% (from 9499.20 hm² to 9302.16 hm²), respectively. Of the total reduced area of these two types, 66.82 hm² of forest and 36.61 hm² of farmland were converted to coal resources-based industrial land.

Driving Factors of Industrial Land-Use Change, 2003-2008

Table 3 shows the coefficients, cluster-adjusted standard errors, and marginal effects of the two logistic regressions for the driving factors of coal resources-based industrial land-use losses and gains from 2003 to 2008. In model 1, altitude, distance to roads, distance to town, population density change, and fixed-asset investment per area change have significant effects on the loss of coal resources-based industrial land. Altitude has a positive impact on coal resources-based industrial land loss: a one unit increase in altitude produces a 0.111 unit increase in the probability of coal resources-based industrial land loss. Given the difficulty of coal mining at higher altitudes, industrial land there is more likely to be transferred as coal resources become exhausted. Distance to roads and distance to town, as location factors, positively affect coal resources-based industrial land loss: with a one unit increase in these two factors, the probabilities of coal resources-based industrial land loss increase by 0.231 units and 0.095 units, respectively. Generally speaking, most of the coal mines are located in the hilly and mountainous regions far away from the town. The increased probability of coal resources-based industrial land loss indicates that more coal resources-based industrial land changes to other land-use types. This is mainly because coal declined in quality during the period of coal-resources exhaustion and could not be sold at good prices, and long-distance transportation increases production costs. Population density change is another important driving factor that positively affects coal resources-based industrial land loss. Loss is more likely to occur in regions with higher population density. With a one unit increase in population density, the probability of coal resources-based industrial land loss increases by 0.153 units. Population agglomeration drives socioeconomic development. As a consequence, when the coal industry declines, the labor forces and capital that originally worked or were invested in small and micro coal enterprises transfer to other industries for higher
profits, which caused changes in coal resources-based industrial land use. It was found that many coal enterprises, especially small and micro private coal enterprises, shut down between 2003 and 2008. Further, the variable of fixed-asset investment per area change has a negative impact on coal resources-based industrial land loss. With a one unit increase in fixed-asset investment per area, the probability of coal resources-based industrial land loss decreases by 0.228 units. The reason for this could be that, to ensure industrial output value, the government increased investment in the mining industry, since coal was the leading industry in the area.

Factors Driving Industrial Land-Use Gains, 2003-2008

In model 2, only the variable of distance to town passes the significance test. Distance to town has a negative impact on coal resources-based industrial land gain. A one unit increase in distance to town produces a 0.122 unit decrease in the probability of coal resources-based industrial land gain. The decline in the probability of coal resources-based industrial land gain indicates that it is more difficult for other land-use types to change to coal resources-based industrial land. With coal-resources exhaustion, the difficulties of coal exploitation and the costs of production increased, and as a result, the expansion of land for coal mining became more and more difficult.

Factors Influencing Industrial Land-Use Change, 2008-2013

Table 4 shows the results of the logistic regression models for the driving factors of coal resources-based industrial land losses and gains during the period 2008-2013. In model 3, altitude, distance to roads, distance to town, population density change, fixed-asset investment per area change, and GDP per capita change significantly affect the loss of coal resources-based industrial land. Altitude has a positive impact on coal resources-based industrial land loss: with a one unit increase in altitude, the probability of coal resources-based industrial land loss increases by 0.125 units. This suggests that coal resources-based industrial land at higher altitudes is more likely to be transferred. It was found that, with the aggravation of coal-resource exhaustion during this period, coal shortages caused the exit of state-owned coal enterprises. Distance to roads has a negative impact on coal resources-based industrial land loss: with a one unit increase in distance to roads, the probability of coal resources-based industrial land loss decreases by 0.186 units. Meanwhile, distance to town positively affects coal resources-based industrial land loss: with a one unit increase in distance to town, the probability of coal resources-based industrial land loss increases by 0.128 units. This is mainly because coal mining, processing, and storage cannot generate satisfactory profits in advantageous locations, and the coal resources-based industrial lands near roads are likely to change to other land types. At the same time, other coal mining enterprises that are far from towns are in continuous recession as coal resources become exhausted. As for socioeconomic factors, population density change, fixed-asset investment per area change, and GDP per capita change positively affect coal resources-based industrial land loss. With a one unit increase in these three factors, the probabilities of coal resources-based industrial land loss increase by 0.235 units, 0.242 units, and 0.118 units, respectively. Along with socioeconomic transformation and development, the coal industry, previously the area's leading industry, is replaced by
other industries, and capital, labor, and other production factors flow into the new leading industries with higher outputs. Therefore, these factors accelerate coal resources-based industrial land loss.

Similarly to model 2, only the variable of distance to town passed the significance test, at the 1% significance level, in model 4. As with 2003-2008, distance to town negatively affected coal resources-based industrial land gains during this period. A one unit increase in distance to town produces a 0.145 unit decrease in the probability of coal resources-based industrial land gain.
Exhaustion of Coal Resources Improves the Land Environment Indirectly

The prosperity of mining activities leads to a rapid reduction in the scale of ecological land such as forests and results in ecological environment problems [50,51]. After high-intensity coal mining over a long period, coal resources had become exhausted in Anyuan District from 2000. In the first stage, from 2003 to 2008, small and micro coal enterprises were rapidly shut down because of the lack of capital and labor, causing a dramatic decrease of coal resources-based industrial land. Although the state-owned coal enterprises possessing advanced production equipment and technology remained, the exhaustion of coal resources influenced their production capacity, which led to a long-term decrease of coal resources-based industrial land. With the implementation of the policy of environmental improvement for abandoned mines, 152.36 hm² of the reduced coal resources-based industrial land was converted to forest from 2003 to 2013, going through a process from abandonment to recovery (Figure 4). With the depletion of coal resources and the demand for urban transformation and development, the structure and function of the ecosystem have been restored with the help of self-repair and artificial restoration. Although the exhaustion of coal resources does not directly promote ecological restoration, the land's environmental conditions improve indirectly.

Complexity of the Driving Factors of Coal Resources-Based Industrial Land Changes

It has been shown that the processes of land-use change are closely related to environmental changes, population increases, socioeconomic developments, and land-use policies [24,52,53]. Natural factors control long-term and large-scale land-use change, while socioeconomic factors play a leading role in short-term and small-scale land-use change [12]. For different land-use types, the driving factors and their driving mechanisms are different. As a special type of land use, coal resources-based industrial land changes in both directions, increase and decrease, in space during the depletion stage of coal resources in this area. The driving forces of social and economic development can also be seen as responses to socioeconomic transformation and development in the city, which is eager to escape the development dilemma caused by the exhaustion of coal resources. Apart from the social and economic development driving factors, altitude is still an important factor influencing coal resources-based industrial land change in the hilly and mountainous areas. This may be because of the particularity that coal production difficulty increases with altitude. As a result of the comprehensive effects of these driving factors, coal resources-based industrial land use changes in complex ways.
Bidirectional Process of Coal Resources-Based Industrial Land Changes

Land-use change is a complex process in the spatial and temporal dimensions, which can be drastic in a short time under the influence of human activities. Within a period of time, the land-use change of any point in space is bidirectional [23]. Coal resources-based industrial land refers to land used for coal mining, coal product processing, and coal storage, and the roles of the driving factors of coal resources-based industrial land gain and loss may be different. Studying the spatiotemporal bidirectional process, combined with field investigation, can help explore the detailed driving mechanisms of land-use change.

Influence of Policies on Coal Resources-Based Industrial Land Changes

Taking various factors of physical geography, location, and socioeconomics into consideration, the driving mechanism of coal resources-based industrial land changes is explained to some extent in this research. In addition to these driving factors, the policies of environmental protection, ecological restoration, treatment of abandoned mines, and transformation and development of resource-exhausted cities can drive coal resources-based industrial land changes in a short time. Pilot projects addressing wasteland in industrial and mining areas have begun to be carried out in this area and have achieved improvements in the ecological environment. However, the policy impacts have not been taken into consideration, because the impacts of policies on coal resources-based industrial land changes are still hard to quantify by logistic regression. The influence of policies on coal resources-based industrial land changes should be examined in future studies if the data and methods become available.

Conclusions

Taking Anyuan District as a typical coal resources-exhausted city, this study analyzed coal resources-based industrial land-use changes and quantified the driving factors of physical geography, location, and socioeconomics using logistic regression models. Some conclusions can be drawn as follows.
(1) From 2003 to 2013, the area of coal resources-based industrial land was significantly reduced with the exhaustion of coal resources. Coal resources-based industrial land was mainly converted to general construction land and forest, and some general construction land, forest, and farmland were converted to coal resources-based industrial land.

(2) Factors of physical geography, location, and socioeconomics exerted varying degrees of impact on coal resources-based industrial land gains and losses during the periods of 2003-2008 and 2008-2013. From 2003 to 2008, distance to town was the main factor affecting coal resources-based industrial land gains, while altitude, distance to roads, distance to town, population density change, and fixed-asset investment per area change were the main factors affecting the loss of coal resources-based industrial land. From 2008 to 2013, altitude and distance to town drove coal resources-based industrial land gains, while altitude, distance to roads, distance to town, population density change, fixed-asset investment per area change, and GDP per capita change drove coal resources-based industrial land decreases.

(3) Generally speaking, altitude, distance to roads, distance to town, population density change, and fixed-asset investment per area change were the main factors affecting the change of coal resources-based industrial land. Although the driving factors of coal resources-based industrial land gains and losses shared some similarities, the patterns of the driving effects were different, and even the same factors had different influences on coal resources-based industrial land-use changes during different periods.

(4) In the exhaustion process of coal resources, the transformation of coal resources-based industrial land into other types of land is the main trend. The land administration department and the mine management department should make relevant plans in advance to ensure the orderly transformation of coal resources-based industrial land into other land-use types. At the same time, the interests of residents around the mining area, grass-roots governments, and other stakeholders need to be considered for sustainable development to occur. Although the study is based on evidence at the county scale, it provides a reference for the study of land-use change in hilly and mountainous coal resources-exhausted cities.

Figure 2. Changes in coal resources-based industrial land in Anyuan District.
Figure 3. Area for each type of land use in Anyuan District during the period 2003-2013.
Table 1. Description and value assignment of variables.
Table 3. Results of the logistic regression models for coal resources-based industrial land-use changes, 2003-2008.
Table 4. Results of the logistic regression models for coal resources-based industrial land-use changes, 2008-2013. (*, **, and *** denote variables significant at the 10%, 5%, and 1% levels, respectively.)
FPM-WSI: Fourier Ptychographic Whole Slide Imaging via Feature-Domain Backdiffraction

Fourier ptychographic microscopy (FPM), characterized by high-throughput computational imaging, theoretically provides a cunning solution to the trade-off between spatial resolution and field of view (FOV), which holds great promise for application in digital pathology. However, block reconstruction followed by stitching has currently become an unavoidable procedure due to vignetting effects. The stitched image tends to present color inconsistency across different image segments, or even stitching artifacts. In response, we report a computational framework based on feature-domain backdiffraction to realize full-FOV, stitching-free FPM reconstruction. Different from conventional algorithms that establish the loss function in the image domain, our method formulates it in the feature domain, where the effective information of images is extracted by a feature extractor to bypass the vignetting effect. The feature-domain error between the images predicted from the estimated model parameters and the practically captured images is then digitally diffracted back through the optical system for complex amplitude reconstruction and aberration compensation. Through massive simulations and experiments, the method presents effective elimination of vignetting artifacts and reduces the requirement of precise knowledge of illumination positions. We also found that it has great potential to recover data with a lower spectral overlapping rate and to realize automatic blind digital refocusing without a prior defocus distance.

Introduction

For decades, using conventional optical microscopy for pathological analysis has been the gold standard of disease detection and grading, where pathologists assess several tissue slides to attain precise observation of cellular features and growth patterns. Ruling out subjective factors, the accuracy of diagnosis profoundly depends on the throughput of the imaging system. Hence, high-throughput microscopic imaging is of great significance for the study of pathological mechanisms and effective therapy of diseases, and it is also intensively explored in applications like haematology [1], immunohistochemistry, and neuroanatomy [2-4].
The throughput of an optical imaging system is fundamentally determined by its space-bandwidth product (SBP), which is defined as the number of resolvable pixels in the imaging field of view (FOV). However, the achievable SBP is in essence restricted by the scale-dependent geometric aberrations of the optical elements, leading to a trade-off between image resolution and FOV. A natural solution, from the point of view of optical design, is to optimize the aberrations caused by large-scale elements, but the resultant use of multiple lenses considerably escalates the system volume and complexity. The demand for high-SBP microscopic systems in the fields of pathology and biomedicine has spurred the development and commercialization of whole slide imaging (WSI). Instead of manually examining glass slides through a microscope eyepiece, WSI digitalizes the entire FOV of a histological or biological specimen at high resolution (HR) for pathologists, researchers, and clinicians to observe and analyze on a computer screen [5,6]. The workflow of existing WSI systems generally consists of two parts: the first entails a specialized high-precision scanner to capture a series of HR images corresponding to different regions of the slide; the second stitches these image segments together into a full-FOV image of the slide using professional software. However, inevitable errors in mechanical scanning easily cause misalignment in the stitched image despite a sufficient overlapping rate. Uneven illumination of the light source will also lead to an uneven distribution of brightness or even stitching stripe-artifacts, which not only deteriorates the quality of the stitched image but also affects the quantitative analysis of downstream applications. For example, one study suggested that ignoring illumination correction resulted in a 35% increase of false and missed detections in yeast cell images [7]. Both conventional [7] and deep-learning [8] methods attempt to eliminate the artifacts based on post-processing, not yet tackling the problem fundamentally.
The demand for high-SBP microscopic systems in pathology and biomedicine has spurred the development and commercialization of whole slide imaging (WSI). Instead of manually examining glass slides through a microscope eyepiece, WSI digitalizes the entire FOV of a histological or biological specimen at high resolution (HR) for pathologists, researchers and clinicians to observe and analyze on a computer screen [5,6]. The workflow of existing WSI systems generally consists of two parts: the first entails a specialized high-precision scanner to capture a series of HR images corresponding to different regions of the slide; the second stitches these image segments into a full-FOV image of the slide with professional software. However, inevitable errors in mechanical scanning easily cause misalignment in the stitched image despite a sufficient overlapping rate. Uneven illumination also leads to uneven brightness distribution or even stripe-like stitching artifacts, which not only deteriorate the quality of the stitched image but also affect the quantitative analysis of downstream applications. For example, one study suggested that ignoring illumination correction resulted in a 35% increase in false and missed detections in yeast cell images [7]. Both conventional [7] and deep-learning [8] methods attempt to eliminate the artifacts by post-processing, without tackling the problem fundamentally.

Inspired by the concept of synthetic aperture [9,10], Fourier ptychographic microscopy (FPM) provided a brand-new perspective in the search for high-SBP microscopic systems [11]. With the multi-angle illumination of an LED array, FPM acquires the corresponding low-resolution (LR) images and stitches them in the Fourier domain to reconstruct an HR complex amplitude image of the sample. As the low numerical aperture (NA) objective used has an innately large FOV, FPM enables high-SBP imaging without mechanical scanning and can thus bypass the artifacts caused by image stitching. In practical implementations, however, full-FOV FPM reconstruction relies heavily on block processing [12,13]. The vital consideration for doing so is to avoid the degraded reconstruction quality caused by the vignetting effect, typically severe wrinkle artifacts appearing at the edge of the FOV. The requirements of plane-wave illumination [14], coherence and reduced computational load also make block processing almost compulsory. However, the resulting reconstruction still suffers from digital stitching artifacts (as distinguished from mechanical ones). That is because conventional reconstruction algorithms require high parameter precision, and inadequate noise removal [15][16][17] or system error correction (e.g., deviation of illumination positions [18][19][20], intensity fluctuations of LEDs [21,22]) will cause color inconsistency between the reconstructed image segments. Some methods try to avoid block reconstruction by solving vignetting at the hardware level, for example by designing an illumination source with a special LED layout [13,23], which undoubtedly increases system complexity. Solving the parameter requirements at the algorithmic level might be feasible. Unfortunately, existing methods are all built on image-domain optimization [15,24], in which the loss function for FPM reconstruction is designed to calculate the difference between the estimated images and the experimentally captured images. They have to face the challenges of noise and systematic errors, because these image-degradation factors cannot be separated out in the image domain. The majority of studies, therefore, focused on displaying subregion reconstructions. Some exceptions [11,25,26] have reported impressive stitched full-FOV reconstructions, yet the mentioned flaw can still be slightly detected. This fundamentally explains why the advancement and deployment of FPM in digital pathology faces numerous obstacles, and why FPM has not been widely accepted in biomedicine even after a decade of development.
In this paper, we report a direct, non-blocked full-FOV reconstruction method for FPM based on feature-domain backdiffraction. Unlike previous algorithms, the loss function for reconstruction is uniquely formulated in the feature domain of the images. As the vignetting effect varies slowly in the feature domain, the effective information can be extracted from the degraded images with a suitably designed feature extractor [27][28][29], and the undesired influence of the vignetting effect can be bypassed. The feature-domain error is then digitally diffracted back through the optical system for complex amplitude reconstruction and aberration compensation. Intensive simulations and experiments have verified that our method can fundamentally remove vignetting artifacts and thus omit the blocking-and-stitching procedure. The precision requirement on LED positions for FPM systems becomes relaxed; even raw data acquired with an even-numbered illumination array can be well reconstructed. Interestingly, our method also performs impressively in recovering data with a lower spectral overlapping rate and in completing digital refocusing without prior knowledge of the defocus distance. Furthermore, we report in this paper, for the first time, the application of FPM to a WSI system, named FPM-WSI. On the hardware side, the proposed WSI platform is adapted from a conventional trinocular inverted microscope. The high-brightness LED source enables an exposure of only 2 ms for each camera capture. A z-axis driver and an x-y axis electric displacement stage are equipped for autofocusing and for the automatic shifting of batched samples (4 slides), respectively. On the software side, the platform provides optional colorization schemes, a precise autofocusing method and a user-friendly operation interface. The reported system is expected to break the bottleneck that has long constrained the development of FPM, offering an ingenious route to transform WSI platforms into ones that can be broadly accepted and utilized in biomedical research and clinical applications.

Results

Fig. 1 (c) demonstrates the wide-FOV color image of a pathology slide [human colorectal carcinoma section in Fig. 1 (a)] obtained by directly fusing the FPM-reconstructed results of three color channels, despite the presence of a prominent vignetting effect in the raw data, as shown in Fig. 1 (b). Fig. 1 (d1,e1) present magnified views of two regions of interest (ROI). Switching the light source to a halogen lamp, we also used a color charge-coupled device (CCD) camera (ImagingSource DFK 23U445) to capture images of the corresponding regions with a 20×/0.4 NA and a 4×/0.1 NA objective lens for comparison, as shown in Fig. 1 (d2,e2) and Fig. 1 (d3,e3). The maximum synthetic NA of this experimental setup reaches 0.68, set by the angle between the optical axis and the LED located at the outermost edge. Based on the concept of synthetic aperture, FPM realizes a resolution improvement of approximately seven times compared with simply imaging via the 0.1 NA objective. Theoretically, the resolution of the FPM reconstruction is close to that of 0.4 NA incoherent imaging (equivalent to a synthetic NA of 0.8). Comparing Fig. 1 (e1,e2), even where the observed region has a dense distribution of cells, the details can be clearly identified in both results; the FPM reconstruction, however, provides better display contrast than the 0.4 NA imaging result.
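The quoted improvement of approximately seven times follows directly from the ratio of the synthetic NA to the objective NA; a short arithmetic sketch, using the green-channel wavelength stated later for the platform:

```python
wavelength = 538.86e-9      # m, green channel (value given in the platform section)
na_obj, na_syn = 0.1, 0.68  # objective NA and maximum synthetic NA

half_pitch_raw = wavelength / (2 * na_obj)  # coherent imaging, objective alone
half_pitch_fpm = wavelength / (2 * na_syn)  # after aperture synthesis

print(f"raw objective: {half_pitch_raw * 1e9:.0f} nm half-pitch")
print(f"FPM synthesis: {half_pitch_fpm * 1e9:.0f} nm half-pitch")
print(f"improvement:   {na_syn / na_obj:.1f}x")  # 6.8x, i.e. roughly seven times
```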
The most attractive elegance of FPM lies in large-FOV imaging without mechanical scanning, while the proposed feature-domain reconstruction further removes the need for the block processing used in conventional gradient-descent iterative algorithms. The image is free of stitching artifacts and color differences because no segmentation of the full-FOV raw images was applied. Nor do we find typical vignetting artifacts at the image margin, verifying the unique artifact-elimination effect of our method. For comparison, the result of scanning-and-stitching and the reconstruction via a gradient descent method (AS-EPRY: EPRY [30] with adaptive step size [16]) for the same slide are discussed in Supplementary 1 and Supplementary 2, respectively.

FPM-WSI platform

Fig. 2 (a) shows the system integration of our high-throughput automatic WSI platform, and Fig. 2 (b1-b4) present the details of the components marked in Fig. 2 (a). Visualization 1 gives an overall display of the platform. The platform can generally be divided into four parts: the illumination source, the automatic control system, the main body of the microscopic imaging system, and a host computer.

The illumination source for FPM should meet the basic requirements of high brightness and a high refresh rate in order to reduce the data acquisition time. Accordingly, we designed a programmable LED array containing 19×19 surface-mounted full-color LEDs [Fig. 2 (b1)], in which the distance between two adjacent LEDs is 4 mm (see Section 4.4 for the selection of LED parameters). The central wavelengths of the three color channels are 631.23 nm (red), 538.86 nm (green) and 456.70 nm (blue), each offering an approximately spatially coherent quasi-monochromatic source with a 20 nm bandwidth. We used LM3549 driver chips to provide the logical control of the LED array. For each LED, the measured maximum power of a single channel is 1 W, and the refresh rate is no less than 100 Hz. In practical use with a 16-bit sCMOS camera, the exposure time for both brightfield and darkfield images can be kept at 2 ms. The frame rate of our camera is 100 fps, resulting in an acquisition time of 10 ms for each raw image and less than 4 s in total for a single slide (see Visualization 2). There is still large room for reducing the acquisition time by employing a camera with a higher frame rate. Notably, as shown in Fig. 2 (b2), the LED array is packaged in an opaque sealed housing, with only the side facing the slide exposed to the surroundings. Together with the high-brightness illumination, the majority of stray light can be suppressed, so the platform does not need to work under strict darkroom conditions. Compared with our previous work on a monochromatic hemispherical illuminator [31], besides the significant improvement in acquisition efficiency, the manufacturing difficulty of this R/G/B source is considerably higher. Even so, the flat-panel structure has tremendous advantages in standardization and pipeline production.
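The quoted acquisition time is consistent with simple arithmetic on the stated frame rate. A one-line check, assuming one raw image per LED of the 19×19 array for a single color channel (our reading of the text):

```python
n_leds = 19 * 19          # one raw image per LED (assumed, per channel)
frame_period_s = 1 / 100  # 100 fps camera -> 10 ms per raw image
total_s = n_leds * frame_period_s
print(f"{n_leds} frames x {frame_period_s * 1e3:.0f} ms = {total_s:.2f} s")
# -> 3.61 s, consistent with "less than 4 s for a single slide"
```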
The automatic control system consists of a z-axis driver [Fig. 2 (b3)] and an x-y axis electric displacement stage [Fig. 2 (b4)]. The operation of the system can be seen in Visualization 3. The z-axis drive (OptoSigma, SGSP-OBL-3) controls the mechanical movement of the objective lens for autofocusing. The drive covers a range of 3 mm with a resolution of 1 μm/pulse, basically satisfying the focusing demand of slides of various thicknesses. Command transmission between the drive and the host computer is realized via a GIP-101B controller. The x-y axis electric displacement stage enables precise positioning and automatic shifting among a batch of 4 slides (25 mm × 75 mm); we therefore customized a rectangular aluminum alloy plate embedded with 4 slide slots and fixed it on the upper surface of the displacement stage. The travel ranges of the stage in the two directions are 120 mm and 50 mm, respectively, both with 10 μm repetition precision.

The main body of the microscopic imaging system was adapted from a conventional trinocular inverted microscope, whose optical path is shown in Fig. 2 (c). The trinocular design supports wide-angle observation through the eyepiece. In addition, the system is highly flexible and expandable. A built-in halogen light source allows switching between the FPM imaging mode and the regular bright-field imaging mode. Other microscopy techniques, such as polarization imaging and fluorescence imaging, can also be implemented on this system, as a polarizer and an ultraviolet lamp can optionally be equipped.

The host computer, with an Intel i5 CPU and 32 GB RAM, is mainly responsible for data storage and processing, control of the automatic devices, and user access. This configuration averts the high cost associated with modifying, manufacturing and maintaining a GPU-based device, which appeals to healthcare organizations and research institutes. Here, we would like to highlight three characteristics of the software. First, different from conventional WSI platforms, our system does not adopt focus-map-based methods [32] for autofocusing. Instead, we propose a hill-climbing algorithm in which two symmetric LED units with different wavelengths light up and the degree of focus is determined by evaluating the spectral energy (see Supplementary Note 3). Second, color FPM images can be created by simply combining the reconstructed results from R/G/B LED illumination into the corresponding color channels; our WSI platform additionally incorporates a state-of-the-art colorization method named color-transfer filtering FPM (CFFPM) [33] as an alternative option, which sacrifices minimal precision, imperceptible to human vision, while tripling the acquisition efficiency. Third, all software modules of the platform have been integrated into Matlab 2022, including data acquisition, LED control, autofocusing, slide shifting, FPM reconstruction and colorization. For refined and controllable experiments, we also designed a user-friendly operation interface to facilitate the implementation of a complete workflow (see Supplementary 4).

Feature-domain backdiffraction

The forward model of FPM reads

$$I_n = \left| \mathcal{F}^{\dagger} \left( M_n \, P \, \mathcal{F} U \right) \right|^2 , \tag{1}$$

where $I_n$ is the $n$-th captured low-resolution (LR) image illuminated by the corresponding LED, $U$ is the complex amplitude of the object, and $\mathcal{F}$ denotes the Fourier transform (Fraunhofer diffraction) as $U$ propagates through the objective lens. The superscript $\dagger$ means the conjugate transpose of a matrix, and $\mathcal{F}^{\dagger}$ denotes the inverse Fourier transform since $\mathcal{F}^{\dagger} = \mathcal{F}^{-1}$. $M_n$ is the selection matrix for the $n$-th LED of the array, and $P$ is the pupil function of the imaging system. FPM reconstruction can be regarded as a maximum a posteriori (MAP) estimation problem [34,35]
in which we seek an estimate of the parameters that best explains the observed data through the forward model. Conventional reconstruction methods are based on the ptychographic iterative engine (PIE), which maximizes the Gaussian likelihood or, in other words, minimizes the $\ell_2$-distance (Euclidean distance) between the model prediction and the observations in the image domain [15], given as

$$\mathcal{L}_{\mathrm{image}} = \sum_{n} \left\| \sqrt{I_n} - \left| \mathcal{F}^{\dagger} \left( M_n \, P \, \mathcal{F} U \right) \right| \right\|_2^2 .$$

The reconstruction results are highly susceptible to the vignetting effect, noise and systematic errors. Some studies tried to maximize the Poisson likelihood [36], yet they still cannot address the vignetting effect, as the root of the problem lies in the mismatch of the forward model. In fact, Eq. (1) holds for an approximately linear space-invariant (LSI) coherent microscopic system, in which the coherent transfer function (CTF) is determined by the complex pupil function of the objective. However, it is not well suited to modeling the half-bright and half-dark vignetting effect in practical experimental conditions, as a typical FPM imaging system cannot be classified as a strict LSI system without introducing the more complicated Fresnel diffraction theory.

Instead of adopting a more complicated forward model, our FPM-WSI minimizes the $\ell_1$-distance (Manhattan distance) in the feature domain of the images [Fig. 2 (d)], and the feature-domain loss function is given as

$$\mathcal{L}_{\mathrm{feature}} = \sum_{n} \left\| K * \sqrt{I_n} - K * \left| \mathcal{F}^{\dagger} \left( M_n \, P \, \mathcal{F} U \right) \right| \right\|_1 , \tag{2}$$

where $K$ denotes the invertible convolution kernel for feature extraction. In this implementation, we use the first-order edges of the image as the feature, with $K = (\nabla_x, \nabla_y)^{\top}$. The $\ell_1$-distance derives from the statistical fact that the edges of an image follow a heavy-tailed distribution [37], which can be approximated by a Laplacian distribution. Moreover, the $\ell_1$-distance promotes the sparsity of the input vector, which favors edge features [38]. Although the $\ell_1$-norm is non-differentiable at the origin, the cost function there is zero and no parameter update is needed, so we can use the sub-gradient for parameter learning. The benefits of the proposed feature-domain loss function are explained in detail in Subsection 4.1.

The gradients of Eq. (2) w.r.t. $U$ and $P$ are calculated by CR-calculus [39] and then digitally diffracted back through the optical system for the parameter update, including the complex amplitude of the object and the CTF. The details of this process are given in Supplementary 5. Backdiffraction literally refers to propagating the wavefront back to the object plane, as sketched in Fig. 2 (d). For example, the Fraunhofer propagator that diffracts the input wave to the output wave is described by the Fourier transform, while its Hermitian transpose denotes the inverse Fourier transform, which exactly diffracts the output wave back to the input wave.

It is worth mentioning that the number of images summed over in Eq. (2) can be any integer ranging from 1 to the total number of raw images. This flexibility allows batch gradients to be computed from randomly selected images of the raw dataset. While employing all raw images for the gradient calculation leads to global gradient descent, utilizing a smaller batch of raw images enhances the possibility of the algorithm escaping local minima, especially given the severely non-convex nature of the optimization.
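To make Eqs. (1) and (2) concrete, the following minimal numpy sketch implements the forward model and the feature-domain loss under simplifying assumptions: the selection matrix is realized as a rectangular crop of the object spectrum, and the feature extractor K is a plain finite-difference edge operator. The actual implementation (invertible kernel design, sub-pixel positioning) follows Supplementary 5 and is not reproduced here.

```python
import numpy as np

def forward_model(U, P, cy, cx):
    """Eq. (1): predict the n-th LR intensity, I_n = |F^dagger(M_n P F U)|^2.
    U: HR complex object; P: complex pupil (CTF) on a square LR grid of even
    size; (cy, cx): center pixel of the sub-aperture selected by LED n."""
    m = P.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(U))
    sub = spectrum[cy - m // 2:cy + m // 2, cx - m // 2:cx + m // 2] * P
    lr_field = np.fft.ifft2(np.fft.ifftshift(sub))
    return np.abs(lr_field) ** 2

def edge_features(img):
    """Simple first-order finite differences standing in for K = (grad_x, grad_y)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def feature_domain_loss(I_pred, I_meas):
    """Eq. (2): l1 distance between edge features of the predicted and
    measured amplitudes (square roots of the intensities)."""
    px, py = edge_features(np.sqrt(I_pred))
    mx, my = edge_features(np.sqrt(I_meas))
    return np.abs(px - mx).sum() + np.abs(py - my).sum()
```

In the real algorithm this loss is minimized over mini-batches of LEDs, with the error back-diffracted through the same operators, as sketched in the next subsection.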
Feature-domain loss and batch gradient descent

The feature-domain loss, based on the first-order edges of the image, effectively suppresses the impact of the vignetting effect owing to the different statistical properties of the image domain and the feature domain. Notably, the vignetting effect, which lacks sharp edges, manifests solely in the image domain but is absent in the feature domain, meaning that the two can be efficiently separated in the image's edge-feature domain. Let $\sqrt{I_{\mathrm{ideal}}}$ and $\sqrt{I_{\mathrm{vignet}}}$ be the amplitudes in the ideal condition and in the presence of vignetting. Simply minimizing the $\ell_2$-distance between them is ineffective and the discrepancy remains substantial, as the forward model in Eq. (1) fails to encapsulate the vignetting effect; in other words, the MAP estimation cannot find model parameters that explain the observations. This situation introduces unexpected low-frequency components into the reconstructed Fourier spectrum, which appear as severe wrinkle artifacts.

When taking the first-order spatial gradients of the predicted and observed data, however, the distance between $\nabla \sqrt{I_{\mathrm{ideal}}}$ and $\nabla \sqrt{I_{\mathrm{vignet}}}$ becomes very small, because the vignetting effect varies slowly in space and contributes only gentle gradient values compared with those of valid sample structures. More importantly, the edge information of an image is sparsely distributed and can be approximated by a Laplacian distribution [40]. Consequently, minimizing the $\ell_1$-distance in Eq. (2) provides an effective means of circumventing the troublesome vignetting effect.
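A toy numerical illustration of this argument (our own construction, not an experiment from the paper): a smooth half-bright/half-dark field of the kind produced by vignetting carries far less gradient energy than sharp sample structure spanning the same intensity range.

```python
import numpy as np

n = 256
x = np.tile(np.linspace(0.0, 1.0, n), (n, 1))
vignette = 0.5 + 0.5 * np.cos(np.pi * x)                 # smooth half-bright/half-dark falloff
structure = (np.sin(40 * np.pi * x) > 0).astype(float)   # sharp bar pattern, same range

def gradient_energy(img):
    # total absolute first-order gradient along x
    return np.abs(np.diff(img, axis=1)).sum()

print("vignetting gradient energy:", gradient_energy(vignette))   # ~n (small)
print("structure  gradient energy:", gradient_energy(structure))  # ~40*n (large)
```

The smooth field and the bar pattern cover the same intensity range, yet in the gradient (edge-feature) domain the vignetting contribution is roughly forty times weaker here, which is why the mismatch it causes barely affects the feature-domain loss.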
Moreover, the $\ell_1$-distance makes more efficient use of the data, enabling robust FPM reconstruction even at a low spectral overlapping rate. According to the redundant-information model for FPM [41], the utilization rate of our method is approximately calibrated as 30%, higher than that of conventional gradient descent algorithms (24%). Previous work [42] indicated that at least a 35% overlapping rate of sub-apertures in the Fourier domain is required for a successful reconstruction using conventional FPM algorithms. However, we demonstrate that our method can break through this lower limit. We created two groups of simulated data with spectral overlapping rates down to 22% and 11%. AS-EPRY fails to reconstruct the amplitude in high quality, generating obvious crosstalk with the phase information, as shown in Fig. 3 (a1-b2). On the contrary, FPM-WSI with the feature-domain loss still works promisingly, as indicated by the comparison of structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) in Fig. 3 (c,d). We also experimentally examined the reconstruction performance of AS-EPRY and FPM-WSI; the comparison is shown in Fig. 3 (e,f). Here, we set the illumination height to 30 mm, giving a calculated spectral overlapping rate of 22.47%. Even judged merely from a visual perspective, the same conclusion can be drawn as in the simulations.

Different from conventional techniques, which frequently update the Fourier spectrum of the object for each individual illumination angle, FPM-WSI incorporates batch-based gradient descent, which computes the collective gradient for a mini-batch involving multiple illumination angles. This approach allows compensation of gradient components in the overlapping areas of sub-apertures, accelerating convergence and showing robustness to data noise. Before the gradient is used to update the spectrum, processing by an optimizer such as Adam [43], RMSprop or YOGI [44] is necessary. The optimizer generates first-order and second-order moments, which not only accelerate the convergence of this non-convex phase retrieval problem, but also facilitate updating the spectrum beyond the synthetic aperture, as illustrated in Fig. 3 (b2,b3) and Fig. 3 (d2,d3).
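A schematic sketch of this batch update is given below, assuming Adam as the optimizer. `backdiffract_gradient` is a hypothetical placeholder standing in for the back-diffracted gradient of Eq. (2); it is not an API of any existing library.

```python
import numpy as np

def adam_step(theta, grad, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on a complex-valued parameter array (object spectrum
    or pupil). state = (m, v, t): first/second moments and step counter."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad                 # first-order moment
    v = b2 * v + (1 - b2) * np.abs(grad) ** 2    # second-order moment (real)
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

def reconstruct(spectrum, pupil, raw_images, led_list, backdiffract_gradient,
                n_iter=100, batch_size=16, rng=np.random.default_rng(0)):
    """Mini-batch loop: sample a random batch of LEDs, accumulate the
    back-diffracted feature-domain gradient, then apply one Adam step."""
    state = (np.zeros_like(spectrum), np.zeros_like(spectrum, dtype=float), 0)
    for _ in range(n_iter):
        batch = rng.choice(len(led_list), size=batch_size, replace=False)
        grad = sum(backdiffract_gradient(spectrum, pupil, raw_images[i], led_list[i])
                   for i in batch)
        spectrum, state = adam_step(spectrum, grad, state)
    return spectrum
```

Summing the gradients over a batch lets contributions in overlapping sub-aperture regions reinforce or cancel before the update, which is the compensation effect described above; random batch selection adds the stochasticity that helps escape local minima.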
Experimental robustness

Since deviation of the LED positions also introduces unexpected low-frequency components, our method [Eq. (2)] is more robust to LED misalignment than conventional reconstruction algorithms. We performed verification on simulated data, as shown in Fig. 4. The ideal distance between two adjacent LEDs is 4 mm, and we added a random shift with a maximum amplitude of 1 mm to each LED position, as demonstrated in Fig. 4 (a). Such randomly shifted LEDs typically occur in customized LED arrays. Fig. 4 (c,d) show the reconstructed amplitude and Fourier spectrum using AS-EPRY and FPM-WSI. The FPM-WSI reconstruction is quite similar to the ground truth in Fig. 4 (b), providing a clean background distribution; in contrast, the AS-EPRY reconstruction is severely degraded by wrinkle artifacts. We also calculated SSIM and PSNR values against the ground truth to evaluate the reconstruction performance quantitatively. As listed in Fig. 4 (e), FPM-WSI obtains a higher score than AS-EPRY on both criteria. Results of massive simulations on 500 images with different degrees of LED positional shift are plotted in Fig. 4 (f1) for PSNR and Fig. 4 (f2) for SSIM, which equally suggest that FPM-WSI suffers less from the deviation of LED positions. We compared FPM-WSI with three other state-of-the-art methods, adaptive step-size FPM [16], ADMM-FPM [45] and momentum-PIE [46], as described in Supplementary 6. Additional simulation studies regarding noise interference and LED intensity fluctuations are also included there, and the superiority of FPM-WSI is likewise verified.

Given the reduced dependency of FPM-WSI on precise LED positioning, it becomes feasible to implement FPM with a square LED array containing an even number of LEDs. As depicted in Fig. 4 (g), we collected the LR images of a USAF target using only 12×12 LEDs of the array. An obvious vignetting effect can be found, with many half-bright and half-dark images. In this case, what lies right above the slide is not the central LED but the middle area between two adjacent LEDs, making alignment of the LED array more difficult. Fig. 4 (j1,j2) show the reconstructed amplitudes of AS-EPRY and FPM-WSI with their corresponding magnified views of the ROI. Despite the great challenge associated with LED alignment, FPM-WSI obtains a full-FOV reconstructed image with enhanced resolution compared with the raw data in Fig. 4 (h). According to the quantitative plots illustrated in Fig. 4 (k), the feature of group 9, element 3 on the target can be clearly resolved. The result of AS-EPRY, however, is significantly distorted by the vignetting effect as well as the potential misalignment of the LED positions.

Pupil function recovery

Local aberration recovery: The use of objective lenses in FPM inherently introduces aberrations. Conventional reconstruction methods address this by incorporating updates of the pupil function within the iterative phase retrieval process, enabling correction of these aberrations. The fidelity function of FPM-WSI gives mathematical reciprocity to both $P$ and $U$, and thus their positions can be exchanged; we can then recover the pupil function by calculating the derivative of Eq. (2) w.r.t. $P$ during the optimization. As shown in Fig. 5 (a1), we divided the full-FOV raw data of a USAF target into 64 small segments, reconstructed them separately, and finally stitched the reconstructions of all image segments. Each segment can be assigned a specific aberration-correction pupil function, which is assumed to be unchanged within that region. The recovery of the spatially varying pupil function for each image segment is demonstrated in Fig. 5 (a3). Fig. 5 (a2) shows the magnified view of one image segment, and its corresponding pupil recovery result is marked by a yellow box in Fig. 5 (a3).

Computational refocusing: Sometimes the high-magnification objectives used in conventional WSI systems cannot capture precisely focused images within their narrow focal range, owing to the three-dimensional or thick nature of slides. Layered scanning along the z-axis, also known as a "z-stack", has become an increasingly prevalent remedy, whereby a series of images are captured at multiple focal planes and then digitally combined into a clearly focused composite. The number of scanning layers should be determined by evaluating the features in each image segment. This optimization of image capture through a broadened focus is time-consuming, and reduces the overall digitization speed considering the subsequent image stitching.

Sample defocus is equivalent to introducing a defocus phase factor (a 4th-order Zernike term) at the pupil plane. Therefore, we can safely regard defocus as a special type of optical aberration (that is, a defocus aberration), and the depth of focus (DOF) of the imaging system can be extended beyond that of the objective lens. In the initial FPM implementation [11], digital refocusing depended on adding a predefined defocused wavefront to correct the pupil function. When the defocus distance is unknown, an ergodic reconstruction is necessary, followed by identifying the sharpest image either manually or by software. For a tilted sample, this approach achieves acuity for different regions of the entire image and stitches these focused regions together to complete the refocusing. Thereafter, improved algorithms never broke the constraint that the defocus distance should be known; even some network-based methods require training to learn a prior over z-slices [47] or interpolation along the z-axis [48]. As shown in Fig. 5 (b1), the USAF target is placed off the focal plane at an unknown defocus distance. After the FPM-WSI reconstruction, both the high-resolution amplitude in Fig. 5 (b2) and the pupil function in Fig. 5 (b3) can be directly recovered. The Zernike fitting of the pupil function given in Fig. 5 (c) exhibits three 2π phase jumps, implying very large aberration values. The plot of Zernike coefficients indicates that both defocus and tilt aberrations exist in the imaging system. The AS-EPRY reconstruction fails without a priori defocus distance for aberration compensation, as the algorithm falls into local minima due to the large aberrations; the severe vignetting effect also hinders the recovery of both the sample wavefront and the pupil function.
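Because defocus enters the model purely as a pupil-plane phase factor, extending the DOF computationally amounts to multiplying the pupil by a defocus kernel. A minimal Python sketch under the standard angular-spectrum approximation; the grid size, pixel size and defocus distance below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def defocus_pupil(n, pixel_size, wavelength, na, dz):
    """Circular pupil with an angular-spectrum defocus phase factor,
    exp(i * 2*pi * dz * sqrt(1/lambda^2 - fx^2 - fy^2)), inside the NA cutoff.
    This models defocus as a pupil aberration, as described in the text."""
    f = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size))
    fx, fy = np.meshgrid(f, f)
    fr2 = fx ** 2 + fy ** 2
    cutoff = fr2 <= (na / wavelength) ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength ** 2 - fr2, 0.0))
    return cutoff * np.exp(1j * kz * dz)

# Example: 0.1 NA, green channel, 5 um defocus on a 256-pixel grid (assumed values)
P = defocus_pupil(n=256, pixel_size=0.5e-6, wavelength=538.86e-9, na=0.1, dz=5e-6)
```

In a blind setting like the one described above, dz is not supplied: the optimizer recovers the full pupil phase from the data, and the defocus term simply appears among the fitted Zernike coefficients.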
Resolution of FPM-WSI platform

To determine the design parameters of the illumination source, we manufactured a programmable 25×25 LED array in advance and experimentally examined its limiting resolution on USAF-target data under green-channel illumination. Considering the size of the selected LED unit (3.5 mm × 3.5 mm) and the manufacturing technology, the distance between two adjacent LEDs was set to 4 mm. The experimental conditions remained basically unchanged from Section 2, except that a larger number of LEDs was used to provide an extended synthetic NA (theoretical value approximately 0.8, defined as the sum of the objective NA and the illumination NA). According to Fig. 6 (b), the line structures of group 7, element 5 can be clearly identified under the illumination of the central LED. Fig. 6 (d1,d2) compare the reconstructed amplitudes of AS-EPRY and FPM-WSI for the region marked in Fig. 6 (a), and Fig. 6 (e1,e2) show the corresponding magnified views of the elements in group 10. After the synthetic aperture was completed, both obtained a great improvement in resolution, with group 10, element 3 resolvable (388 nm half-pitch resolution), as evidenced by Fig. 6 (f). The achievable resolution of an FPM system is jointly determined by the illumination wavelength and the synthetic NA. In this implementation, however, the practical synthetic NA is only about 0.7, roughly equal to the value achievable with 19×19 LEDs (synthetic NA = 0.68). This indicates that LEDs beyond an illumination NA of 0.6 (with an objective NA of 0.1) no longer provide effective sample information in their corresponding raw images. This is why a programmable 19×19 LED array was designed for our FPM-WSI platform, with a fixed illumination height of 70 mm. Under this configuration, the highest half-pitch resolution of gray-scale images (under blue-channel illumination) reaches 336 nm.

Although AS-EPRY reaches a reconstructed resolution comparable to FPM-WSI, its full-FOV result is severely degraded by wrinkle artifacts due to vignetting; the middle region of the image is free of artifacts, but the background is distributed with uneven patches, as demonstrated in Fig. 6 (d1). This experiment offers substantial evidence for the high experimental robustness and data fidelity of FPM-WSI in direct full-FOV reconstruction.
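The quoted NA and resolution figures can be cross-checked from the stated array geometry (4 mm pitch, 70 mm height, 19×19 LEDs, 0.1 NA objective). Treating the corner LED as the outermost one is our assumption in this short sketch:

```python
import numpy as np

pitch, height, n_side = 4.0, 70.0, 19      # mm, mm, LEDs per side (stated values)
na_obj = 0.1

half_span = (n_side - 1) / 2 * pitch       # 36 mm, center to outermost row
r_corner = np.hypot(half_span, half_span)  # lateral offset of the corner LED
na_illum = r_corner / np.hypot(r_corner, height)  # sine of the illumination angle
na_syn = na_obj + na_illum
print(f"synthetic NA: {na_syn:.2f}")       # ~0.69, close to the quoted 0.68

wavelength_blue = 456.70e-9                # m, blue channel
half_pitch = wavelength_blue / (2 * na_syn)
print(f"half-pitch: {half_pitch * 1e9:.0f} nm")
# ~332 nm; the quoted 336 nm corresponds to lambda_blue / (2 * 0.68) exactly
```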
Conclusions

In this paper, we have presented an efficient computational framework for stitching-free FPM reconstruction. The method formulates the optimization loss function in the feature domain of the images, so that the inverse problem of FPM reconstruction can be solved within a feature-extraction framework. The feature-domain error, after processing by an optimizer, is back-diffracted through the optical system to update the complex amplitude of the sample and the CTF. This design allows us to completely bypass the challenging vignetting effect of typical FPM systems without modifying the LED layout, and high-quality reconstruction no longer depends on the procedure of block-wise reconstruction followed by stitching. Besides, the method effectively deals with deviations of LED positions and LED intensity fluctuations, which relaxes the requirement for precise calibration of systematic errors. Under certain experimental conditions in which conventional algorithms struggle to reconstruct successfully, our method also performs impressively, for example in digital refocusing without a priori defocus distance and in reconstructing data with a lower spectral overlapping rate.

We further developed an FPM-WSI platform based on this framework, which applies stitching-free FPM to whole slide imaging for the first time. Several typical characteristics of the platform are: (1) high-speed data acquisition (within 4 s for a single slide); (2) automatic batch processing of 4 slides; (3) extensibility toward multiple imaging modes and techniques; (4) a user-friendly workflow with optional colorization schemes; (5) low hardware cost, with no requirement for GPU-based devices. The reported platform is expected to promote the widespread acceptance of FPM in digital pathology. We believe that improvements in hardware capability can adapt it to more complex application scenarios. For example, the reconstruction FOV demonstrated in Section 2 occupies only a small portion of the entire sample; employing cameras with a larger sensor area will enable observation covering the whole slide and provide more comprehensive references for users' assessment. The acquisition time of each raw image can be pushed down toward the 2 ms exposure limit using cameras with higher readout speed, which is particularly suited to the intraoperative examination of pathological slides and facilitates the prompt formulation of operation plans for clinicians. Furthermore, for large-scale pathological imaging and biomedical research, designing a large-capacity storage and transfer device will guarantee efficient operation.

More modifications and extensions of the framework can be implemented in the future. The existing framework utilizes the first-order gradient to extract edge features from images, addressing the vignetting effect, noise and a series of systematic errors in the feature domain. While first-order edge detection has proved effective, the potential role that the second-order image gradient could play in overcoming the vignetting effect should not be neglected; however, calculating the second-order gradient of an image can inadvertently amplify noise.
To isolate the valid features of images while suppressing the noise, we believe that training various band-limited edge filters through dictionary learning or the development of neural networks can be a good solution. These trained filters are designed to have a bandwidth that precisely matches the width of the pupil function. Future work will include, but is not limited to, reflective FPM [49,50], near-field FPM [51] and tomographic FPM [52][53][54]. Imaging at a more macroscopic scale, such as remote sensing [55], might also draw meaningful inspiration from our method.

Fig. 2. FPM-WSI platform setup. (a) Overall architecture of the FPM-WSI platform, generally consisting of the microscopic imaging system, the automatic control system and a host computer; (b1) 19×19 programmable LED array for sample illumination; (b2) packaged appearance of the LED array with the central LED lit; (b3) z-axis driver holding the objective lens for autofocusing; (b4) x-y axis displacement stage for the mechanical movement of a batch of 4 slides; (c) optical path diagram of the microscope; (d) flowchart of FPM-WSI reconstruction using feature-domain backdiffraction, involving 6 steps. Step 1: the model generates a series of predicted images based on the current estimate of the parameters, including the complex amplitude of the sample and the pupil function. Step 2: the predicted images and their corresponding observed images are filtered by the feature extractor, producing the feature maps. Step 3: the feature-domain error between the model prediction and the observations is calculated. Step 4: the error is back-diffracted to yield the complex gradient. Step 5: the complex gradient is processed by the optimizer with first-order and second-order moments. Step 6: the model parameters are updated.

Fig. 3. Reconstructions at lower spectral overlapping rates. (a1-a2) and (b1-b2) are reconstructed amplitudes and their spectra with a simulated overlapping rate of 22%. (c1-c2) list the average PSNR and SSIM values for the two simulated overlapping rates. (d1) and (d2) plot the PSNR and SSIM scores for 500 groups of simulations with 2% Gaussian noise added; the LED height controls the overlapping rate. (e) and (f) show experimental results using AS-EPRY and FPM-WSI, respectively, at a spectral overlapping rate of 22.47%.

Fig. 4. Comparison of the experimental robustness of the conventional FPM algorithm and FPM-WSI. (a) Simulated LED position shifts in the LED array. (b-d) Simulated ground truth, reconstructed amplitudes and corresponding Fourier spectra. (e) PSNR and SSIM values for the two methods. (f1,f2) PSNR and SSIM values for 500 groups of simulations with different degrees of LED position shift. (g) Raw data obtained with the illumination of 12×12 LEDs; an obvious vignetting effect can be found in the central 4×4 images. (h) Magnified view of the ROI in the raw data. (i) Fourier spectra of the reconstructions using AS-EPRY and FPM-WSI. (j1,j2) Reconstructed amplitude of the USAF target and magnified view of the ROI. (k) Quantitative profiles along lines 1 and 2. Scale bars in (h) and (j2) denote 14 μm.
Fig. 5. Embedded pupil function recovery and digital refocusing for FPM reconstruction. (a1) Stitched reconstruction of a USAF target consisting of 16 image segments. (a2) Zoomed-in image of the region marked by the yellow box. (a3) Reconstructed spatially varying pupil functions for each segment. (b1) Central brightfield raw image of a defocused USAF target. (b2) Reconstructed results using AS-EPRY and FPM-WSI. (b3) and (c) Reconstructed pupil function corresponding to (b1) and its Zernike-smoothed output. (d) The first 13 coefficients of the Zernike polynomial, listed by fringe index.

Fig. 6. Reconstruction of a USAF resolution target with 25×25 LEDs. (a) Direct full-FOV reconstructions using AS-EPRY and FPM-WSI. (b) Magnified view of the group 6-11 elements in the central brightfield raw image, marked by the yellow box in (a). (c) Fourier spectra of the AS-EPRY and FPM-WSI reconstructions. (d1,d2) Reconstructed amplitudes of AS-EPRY and FPM-WSI corresponding to (b). (e1,e2) Magnified images of the region marked by the orange box in (d2). (f1) and (f2) plot the intensity profiles along lines 1 and 2, respectively.

Funding. National Natural Science Foundation of China (NSFC) (No. 12104500); Key Research and Development Projects of Shaanxi Province of China (No. 2023-YBSF-263).
Synthesis and application of nanometer hydroxyapatite in biomedicine. Nano-hydroxyapatite (nano-HA) has been widely studied as a promising biomaterial because of its potential mechanical and biological properties. In this article, different synthesis methods for nano-HA are summarized. Key factors for the synthesis of nano-HA, including reactant concentration and the effects of temperature, pH, additives, aging time, and sintering, are separately investigated. The biological performance of nano-HA depends strongly on its structure, morphology, and crystallite size. Nano-HA with different morphologies may cause different biological effects, such as protein adsorption, cell viability and proliferation, angiogenesis, and vascularization. Recent research progress on the biological functions of nano-HA in some specific biological applications is summarized, and the future development of nano-sized hydroxyapatite is discussed.

Introduction

Calcium phosphate is a kind of bioactive ceramic composed of calcium and phosphorus ions. Because its chemical composition is similar to that of natural bone tissue, it is widely used in clinical practice. The most common calcium phosphate materials include hydroxyapatite (HA), tricalcium phosphate, and biphasic calcium phosphate. Calcium phosphate not only has good biocompatibility but can also form chemical bonds with new bone; it can induce bone tissue regeneration, so it is widely used in bone tissue repair. Among the calcium phosphates, HA is the thermodynamically most stable crystalline phase in body fluids and the most similar to the mineral part of human bones and teeth. Calcium phosphate in natural bone tissue is mainly deposited in the collagen matrix in the form of orderly arranged nano-crystallites [1]. Nanoscale HA has certain similarities with natural bone apatite in chemical composition, structure, and scale. In the microstructure of nano-bioceramics, the grains, grain boundaries, and their combinations are all at the nanoscale. The refined grains and the increased number of grain boundaries improve the mechanical properties (especially fracture toughness) and biological activity. This makes nano-hydroxyapatite (nano-HA) an ideal bone repair material. Among calcium phosphate-based bioceramics, the bioactive ceramics represented by HA are the most widely used in bone tissue repair [2][3][4]. From the point of view of the process itself, the preparation of nano-ceramics is not much different from that of ordinary ceramics (it generally follows the "powder-forming-sintering" route), but from a technical point of view, the preparation of nano-ceramics is extremely demanding. The successful preparation of nano-HA ceramics first requires the synthesis of nanoscale HA powder. Nano-HA powder has high surface activity and agglomerates easily, so it is difficult to obtain nanoscale solid powder under normal conditions. During molding and sintering, whether the particles can be stabilized at the nanoscale relies on extremely strict control of the sintering process [5,6]. During sintering, as the temperature rises and the time extends, the nano-sized particles fuse with each other and the pores and grain boundaries are gradually reduced, which readily causes growth of the HA grains.
How to effectively inhibit the growth of nano-grains during sintering is therefore a difficult problem in the preparation of nano-HA ceramics. In short, the two major difficulties in the preparation of nano-HA ceramics lie in the synthesis of the nano-powder and the sintering of the nano-ceramics. To this end, this article comprehensively summarizes and analyzes the current progress in nano-HA powder synthesis and nano-ceramic sintering technology, and offers prospects for future research.

2 Synthesis and preparation of nanometer HA

Different synthesis methods for nano-HA

The synthesis methods of HA ceramic powder mainly include dry synthesis and wet synthesis. Dry preparation of HA selects finely ground precursors, mixes them, and then heat-treats the mixture. This method has strict requirements on the purity and dosage of the reactants and has the advantage of better crystallinity of the products. However, dry synthesis requires a relatively high temperature, which affects the porosity of the products. Wet preparation of HA includes the sol-gel method, the chemical precipitation method, the hydrothermal method, and so on; it is carried out in water or organic solvents and can be applied with a variety of equipment and catalysts. The advantages are that the structure and morphology of HA can be well controlled and the yield can be improved. The disadvantages are that the purity and crystallinity of the product are insufficient, and other phosphate crystals may be present in the product. Depending on the synthesis method, the synthesized apatite powders differ in structure, morphology, and size. In addition, under the same synthesis method, different synthesis conditions will also affect the final powder morphology. Figure 1 shows the morphology of nano-HA prepared under different synthesis conditions [7][8][9]. This article summarizes the synthesis process of the nano-powder in the following aspects.

Effects of reactant concentration

The concentration of reactants is a key factor affecting HA synthesis [10]. The stoichiometric Ca/P ratio of HA is 1.67, so the concentration of reactants determines the purity of HA and, under certain conditions, also affects the grain size. For example, when the Ca/P ratio is within the range 1.5-1.67, the synthesized product is a mixed phase of HA and β-TCP; when the Ca/P ratio is within the range 1.68-1.7, the product is HA; when the Ca/P molar ratio is greater than or equal to 1.7, the product is HA + CaO [11]. From the perspective of crystal chemistry, crystal nuclei form first in the solution, then migrate and attach to particles under thermodynamic driving forces, growing into crystals. A high-concentration reactant solution contains many nuclei, so HA with small grain size is readily formed [12,13].

Figure 1: (a) Spherical nano-HA particles, particle size about 80 nm; (b) rod-like nano-HA particles, particle diameter about 60 nm; (c) needle-like nano-HA particles, particle diameter 50-70 nm [7,9].

Effects of temperature

Temperature affects not only the grain size of HA but also its morphology. Every chemical reaction involves heat, and temperature affects the reaction rate. In HA synthesis, the higher the temperature, the faster the reaction, and the easier nucleation and crystallization become.
However, at high temperature the grain growth rate is accelerated, and the synthesized nano-powder has high activity and aggregates easily. Yunjing et al. [14] prepared HA powder by precipitation with heat treatment at 200, 500, 700, and 900°C, respectively. They found that the crystallinity of HA treated at 200°C was low and its morphology was needle- or stripe-like. Lianfeng et al. [15] prepared nano-HA by chemical precipitation and studied the influence of temperature on the final particle size and phase structure at 25, 40, 60, and 90°C (as shown in Table 1). With increasing temperature, the crystallinity of nano-HA increased; within the range 25-60°C the particle size increased with temperature, and needle-like nanoparticles were synthesized. At 90°C the particle size tended to decrease, but the synthesized nanoparticles had a rod structure. Rodriguez-Lorenzo and Vallet-Regi [16] also prepared nano-HA by precipitation and found that the grain size increased from 20 to 80 nm as the temperature increased from 25 to 90°C. The surface area of the synthesized HA particles decreased as the temperature increased, meaning that the reaction temperature directly affected the particle size: the higher the reaction temperature, the larger the particles. The grain size of nanoparticles synthesized at 25°C was comparable to that of human bone, and the grain size at 90°C was comparable to that of tooth enamel.

Effects of stress

Pressure matters only in dry synthesis and has no effect on typical wet synthesis. Dry synthesis is mainly based on ball milling; the grinding balls directly extrude the material, converting mechanical energy into chemical energy. The mass of the grinding balls thus translates into the pressure of the reaction environment, which in turn affects the particle size of the synthesized HA. Toriyama et al. [17] used CaCO₃ and CaHPO₄·2H₂O as raw materials to synthesize nano-HA by dry grinding. In this process, the grinding pressure is an important factor affecting powder synthesis. When the grinding pressure reaches a critical value or above, the reaction starts; the higher the pressure, the faster the reaction rate, which increases the crystallization rate of HA and reduces its crystallinity and grain size.

Effects of pH

pH is a key factor in HA synthesis. An alkaline environment provides the OH⁻ ions necessary for HA synthesis and precipitates HA. Increasing the pH is conducive to the synthesis of a single HA phase with fewer impurities and smaller grain size [13]. If the pH is too low, HA decomposes into impurity phases. For example, when preparing HA from the Ca(NO₃)₂·4H₂O and (NH₄)₂HPO₄ precursor system at pH values of 4.5, 9, and 12.4, the prepared powders are β-Ca₂P₂O₇, HA + β-TCP, and HA, respectively [14]. Increasing the pH increases the Ca/P ratio, so strict pH control is necessary to synthesize pure HA. Generally, the pH for wet synthesis is between 10 and 10.5 [18].

Effects of additives

Additives include organic matter, inorganic matter, and doping ions. They can affect not only the grain size but also the grain morphology. For example, Sr²⁺ can enter the HA lattice to replace Ca²⁺, reducing the crystallization rate of HA and the grain size [19].
Additives affect the surface energy of the reactants: the higher the surface energy, the faster the reaction and the easier crystallization and nucleation become. The nucleation energy of HA is relatively large, and adding a nucleating agent accelerates nucleation and yields HA with smaller grains [20]. Xinlong et al. [21] added 3-5 wt% citric acid during synthesis and effectively inhibited the growth of HA grains through competitive ion adsorption. Adding ethanol improves the dispersibility of the nano-powder, and adding cerium salts refines the grains and improves the dispersibility [22].

Effects of stirring

The influence of stirring is mainly reflected in the stirring time and speed. Longer stirring favors full contact between the reactants and improves the conversion efficiency. Stirring speed also plays a role: Martins et al. [23] suggested that the faster the mixing speed, the better the nucleation and the easier the formation of HA with smaller grains. It has been reported that higher stirring speed and longer stirring time affect the crystal morphology, making the HA rod- or rice-shaped [24].

Effects of aging time

Aging time determines the completeness of HA crystal growth and the grain size. Within the aging stage, longer aging improves the formation of HA and the perfection of the grains [24]. With further extension of the aging time, however, the grains grow larger and agglomerate, mainly because small particles dissolve and secondary grains regrow [25].

Sintering process

For nanostructured HA ceramics, sintering is an independent step. Nanometer powders have a large specific surface area and high activity; during sintering, the sintering temperature is lower than that of green bodies with conventional particle size, and the grains grow easily and are difficult to control [26]. To sinter nano-HA ceramics, grain growth must be prevented during densification. Zhou et al. [27] studied the sintering behavior of nano-HA. According to the DSC and TG curves of the nano-HA powder, an obvious endothermic peak appears at 761°C, indicating violent crystal fusion and phase transformation with rapid fusion of the nano-grains. After a short exothermic stage, from 761.6 to 1158.3°C the material absorbs heat rapidly and the nanocrystals rapidly grow into ceramic grains. When nano-HA powder is heated above 900°C, the grains grow rapidly, and most ceramic grains sintered above 1,000°C are larger than 1 μm. It is therefore difficult to obtain nano-ceramics by conventional sintering technology. Further study found that the grains of nano-HA ceramics change state differently at different sintering stages. The initial nanoparticles in the green body undergo size changes during continuous sintering: with increasing sintering temperature and time, the nanoparticles fuse, the grains grow, and densification proceeds in stages. Controlling the sintering process, for example by rapid cooling, affects the final grain size and can yield nano-structured ceramics. The sintering process for nano-HA ceramics should therefore be rigorous.
Current sintering processes mainly rely on highly efficient energy conversion and short sintering times to limit grain growth. Specific methods include two-step pressureless sintering, spark plasma sintering, hot-pressing sintering, hot isostatic pressing, microwave sintering, etc. [28,29]. In this article, several sintering methods for nano-HA ceramics are analyzed. To overcome excessive grain growth in the later stage, Chen and Wang [30] invented a two-step sintering method. The principle is to raise the temperature to the critical point at which the sintered grains begin to grow, so that the surface atoms acquire a certain diffusion energy, and then to hold the temperature below the critical point to densify the body. Mazaheri et al. [31] sintered HA by the conventional method and by two-step sintering, and found that at the same degree of densification the grain size was 1.7 μm for conventional sintering at 1,100°C, whereas two-step sintering with T1 = 900°C and T2 = 800°C yielded a grain size of 0.19 μm. Compared with other sintering methods, two-step pressureless sintering is simple, low-cost, and remarkably effective. Spark plasma sintering is a newer method characterized by a fast heating rate, short sintering time, uniform grain size, and good control of the sintered structure. Gu et al. [32] sintered compact HA blocks in plasma at 950°C for 5 min and obtained HA ceramics with a density greater than 99.5%. Microwave sintering is a new technology of recent years that uses the coupling between the powder microstructure and a particular microwave band to generate heat for sintering ceramic green bodies. It offers a fast heating rate, short sintering time, volumetric heating, and easily yields fine sintered materials with uniform grain size [33,34]. Wang et al. [35] found that even though the heating rate of microwave sintering was very high, the HA/β-TCP ceramics neither cracked nor deformed. Because microwave heating is volumetric, there is no temperature gradient in the material, which reduces internal stress and, to a certain extent, improves the mechanical strength of ceramic materials. At the same sintering temperature of 1,100°C, the grain size of ceramics prepared by microwave sintering is 200-400 nm, while that prepared by conventional sintering is 1.0-1.5 μm.

Application of nanometer HA in biomedicine

Nano-HA has the advantages of good biocompatibility, large specific surface area, high biological activity, and stable chemical properties [36,37]. In recent years, researchers have probed and utilized the regenerative ability of nano-HA in numerous application fields. Here, we summarize and discuss its applications as drug carriers, surface coatings, antineoplastic agents, and composite materials [38][39][40][41][42][43][44].

Drug carrier

Nano-HA has a high specific surface area and strong plasticity. Through surface receptor modification to improve in vivo targeting, nano-HA can adsorb different drugs and be adapted to different delivery sites [40,41]. The adsorption of drugs on nano-HA is mainly determined by its own properties and microscopic morphology [42,43]. The adsorption sites of nano-HA for drugs mainly include carboxyl, hydroxyl, phosphate, and amino groups [45,46].
Nano-HA has different adsorption effects on drugs with different functional groups. Zhao et al. [47] performed molecular simulations of HA with doxorubicin and tinidazole. According to the molecular dynamics results, the binding energy of doxorubicin to HA is much higher than that of tinidazole. They then prepared hollow HA microspheres and nano-HA (Figure 2) and carried out adsorption experiments with the two drugs. The results show that the adsorption efficiency of HA is affected by the functional groups in the drug molecule. Drug adsorption on HA mainly proceeds through the formation of Ca-O bonds between Ca ions on the HA surface and oxygen atoms in the drug molecule; the number and activity of the oxygen atoms are the main factors affecting the binding ability of drugs to HA. In addition to adsorption by HA itself, nano-HA is often combined with various substances to form nanoparticles with hollow or mesoporous shell structures, increasing the drug loading and sustaining the drug release. Li et al. [48] prepared a hollow nano-HA structure by adding a pore-enlarging agent and used LA-BSA to encapsulate the pores, obtaining pH-responsive nanoparticles with increased drug loading. Zhang et al. [49] introduced Sr ions to form luminescent rod-like HA with a mesoporous structure during hydrothermal preparation of nano-HA particles, and loaded it with ibuprofen. The luminescence intensity of the Sr-HA was positively correlated with the drug release, enabling tracking of the release process. Yang et al. [50] synthesized nanoparticles with a mesoporous nano-HA shell and a hollow calcium carbonate core by adjusting the oppositely charged ions between the shell and the core (Figure 3), and loaded them with the antitumor drug doxorubicin. The nano-HA shell achieves sustained, pH-responsive release toward tumor tissue, and the hollow calcium carbonate core greatly increases the drug load. Beyond drugs, nano-HA can also adsorb DNA and proteins, making it an ideal carrier for drugs, proteins, and genes. HA nanoparticles have been used as carriers because of their affinity for DNA, proteins and several drugs, and their appropriate release behavior [51,52]. Ko et al. [53] modified HA with calcium chloride and loaded it with an si-Stat3 plasmid that inhibits Stat3 expression. After the HA was injected into tumors, tumor growth was significantly inhibited, with an inhibition rate as high as 74%, and Stat3 expression in the tumor was significantly downregulated. Using nano-HA to carry RNA plasmids is thus a reliable choice for tumor treatment. Wan et al. [54] prepared layered HA (L-HA) nanoplates with different structures and morphologies by adjusting the template content and the precursor concentration, and embedded DNA molecules into the L-HA nanoplates (Figure 4). Under high SDS loading and low precursor concentration, L-HA nanoplates with a higher degree of order and larger size were obtained, which improved the DNA loading efficiency and transfection efficiency.

Antitumor effects

Nano-HA inhibits the growth of various tumor cells but has no effect on the growth of normal cells [55][56][57][58].
Antitumor effects Nano-HA inhibits the growth of various tumor cells but has no effect on the growth of normal cells [55][56][57][58]. Judging from the mechanisms inferred so far, nano-HA degrades quickly, and the resulting increase in Ca2+ concentration in the tumor cell fluid disorders tumor cell function, degrades DNA, inhibits telomerase gene expression, and so on, thereby inhibiting the growth and proliferation of tumor cells [59][60][61][62][63]. Ezhaveni et al. [64] explored the effect of nano-HA of different particle sizes, prepared under different hydrothermal conditions, on liver cancer cells, and found that HA with an average particle size of 19 nm, prepared by treatment at 100°C for 5 h, provided the most pronounced inhibitory effect on the tumor. Zhang et al. [65] prepared porous titanium scaffolds with a nano-HA coating, cocultured them with VX2 tumor cells in vitro, and used them to repair a critical-size truncated bone defect in a rabbit bone tumor model; they found that the nano-HA-loaded scaffold not only significantly inhibited tumor growth but also promoted bone regeneration. At the same time, they conducted experiments on the effects of nano-HA on the expression of genes related to tumor suppression, calcium homeostasis, and the immune response (Figure 5). Combined with its intrinsic antitumor properties, nano-HA loaded with drugs, DNA, or proteins offers an additional option in tumor treatment. Surface coating Many components in cells are affected by nanoscale factors. According to reports, the adhesion sites of cells, proteins, etc., are generally 5-200 nm [66][67][68]. Thus, ceramics, metals, polymers, and composites with nanoscale grains stimulate cellular activity more than their micro-grained counterparts. Nano-HA crystals can increase cell adhesion and proliferation and create a biocompatible surface that bonds well with bone tissue [69][70][71]. As an HA ceramic material, nano-HA can promote the development of stem cells toward osteogenesis by releasing calcium and phosphate ions as it degrades [72,73]. Bryington et al. [74] studied the long-term in vivo repair performance of titanium alloy implants with and without a nano-HA coating, and found that the nano-HA coating has a greater impact in the early stage of bone healing. Different nano-HA structures have different effects on cell behavior. He et al. [75] deposited two types of amorphous CaP nanoparticle nano-HA coatings on the surface of a titanium alloy and simulated the adhesion of osteoblasts on the scaffold (Figure 6). The results showed that coatings prepared from different forms of nano-HA particles upregulated gene expression differently during bone tissue reconstruction. Xiao et al. [76] used a hydrothermal method to prepare nano-HA coatings of different shapes and lengths on HA scaffolds by adjusting the concentration of 1,2,3,4,5,6-cycloadipic acid. In vitro studies showed that the differentiation of cells cultured on spherical nanostructure coatings was significantly enhanced compared to plate-like or wire-like nanostructures, indicating that the surface nanotopography of the scaffolds had a greater effect on cell differentiation than on cell proliferation. In addition, nano-HA has strong plasticity and is often made multifunctional by compounding it with other particles. Zhang et al. [77] mixed Ag and ZnO into nano-HA powder to form a coating on the surface of a titanium alloy by laser cladding; the released Ag+ provided an antibacterial effect, while the released Zn2+ enhanced bone formation. On the basis of the nano-HA coating, the antibacterial and osteogenic functions were thus enhanced.
The experimental results showed that the coated scaffolds achieved good osteogenesis and rapid osseointegration under S. aureus injection. Rios-Pimentel et al. [78] prepared amphiphilic peptide nanoparticles (APNPs), which can promote the attachment and proliferation of osteoblasts. Nanocrystalline HA and APNP coatings were prepared on poly-2-hydroxyethyl methacrylate, respectively, and the surface osteoblast density of the APNP-coated group increased threefold after 3 days. Therefore, nano-HA plays a significant role in improving the surface roughness of a material, increasing cell adhesion, and enhancing the material's biological activity. In addition, applying a nano-HA coating to bone repair materials can improve their bone formation and osteogenic activity. Nano-HA composite materials The natural bone tissue in the human body is a nanocomposite material composed, at the microscopic level, of crystals and collagen [79][80][81][82]; it is a composite of nanoscale HA and collagen. Using nano-HA in composite materials can effectively improve biological activity and enhance cell survival. At the same time, nano-HA can effectively increase the surface roughness and mechanical properties of composites and has a positive effect on the adhesion and proliferation of proteins and cells. In addition, nano-HA composites release calcium and phosphorus ions in the body, which benefits osteogenesis. Compounding nano-HA with polymer and hydrogel materials can improve the biological activity, surface roughness (Figure 7), and osteogenic activity of the materials [39,[83][84][85][86][87][88][89]. Adding nano-HA to cell-laden hydrogel materials can enhance cell survival and regulate cell differentiation [84,90]. Deng et al. [91] prepared a nano-HA hybrid methyl cellulose (MC) hydrogel, and the nano-HA-MC hydrogel loaded with bone marrow mesenchymal stem cells was used in rat skull defect experiments. The addition of nano-HA improved the gelling temperature of MC and enhanced the survival of the marrow mesenchymal stem cells. Nabavinia et al. [92] prepared nano-HA/alginate/gelatin microcapsules as osteogenic building blocks for modular bone tissue engineering; by regulating the proportions of nano-HA and gelatin, the interaction between them can enhance cell proliferation and differentiation. In natural bone tissue, cells are micron-sized entities embedded in the natural extracellular matrix (ECM). This ECM is highly organized at the macroscopic, microscopic, and nanoscopic scales [93,94]. Cells interact with topographic features at all scales, from macroscale features (such as the shape of a bone, ligament, or blood vessel) to nanoscale features (such as collagen ribbon shape, protein conformation, and ligand presentation). These topographic features strongly influence cell morphology, adhesion, attachment, movement, proliferation, endocytic activity, protein abundance, gene regulation, and other phenomena [95]. Inspired by the nano-layered structure and composition of bone, nanofibers and nanocomposite scaffolds doped with nanostructured HA that mimic the ECM of bone are increasingly used in bone tissue engineering [96,97]. Zhang et al. [94] used polylactic acid combined with nano-HA to simulate the structure of natural bone tissue and 3D printed it (Figures 8 and 9).
Depending on the amount of nano-HA, the surface morphology of the printed scaffold differs significantly, and the hydrophilicity of the material increases with the HA content. Compounding with nano-HA can thus improve the surface roughness and hydrophilicity of materials. Zhang et al. [98] introduced an ECM-like self-assembling peptide (SAP) into nano-HA/chitosan (CTS) composite scaffolds; the SAP/nano-HA/CTS scaffolds showed enhanced cell adhesion and overall mechanical properties. Chen et al. [99] used electrospinning to prepare multilayer nano-HA/polyhydroxybutyrate (PHB) film laminate scaffolds and seeded cells on them for bone defect repair experiments. The experimental results show that a scaffold with a nanoscale ECM-mimicking structure enhances the adhesion and proliferation of osteoblasts and promotes the repair of bone defects. Yin et al. [100] perfused GelMA loaded with nano-HA into porous titanium alloy scaffolds, which compensated for the low bioactivity and poor bone repair ability of traditional titanium alloy scaffolds. In addition, with the development of 3D printing technology, polymer materials compounded with nano-HA are processed by melt extrusion (FDM), photocuring printing (DLP), or inkjet extrusion (IJD) to obtain porous scaffolds for bone tissue repair. Chen et al. [101] used FDM printing to prepare porous PLA/nano-HA composite scaffolds for the repair of large-segment bone defects; the incorporation of nano-HA enhanced the osteogenic and angiogenic ability of the scaffolds. Liu et al. [102] used GelMA hydrogels with different concentrations of ECM, mixed with different proportions of nano-HA to improve the electrical conductivity of the scaffold, and prepared a multilevel composite cartilage repair scaffold by 3D printing (Figure 9). The three-layer gradient scaffold better simulates the complex layered structure of natural osteochondral tissue and can repair the cartilage and the underlying bone at the same time. Conclusions Due to their good mechanical properties and high biological activity, nano-HA ceramics have attracted the attention of many researchers. However, there are still many problems in the preparation of nano-ceramics, including the synthesis of nano-powders and ceramic sintering, which need further study. To control the particle size of synthesized HA nano-powders precisely and to effectively suppress grain growth during sintering, the process technology still needs further optimization. The various powder synthesis methods and sintering processes reported so far have their own advantages and disadvantages; how to choose the appropriate nano-powder and sintering process according to the needs of the final application of the nano-ceramics remains to be explored. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission. Conflict of interest: The authors state no conflict of interest.
6,640.2
2022-01-01T00:00:00.000
[ "Materials Science", "Medicine" ]
Vehicle Emissions Tax: An Opportunity to Control Air Pollution. This paper discusses the condition of air quality in Indonesia and the opportunity to control air pollution using a vehicle emissions tax. It is described descriptively based on the literature, legal regulations, and prior research. Transportation grows rapidly with population growth because it is essential for mobility and supports the economy. Motor vehicles, as a means of transportation, are known to cause air pollution through their emissions. Pollutants in emissions are risk factors for several diseases, including acute respiratory infections, bronchitis, and pneumonia. Air quality in Indonesia, measured by the "indeks kualitas udara (IKU)", is still in good condition overall. However, some provinces have "moderate" and even "very poor" IKU values. A variety of measures are being taken to control air pollution from the transportation sector; unfortunately, in Indonesia they have not yet touched the economic side. Environmental economic instruments need to be developed to address this. The regulations already provide an opportunity for their implementation, such as Law Number 32 Year 2009, Law Number 28 Year 2009, and Government Regulation Number 46 Year 2017. Therefore, a study of a vehicle emissions tax that internalizes the economic losses of air pollution for public health needs to be done. Such a study is expected to serve as input for policymakers on air pollution control. Introduction The population of Indonesia was 228.5 million in 2008 and reached 261.9 million in 2017, with a growth rate of 1.34% per year [1]. The growing population and its various activities have a major impact on the transportation sector because of its importance for mobility and support for the economy. In 2011 the number of motor vehicles in Indonesia reached 85,601,351 units and then increased to 136,667,740 units in 2017, dominated by motorcycles (81.56%), i.e., 111,470,878 units [1][2]. The growth of motor vehicles in Indonesia in the period 2012-2016 reached 8.19% per year, with Java having the fastest growth rate of 9.28% per year [3]. The number of vehicles correlates with fuel consumption, which increases with travel length, the use of personal transportation modes, and traffic density [4]. The transportation system affects the quality of the environment [5]. This is closely related to the exhaust emissions generated by motor vehicles from burning fuel, which can pollute the air. Pollutants in vehicle emissions are risk factors for several diseases, including acute respiratory infections, asthma, bronchitis, pneumonia, and eye irritation. In 2016 there were 568,146 cases of infant pneumonia in Indonesia [1]. Various efforts to control air pollution have been made in Indonesia, but they have not yet been followed from the economic side. Environmental economic instruments need to be developed to overcome this. They are a set of economic policies to encourage the central government, local governments, or each person toward the preservation of environmental functions [6]. One way to control air pollution and implement an environmental economic instrument is to internalize the economic losses of air pollution for public health into motor vehicle tax rates [7]. The application of such taxes is expected to encourage people to reduce vehicle usage/ownership and at least maintain the performance of their vehicles so that their emissions meet the required quality standards.
On the other hand, losses due to air pollution, such as health problems, will be minimized. In addition, a vehicle emissions tax is expected to support low-carbon development, as Indonesia has set a target of reducing carbon emissions by 26% by 2020; within this target, the energy and transportation sector aims to reduce emissions by 0.036 Gton of CO2. Vehicle Emissions and Public Health Vehicle emissions are a major source of air pollutants in urban areas. These emissions include carbon monoxide (CO), carbon dioxide (CO2), nitrogen oxides (NOx), hydrocarbons (HC), sulfur oxides (SOx), and particulate matter (PM). Emissions occur under three conditions: hot emissions, start emissions, and evaporative emissions (during refueling, shortly after the engine is switched off, and while parked). The amount of emissions is determined by the speed, lifespan, and maintenance of the vehicle [8]. CO causes poisoning through the formation of carboxyhemoglobin (HbCO) in the blood, which prevents Hb from carrying O2 throughout the body; the body's O2 supply decreases, causing shortness of breath and even death. HC causes respiratory disorders, laryngitis, and bronchitis. Exposure to NOx above 0.05 ppm causes acute respiratory disorders [9]. PM10 causes respiratory tract irritation, coughing, difficulty breathing, decreased lung function, aggravated asthma, chronic bronchitis, and even death [10]. The cost of deaths from air pollution in Indonesia reached about 3% of Gross Domestic Product in 2010 [11]. Motor Vehicle Externalities An externality is a positive impact (benefit) or negative impact (cost) of an action by one party on others without any compensation to those who lose or payment from those who benefit. An environmental externality comprises the costs and benefits caused by changes in the biological and physical environment. Pollution emitted by the transport and industrial sectors affects people and the environment; as a form of externality, fuel consumption and waste disposal have unfortunately not been charged for their impact as an environmental cost [12]. There are five externalities of motor vehicle use: accidents, congestion, road damage, environmental damage, and dependence on fuel. The environmental damage includes exhaust emissions, noise, landscape and urban changes, and impacts on biodiversity and buildings. Emissions and noise can be valued [13]. Economic Valuation Economic valuation is an analysis used in preparing public policy intervention scenarios, such as determining appropriate pricing strategies through fiscal policy mechanisms like the Pigouvian tax [14]. Economic valuation is done by calculating the total value of degradation to determine the costs that need to be paid to the parties affected by pollution and/or environmental damage. This fee is paid so that the affected parties recover or do not experience worse conditions. Several economic valuation methods can measure changes in air quality and health effects, such as preventive cost expenditure, replacement cost, human capital, cost-effectiveness analysis of prevention, and benefit transfer [15]. Environmental Taxes Law Number 16 Year 2009 states that a tax is a compulsory contribution of citizens to the state, coercive in nature and without direct reward. Taxes are utilized for state purposes for the welfare of citizens.
With regard to environmental taxes, polluters are free to dispose of waste/pollutants, but they are required to pay a tax on each unit of waste/pollutant. The application of environmental taxes becomes an incentive for polluters to find the best ways to minimize their emissions/waste/pollutants. The tax is an instrument to prevent the occurrence of negative externalities: it is used to correct the social costs arising from the negative externalities of environmental pollution, making polluters pay for the pollution they cause. This correction of negative externalities is often associated with the Pigouvian tax [14]. The Pigouvian tax was introduced by the British economist Arthur C. Pigou, who described the distinction between the marginal cost to individuals as economic agents and the social marginal cost. This framework became the basis for internalizing externalities through the tax mechanism. Taxes raise the cost structure and correct the quantity of goods produced, so that activities with negative externalities are reduced. The Pigouvian tax seeks to move the cost of the damage into the perpetrator's cost structure [16]. Environmental taxes include emissions taxes, levies on the use of natural resources and the environment, and product-based levies. An emissions tax is applied to the disposal of pollutants/wastes into the air, water bodies, and/or land. This tax is related to the quantity and quality of the pollutant as well as the cost of the damage. Implementing an emissions tax will increase revenue and encourage pollution minimization and technological innovation to reduce pollution [17]. Method A descriptive method is used to describe the condition of air quality in Indonesia. The data analysis tools in Excel 2010 are used to analyze the correlations among population, motor vehicles, fuel consumption, the air quality index ("indeks kualitas udara"/IKU), and cases of infant pneumonia in Indonesia. A descriptive method is also used to explain the opportunity to control air pollution from the transport sector using a vehicle emissions tax, based on the literature, regulations, and prior research. Air Quality in Indonesia In Indonesia, national ambient air quality is stated as an index, the "indeks kualitas udara (IKU)". Its value is based on monitoring NO2, representing air pollution from the transport sector (especially gasoline-fueled vehicles), and SO2, representing air pollution from the industrial sector and diesel-fueled vehicles. Descriptions of the population, vehicles, fuel consumption, IKU, and cases of infant pneumonia in Indonesia can be observed in the figure below. The results of the correlation analysis of the above five variables at the 95% confidence level for 2016 data can be observed in the following tables. They show that population correlates strongly with the number of vehicles, fuel consumption, and cases of infant pneumonia; the number of vehicles correlates strongly with fuel consumption and infant pneumonia cases and moderately with IKU; fuel consumption correlates strongly with infant pneumonia cases and moderately with IKU; and IKU correlates weakly with pneumonia cases. Furthermore, the results of the regression analysis at the 95% confidence level can be observed in the following tables.
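From the coefficients interpreted in the next sentence, the fitted model presumably has the linear form below; the intercept is not reported in the text and is left symbolic, and the variable names are introduced here for illustration.

```latex
\mathrm{IKU} \;=\; \beta_0 \;-\; 0.3897\, X_{\mathrm{veh}} \;-\; 0.00137\, X_{\mathrm{fuel}}
```

where $X_{\mathrm{veh}}$ is the number of vehicles in millions of units and $X_{\mathrm{fuel}}$ is fuel consumption in millions of liters.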
The above equation means that every increase of one million vehicle units decreases the IKU by 0.3897, and every million liters of fuel consumed decreases the IKU by 0.00137. The Opportunity to Control Air Pollution in Indonesia Using a Vehicle Emissions Tax Based on the condition of air quality in Indonesia described above, there is a correlation between the number of vehicles and fuel consumption on the one hand and air quality and pneumonia cases on the other. Therefore, efforts must be made to control air pollution from vehicle emissions. Some efforts have already been made in Indonesia, such as ambient air quality monitoring, vehicle emission testing, car-free days, traffic engineering, mass transportation, tree planting, and green open space. These efforts come from the environmental and technical sides; the economic side has not yet been addressed in Indonesia. This kind of effort is an opportunity to strengthen the environmental efforts to control air pollution from mobile sources/transportation (vehicles). One measure that can be taken is applying a vehicle emissions tax as an environmental economic instrument. Legal Regulation in Indonesia Several legal regulations in Indonesia can serve as the basis for controlling air pollution using environmental economic instruments in the form of environmental taxes. Law Number 32 Year 2009 "The Protection and Management of The Environment" contains the polluter-pays principle in Article 2 letter j, and Article 43 paragraph (3) states that the environmental tax is one of the environmental economic instruments, in the form of incentives and/or disincentives for polluters. Government Regulation Number 46 Year 2017 "Environmental Economics Instrument" states in Article 1 that environmental economic instruments are a set of economic policies to encourage the central government, local governments, or each person toward environmental preservation. One of them is an instrument of economic and development planning, which can be realized through the internalization of environmental costs, carried out by including the cost of pollution and/or environmental damage in the calculation of the production costs of a business and/or activity (Article 3, Article 4, Article 18). Law Number 28 Year 2009 "Regional Tax and Regional Retribution" states in Article 5 that the basis for the vehicle tax is the selling value of the motor vehicle and a weight reflecting the relative level of road damage and/or environmental pollution caused by the use of the vehicle. Prior Research on Environmental Taxes Several studies related to environmental taxes have been published. In 2016, Filippini and Heimsch conducted "The Regional Impact of a CO2 Tax on Gasoline Demand: A Spatial Econometric Approach" [19]. This study estimates the price and income elasticities of gasoline demand while accounting for spatial effects, and analyzes the spatial effects of a CO2 tax policy on gasoline consumption patterns in Switzerland. The results show that a 10% increase in gasoline consumption in a municipality spreads to other cities and leads to a 4.2% increase in their gasoline consumption. Applying the tax could reduce gasoline consumption by approximately 510 million liters and CO2 emissions by about 1.2 million tonnes. In the long term, implementing a CO2 tax can affect gasoline consumption patterns and reduce greenhouse gas emissions in Switzerland by 9%.
In Indonesia, research on vehicle tax estimation has been conducted by Bestari et al. in 2014 and Hidayat et al. in 2016. The estimated tax rate for diesel-fueled public transport vehicles in DKI Jakarta that internalizes economic losses based on health costs is Rp 4,617,119/vehicle/year; the key players in formulating and implementing this tax policy are environmental agencies, transportation agencies, and universities/academics [7]. The estimated tax for each gasoline-fueled public transport vehicle in Bogor City that internalizes economic losses based on health costs is Rp 178,397/vehicle/year [20]. Data Required to Estimate the Vehicle Emissions Tax One way to estimate a vehicle emissions tax is to internalize the economic losses of air pollution for public health. The data required are primary and secondary. Primary data can be obtained from respondents through interviews and questionnaires on motor vehicle use, illnesses suffered, the costs incurred for treatment and care, and income lost during illness. Secondary data can be obtained from related institutions, the literature, and legal regulations, i.e., data on population, vehicles, disease cases, fuel sales, air quality, and vehicle pollution loads [7][20]. Method Used One method that can be used to estimate a vehicle emissions tax that internalizes the economic losses of pollution for health is the cost of illness method. Regulation of the Minister of Environment Number 7 of 2014 "Environmental Loss due to Pollution" states that the cost of illness approach is used when pollution and/or environmental damage causes health problems that prevent the patient from working. Losses can be calculated for as long as the person is ill. The costs calculated include medical care, medical consultation fees, medicine and laboratory costs; consumption expenses during illness; accommodation expenses while ill; transportation expenses during treatment; loss of income; and declining productivity. The tax value for each pollutant is then estimated by dividing the total economic loss by the total pollution load [7][20]. Conclusion Air quality in Indonesia based on the IKU is still in good condition. However, there are provinces with moderate, and even very poor, IKU values. The number of vehicles and fuel consumption correlate with the IKU and can decrease its value; therefore, air pollution must be controlled at the source (motor vehicles). One measure that can strengthen the efforts made so far to control air pollution is to implement environmental economic instruments. Legal regulations in Indonesia already accommodate environmental economic instruments that can be implemented in the form of environmental taxes to maintain environmental sustainability. This is an opportunity to control air pollution. Prior research shows that an emissions tax can reduce fuel consumption and greenhouse gas emissions, so it can support low-carbon development. The emissions tax can be calculated by internalizing the economic losses of air pollution for public health. Suggestion Air pollution, which is a problem in almost every big city in Indonesia and affects public health, is expected to be addressed, among other ways, by applying a motor vehicle emissions tax.
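A minimal sketch of the cost-of-illness calculation described above: sum the stated cost components, then divide by the total pollution load. All monetary figures and the pollution load below are hypothetical placeholders, not values from the paper.

```python
# Cost-of-illness tax estimate: tax per pollutant = total economic loss / total pollution load.
# All figures are hypothetical placeholders for illustration only.

cost_components = {                      # Rp per year, aggregated over affected patients
    "medical_care": 1_200_000_000,
    "consultation_medicine_lab": 450_000_000,
    "consumption_during_illness": 150_000_000,
    "accommodation": 80_000_000,
    "transport_to_treatment": 60_000_000,
    "lost_income": 900_000_000,
    "lost_productivity": 300_000_000,
}

total_economic_loss = sum(cost_components.values())   # Rp/year
total_pollution_load = 2_500.0                        # tonnes of pollutant/year (hypothetical)

tax_per_tonne = total_economic_loss / total_pollution_load
print(f"Estimated emission tax: Rp {tax_per_tonne:,.0f} per tonne of pollutant")
```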
Therefore, research on motor vehicle tax estimates that internalize the economic losses of air pollution for public health needs to be done, especially for private vehicles and motorcycles, which have not yet been studied. It is hoped that the resulting estimates of the vehicle emissions tax can be used in policymaking to control air pollution from the transportation sector.
3,794
2018-01-01T00:00:00.000
[ "Environmental Science", "Economics" ]
Accelerating Formulation Design via Machine Learning: Generating a High-throughput Shampoo Formulations Dataset Liquid formulations are ubiquitous yet have lengthy product development cycles owing to the complex physical interactions between ingredients, making it difficult to tune formulations to customer-defined property targets. Interpolative ML models can accelerate liquid formulations design but are typically trained on limited sets of ingredients and without any structural information, which limits their out-of-training predictive capacity. To address this challenge, we selected eighteen formulation ingredients covering a diverse chemical space to prepare an open experimental dataset for training ML models for rinse-off formulations development. The resulting design space has an over 50-fold increase in dimensionality compared to our previous work. Here, we present a dataset of 812 formulations, including 294 stable samples, which cover the entire design space, with phase stability, turbidity, and high-fidelity rheology measurements generated on our semi-automated, ML-driven liquid formulations workflow. Our dataset has the unique attribute of sample-specific uncertainty measurements to train predictive surrogate models. Background & Summary Liquid formulations are ubiquitous in everyday life and are produced across several industries, such as cosmetics, food, pharmaceuticals, and agrochemicals 1,2. Increasingly, there is demand for a switch to eco-friendly products 3. For example, there is a push to replace polymers in liquid formulations (PLFs), which are often made from fossil-derived monomers, with natural polymers, or those derived from bio-feedstocks 4,5. Since designing liquid-formulated products that meet multiple customer-defined property targets is a time-, resource-, and labour-intensive process 6, changing the carefully optimised formulation recipes is a difficult proposition. In this respect, developing accurate predictive models, either through computational approaches like machine learning (ML) [7][8][9][10][11] or molecular modelling simulations [12][13][14], can aid in accelerating formulation design. In an earlier study, some of us developed interpolative ML models, trained on a relatively small dataset of liquid formulations 7. This earlier study showed the potential of ML models to learn complex interactions within a formulation and their use-case within a formulations design workflow. However, the models, trained on a limited set of formulation ingredients and without structural information about said ingredients in the input features, are rather limited in scope and cannot be generalised to new formulations without re-training on a new experimental dataset. For developing statistical ML models, the availability of experimental data to train or validate the models is the most critical factor 3,15,16. In this study, we aim to create an open dataset for training machine learning models in the liquid formulations domain. Our chosen liquid formulation system is shampoo, similar to the previous study by Cao et al. 7
We selected twelve surfactants covering anionic, non-ionic, amphoteric, and cationic types 17, four conditioning polymers, and two thickeners. From these, we chose a binary mixture of surfactants, a polymer, and a thickener in any given formulation. These ingredients are also more broadly used in rinse-off product applications. Due to the combinatorial nature of formulation design, our set of ingredients leads to 528 distinct combinations, an over 50-fold increase in dimensionality compared to our previous study 8. Beyond the choice of ingredients, we investigated a diverse range of ingredient concentrations. With a large range of ingredient choices and concentrations to explore and a limited experimental budget, an efficient and cost-effective high-throughput (HT) workflow needed to be developed. Such workflows are becoming increasingly important across a wide spectrum of domains, including, but not limited to, organic chemistry 18,19, inorganic materials 20,21, and biological applications 22. The convergence of ML, lab automation, and robotics has led to the term "self-driving labs" (SDLs) being coined 23. Automation brings several benefits: increased efficiency, reproducibility, and safety; but not everything can be integrated into an automated workflow; in particular, certain analysis techniques represent the bottleneck 24. As an alternative, if a particular characterisation instrument cannot be integrated with the synthesis set-up, samples can be brought to the analytical instruments, as in the case of the mobile chemist robot developed in the Materials Innovation Factory (MIF) at the University of Liverpool 25. However, there is a very significant capital investment, both monetary and human, required to set up such a laboratory. Instead, this work favours the approach of flexible, modular automation by breaking down the process of preparing and characterising formulation samples into unit operations, which we aimed to automate with independent stations for liquid handling, pH adjustment, analysis, etc. 21,23,26. There are a couple of interesting commercial formulation bots, such as GEOFF by Labman Automation 27 (located at the University of Liverpool, with industrial backing from Unilever) and FORMAX by ChemSpeed Technologies 28. One option we had was to buy time on an already available bespoke robotic platform, for example within the MIF at Liverpool. The alternative was to develop a workflow that demonstrates how any organisation with a not-too-large, dedicated lab automation budget could make use of their existing stand-alone systems and complement them with lower-cost robotics/automation.
Figure 1 shows our semi-automated liquid formulations workflow, where we adopted a "human-in-the-loop" approach to transfer samples between workstations, with only some steps being automated unit operations. The first step of the workflow was an ML-guided design of experiments (DoE), which is outside the scope of this Data Descriptor and is detailed elsewhere 29. The DoE output was read by an Opentrons OT-2 liquid handling robot, which we retrofitted with a mass balance for automated viscous liquid handling 30. After this step, the formulations were mixed and characterised for their phase stability. We carried forward the stable samples to our pHbot 31 for pH adjustment and, once again, after a 36-hour wait (an agreed processing step), assessed the stability of our products before measuring the turbidity and viscosity of the stable formulations. Rheology measurements were performed offline, as they are very difficult to automate, particularly for highly viscous and non-Newtonian fluids 32. We generated over 800 formulations within eight months using this workflow and instrumentation. We chose the samples to be chemically diverse, exploring different chemical functionalities of typical ingredients and their concentration ranges, to maximise the utility of the resulting dataset for training ML models for formulation property prediction. The remainder of the paper is organised as follows. We first detail the experimental workflow to prepare and characterise the formulations in the Methods section. We then present the formulations dataset as a JSON file for a compact representation in the Data Records, along with a discussion of supplementary files (chemical structures, phase stability images) shared on the dataset repository. Under Technical Validation, we demonstrate the statistical diversity of the dataset and discuss the errors and confidence in our characterisation techniques. Furthermore, we show the utility of the image dataset for training computer vision models. Finally, under Usage Notes, we briefly discuss how to work with the JSON file in a Python Pandas dataframe, which is the most popular format for data science and machine learning.
Methods Viscous liquids handling with a retrofitted Opentrons robot. We prepared our samples from commercial liquid formulation ingredients, which were used as received and are presented under the Data Records. These ingredients ranged from low (~10 mPa•s) to very high viscosities (10,000 mPa•s+), as shown in Supplementary Figure S1. We selected the Opentrons OT-2 liquid handling robot for its ease of use and the customisability of its operating parameters, which is essential for handling viscous fluids. Despite our attempts at optimising the OT-2's parameters to handle our ingredients 33, we could not get the liquid handler to dispense the target amounts specified by the DoE. It remains an open challenge to achieve accurate transfers of highly viscous, particularly non-Newtonian, fluids on a pipetting robot. Some of our group recently performed a study into closed-loop optimisation of liquid handling parameters for viscous handling; however, this work only goes up to 1275 mPa•s 34. Instead, we found roughly optimal liquid handling parameters for each class of ingredient (surfactant/polymer/thickener) and retrofitted the OT-2 with a precision balance, as shown in Fig. 2a 30. Within this study, we have generated a dataset aimed at training ML models for formulation design. Therefore, instead of needing to pipette our ingredients precisely and accurately, we only needed to accurately measure the mass of each ingredient added and could then back-calculate our prepared formulation compositions. The purple curve in Fig. 2b shows a typical mass vs. time profile from preparing a batch of six formulations in one Opentrons run. This signal was deconvoluted, using an analysis of the first and second derivatives of the mass profile, to identify the ingredient additions and calculate the formulation compositions. We discuss further details of this method in the Supplementary Information (SI). Mixing, imaging, and titrating formulations. We mixed the formulations for 35 minutes at 360 rpm and 25 °C using Fisherbrand™ rare earth octagonal stir bars (25 × 8 mm) on an IKA RT 15 multi-position stirrer plate. The formulations were left to rest for 15 minutes before we visually inspected their stability. Unstable formulations were discarded at this stage, as preliminary tests showed a very low likelihood of an unstable formulation stabilising after pH adjustment. Additionally, titrating unstable formulations is particularly challenging and time-consuming. We defined a stable formulation as a single homogeneous phase with no discernible phase separation, flocculation, etc. We imaged each formulation using a Logitech C922 webcam controlled by OpenCV 35 (see Supplementary Figure S3) and labelled each image as stable ('true') or unstable ('false'). However, we sometimes could not distinguish a particular sample's phase stability from the computer image alone and would need to check the sample by eye from different angles, or under different light; we therefore recorded these images in a separate "borderline" directory (as opposed to "main"). This does not affect the usability of the complete dataset, as all the samples have a well-distinguished stability label by eye; it is only the supplementary image dataset which is slightly more restricted in size, but still over a thousand images, as "borderline" images have been separated to maintain the image set quality.
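Stepping back to the mass-profile deconvolution described at the start of this section: as a rough, self-contained illustration (the authors' actual algorithm is detailed in their SI), the sketch below segments a balance signal into per-ingredient additions by thresholding its first derivative. The threshold, sampling interval, and minimum plateau length are illustrative assumptions.

```python
import numpy as np

def split_additions(mass_g, dt_s=1.0, quiet_thresh=0.005, min_quiet_pts=5):
    """Split a cumulative mass-vs-time signal into per-ingredient masses.

    An 'addition' is a run where the first derivative exceeds quiet_thresh (g/s);
    plateaus between runs give the settled mass after each ingredient.
    """
    dm = np.gradient(mass_g, dt_s)              # first derivative of the balance signal
    quiet = np.abs(dm) < quiet_thresh           # True on plateaus
    plateau_means, run = [], []
    for m, q in zip(mass_g, quiet):
        if q:
            run.append(m)
        elif len(run) >= min_quiet_pts:         # a long-enough plateau just ended
            plateau_means.append(np.mean(run)); run = []
        else:
            run = []
    if len(run) >= min_quiet_pts:
        plateau_means.append(np.mean(run))
    # per-ingredient mass = difference between consecutive settled plateaus
    return np.diff(np.array(plateau_means))

# e.g. a noisy synthetic run: 0 g -> 2 g -> 5 g over 60 s
t = np.arange(60.0)
mass = np.where(t < 20, 0.0, np.where(t < 40, 2.0, 5.0)) + np.random.normal(0, 1e-3, t.size)
print(split_additions(mass))                    # ~[2.0, 3.0]
```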
Next, we titrated the formulations to pH 5.8 ± 0.2 with citric acid or sodium hydroxide, as per an industrial requirement, using our fully automated pHbot 31. Our target pH is typical of shampoo formulations 36,37. After pH adjustment, the samples were left for 36 hours in ambient lab conditions and then re-assessed to record their short-term phase stability. Typically, industry performs longer-term phase stability testing, including at elevated temperatures to accelerate the degradation process. Here, to develop a large dataset to model the formulations' properties, a short-term test is more suitable for maintaining throughput. Only formulations stable post pH-adjustment, i.e., during both visual inspections (pre- and post-titration), were recorded as stable in the dataset. Formulations characterisation. We carried forward the remaining stable formulations for turbidity and viscosity testing. A Turbiscan would be ideal for high-throughput turbidity analysis and could additionally be used to assess destabilisation mechanisms (e.g., creaming, sedimentation, flocculation, and coalescence) 38. However, we did not have access to such an instrument, nor is it a reasonable expectation, under limited time and budget constraints, to be able to access the ideal choice of equipment at every stage of a workflow. Instead, a degree of creativity is essential to build high-throughput/automated workflows, and one trick is to develop fast proxies 39. Proxy measurements can be employed in both experimental 22,[40][41][42] and computational work 43. We measured UV-vis absorbance at a fixed wavelength (420 nm) as a proxy measurement for turbidity using the Tecan Infinite M Plex 200 Pro plate reader. We used a Corning® 96-well clear flat bottom UV-transparent microtiter plate (MTP). We purchased a range of turbidity standards from Sigma Aldrich (1, 5, 10, 100, 1000, 4000 NTU) and used these, as well as intermediates prepared from their mixtures, to develop a UV absorbance-turbidity calibration curve (Supplementary Figure S4). We transferred 4 aliquots per sample into an MTP so that we had sufficient repeats, since if the formulations foamed during their transfer into the MTP, this would impair the measurement. We applied a local outlier factor (LOF) anomaly detection algorithm (with two nearest-neighbour points) and used the average and standard deviation of the remaining points to read off our proxy turbidity measurements and their associated 95% confidence interval from the calibration curve.
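A rough sketch of this outlier-rejection step, assuming four absorbance repeats per sample and a toy linear calibration (the study's real absorbance-turbidity curve, Supplementary Figure S4, is non-linear at high turbidity). The two-nearest-neighbour LOF setting follows the text; the calibration slope and example values are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def proxy_turbidity(absorbances, slope=250.0, intercept=0.0):
    """Average repeat absorbances after LOF outlier rejection, then map to NTU.

    slope/intercept define a toy linear calibration (NTU per absorbance unit);
    the study's real calibration curve is non-linear at high turbidity.
    """
    x = np.asarray(absorbances, dtype=float).reshape(-1, 1)
    keep = LocalOutlierFactor(n_neighbors=2).fit_predict(x) == 1  # +1 = inlier
    a_mean, a_std = x[keep].mean(), x[keep].std(ddof=1)
    ntu = slope * a_mean + intercept
    ci95 = 1.96 * slope * a_std / np.sqrt(keep.sum())             # 95% CI on the mean
    return ntu, ci95

# four aliquots; the third foamed during transfer and reads high
print(proxy_turbidity([0.041, 0.043, 0.120, 0.040]))
```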
Since it is very challenging to affordably automate viscosity measurements 32,41, particularly for non-Newtonian fluids where we are interested in characterising their viscosity-shear rate profile, we performed high-fidelity rheometry experiments offline. This was the bottleneck of the workflow but allowed us to capture rich data about the formulations. We used two different rheometers with slightly different configurations. While the reason for this was operational, it is, nevertheless, typical of the types of hurdles faced during a large experimental campaign. We used the TA DHR 30 with a 60 mm parallel plate and 250 μm plate gap for S1-618 and S690-822, and the Anton Paar MCR702e with a 25 mm parallel plate and 200 μm plate gap for S619-689. We performed a constant-temperature (25 °C) shear-rate sweep between 1-1000 s−1. This is representative of the pouring to spreading/lathering shear-rate range 44. We performed at least three repeat measurements for each sample and grouped the data into two results guided by our domain expertise: i) classifying the hypothetical "zero-shear" viscosity for each formulation as very low (≤10 mPa•s), low (10 < η ≤ 100 mPa•s), medium (100 < η ≤ 1000 mPa•s), or high (>1000 mPa•s); and ii) whether the sample is a Newtonian, shear-thinning, or another type of non-Newtonian fluid. Additionally, we recorded the complete data of shear rate vs. average viscosity measurements, along with their standard deviations. We include in the SI, for the knowledge of our community, an estimated capital cost breakdown of developing this semi-automated, ML-driven liquid formulations workflow.
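A minimal sketch of the flow-curve grouping just described. The zero-shear viscosity bands are those stated in the text; the low-shear averaging window and the high/low-shear ratio cutoff used to flag Newtonian behaviour are illustrative assumptions, not the authors' exact criteria.

```python
import numpy as np

def classify_rheology(shear_rate, viscosity_mpas):
    """Label a flow curve with a viscosity class and a fluid-type guess."""
    sr = np.asarray(shear_rate, float)
    eta = np.asarray(viscosity_mpas, float)
    eta0 = eta[sr <= sr.min() * 3].mean()       # crude zero-shear estimate (assumption)
    if eta0 <= 10:      v_class = "very low"
    elif eta0 <= 100:   v_class = "low"
    elif eta0 <= 1000:  v_class = "medium"
    else:               v_class = "high"
    ratio = eta[np.argmax(sr)] / eta0           # high-shear vs low-shear viscosity
    if ratio > 0.8:
        f_type = "Newtonian"                    # roughly flat flow curve (0.8 is an assumption)
    elif np.all(np.diff(eta) <= 1e-9):
        f_type = "shear-thinning"               # monotonically decreasing viscosity
    else:
        f_type = "other non-Newtonian"
    return v_class, f_type

sr = np.logspace(0, 3, 10)                      # 1-1000 1/s sweep, as in the protocol
print(classify_rheology(sr, 2000 / np.sqrt(sr)))  # -> ('high', 'shear-thinning')
```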
Data Records The dataset, with 812 liquid formulations, is provided as a JSON file, publicly available at 45. This cleaned dataset excludes ten samples from the total of 822 prepared formulations (a ~1% exclusion rate, illustrating the high quality of the workflow) for the reasons stated in Table 1. For the formulations which were too turbid, the proxy turbidity measurement would have required extrapolation, which would be unreliable; additionally, we found it hard to distinguish the stability of these very cloudy formulations. Each data entry in the JSON file corresponds to a sample, including information about its ingredient composition and phase stability, plus turbidity and rheology measurements (including the full flow curves) for the stable formulations, of which we prepared 294. Each formulation in the set is a mixture of two surfactants, a polyelectrolyte, and a thickener in a base of water. On the figshare repository 45, we share the commercial formulation ingredients we used in a slide deck ("BASF Formulation Ingredient Structures") containing the trade and INCI names for each of the eighteen ingredients, along with their chemical structures. SMILES strings are provided where possible, i.e., for all the surfactants and one thickener (Arlypon® F) - the small molecules. The remainder of the molecules are polymers, for which the monomer units, and their proportions (for co-polymers), are included. The estimated molecular weight range is also provided for these macromolecules where available from our industrial partner. Further to these slides, a separate CSV file ("BASF Surfactants Information") is provided for the surfactant ingredients, detailing their active matter/water content %, as well as any other content, e.g., preservatives (potassium sorbate) or buffering agents (citric acid), which were present for the anionic surfactants. Furthermore, as we worked with real commercial materials, we found via UPLC-MS characterisation (protocol detailed in the SI) that the surfactant ingredients had a distribution of chain lengths, as summarised in Supplementary Figure S5; the average chain length calculated from these results and a functional group count are included in the surfactants CSV file. Finally, on the repository 45, we publish 1,292 formulation phase stability images, including 176 separated into a "borderline" category as discussed above, which we suggest should be excluded from training any ML models but are included for completeness. We endeavoured to make this dataset FAIR: i) findable, through using a consistent and unique sample ID for each formulation; ii) accessible, by sharing the dataset on a public repository; iii) interoperable, through the use of a common and system-agnostic data format (JSON); and iv) reusable, with an open licence to use this dataset, thereby following the FAIR community guidelines for data management 46,47. Technical Validation Within this section, we discuss the experimental uncertainties resulting from our methods and illustrate the utility of this dataset. We present our discussion in the order of the workflow. Chemical diversity of the dataset. Our automated viscous liquid handling method is self-validating by design, as we have incorporated a balance with the OT-2 to measure the mass of ingredients added; therefore, we confidently know the compositions of the prepared formulations. Figure 3 shows a correlation heatmap as evidence of the statistical diversity of the generated dataset.
As mentioned under the Background & Summary, the DoE method used for this work is outside the scope of this article and is properly detailed elsewhere 29. Briefly, an ML-guided space-filling design was used with the goal of maximising the coverage and spread of the dataset (and exploring towards regions of phase stability), so that the complete formulations space is represented. This subsequently allows accurate prediction across the entire design space. There are eight unique combinations of polymer and thickener used in this study, and we cycled through each combination, focusing the DoE on one subsystem (polymer, thickener combination) at a time. Figure 3 (a heatmap of the surfactant ingredient concentrations across the formulations dataset) shows that the surfactant compositions are not correlated with one another and, therefore, that the dataset is not biased to any one region of the design space. This is further supported by Supplementary Figures S6, S7, which show histograms of the surfactant ingredient concentrations and a pair plot of all the formulation ingredients across the dataset. We see a well-distributed and non-correlated spread of the formulation compositions. Computer vision for phase stability classification. To demonstrate the utility of the image dataset, we trained a computer vision (CV) model for phase stability classification. There is an apparently increasing adoption of CV, including in the formulations domain [48][49][50]. Figure 4 shows a small selection of sample images which the CV model must classify. We developed a two-stage method, with a first stage of identifying the formulation in the image, i.e., object detection, and a second stage performing the classification. We use the popular YOLOv5s 51 ("you only look once") object detection model trained on a subset of the first 300 images we generated, from which we can train an almost perfect crop detection for our formulations: 0.995 mean average precision (mAP) at a 0.5 intersection over union (IoU) threshold. We introduce this crop detection prior to the classification, as it improves the final model performance. We then use a standard 3-layer convolutional neural network architecture (with 16, 32, 64 filters in each convolution layer) to train the phase stability classifier, using the full set of images and only holding out the final sub-system investigated (formulations prepared with polymer = Dehyquart® CC7 Benz and thickener = Arlypon® TT, which represent ~10% of the image dataset). The overall test-set performance of this two-stage model is a 0.84 F1 score (precision = 0.92, recall = 0.77), which is a reasonably strong classifier, particularly when one considers that the ML model has the challenging task of distinguishing some weak phase boundaries through transparent labware. This is an open challenge in the community 52,53, and we invite computer vision experts to try and improve on our benchmark. We envisage the application of this work in a smart R&D laboratory, where a similar webcam could be set up to analyse a conveyor belt of formulations, with the stable ones carried forward for further processing, as in this study, and the others safely discarded.
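A rough Keras sketch of the second-stage classifier described above: three convolution layers with 16/32/64 filters, fed with crops from the detector. The input size, pooling, dense width, and compile settings are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_stability_classifier(input_shape=(128, 128, 3)):
    """3 conv layers with 16/32/64 filters; binary stable/unstable output."""
    model = models.Sequential([
        layers.Input(shape=input_shape),        # YOLO-cropped formulation image
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(stable)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
    return model

model = build_stability_classifier()
model.summary()
```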
Turbidity and Rheology measurements. We now discuss the experimental uncertainty of our two main characterisation measurements in this study, turbidity and viscosity. The turbidity calibration curve, highlighted in red in Fig. 5, with the original calibration points in Supplementary Figure S4, has an R2 value of 0.997. Figure 5 shows the liquid formulations' proxy turbidities computed from their measured UV absorbances at 420 nm, along with their 95% confidence intervals. As expected, we see a linear relationship between turbidity and absorbance at low turbidity. However, at higher turbidities, far fewer photons can pass through to the UV-vis detector, so we see a sharp rise in the calibration curve. This is the primary factor behind the larger uncertainty at higher turbidity values, as the same absolute uncertainty in the absorbance measurement will result in a larger uncertainty in the proxy variable. Except for a few points, there is a low (<10%) uncertainty for most reported measurements, particularly at the lowest turbidities, which is the region of greatest interest for formulators. With respect to the rheometry experiments, we optimised the set-up of the two rotational rheometers, settling on parallel plates with the specific diameters and operating gap widths stated in the Methods section. This was a non-trivial task, as we had a wide range of expected viscosities (from water-like samples of ~1 mPa•s to viscous shampoo formulations of ~5,000 mPa•s) to measure with a single rheometry protocol (to maintain consistency across the dataset generation). The operating limits of a rheometer are dictated by its torque sensitivity. For non-viscous samples, one would typically use a Couette cell configuration ("cup-and-bob"), which is more suitable at the lower viscosity range due to the geometry's extra surface area. However, for the more gel-like samples expected at the upper viscosity range, a "cup-and-bob" is not suitable, and parallel plates are better across the study. We used relatively large-diameter parallel plates to ensure accuracy at the lower viscosity limit. Our calibration runs were with DI water (standard 1) and three general-purpose Newtonian fluid viscosity standards (Paragon Scientific™); results are shown in Fig. 6. The average viscosity over three repeats and error bars corresponding to a 95% confidence interval are shown. For the two more viscous standards (1,275 and 6,695 mPa•s), we see strong agreement on both rheometers with the expected viscosities over the full range of shear rates; however, the measurements are noisy and only typically settle to an accurate steady-state value above a shear rate of ~10 s−1 for the two non-viscous standards (0.89 and 3.32 mPa•s). At higher shear rates, a greater stress is exerted back on the plate, and therefore, for the non-viscous standards (which are close to the rheometer's lower sensitivity limit at low shear rates), their viscosity can be more accurately determined at the higher shear rates. Overall, if we look at the measured viscosity at a fixed high shear rate, e.g., 100 s−1, then both rheometers are within 3% of the ground truth across the viscosity range of our prepared liquid formulations. Figure 7 shows viscosity results grouped by their rheology type (Newtonian, shear-thinning, or another type of non-Newtonian fluid). Nearly 50% of the formulations have a water-like low viscosity, and these are all classified as Newtonian formulations. We were unable to resolve the viscosity-shear rate profile for these non-viscous samples with sufficient accuracy at low shear rates, as seen in Fig. 6, to be able to make a more detailed deduction of the sample's rheology.
However, it is also known from our domain expertise that it is highly unlikely for such a non-viscous formulation to present non-Newtonian behaviour, and therefore, this is acceptable. Conversely, Fig. 7 shows that the more viscous the formulation, the greater the proportion of non-Newtonian samples. In conclusion, our dataset is statistically diverse in its input features (formulation ingredient concentrations), which allows for accurate predictions across the tested design space, and the resulting samples encompass the entire spectrum of property targets explored in our study, including the preferred product profile articulated by our industrial collaborator: a stable, low-turbidity, highly viscous, and shear-thinning sample. Notably, this dataset offers sample-specific uncertainty measurements, which are valuable for training more informative surrogate models. Furthermore, these uncertainties are lowest in the regions of greatest interest: low turbidity and high viscosity. Usage Notes In this work, we present the formulations dataset as a single JSON file that can be read using all major programming languages (e.g., Python, MATLAB, R, etc.) and a supplementary image dataset, both uploaded publicly at 45. We provide a code snippet in the SI to read the JSON file into a Pandas dataframe (Python), and further, to unnest the rheology data so that you can study the viscosity-shear rate profiles for each sample, as shown for the illustrative examples in Fig. 8. There is detailed information beyond the simple classifications presented as "Viscosity" and "Rheology_Type", which is of interest to rheologists and formulators alike. In summary, we have presented a large open-source dataset of liquid formulations. To generate this dataset, we developed a specific workflow and several underlying methods/experiments. We envisage the primary use case of this work to be training property prediction models for liquid formulations ("the forward model"), and later tackling the inverse design problem. Furthermore, this work is broadly useful to computational materials scientists/chemists, as it is a high-dimensional experimental dataset with multiple conflicting target objectives which can be used to benchmark ML methods for discovery and optimisation. Fig. 1 An overview of the liquid formulation workflow. (i) From an ML-guided design of experiments 29; (ii) preparation of the liquid formulations by automated viscous liquids handling 30; (iii) mixing on a multi-position stirrer plate; followed by (iv) pH adjustment on the pHbot 31; and finally, (v) characterisation including optical imaging for phase stability, proxy UV-vis absorbance for turbidity, and rotational rheometry for viscosity measurements. Fig. 2 Details of liquid handling of formulations. (a) Mass balance integrated with an Opentrons OT-2 liquid handling robot 30 - pipettes formulation ingredients according to an input CSV file with desired ingredient mass fractions and outputs a CSV file with a mass vs. time profile; (b) mass profile signal (purple) as well as its first (green) and second derivatives (blue), used by our signal processing algorithm to resolve the formulation compositions. Fig. 4 Representative sample images for phase stability classification. The red bounding boxes show the part of the image identified by the YOLOv5s object detection model as the region of interest, the liquid formulation, which is to be classified as stable or not.
Fig. 8 Viscosity-shear rate profiles of selected samples highlighting the rheological behaviours observed from low to high viscosity, and from Newtonian to non-Newtonian fluids.

Table 1. Samples excluded from the cleaned formulations dataset.
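As a rough illustration of the reading step described in the Usage Notes, the sketch below loads the JSON file into a Pandas dataframe and unnests the per-sample rheology records. The file name and column names ("formulations.json", "Sample_ID", "Rheology", "Shear_Rate", "Viscosity") are illustrative placeholders, not the dataset's actual schema, which is defined in the SI.

```python
# Minimal sketch, assuming a JSON file of records with a nested rheology field.
# File and column names are illustrative placeholders, not the actual schema.
import pandas as pd

# Each top-level record is one formulation sample.
df = pd.read_json("formulations.json")

# Unnest the rheology data: one row per (sample, shear-rate) measurement.
rheo = (
    df[["Sample_ID", "Rheology"]]
    .explode("Rheology")                      # one dict per measurement
    .dropna(subset=["Rheology"])
)
rheo = pd.concat(
    [rheo.drop(columns="Rheology").reset_index(drop=True),
     pd.json_normalize(rheo["Rheology"])],    # expand dicts into columns
    axis=1,
)

# Viscosity-shear rate profile for a single sample, as in Fig. 8.
profile = rheo[rheo["Sample_ID"] == rheo["Sample_ID"].iloc[0]]
print(profile[["Shear_Rate", "Viscosity"]])
```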
6,150.2
2024-07-03T00:00:00.000
[ "Chemistry", "Computer Science", "Engineering", "Materials Science" ]
Word-based Japanese typed dependency parsing with grammatical function analysis

We present a novel scheme for a word-based Japanese typed dependency parser which integrates syntactic structure analysis and grammatical function analysis such as predicate-argument structure analysis. Compared to bunsetsu-based dependency parsing, which is predominantly used in Japanese NLP, it provides a natural way of extracting syntactic constituents, which is useful for downstream applications such as statistical machine translation. It also makes it possible to decide dependency and predicate-argument structure jointly, which are usually implemented as two separate steps. We convert an existing treebank to the new dependency scheme and report parsing results as a baseline for future research. By using grammatical functions as dependency labels, we achieved better accuracy in assigning function labels than a predicate-argument structure analyzer.

Introduction

The goal of our research is to design a Japanese typed dependency parsing scheme that has sufficient linguistically derived structural and relational information for NLP applications such as statistical machine translation. We focus on the Japanese-specific aspects of designing a kind of Stanford typed dependencies (de Marneffe et al., 2008). Syntactic structures are usually represented as dependencies between chunks called bunsetsus. A bunsetsu is a Japanese grammatical and phonological unit that consists of one or more content words such as a noun, verb, or adverb, followed by a sequence of zero or more function words such as auxiliary verbs, postpositional particles, or sentence-final particles. Most publicly available Japanese parsers, including CaboCha (Kudo et al., 2002) and KNP (Kawahara et al., 2006), return bunsetsu-based dependencies as syntactic structures. Such parsers are generally highly accurate and have been widely used in various NLP applications. However, bunsetsu-based representations also have two serious shortcomings: one is the discrepancy between syntactic and semantic units, and the other is insufficient syntactic information (Butler et al., 2012; Tanaka et al., 2013). Bunsetsu chunks do not always correspond to constituents (e.g., NP, VP), which complicates the task of extracting semantic units from bunsetsu-based representations. This kind of problem often arises in handling nesting structures such as coordinating constructions. For example, there are three dependencies in sentence (1): a coordinating dependency b2-b3 and ordinary dependencies b1-b3 and b3-b4.

(1) 'A list of wine and sake that (someone) drank'

In extracting predicate-argument structures, it is not possible to directly extract the coordinated noun phrase "wine and sake" as a direct object of the verb "drank". In other words, we need an implicit interpretation rule in order to extract the NP in the coordinating construction: the head bunsetsu b3 should be divided into a content word and a function word, and then the content word should be merged with the dependent bunsetsu b2. Therefore, predicate-argument structure analysis is usually implemented as a post-processor of a bunsetsu-based syntactic parser, not just for assigning grammatical functions but also for identifying constituents; for example, the analyzer SynCha (Iida et al., 2011) uses the parsing results from CaboCha. We assume that using a word as the parsing unit instead of a bunsetsu chunk helps to maintain consistency between syntactic structure analysis and predicate-argument structure analysis.
Another problem is that linguistically different constructions share the same representation. The difference between a gapped relative clause and a gapless relative clause is a typical example. In sentences (2) and (3), we cannot discriminate the two relations between bunsetsus b2 and b3 using unlabeled dependencies: the former is a subject-predicate construction of the noun "cat" and the verb "eat" (a subject-gap relative clause), while the latter is not a predicate-argument construction (a gapless relative clause).

(3) hanashi 'story': 'the story about having eaten fish'

We aim to build a Japanese typed dependency scheme that can properly deal with syntactic constituency and grammatical functions in the same representation without implicit interpretation rules. The design of the Japanese typed dependencies is described in Section 3, and we present our evaluation of the dependency parsing results for a parser trained with a dependency corpus in Section 4.

Related work

Mori et al. (2014) built word-based dependency corpora in Japanese. The reported parsing achieved an unlabeled attachment score of over 90%; however, there was no information on the syntactic relations between the words in this corpus. Uchimoto et al. (2008) also proposed criteria and definitions for word-level dependency structure, mainly for the annotation of a spontaneous speech corpus, the Corpus of Spontaneous Japanese (CSJ) (Maekawa et al., 2000), and they do not make a distinction between detailed syntactic functions either. We proposed a typed dependency scheme based on the well-known and widely used Stanford typed dependencies (SD), which originated in English and have since been extended to many languages, but not to Japanese. The Universal Dependencies (UD) (McDonald et al., 2013; de Marneffe et al., 2014) have been developed based on SD in order to design cross-linguistically consistent treebank annotation. UD for Japanese has also been discussed, but no treebanks have been provided yet. We focus on the feasibility of word-based Japanese typed dependency parsing rather than on cross-linguistic consistency. We plan to examine the conversion between UD and our scheme in the future.

Typed dependencies in Japanese

To design a scheme of Japanese typed dependencies, there are three essential points: what should be used as the parsing unit, which dependency scheme is appropriate for Japanese sentence structure, and what should be defined as the dependency types.

Parsing unit

Defining a word unit is indispensable for word-based dependency parsing. However, this is not a trivial question, especially in Japanese, where words are not segmented by white spaces in the orthography. We adopted two types of word units defined by NINJAL for building the Balanced Corpus of Contemporary Written Japanese (BCCWJ) (Maekawa et al., 2014): the short unit word (SUW) is the shortest token conveying morphological information, and the long unit word (LUW) is the basic unit for parsing, consisting of one or more SUWs. Figure 1 shows example results from the preprocessing for parsing. In the figure, "/" denotes a border of SUWs in an LUW, and "∥" denotes a bunsetsu boundary.

Dependency scheme

Basically, Japanese dependency structure is regarded as an aggregation of pairs of a left-side dependent word and a right-side head word, i.e., right-headed dependencies, since Japanese is a head-final language. However, how to analyze a predicate constituent is a matter of debate.
We define two types of schemes depending on the structure related to the predicate constituent: one first conjoins the predicate and its arguments, and the other first conjoins the predicate and function words such as auxiliary verbs. As shown in sentence (4), a predicate bunsetsu consists of a main verb followed by a sequence of auxiliary verbs in Japanese. We consider two ways of constructing a verb phrase (VP). One is to first conjoin the main verb and its arguments to construct the VP, as in sentence (4a); the other is to first conjoin the main verb and the auxiliary verbs, as in sentence (4b). These two types correspond to sentences (5a) and (5b), respectively, in English. The structures in sentences (4a) and (5a) are similar to a structure based on generative grammar. On the other hand, the structures in sentences (4b) and (5b) are similar to the bunsetsu structure. We defined two dependency schemes, Head Final type 1 (HF1) and Head Final type 2 (HF2), as shown in Figure 2, which correspond to the structures of sentences (4a) and (4b), respectively. Additionally, we introduced the Predicate Content word Head type (PCH), where a content word (e.g., a verb) is treated as the head in a predicate phrase so as to link the predicate to its argument more directly.

Dependency type

We defined 35 dependency types for Japanese based on SD, where 4-50 types are assigned for syntactic relations in English and other languages. A thesaurus (Ikehara et al., 1997) was used to generalize the nouns. Table 1 shows the major dependency types. To discriminate between a gapped relative clause and a gapless relative clause, as described in Section 1, we assigned the two dependency types rcmod and ncmod, respectively. Moreover, we introduced gap information by subdividing rcmod into three types to extract predicate-argument relations, while the original SD makes no distinction between them. The labels of case and gapped relative clause enable us to extract predicate-argument structures by simply tracing dependency paths (a short code sketch of this path tracing is given below). In the case of HF1 in Figure 2, we find two paths between content words: "fried fish" (NN) ←pobj ←dobj ← "eat" (VB), and "eat" (VB) ←aux ←aux ←rcmod_nsubj ← "calico cat" (NN). By marking the dependency types dobj and rcmod_nsubj, we can extract the arguments for the predicate "eat", i.e., "fried fish" as a direct object and "calico cat" as a subject.

Evaluation

We demonstrated the performance of typed dependency parsing based on our scheme by using a dependency corpus automatically converted from a constituent treebank and an off-the-shelf parser.

Resources

We used a dependency corpus that was converted from the Japanese constituent treebank (Tanaka et al., 2013) built by re-annotating the Kyoto University Text Corpus (Kurohashi et al., 2003) with phrase structure and function labels. The Kyoto corpus consists of approximately 40,000 sentences from newspaper articles, and of these, 17,953 sentences have been re-annotated. The treebank is designed to have complete binary trees, which can be easily converted to dependency trees by adapting head rules and dependency-type rules for each partial tree. We divided this corpus into 15,953 sentences (339,573 LUWs) for the training set and 2,000 sentences (41,154 LUWs) for the test set.

Parser and features

In the analysis process, sentences are first tokenized into SUWs and tagged with SUW POS by the morphological analyzer MeCab (Kudo et al., 2004). The LUW analyzer Comainu (Kozawa et al., 2014) chunks the SUW sequences into LUW sequences. We used the MaltParser (Nivre et al., 2007), which achieved over 81% labeled attachment score (LAS) for English SD.
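To make the path-tracing idea above concrete, here is a minimal sketch of extracting (pred, rel, arg) pairs from a typed dependency tree. The arc-list encoding and the omission of case-particle nodes on the path are illustrative assumptions, not the paper's implementation; only the label names follow the description above.

```python
# Minimal sketch of predicate-argument extraction by tracing typed
# dependency paths. The tree encoding (a list of (dependent, label, head)
# arcs) is an illustrative assumption; particle nodes are omitted.
ARG_LABELS = {"nsubj", "dobj", "iobj"}
GAP_LABELS = {"rcmod_nsubj", "rcmod_dobj", "rcmod_iobj"}

def extract_triples(arcs):
    """Return (pred, rel, arg) triples from labeled dependency arcs."""
    triples = []
    for dep, label, head in arcs:
        if label in ARG_LABELS:
            # direct case: the argument depends on its predicate
            triples.append((head, label, dep))
        elif label in GAP_LABELS:
            # gapped relative clause: the modified noun fills the gap
            triples.append((dep, label.split("_")[1], head))
    return triples

# Toy HF1-style tree for "the calico cat that ate fried fish":
arcs = [("fried fish", "dobj", "eat"),
        ("eat", "rcmod_nsubj", "calico cat")]
print(extract_triples(arcs))
# [('eat', 'dobj', 'fried fish'), ('eat', 'nsubj', 'calico cat')]
```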
Stack algorithm (projective) and LIBLINEAR were chosen as the parsing algorithm and the learner, respectively. We built and tested three parsing models with the three dependency schemes. The features of the parsing model are made by combining word attributes, as shown in Table 2. We employed SUW-based attributes as well as LUW-based attributes, because LUWs contain many multiword expressions such as compound nouns, and features combining LUW-based attributes tend to be sparse. The SUW-based attributes are extracted by using the leftmost or rightmost SUW of the target LUW. For instance, for an LUW in Figure 1, the SUW-based attributes are s_LEMMA_L (the leftmost SUW's lemma, "fish") and s_LEMMA_R (the rightmost SUW's lemma, "fry").

Results

The parsing results for the three dependency schemes are shown in Table 3(a). The dependency schemes HF1 and HF2 are comparable, but PCH is slightly lower than them, probably because PCH is a more complicated structure, having left-to-right dependencies in the predicate phrase, than the head-final types HF1 and HF2. The performance of the LUW-based parsing is considered to be comparable to the results of the bunsetsu-dependency parser CaboCha on the same data set, i.e., a UAS of 92.7%, although we cannot directly compare them due to the difference in parsing units. The argument cases (nsubj, dobj and iobj) resulted in relatively high scores in comparison to the temporal (tmod) and locative (lmod) cases. These types are typically labeled on postpositional phrases consisting of a noun phrase and particles, and case particles such as "ga", "o" and "ni" strongly suggest an argument by their combination with verbs, while particles such as "ni" and "de" are also widely used outside the temporal and locative cases.

Predicate-argument structure

We extracted predicate-argument structure information as triplets, which are pairs of predicates and arguments connected by a relation, i.e., (pred, rel, arg), from the dependency parsing results by tracing the paths with the argument and gapped relative clause types. pred in a triplet is a verb or an adjective, arg is the head noun of an argument, and rel is nsubj, dobj or iobj. The gold-standard data were built by converting the predicate-argument structures in the NAIST Text Corpus (Iida et al., 2007) into the above triples. Basically, the cases "ga", "o" and "ni" in the corpus correspond to "nsubj", "dobj" and "iobj", respectively; however, we had to apply an alternative conversion for passive or causative voice, since the annotation is based on active voice. The conversion for case alternation was done manually for each triple. We filtered out the triples including zero pronouns or arguments without direct dependencies on their predicates from the converted triples; finally, 6,435 triplets remained. Table 4 shows the results of comparing the extracted triples with the gold data. PCH marks the highest score here in spite of getting the lowest score in the parsing results. The characteristics of PCH, where content words tend to be directly linked, are presumably responsible. The table also contains the results of the predicate-argument structure analyzer SynCha. Note that we focus only on the relations between a predicate and its dependents, while SynCha is designed to deal with zero anaphora resolution in addition to predicate-argument structure analysis over syntactic dependencies.
Since SynCha uses the syntactic parsing results of CaboCha in a cascaded process, parsing errors may cause conflicts between the syntactic structure and the predicate-argument structure. A typical example is the case where a gapped relative clause modifies a noun phrase "A no B" ("B of A"), e.g., "footprints of the cat that escaped from a garden." If the noun A is an argument of the main predicate in the relative clause, the predicate is a dependent of the noun A; however, this is not actually reliable, because the two analyses are processed separately. There are 75 constructions of this type in the test set; the LUW-based dependency parsing captured 42 correct predicate-argument relations (and dependencies), while the cascaded parsing was limited to obtaining 6 relations.

Conclusion

We proposed a scheme of Japanese typed dependency parsing for dealing with constituents and capturing grammatical functions as dependency types, which bypasses the traditional limitations of bunsetsu-based dependency parsing. The evaluations demonstrated that a word-based dependency parser achieves high accuracies that are comparable to those of a bunsetsu-based dependency parser and, moreover, provides detailed syntactic information such as predicate-argument structures. Recently, discussion has begun toward Universal Dependencies, including for Japanese. The work presented here can be viewed as a feasibility study of UD for Japanese. We are planning to port our corpus and compare our scheme with UD to contribute to the improvement of UD for Japanese.
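As a rough sketch of the triplet comparison used in the evaluation above, the snippet below scores extracted (pred, rel, arg) triples against gold triples by exact set matching. The matching criterion is an illustrative assumption; the paper does not spell out its exact protocol.

```python
# Minimal sketch of scoring extracted (pred, rel, arg) triples against
# gold triples by exact set matching; an illustrative assumption, not
# necessarily the paper's exact matching protocol.
def score_triples(predicted, gold):
    pred_set, gold_set = set(predicted), set(gold)
    tp = len(pred_set & gold_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [("eat", "dobj", "fried fish"), ("eat", "nsubj", "calico cat")]
predicted = [("eat", "dobj", "fried fish"), ("eat", "nsubj", "fish")]
print(score_triples(predicted, gold))  # (0.5, 0.5, 0.5)
```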
3,343.2
2015-07-01T00:00:00.000
[ "Computer Science", "Linguistics" ]
Loss of Igf2 imprinting in monoclonal mouse hepatic tumor cells is not associated with abnormal methylation patterns for the H19, Igf2, and Kvlqt1 differentially methylated regions. IGFII, the peptide encoded by the Igf2 gene, is a broad spectrum mitogen with important roles in prenatal growth as well as cancer progression. Igf2 is transcribed from the paternally inherited allele, whereas the linked H19 is transcribed from the maternal allele. Igf2 imprinting is thought to be maintained by differentially methylated regions (DMRs) located at multiple sites such as upstream of H19 and Igf2 and within Kvlqt1 loci. Biallelic expression (loss of imprinting (LOI)) of Igf2 is frequently observed in cancers, and a subset of Wilms' and intestinal tumors have been shown to exhibit abnormal methylation at H19DMR associated with loss of maternal H19 expression, but it is not known whether such changes are common in other neoplasms. Because cancers consist of diverse cell populations with and without Igf2 LOI, we established four independent monoclonal cell lines with Igf2 LOI from mouse hepatic tumors. We here demonstrate retention of normal differential methylation at H19, Igf2, or Kvlqt1 DMR by all of the cell lines. Furthermore, H19 was found to be expressed exclusively from the maternal allele, and levels of CTCF, a multifunctional nuclear factor that has an important role in the Igf2 imprinting, were comparable with those in normal hepatic tissues with no mutational changes detected. These data indicate that Igf2 LOI in tumor cells is not necessarily linked to abnormal methylation at H19, Igf2, or Kvlqt1 loci. Genomic imprinting is defined as an epigenetic change leading to differential expression of the two parental alleles in somatic cells (1)(2)(3). Igf2 is one of the known imprinted genes for which only the paternal allele is expressed, the maternal allele being silent. In cancers, the Igf2 imprinting is frequently relaxed so that the silent maternal allele becomes active, resulting in biallelic expression (4,5).
Such loss of imprinting (LOI) also occurs in normal tissues adjacent to some cancers with LOI (6,7) and has been implicated in the Beckwith-Wiedemann syndrome (BWS), a congenital overgrowth disorder that predisposes to embryonal tumors (8,9). Imprinting of the Igf2 gene is thought to be controlled by sequences located at the H19, Igf2, and other loci (10,11). Such cis elements usually contain a region that is differentially methylated on the parental alleles (DMR) (12-16), containing CpG-rich repeats, which are postulated to facilitate heterochromatinization and gene silencing at imprinted loci (17). In addition, antisense RNA has been shown to be transcribed from regions including the DMRs at Igf2 (18) and Kvlqt1 (KvDMR) (15,16), which has been proposed to serve a regulatory role in silencing the sense-orientation transcript (1,2). The H19DMR is located upstream of the H19 promoter and is capable of binding CTCF, a highly conserved zinc finger DNA-binding protein with multiple roles in gene regulation. One of the important functions of CTCF is as a chromatin insulator acting in a methylation-dependent manner (19-22). H19 is expressed from the maternal allele, whereas the paternal allele is silent, indicating that the H19/Igf2 genes are reciprocally imprinted. Competition of these two genes for the use of sets of common enhancers located downstream of H19 is now a well-documented model that explains this opposite allele-specific expression (19-22). On the paternal allele, CTCF cannot bind to the hypermethylated H19DMR, and the common enhancer may be utilized by the Igf2 promoters. Such hypermethylation on the paternal H19DMR is established during male gametogenesis and is maintained throughout development (23). On the other hand, on the maternal allele, where the H19DMR is unmethylated, CTCF binds to the H19DMR to insulate Igf2, and the common enhancer may then be utilized only by the H19 promoter. The Igf2 gene contains three DMRs, two of which (Igf2DMR0 and DMR1) are located far upstream of the Igf2 coding region, while the other (Igf2DMR2) lies within the gene body (12-14,18). Igf2DMR0 is methylated on the inactive maternal allele (18), whereas Igf2DMR1 and DMR2 are methylated on the active paternal allele (12-14). Deletion of the region including the maternal Igf2DMR1 has been reported to result in Igf2 LOI (24), and it has been proposed that a putative repressor may bind to the maternal unmethylated Igf2DMR1 to silence the maternal allele (12-14,24). Furthermore, there seems to be a functional link between the H19DMR and Igf2DMRs, because deletion of the maternal H19 gene results in decreased methylation at the maternal Igf2DMR0 (18) and the paternal Igf2DMR1 and DMR2 (25). Moreover, deletion of a large segment of the Igf2 upstream region, from either the maternal or paternal allele, is reported to lead to LOI not only of Igf2 but also of H19 (26). The KvDMR is located within an intron of the Kvlqt1 gene, where the maternal allele is hypermethylated, whereas the paternal allele is unmethylated (15,16). Although the Kvlqt1 gene is transcribed from the maternal allele, antisense transcripts,
referred to as LIT1, are expressed from the unmethylated paternal allele in both humans and mice (15,16). Demethylation of the maternal KvDMR associated with biallelic expression of LIT1 has been detected in the majority of BWS cases (15,16), with Smilinich et al. (15) reporting IGF2 LOI to occur independently of changes in methylation or expression of H19. Thus, the KvDMR appears to be an additional control center for Igf2 imprinting, independent of the Igf2/H19 loci. However, this remains controversial, because Lee et al. (16) reported similar results but found that KvDMR demethylation was not correlated with IGF2 LOI, although it was consistently associated with LIT1 LOI. The molecular events involved in Igf2 LOI in tumors have not been determined. In a subset of Wilms' tumors (27,28) and intestinal tumors (7), it has been shown to be accompanied by methylation at the maternal H19DMR, together with silencing of the maternal H19 allele (27,28). Although this pattern of Igf2/H19 expression is consistent with the Igf2/H19 chromatin insulation model, there is no reciprocal pattern of Igf2/H19 expression, as well as no strict relationship between Igf2 LOI and methylation at the H19DMR, in various tumors (29-34). Furthermore, some cases of Wilms' tumor with Igf2 LOI show hypomethylation at the region corresponding to mouse Igf2DMR0 (34), suggesting the possibility that altered methylation at the Igf2DMRs may also be involved. In the present study, we therefore investigated whether Igf2 LOI is associated with specific alterations of methylation at the H19DMR, Igf2DMR1, and KvDMR. In an earlier report, we documented that cells of hepatic tumors (HTs) chemically induced in C3H/HeJ × MSM (Mus musculus molossinus Mishima, Japanese wild mice) hybrids frequently show Igf2 LOI and express Igf2 at high levels (35). Furthermore, consistent polymorphisms found between the parental C3H/HeJ and MSM strains allow investigation of allele-specific methylation correlating with allele-specific expression of Igf2/H19. We here report that HT cells with Igf2 LOI are composed of diverse cell populations, with and without Igf2 LOI, but even the monoclonal cells with Igf2 LOI retain maternal H19 expression with normal methylation patterns at the H19DMR, Igf2DMR1, and KvDMR. EXPERIMENTAL PROCEDURES Mouse HT Cell Lines-Male F1 hybrid mice derived from breeding female C3H/HeJ and male MSM mice were intraperitoneally administered diethylnitrosamine (5 μg/g body weight) at the age of 2 weeks. Cells were isolated by the collagenase perfusion method from HTs that developed 12-14 months later. The HT cells were cultured at low density as described previously (36), and colonies that were present 4-5 days after the start of cultivation were individually isolated using a cloning ring and expanded to establish cell lines. These were then examined for Igf2 LOI, and when LOI (+) clones were found, they were further cloned to determine how many subclones showed the Igf2 LOI. Reverse Transcription-PCR (RT-PCR)-RNA was extracted from HT cells and tissues and from normal liver tissues of male C3H/HeJ, MSM, and C3H/HeJ × MSM F1 mice at various ages. Total RNAs were treated with RNase-free DNase (GenHunter, Nashville, TN) to destroy contaminating genomic DNA, and first-strand cDNAs were generated using Superscript II (Invitrogen).
The specific primers and conditions for each RT-PCR are listed in Table I. Igf2 LOI-The Igf2 exon 6 containing the polymorphic CA repeat (35) was amplified by genomic or RT-PCR using the 5′-6-carboxyfluorescein-labeled forward and unlabeled reverse primers, and the products were analyzed with an Applied Biosystems automatic sequencer (ABI Prism 377; Foster City, CA) using the GeneScan software. RT-negative controls were run in parallel and demonstrated to be consistently negative. H19 LOI-To find a polymorphism in the H19 coding region, the H19 exons were amplified by PCR from the genomic DNA of C3H/HeJ and MSM mice, and the products were sequenced with the automatic sequencer. The region covering H19 exons 2-3, including a polymorphic base at nt 7144 (GenBank accession number AF049091), G for C3H/HeJ and A for MSM, was then amplified by RT-PCR. The products were also cloned using a TOPO TA cloning kit (Invitrogen, Carlsbad, CA), and the clones were individually sequenced for each sample. Methylation-specific DNA Sequencing-The polymorphic bases at the H19DMR, Igf2DMR1, and KvDMR in C3H/HeJ and MSM mice were determined by sequencing the genomic DNA. Bisulfite treatment of the DNA was carried out using a CpGenome DNA modification kit (Intergen, Purchase, NY). The regions within the DMRs containing the polymorphic bases were then amplified from the bisulfite-treated DNA using the primers listed in Table I. These PCR products were cloned using the TA cloning kit, and plasmid clones were individually sequenced. CTCF Mutations-CTCF exons 2-7, including the 11 zinc finger domains, were separately amplified from cDNA using the primers listed in Table I and sequenced as above. Western Blotting-Cells were isolated from subconfluent cultures and lysed using a solution containing 50 mM Tris (pH 8.0), 120 mM NaCl, 0.5% Nonidet P-40, and 100 mM NaF. The cell lysates, each with 50 μg of denatured protein, were run on 7% polyacrylamide gels containing 0.1% SDS and then blotted onto nitrocellulose membranes. The filters were blocked with 5% nonfat dry milk in phosphate-buffered saline plus 0.2% Tween 20 and incubated with a 1:100-diluted anti-CTCF antibody (Santa Cruz Biotechnology, Inc., Santa Cruz, CA). After extensive washing in phosphate-buffered saline plus 0.2% Tween 20, the filters were reacted with a horseradish peroxidase-conjugated anti-goat secondary antibody (Santa Cruz Biotechnology) and washed again. Finally, bound antibodies were visualized using the ECL system (Amersham Biosciences, Uppsala, Sweden). Statistics-The data were statistically evaluated with Statview software (SAS Institute, Cary, NC). Differences were analyzed using the χ2 or Fisher's exact test, and significance was concluded at p < 0.05. RESULTS Establishment of Monoclonal HT Cell Lines with Igf2 LOI-Since the RT-PCR fragments from Igf2 exon 6 including the polymorphic CA repeat were four bases longer for the paternal MSM allele than for the maternal C3H/HeJ allele (35), it was possible to distinguish the parental alleles from which Igf2 was expressed (Fig. 1). Although Igf2 was expressed in all 10 HT tissues and in 54 of 62 cell lines, only 4 of the 54 expressing HT cell lines were demonstrated to show Igf2 LOI (Table II). The four cell lines with Igf2 LOI were then subcloned, and 11-33 sublines were produced from each. Analysis of each subclone revealed mosaicism in terms of monoallelic/biallelic as well as positive/negative Igf2 expression (Table II).
Therefore, three Igf2 LOI (+) clones from the first cell lines were further subcloned, and 9-12 sublines were produced from each. Analysis of these revealed 75-100% of the sublines to show Igf2 LOI (Table II). One of the subclones derived from the first Igf2 LOI (+) cell line and three subclones derived from the second were then employed, together with four Igf2 LOI (−) cell lines, in the next experiments. H19 Imprinting-RT-PCR analysis directed to H19 exons 2-3 revealed that the normal adult mouse livers and all the cell lines expressed H19 regardless of the Igf2 LOI (Fig. 2A). Sequencing of the individual RT-PCR fragments revealed that they were all derived from the maternal H19 allele in all the cell lines, indicating that the H19 expression and imprinting are retained (Table III and Fig. 2B). (For this analysis, RT-PCR products from H19 mRNA were cloned into the TA cloning vector, and each clone was sequenced for determination of maternal/paternal allele expression.) Methylation at the H19DMR-Allele-specific methylation at the H19DMR, including 13 CpGs and one of the four CTCF sites (the fourth CTCF site at the 3′-end in Refs. 19 and 20), was then analyzed (Fig. 3A). This region includes a single polymorphic base at nt 3534 (GenBank accession number AF049091), C for C3H/HeJ and A for MSM, allowing distinction of the parental alleles. This region was found to be generally heavily methylated on the paternal allele and hypomethylated on the maternal allele in the normal liver in two mice, as previously described (23), but one sample showed less methylation on the paternal allele (Fig. 3, A-1). Nonmethylated CpGs on the paternal allele were more frequent on the 5′ side, including the CTCF binding site. When comparing cells with and without the Igf2 LOI (Fig. 3, B and C), aberrant methylation in the region including the CTCF site was found on the maternal allele at low frequency. This change was detected in all four of the Igf2 LOI (+) cell lines (Fig. 3C) but not in the Igf2 LOI (−) cell lines (Fig. 3B) (p < 0.05). Although the same region was less methylated on the paternal allele in the Igf2 LOI (−) cells (Fig. 3B) than in the Igf2 LOI (+) cells (Fig. 3C), this could be due to physiological variation, as observed in normal hepatic tissues, because the Igf2 LOI (−) cell lines were derived from different tumors that were present in different hepatic lobes of a single mouse. Status of CTCF-RT-PCR revealed CTCF to be expressed in all of the cell lines (Fig. 4A), and direct sequencing of the CTCF cDNA detected no mutations (data not shown). Western blotting analysis revealed that the expression levels of CTCF protein did not differ between the cell lines with and without Igf2 LOI (Fig. 4B), indicating that CTCF may not be altered in any of them. Methylation at the Igf2DMR1-Five CpGs in the Igf2DMR1 were examined for allele-specific methylation using the polymorphic base at nt 12826 (GenBank accession number MMU71085), A for C3H/HeJ and G for MSM, as a marker. Although the degrees of methylation were variable in each normal hepatic tissue and cell line, some characteristic methylation patterns were noted (Fig. 5). First, the CpGs at the first and fifth positions were more methylated than the others, although this was not constant either in normal livers or in cell lines. Second, the CpG at the fifth position tended to be differentially methylated on the paternal allele, but such a tendency was not apparent for the other CpGs, for which the results were variable in each sample.
However, when cells with and without the Igf2 LOI were compared, there was no clear-cut difference in the methylation patterns. Methylation at the KvDMR-Twelve CpGs in the KvDMR, including a polymorphic base at nt 2494 (GenBank accession number AF119385), G for C3H/HeJ and C for MSM, were generally hypermethylated on the maternal allele but hypomethylated on the paternal allele in the normal livers (Fig. 6A), as previously described (15). However, some PCR fragments derived from the paternal allele showed hypermethylation, and a few maternal fragments showed hypomethylation, indicating that the KvDMR forms a mosaic in terms of methylation at the individual DNA-strand level in normal hepatic tissues. On the other hand, such a mosaic pattern was less prominent in the HT cell lines, presumably due to extensive selection of the clones. However, there was no significant difference in methylation patterns between the cells with and without Igf2 LOI.

Fig. 3. Allele-specific methylation status in the H19DMR. A, livers of 9-week-old C3H × MSM F1 mice. Methylation-specific sequencing of the bisulfite-treated DNA shows that the paternal allele is heavily methylated, whereas the maternal allele is nonmethylated or hypomethylated. The two CpGs enclosed by a square are one of the four CTCF sites. In one mouse (A-1), the paternal allele is less methylated in the 5′ region than in the other mice (A-2, 3). B, Igf2 monoallelic cell lines. Degrees of methylation on the maternal allele are not changed as compared with the normal livers, but the paternal allele is less methylated, especially at the CTCF site and its flanking region. The nonmethylated CpGs are most frequent at the 5′ site. C, Igf2 biallelic cell lines. Although aberrant methylation at the CTCF site and its flanking region is seen on the maternal allele in all four cell lines at low frequency (*), the differential methylation pattern on the paternal and maternal alleles is maintained in most plasmid clones.

DISCUSSION

Although the examined mouse HT cell lines showed Igf2 LOI at low frequency, all 10 HT tissues were negative in this study. This suggests that the Igf2 LOI (+) cells may be derived from rare cells within the HT tissues or generated de novo during establishment of the lines. Because Igf2 LOI has been reported in cultured mouse and rat fibroblasts (37,38) and in human T lymphocytes stimulated to proliferate by phytohemagglutinin in vitro (39), culture conditions may be responsible. Furthermore, Ungaro et al. (38) reported that the Igf2 LOI that occurred in rat fibroblasts held in the confluent state persisted over cell generations when the cell confluence was released by trypsinization and dilution, suggesting that the Igf2 LOI may be irreversible. On the other hand, we demonstrated that even the established HT cell lines were composed of heterogeneous cell populations with/without the Igf2 LOI, as well as with/without Igf2 expression. Heterogeneity of cell populations in solid tumors has been reported with respect to Igf2 LOI in Wilms' tumors (40) and to E-cadherin gene methylation and expression in breast cancers (41). Previous studies have demonstrated Igf2 LOI to be associated with loss of H19 expression, together with aberrant methylation on the maternal H19 gene, in a subset of Wilms' and other tumors (7,27,28), which lends support to the Igf2/H19 chromatin insulation model (19-22). However, in the present study, all the Igf2 LOI (+) cells retained monoallelic H19 expression from the maternal allele.
Such a pattern of Igf2/H19 expression might be due to mixed populations of cells with and without Igf2 LOI (i.e., if one population had biallelic Igf2 expression without H19 expression and another maintained the normal Igf2/H19 imprinting, the outcome would be biallelic Igf2 expression with maternally monoallelic H19 expression). We therefore cloned the Igf2 LOI (+) cells and isolated four subclones of cells with biallelic Igf2 expression. Allele-specific expression analysis, however, revealed the biallelic Igf2 expression with maternally monoallelic H19 expression to still be evident in all cases.

(Fig. 6 caption, continued: Most CpGs on the paternal allele are hypomethylated, whereas those on the maternal allele are hypermethylated. However, DNA strands with either hyper- or hypomethylation are seen, respectively, in the paternal and maternal alleles in the normal liver, although such aberrantly hypo- or hypermethylated strands are less frequent in the HT cell lines. There is no significant difference in the methylation patterns between the Igf2 LOI (−) and (+) cell lines.)

Although methylation-specific sequencing detected aberrant methylation at the maternal H19DMR at low frequency in the Igf2 LOI (+) cells, the normal differential methylation pattern was retained in individual DNA strands in both Igf2 LOI (+) and (−) cells. This is in contrast to the reported cases of Wilms' and other tumors (7,27,28), cells of individuals affected by Beckwith-Wiedemann syndrome (8,9), and human T lymphocytes stimulated to proliferate in vitro (39), in which IGF2 LOI occurs in association with aberrant methylation at the maternal H19DMR. Thus, Igf2 LOI definitely occurs independently of altered methylation at the H19DMR, and aberrant methylation of the maternal H19DMR may not, to a major extent, contribute to the Igf2 LOI in our cell lines. It was noted that some samples showed less methylation at the paternal H19DMR, especially at one of the four CTCF sites (19,20) and its flanking regions (Fig. 3, A-1 and B-1-4), indicating variation in the degree of methylation at the paternal H19DMR in individual mice. Such demethylation might allow the binding of CTCF to the paternal H19DMR, resulting in insulation of the paternal Igf2. However, the fact that the paternal Igf2 and maternal H19 expression were maintained in such cells indicates that such partial demethylation on the paternal H19DMR does not affect the Igf2/H19 imprinting. The observed low frequency of aberrant methylation of the maternal H19DMR raises the possibility that the Igf2 LOI may be caused by alterations of other components of the Igf2/H19 insulation machinery. We therefore investigated alterations in CTCF, because chromosomal loss at human Ch16q, where CTCF is localized, has been frequently detected in many types of tumor, including HTs (42,43). Mutational changes in the CTCF gene have been found in some human tumors (44), and overexpression of CTCF in tumor cells leads to growth arrest and apoptosis (45). Furthermore, CTCF can act as a silencer of c-myc (44), which is frequently overexpressed in various tumors, including HTs. However, because no CTCF mutations were detected, and also because its protein levels were not different between Igf2 LOI (+) and (−) cells, CTCF function may be intact in these cells, although further experiments are required to confirm this. The Igf2DMR1 is suggested to be able to bind a putative silencer in a tissue-specific manner (12,13,24).
Deletion of the 5-kb region corresponding to the mouse Igf2DMR1 results in activation of the silent maternal Igf2 in mesenchymal tissues, while not affecting the maternal H19 expression (24). Such tissue specificity is speculated to be due to the possibility that, although the access of the endoderm-specific common H19/Igf2 enhancer to the Igf2 promoter is efficiently blocked by the CTCF insulator on the maternal chromosome, the mesoderm-specific common enhancer may additionally need the Igf2 silencer to suppress maternal Igf2 expression. Methylation analysis revealed that the patterns of differential methylation at the Igf2DMR1 were not as apparent as those at the H19DMR and KvDMR in the normal hepatic tissue. In particular, although differential methylation was observed at the fifth of the five CpGs analyzed in some samples, it was not apparent for the other CpGs. The fact that there was no significant difference in the methylation pattern between the Igf2 LOI (+) and (−) cells suggests that this region may not be responsible for the Igf2 LOI in these cell lines. In addition to the H19DMR and Igf2DMRs, the elements so far substantiated with respect to Igf2 LOI are the KvDMR (15) and some newly found enhancers upstream and downstream of H19 (46-48). Aberrant demethylation at the maternal KvDMR, associated with biallelic LIT1 expression, is thought to be a major cause of BWS (8,9,15,16). Because such BWS cases do not show any abnormality at the H19DMR or loss of H19 expression, the KvDMR is thought to be a locus for BWS independent of the H19DMR. LIT1 expression from the paternal KvDMR is indicated to be related to the regulation of various maternally expressed neighboring genes at human 11p15.5 and mouse distal Chr7, with special importance for p57Kip2 (2). However, because such demethylation at the maternal KvDMR has not been detected in Wilms' tumors (49,50), and also because the frequency of embryonic tumors in BWS cases with maternal KvDMR demethylation and LIT1 LOI is much lower than in cases with maternal H19DMR methylation and loss of H19 expression (51,52), the KvDMR may be less important than the H19DMR, or not relevant to the pathogenesis of Wilms' tumors. The present observation of no abnormality at the KvDMR in our cell lines may support the notion that the KvDMR does not have a major role in the maintenance of Igf2 imprinting (16). Further potential candidates are the newly found tissue-specific enhancers located upstream and downstream of H19, which are conserved between mouse and humans (46-48). These novel sequences may not only interact with the H19DMR and Igf2DMR1 but also affect the methylation status of the H19DMR, and deletion of these sequences may lead to Igf2 LOI. Furthermore, in brain, where Igf2 expression is biallelic, the relevant enhancers are located upstream of the H19DMR (53). Therefore, the possibility that alteration(s) of these elements may result in Igf2 LOI in tumors remains to be investigated. In conclusion, the present study demonstrated that Igf2 LOI can occur independently of changes in the H19DMR, Igf2DMR1, and KvDMR in mouse HT cell lines.
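For readers who want to reproduce the kind of allele-specific comparison reported above (aberrant maternal H19DMR methylation detected in all four LOI-positive lines but in none of the four LOI-negative lines, p < 0.05), here is a minimal sketch using Fisher's exact test. The 4-vs-4 contingency counts follow the text; the choice of scipy is our illustrative assumption, not necessarily the Statview procedure used in the paper.

```python
# Minimal sketch of the comparison described above: aberrant maternal
# H19DMR methylation seen in 4/4 Igf2 LOI(+) lines vs 0/4 LOI(-) lines,
# tested with Fisher's exact test (counts from the text; scipy is an
# illustrative choice of tool).
from scipy.stats import fisher_exact

#                 aberrant  not aberrant
table = [[4, 0],   # Igf2 LOI(+) cell lines
         [0, 4]]   # Igf2 LOI(-) cell lines

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"p = {p_value:.4f}")  # p ~ 0.029 < 0.05
```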
6,097
2003-02-21T00:00:00.000
[ "Biology" ]
Neutrino Magnetic Moments Meet Precision $N_{\rm eff}$ Measurements

In the early universe, Dirac neutrino magnetic moments, due to their chirality-flipping nature, could lead to thermal production of right-handed neutrinos, which would make a significant contribution to the effective neutrino number, $N_{\rm eff}$. We present in this paper a dedicated computation of the neutrino chirality-flipping rate in the thermal plasma. With a careful and consistent treatment of soft scattering and the plasmon effect in finite-temperature field theory, we find that neutrino magnetic moments above $2.7\times 10^{-12}\mu_B$ have been excluded by current CMB and BBN measurements of $N_{\rm eff}$, assuming flavor-universal and diagonal magnetic moments for all three generations of neutrinos. This limit is stronger than the latest bounds from the XENONnT and LUX-ZEPLIN experiments, and comparable with those from stellar cooling considerations.

Cosmological observables such as the effective neutrino number, $N_{\rm eff}$, can also be used to constrain NMM. For Dirac neutrinos, NMM could flip the chirality and thermalize right-handed neutrinos, which would contribute to $N_{\rm eff}$ significantly. Under this line of thought, previous studies derived cosmological bounds on NMM, $\mu_\nu < \mathcal{O}(1)\times 10^{-11}\,\mu_B$ [34,35] and $\mu_\nu < 2.9\times 10^{-10}\,\mu_B$ [36]. Alternatively, one may consider Majorana neutrinos, which can only possess transition magnetic moments. In this case, no additional species are produced, but large NMM could modify the SM neutrino decoupling at the MeV epoch. However, the cosmological constraint on this scenario is found to be weak [37].

Our work contains a careful calculation of the chirality-flipping rate, which was not treated consistently when an infrared (IR) divergence is involved in previous studies [34-36]. In Ref. [34], the chirality-flipping rate was obtained from a straightforward computation with a naive cut on the momentum transfer, and the result exhibited a logarithmic dependence on the cut. In Ref. [35], the authors computed the rate in the real-time formalism of thermal quantum field theory (QFT), with a resummed photon propagator in the Hard-Thermal-Loop (HTL) approximation [57,58], but the imaginary part of the neutrino self-energy is time-ordered rather than retarded. Later, in Ref. [36], the chirality-flipping rate was computed with a retarded loop amplitude under the HTL approximation [57,58], and a momentum-cut approach was used to separate the hard- and soft-momentum transfers [60]. These different treatments are one of the reasons that lead to the different upper bounds on $\mu_\nu$ mentioned above.

In our more elaborate computation of the chirality-flipping rate, we adopt the real-time formalism of thermal QFT and take into account the resummed photon propagator with the damping rate not limited to the HTL approximation. We present a master integral for the collision rate that automatically contains the contributions of both photon-mediated scattering processes and the plasmon decay $\gamma^* \to \bar{\nu}_L + \nu_R$. Since the separation of hard- and soft-momentum contributions is known to be nontrivial [36,61], we follow a numerical approach to compute the master integral, which allows us to obtain the collision rate more efficiently. Our calculation is focused on the Dirac neutrino case, but the collision rate computed in this work can be readily applied to Majorana neutrinos.

The paper is organized as follows. In Sec. 2, we start with a straightforward calculation
of the collision rate from the tree-level scattering amplitude, and show that the calculation relies on the infrared cut imposed on the momentum transfer. A more consistent treatment requires loop calculations in thermal QFT, which will be elaborated in Sec. 3. Using the obtained collision rate, we compute the NMM correction to $N_{\rm eff}$ and derive cosmological bounds on NMM in Sec. 4. Finally, we draw our conclusions in Sec. 5.

Figure 1. The t- and s-channel scattering processes for $\nu_R$ production in the early universe. The red blobs denote neutrino magnetic moments and $\psi$ denotes a generic charged fermion.

2 $\nu_L \to \nu_R$ flipping rate from tree-level scattering amplitudes

The effective Lagrangian of a Dirac NMM is formulated as
$$\mathcal{L} \supset \frac{\mu_\nu}{2}\,\bar{\nu}\,\sigma^{\alpha\beta}\,\nu\,F_{\alpha\beta},\qquad (2.1)$$
where $\nu$ denotes the Dirac spinor of a neutrino, $\sigma^{\alpha\beta} = i[\gamma^\alpha,\gamma^\beta]/2$, and $F_{\alpha\beta} = \partial_\alpha A_\beta - \partial_\beta A_\alpha$ is the electromagnetic field tensor. In the chiral basis, $\nu = (\nu_L, \nu_R)^T$, we can write it as
$$\mathcal{L} \supset \frac{\mu_\nu}{2}\left(\overline{\nu_L}\,\sigma^{\alpha\beta}\,\nu_R + \overline{\nu_R}\,\sigma^{\alpha\beta}\,\nu_L\right)F_{\alpha\beta},\qquad (2.2)$$
which implies that the NMM operator flips the chirality of the neutrino. Neutrinos could also possess electric dipole moments, $\mathcal{L} \supset \frac{\epsilon_\nu}{2}\,\bar{\nu}\,\sigma^{\alpha\beta}\,i\gamma_5\,\nu\,F_{\alpha\beta}$, similar to Eq. (2.1) except for an additional $i\gamma_5$. Our calculations for NMM can be applied to electric dipole moments by simply replacing $\mu_\nu^2 \to \epsilon_\nu^2$ in the $\nu_L \to \nu_R$ rates, because the squared amplitudes of all processes considered in this work are not affected by the additional $i\gamma_5$, which can be seen manifestly in terms of Weyl spinors.

In the presence of such a chirality-flipping interaction, $\nu_R$ can be produced in the thermal bath of the early universe via the tree-level scattering processes (see Fig. 1) $\psi + \bar{\psi} \to \bar{\nu}_L + \nu_R$ and $\nu_L + \psi \to \nu_R + \psi$, where $\psi$ denotes a generic charged fermion. In this paper, we assume that NMM are flavor universal and flavor diagonal for simplicity, which implies that each NMM is responsible for the thermal production of a single species of $\nu_R$.

Let us first calculate the chirality-flipping rate for the tree-level scattering processes in the zero-temperature limit and then show the necessity of considering finite-temperature corrections. For the s-channel process $\psi + \bar{\psi} \to \bar{\nu}_L + \nu_R$, the squared amplitude can be expressed in terms of the fine-structure constant $\alpha \equiv e^2/(4\pi)$ and the Mandelstam variables $s$, $t$, $u$, with all fermion masses neglected. The resulting cross section is found to be constant in energy. The cross section is to be used in the Boltzmann equation
$$\frac{\mathrm{d}n_{\nu_R}}{\mathrm{d}t} + 3H n_{\nu_R} = C^{(\rm gain)}_{\nu_R} - C^{(\rm loss)}_{\nu_R},\qquad (2.5)$$
where $n_{\nu_R}$ denotes the number density of $\nu_R$, $H$ is the Hubble parameter, and $C^{(\rm gain)}_{\nu_R}$ and $C^{(\rm loss)}_{\nu_R}$ are collision terms accounting for the gain and loss of $\nu_R$ due to reactions. Since our discussions will mainly concern the gain term, we denote it simply by $C_{\nu_R}$ for brevity. The s-channel contribution to $C_{\nu_R}$, denoted by $C_{\nu_R,s}$, is computed following Ref. [62]; in the resulting expression, Eq. (2.6), $T$ is the temperature of the SM thermal bath and $K_1$ is the modified Bessel function of order one. In Eq. (2.6) we have taken the same approximations adopted in Ref. [62], such as neglecting the Pauli-blocking effects and using Boltzmann distributions.

For the purpose of studying when $\nu_R$ can be in thermal equilibrium, we define the thermally averaged collision rate
$$\langle\sigma v\rangle n \equiv \frac{C_{\nu_R}}{n^{\rm eq}_{\nu_R}},\qquad (2.7)$$
where $n^{\rm eq}_{\nu_R}$ is the equilibrium value of $n_{\nu_R}$; Eq. (2.8) gives this rate for the s-channel collision term. With $\langle\sigma v\rangle n$ defined, the condition for $\nu_R$ to be in thermal equilibrium is formulated as
$$\langle\sigma v\rangle n \gtrsim H.\qquad (2.9)$$
The Hubble parameter is determined by $H = 1.66\sqrt{g_\star}\,T^2/M_{\rm Pl}$, where $M_{\rm Pl} \approx 1.22\times 10^{19}$ GeV is the Planck mass and $g_\star$ is the number of effective degrees of freedom. Neglecting the temperature dependence of $g_\star$, we see that $H$ is proportional to $T^2$ while $\langle\sigma v\rangle n$ is proportional to $T^3$. Therefore, at a sufficiently high temperature, Eq.
(2.9) is always satisfied. By solving $H = C_{\nu_R}/n^{\rm eq}_{\nu_R}$ with respect to $T$, one can obtain the temperature of $\nu_R$ decoupling, $T_{\rm dec}$. For $T > T_{\rm dec}$, $\nu_R$ is in thermal equilibrium with the SM plasma. For $T < T_{\rm dec}$, $\nu_R$ is decoupled and its temperature can be computed using entropy conservation.

Next, let us include the t-channel contribution to the collision rate. The squared amplitude of $\nu_L + \psi \to \nu_R + \psi$ is known to have an IR divergence in the soft-scattering limit, $t \to 0$. In previous studies [34,63], an IR cut was imposed manually on the momentum transfer. This is qualitatively correct, since the photon at finite temperatures has modified dispersion relations and acquires an effective thermal mass $m_\gamma \sim eT$ in the relativistic QED plasma [57,58]. Keeping the photon thermal mass as an IR regulator, one obtains the cross section of Eq. (2.11). Using $m_\gamma = eT/\sqrt{6}$, to be derived in Sec. 3, we obtain the thermally averaged collision rate, Eq. (2.12), where both $\psi + \nu_L \to \nu_R + \psi$ and $\bar{\psi} + \nu_L \to \bar{\psi} + \nu_R$ have been taken into account. By comparing Eq. (2.12) to Eq. (2.8), one can see that $\langle\sigma v\rangle n_t/\langle\sigma v\rangle n_s \approx 55$, which implies that the t-channel scattering dominates the $\nu_R$ production, at least according to the above calculation with the simple IR regulator. A precise computation of the $\nu_R$ production rate that consistently removes the IR divergence involves thermal QFT, as we will present in the next section.

3 $\nu_L \to \nu_R$ flipping rate at finite temperatures

Computation method

The IR divergence in the t channel can be canceled by taking the finite-temperature effects into account. Previously, a momentum-cut approach was introduced to separate the hard- and soft-momentum contributions [60]. The former is calculated from the tree-level scattering amplitude with a vacuum photon propagator, while the latter takes into account the resummed photon propagator at finite temperatures. Although it has been shown that the dependence on the momentum cut cancels after combining the hard- and soft-momentum contributions, this approach is based on particular sum rules of the thermal propagators [36,61]. Sometimes finding these rules is a nontrivial task. In particular, the analytic extraction of the momentum-cut dependence in the soft regime is not as simple as in the hard domain. If one is only concerned with the total rate, a full momentum integration that automatically combines the hard and soft regimes can be more efficient, as applied in Ref. [35]. Below we present the calculations in detail. Readers who are not interested in the finite-temperature calculations are referred to Tab. 1 for the final results.

Collision rate in the Boltzmann equation

The evolution of the phase-space distribution function of $\nu_R$ is governed by the Boltzmann equation
$$\frac{\partial f_{\nu_R}}{\partial t} - Hp\,\frac{\partial f_{\nu_R}}{\partial p} = \left(1 - f_{\nu_R}\right)\Gamma_{\nu_R,\rm gain} - f_{\nu_R}\,\Gamma_{\nu_R,\rm loss},$$
where $\Gamma_{\nu_R,\rm gain/loss}$ denotes the gain/loss rate of $\nu_R$, respectively.

Figure 2. The self-energy diagram of $\nu_R$ in the presence of neutrino magnetic moments (red blobs). The photon propagator is resummed to include the charged-fermion loop. In thermal QFT, the evaluation of this diagram can be used to obtain the total $\nu_R$ production rate, which automatically includes the contributions of plasmon decay and scattering processes. For instance, the dashed cut leads to a tree-level diagram corresponding to the on-shell scattering shown in Fig. 1.

Their sum, $\Gamma_{\nu_R,\rm tot} \equiv \Gamma_{\nu_R,\rm gain} + \Gamma_{\nu_R,\rm loss}$, can be physically interpreted as the rate at which $f_{\nu_R}$ evolves towards equilibrium [64].
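As a quick numerical illustration of the decoupling estimate in Sec. 2, the sketch below solves $\langle\sigma v\rangle n = H$ for $T_{\rm dec}$ using the scaling $\langle\sigma v\rangle n = c\,\alpha\mu_\nu^2 T^3$. The prefactor $c = 6.47$ is taken from Table 1 below, while the fixed $g_\star = 106.75$ and the simple form $H = 1.66\sqrt{g_\star}\,T^2/M_{\rm Pl}$ are simplifying assumptions for illustration, not the paper's full Sec. 4 treatment.

```python
# Minimal sketch: estimate the nu_R decoupling temperature by equating
# the chirality-flipping rate <sigma v>n = c * alpha * mu_nu^2 * T^3
# with H = 1.66 sqrt(g*) T^2 / M_Pl. The fixed g* and prefactor choice
# are simplifying assumptions for illustration.
import math

ALPHA = 1 / 137.0            # fine-structure constant
M_PL = 1.22e19               # Planck mass [GeV]
G_STAR = 106.75              # SM relativistic dof at high T (assumed fixed)
MU_B_GEV = 297.0             # 1 Bohr magneton ~ e/(2 m_e) ~ 297 GeV^-1

def t_dec(mu_nu_in_muB, c=6.47):
    """Decoupling temperature [GeV] for a magnetic moment given in mu_B."""
    mu = mu_nu_in_muB * MU_B_GEV          # convert to GeV^-1
    # c * alpha * mu^2 * T^3 = 1.66 sqrt(g*) T^2 / M_Pl  =>  solve for T
    return 1.66 * math.sqrt(G_STAR) / (c * ALPHA * mu**2 * M_PL)

print(f"T_dec ~ {t_dec(2.7e-12):.1f} GeV for mu_nu = 2.7e-12 mu_B")
```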
At finite temperatures, it is related to the imaginary part of the retarded $\nu_R$ self-energy through the trace formula of Eq. (3.3) [64] (see also Appendix A for further details), where $\Sigma_R(p^\mu)$ denotes the retarded self-energy of $\nu_R$ and $p^\mu = (E_p, \mathbf{p})$ is the neutrino four-momentum. In the real-time formalism of thermal QFT, the main task is to calculate the imaginary part of $\Sigma_R$. The diagram of $\Sigma_R$, as shown in Fig. 2, consists of a $\nu_L$ propagator and a resummed $\gamma$ propagator which includes the contribution of charged-fermion loops. The tree-level scattering amplitudes in Fig. 1 correspond to a half of the loop diagram after cutting it symmetrically along the blue dashed line in Fig. 2. According to the optical theorem, the squared amplitude of the tree-level diagram can be computed from the imaginary part of the loop diagram. Since the resummed photon propagator actually includes an infinite number of one-particle-irreducible (1PI) loops, one can also add another 1PI loop to the photon propagator in Fig. 2 and then cut it symmetrically. This corresponds to the contribution of plasmon decay. The collision term $C_{\nu_R}$ for $\nu_R$ production previously introduced in Eq. (2.5) is related to $\Gamma_{\nu_R,\rm gain}$ via a phase-space integral, where $f^{\rm eq}_{\nu_R}$ is the distribution function in thermal equilibrium, and we have used the unitarity condition $\Gamma_{\nu_R,\rm gain} = f^{\rm eq}_{\nu_R}\,\Gamma_{\nu_R,\rm tot}$ [64].

Figure 3. Two contributions, $\Sigma_{-+}(p)$ and $\Sigma_{+-}(p)$, to the imaginary part of the retarded right-handed neutrino self-energy. The black blob denotes the charged-fermion loop in the resummed photon propagator, as shown in Fig. 2.

Neutrino self-energy from the NMM interaction

Let us now compute the retarded self-energy of the right-handed neutrino, $\Sigma_R(p)$. The imaginary part of the retarded self-energy can be evaluated from $\Sigma_{-+}(p)$ and $\Sigma_{+-}(p)$, the two amplitudes shown in Fig. 3. Their explicit forms are given in Eqs. (3.5) and (3.6), where $k \equiv p + q$ and $q$ denote the momenta of $\nu_L$ and the photon, respectively. $S_{\pm\mp}$ denote the free thermal propagators of the neutrino [57], Eqs. (3.7) and (3.8), where $f_{\nu_L}$ is the thermal distribution function of $\nu_L$. The last parts of Eqs. (3.5) and (3.6), $G_{\pm\mp,\mu\nu}$, denote the resummed photon propagators, which will be elucidated in Sec. 3.4.

Polarization tensors

For the retarded photon self-energy amplitude, $\Pi_{R,\mu\nu}$, the most general tensor structure obeying the Ward identity ($q^\mu \Pi_{R,\mu\nu} = 0$) can be decomposed into longitudinal and transverse parts,
$$\Pi_{R,\mu\nu} = \Pi^L_R\, P^L_{\mu\nu} + \Pi^T_R\, P^T_{\mu\nu},$$
where the longitudinal and transverse polarization tensors $P^{L,T}_{\mu\nu}$ are given in Eq. (3.10) [65], with $u^\mu$ the 4-velocity of the plasma. In the rest frame, $u^\mu = (1, \mathbf{0})$, and the polarization tensors reduce to simple forms. The retarded propagator for the free photon and the free $2\times 2$ thermal propagator matrix elements $G_{\pm\pm,\mu\nu}(q)$ take their standard forms, with $G_{--,\mu\nu}(q) = -G^*_{++,\mu\nu}(q)$. The Dyson-Schwinger equation can then be written in terms of $\Pi^{L,T}_R$, where we have used the orthogonality of the projectors $P^A_{\mu\nu}$ for $A = L, T$. The modifications to the photon dispersion relation arising from the longitudinal and transverse parts are thus encoded in the two thermal scalar functions $\Pi^{L,T}_R(q)$. Given the free and resummed retarded propagators, we can compute any thermal component of the resummed $\tilde{G}_{AB,\mu\nu}$ for $A, B = \pm$ via the diagonalization approach [66], where the diagonalization matrices $U, V$ are given in Ref. [66] in terms of $f_{B/F}$, the distribution functions for bosons and fermions, respectively. Note that the physical result is independent of the unspecified scalar functions $b_q, c_q$. The diagonal matrices $\hat{G}_{\mu\nu}, \hat{\Pi}_{\mu\nu}$ are constructed from the retarded/advanced propagators $G_{R/A}$ and loop amplitudes, with $G_{A,\mu\nu} = G^*_{R,\mu\nu}$ and $\Pi_{A,\mu\nu} = \Pi^*_{R,\mu\nu}$.
We can then obtain G̃_{+−,µν} and G̃_{−+,µν} in the retarded self-energy amplitude of ν_R in terms of the spectral densities ρ_{L,T}(q) defined in Eq. (3.25). In the free limit, Π^{L,T}_R = 0, G̃_{+−,µν} and G̃_{−+,µν} reduce to the form given in Eqs. (3.15)-(3.16).

Self-energy at high temperatures

The remaining task towards determining the resummed photon propagator is to compute the transverse and longitudinal functions Π^{L,T}_R(q) from the retarded photon self-energy diagram, as shown in Fig. 4. For a given ψ in the photon self-energy loop, the amplitude involves S_{−+} and S_{+−} as given in Eqs. (3.7) and (3.8), while the time-ordered propagator neglects the mass of ψ. The kinematics lead to constraints on cos θ₁ that take different forms for q² < 0 and for q² > 0, cf. Eq. (3.28). One can also see from Eq. (3.28) that if we neglect the q² term, the conditions |cos θ_{1,2}| ≤ 1 are only supported by spacelike photon propagation, i.e., −|q| < q⁰ < |q|. This observation is equivalent to the results under the HTL approximation, where only the corrections from the q² < 0 regime are considered in the resummed photon propagator.

Evaluating the longitudinal part ImΠ^L_R = q² ImΠ^{00}_R/|q|² in the q² < 0 and q² > 0 regimes, respectively, with the full quantum statistics, we obtain Eqs. (3.29)-(3.30), where x₀ ≡ q⁰/T, x_q ≡ |q|/T, and Li_n is the polylogarithm function of order n. The transverse part is obtained similarly. In these equations we have dropped the contributions of ImΠ^{L,T}_R in the x₀ > x_q region; as will be shown in Sec. 3.5, this region is not supported by the integration over the photon momentum q. It should be pointed out that the above results for ImΠ^{L,T}_R are exact, i.e. obtained without the HTL approximation. In the limit (|q| ± q⁰)/2 → 0 in Eqs. (3.29)-(3.30) and with Fermi-Dirac statistics, ImΠ^{L,T}_R at q² < 0 reduces to the known results in the HTL approximation [57]. Since the imaginary parts can be integrated analytically even with the full quantum statistics, thanks to the presence of two Dirac δ-functions, we will use ImΠ^{L,T}_R without the HTL approximation. Taking further into account the contributions in the q² > 0 region, the scattering contributions with both hard and soft momentum transfers are included simultaneously. On the other hand, an analytic integration with the full quantum statistics cannot be obtained for the real part of Π^{L,T}_R. Given that the real part ReΠ^{L,T}_R(q) ∼ αT² plays the role of IR regulator for soft momentum transfer, q² ≪ T², but only gives sub-leading corrections to the photon spectral density for hard momentum transfer, q² ≫ αT², we can apply the HTL-approximated results [57,67] for the photon spectral density ρ_{L,T}(q) over the full momentum space.

The computation thus far includes only the contribution of the electron. At temperatures above the QCD phase transition, one needs to include quarks in the charged-fermion loop. In general, for ν_R decoupling above a few hundred GeV, all the SM charged fermions and the charged W boson can contribute to the photon dispersion relation in Fig. 2. Their contributions can be taken into account by the replacement in Eq. (3.39), where Q_ψ denotes the electric charge of ψ and the summation runs over all charged particles that are relativistic in the thermal bath. All quarks at a sufficiently high temperature contribute a combined factor of (1/3)² × 3 × 3 + (2/3)² × 3 × 3 = 5 and all charged leptons contribute a factor of 3.
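The charge counting just described, completed by the step-function treatment and the W-boson assumption of the next paragraph, can be sketched as follows. The masses and thresholds below are standard values used for illustration; the paper's exact threshold treatment may differ in detail.

```python
# Illustrative step-function enhancement factor c_psi(T): sum of (color
# factor) * Q_psi^2 over charged particles lighter than T, with the W boson
# treated like the electron, as assumed in the text.
CHARGED = [  # (name, |Q|, color factor, mass in GeV) -- standard values
    ("e", 1.0, 1, 0.000511), ("mu", 1.0, 1, 0.1057), ("tau", 1.0, 1, 1.777),
    ("u", 2/3, 3, 0.0022), ("d", 1/3, 3, 0.0047), ("s", 1/3, 3, 0.095),
    ("c", 2/3, 3, 1.27), ("b", 1/3, 3, 4.18), ("t", 2/3, 3, 173.0),
    ("W", 1.0, 1, 80.4),
]

def c_psi(T_GeV):
    return sum(nc * q**2 for _, q, nc, m in CHARGED if m < T_GeV)

print(round(c_psi(300.0), 6))  # -> 9.0: quarks contribute 5, leptons 3, W ~ 1
```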
For the W boson, we assume its contribution is the same as the electron's, though strictly speaking it would require a more dedicated calculation. When the temperature is not sufficiently high, we treat c_ψ as a step function of T, including only charged particles with masses below T.

Full result of the collision rate

Combining the results of the above calculations, we can write down the imaginary part of the retarded amplitude. Note that due to the presence of δ(k²), which dictates k² = (p+q)² = 2(q⁰p⁰ − p·q) + q² = 0, we obtain the constraints of Eq. (3.41). For q² > 0 (i.e. q⁰ > |q| or q⁰ < −|q|), the second inequality in Eq. (3.41) would not be satisfied if we took the positive branch q⁰ > |q| > 0; in this case, we find that k⁰ is always negative. Similarly, for q² < 0 we find that k⁰ is always positive. This fixes the value taken by the sign function in each regime. Assembling all the pieces of the previous calculation, the trace in Eq. (3.3) can be evaluated in terms of Heaviside functions implementing these kinematic constraints. Finally, we obtain the master integral for the full collision rate, Eq. (3.46), where c_ψ is the enhancement factor defined in Eq. (3.39). The three-dimensional integral can be integrated numerically, which yields ⟨σv⟩n ≈ 6.47 αμ_ν²T³, where we have used c_ψ = 9 to include the contribution of all charged particles. If only ψ = e is included (i.e. c_ψ = 1), we would have ⟨σv⟩n|_{ψ=e} ≈ 1.53 αμ_ν²T³. If we instead use the HTL-approximated results given in Eq. (3.36), we obtain ⟨σv⟩n ≈ 1.84 αμ_ν²T³ and the corresponding total rate Γ_tot.

Method | Thermally averaged ν_L → ν_R rate ⟨σv⟩n / (αμ_ν²T³)
Zero-T QFT + m_γ cut + e± plasma + Boltzmann statistics | 2.26
Finite-T QFT + e± plasma + HTL approximation | 1.84
Finite-T QFT + e± plasma + full quantum statistics | 1.53
Finite-T QFT + ψψ̄ plasma + full quantum statistics | 6.47

Table 1. The thermally averaged chirality-flipping rates computed with different methods. "Zero-T QFT + e± plasma + Boltzmann statistics" denotes the calculation in Sec. 2, where we use Feynman rules of zero-temperature QFT with a finite photon mass cut m_γ = eT/√6 and Boltzmann statistics in the relativistic electron-positron (e±) plasma. In the finite-T QFT approach of Sec. 3, finite-temperature effects and full quantum statistics (Fermi-Dirac/Bose-Einstein) are taken into account.

The latter should be compared with the rate Γ_tot ≈ 1.81 αμ_ν²T³ obtained in Ref. [35]. Note that the difference between ⟨σv⟩n, which can be written as (1 − f_{ν_R})Γ_tot according to the definition, and Γ_tot is that the former is slightly suppressed by the Pauli-blocking factor (1 − f_{ν_R}). From Eq. (2.12), we can see that the result derived from the tree-level scattering amplitude with a photon thermal mass cut m_γ = eT/√6 is an overestimate. The comparison of our final results obtained in the finite-temperature approach (together with quantum statistics) to the result obtained in Sec. 2 is presented in Tab. 1.

Plasmon decay

At the end of this section, we would like to present an estimate of the plasmon decay (γ* → ν̄_L + ν_R) effect, which would be important when the charged fermions become non-relativistic while their number densities remain high, such as in some stellar environments [29]. The contribution of plasmon decay is already included in the master formula, Eq. (3.46), where it corresponds to q² = Re Π^{L,T}_R(q). For a simple estimate, the decay rate can be calculated using the S-matrix formalism. With the effective Lagrangian of Eq. (2.1), it is straightforward to obtain the decay width as a function of q² = E²_{γ*} − |q|².
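For reference, a magnetic-moment coupling gives a plasmon decay width of the standard form familiar from stellar-plasma studies. This is a hedged reconstruction consistent with the q² dependence quoted above; the paper's exact expression and polarization bookkeeping may differ.

```latex
\Gamma_{\gamma^* \to \bar{\nu}_L \nu_R} \;\simeq\; \frac{\mu_\nu^2}{24\pi}\,
\frac{\left(q^2\right)^2}{E_{\gamma^*}}\,,
\qquad q^2 = E_{\gamma^*}^2 - |\vec{q}\,|^2\,,
```

where q² plays the role of the effective plasmon mass squared (cf. the thermal masses in Eq. (3.50)) and the factor 1/E_{γ*} accounts for time dilation in the plasma frame.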
The thermally averaged rate can be written as In the non-relativistic regime (T m ψ ) and the HTL approximation, we obtain from Eq. (3.26) the photon thermal mass which depends on the photon polarization: where n ψ denotes the number density of ψ. Here we would like to make a comparison to the results in Ref. [68] where the photon dispersion relation for T m e has also been calculated in the stellar plasma. The main difference in the stellar plasma is that, as an electrically neutral medium, its positively charged particles are protons (including protons in nuclei), whose contributions to the photon thermal masses are negligible due to the heavy proton/nucleus mass. In the thermal plasma of the early universe, we have equally high densities of both electrons and positrons. Therefore, the photon thermal masses in Eq. (3.50) are twice as large as the results for the stellar plasma. Taking the approximate dispersion relation q 2 ≈ m 2 γ with m 2 γ ≈ 2e 2 n ψ /m ψ (i.e. only the transverse polarization mode is used) and the nonrelativistic limit of n ψ , we obtain the thermally averaged production rate We can see that the rate from plasmon decay is at O(α 2 ) and is exponentially suppressed when T m ψ . In the relativistic regime, on the other hand, the calculation is similar and we find Overall, the contribution of plasmon decay to ν R production in the early Universe is subdominant because the results in Eqs. (3.52) and (3.53) are proportional to α 2 . NMM bounds from N eff constraints With the ν R production rate obtained (see Tab. 1), we are ready to relate NMM to N eff and use cosmological measurements of N eff to constrain NMM. As we have discussed in Sec. 2, due to σvn ∝ T 3 and H ∝ T 2 , we expect that ν R is in thermal equilibrium at sufficiently high temperatures and decouples from the thermal bath at low temperatures. The decoupling temperature, T dec , is determined by solving H = σvn , i.e., 1.66 In general, as g and σvn /T 3 both vary with the temperature, one has to solve Eq. The full numerical relation between T dec and µ ν is shown in Fig. 5, where the temperature dependence of g (T ) is taken from Ref. [69] and c ψ takes the step function as mentioned below Eq. (3.39). For comparison, the approximate relation in Eq. (4.2) is also shown in Fig. 5. The contribution of ν R to the effective neutrino number excess, ∆N eff , depends on T dec , and the relation can be derived from entropy conservation-see, e.g., [50,51,56]. Given T dec , ∆N eff is determined by where N R is the number of thermalized light ν R . Although N R = 3 is the most natural assumption because for three Dirac neutrinos with NMM all ν R 's are expected to thermalize at a sufficiently high T , it is still possible to have N R = 2 or 1. For instance, it could be that not all neutrinos are Dirac so that there are only one or two light ν R species present. Even if all neutrinos are Dirac, the actual temperature required by H = σvn could be too high so that the universe has never reached such a high temperature (e.g. if the temperature is above the reheating temperature after inflation). Therefore, we keep the possibilities of N R = 2 and 1 in our discussion below. Results Applying the numerical T dec (µ ν ) relation in Fig. 5 to Eq. (4.3), we plot ∆N eff as a function of µ ν for N R = 1, 2, 3 in Fig. 6, together with cosmological bounds on ∆N eff and other known bounds on µ ν . The bounds are explained as follows. Currently, the best measurements of N eff come from combinations of CMB, BAO, and BBN observations. 
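Before quoting the data, here is a minimal numerical sketch of the entropy-conservation relation behind Eq. (4.3), using g_*s = 10.75 at standard neutrino decoupling. Treating g_*s as a constant at the ν_R decoupling value is an illustrative simplification; the paper uses the full g_*(T) from Ref. [69].

```python
# Delta N_eff = N_R * [g_*s(T at nu_L decoupling) / g_*s(T_dec)]^(4/3)
def delta_neff(n_r, g_s_at_dec, g_s_nu_dec=10.75):
    return n_r * (g_s_nu_dec / g_s_at_dec) ** (4.0 / 3.0)

# nu_R decoupling above the electroweak scale (g_*s ~ 106.75):
print(round(delta_neff(3, 106.75), 3))  # ~0.141, the floor of the N_R = 3 curve
print(round(delta_neff(1, 106.75), 3))  # ~0.047
```

The N_R = 3 value reproduces the 0.141 plateau discussed below.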
We adopt results of the CMB+BAO combination from the Planck 2018 Figure 6. The dependence of ∆N eff on µ ν , assuming the number of ν R thermalized via NMM is N R = 1, 2, or 3. The red horizontal line represents the current best limit on ∆N eff from the combination of latest CMB (Planck 2018) and BBN data. For N R = 3, it can be recast to the corresponding bound on µ ν , as indicated by the red vertical line. Future experiments such as SO/SPT-3G and CMB-S4 will be able to probe the N R = 2 and N R = 1 scenarios. Other constraints are explained in the text. publication [70] and the CMB+BBN combination from Ref. [71] Subtracting the standard value, N st. eff = 3.045 [43,72,73] 9 , we obtain the upper limits ∆N eff < 0.285 and ∆N eff < 0.163 at 95% (2σ) C.L., respectively. Future CMB experiments such as the Simons Observatory (SO) and CMB Stage-IV (CMB-S4) can reach ∆N eff < 0.1 [40,77] and ∆N eff < 0.06 [38,78] at 2σ C.L., respectively. They are plotted in Fig. 6 as dashed lines. The South Pole Telescope (SPT-3G) [39] has approximately the same sensitivity reach as SO. So we refer to their common limit as SO/SPT-3G. Low-energy elastic ν + e − scattering data in neutrino and dark matter detectors can be used to constrain NMM. In Tab. 2, we list such bounds from the very recent XENONnT [30] and LUX-ZEPLIN (LZ) [31] experiments as well as that from Borexino [16] which used to be the strongest laboratory constraint. Note that these bounds are all derived from solar neutrinos which change flavors when arriving at the Earth. At low energies, the probabilities of solar neutrinos appearing as ν e , ν µ , and ν τ are around 56%, 22%, and 22%, Upper Limit Ref. respectively [79]. Here we assume flavor-universal and flavor-diagonal NMM, for which the reported bounds in these experiments can be directly compared to our results. If one focuses on a NMM of a specific flavor, then the above probabilities should be taken into account and would lead to weaker bounds. We refer to Ref. [80] for a more dedicated treatment of this issue. In addition to laboratory bounds, we also include the astrophysical bounds derived from the red-giant branch, µ ν < 2.2 × 10 −12 µ B [32] and µ ν < 1.5 × 10 −12 µ B [33]. In Fig. 6, we only plot the latter to avoid cluttering. Due to potential astrophysical uncertainties, this bound is presented as a dash-dotted line. As can be read from Fig. 6, if three ν R 's are thermalized via NMM in the early Universe the current BBN+CMB data would set an upper limit on µ ν with BBN+CMB : µ ν < 2.7 × 10 −12 µ B (for N R = 3) . (4.6) It should be noted that for N R = 3, the minimal value of ∆N eff is 0.14, as can be seen from the N R = 3 curve at lower µ ν values in Fig. 6. Future experiments like SO/SPT-3G and CMB-S4 are sensitive to ∆N eff below this level. If ∆N eff is found to be smaller than 0.14 in future measurements, the N R = 3 scenario (i.e. three ν R 's being thermalized via NMM) would be ruled out. There are multiple possibilities to go beyond this scenario: (i) the effective NMM vertex opens up at high temperatures, leading to the transition from freeze-out to freeze-in (see e.g. [51,54]); (ii) as T increases above the electroweak scale, g (T ) continues increasing due to new particles beyond the SM; (iii) the number of ν R that can be possibly thermalized via NMM is less than three. In either (i) or (ii), the curves in Fig. 6 will further decrease as µ ν decreases, but this part would be quite model dependent. We will investigate this in future work. 
As for (iii), one may consider N R = 2 or 1. For N R = 2, the SO/SPT-3G sensitivity is able to reach µ ν < 2.0 × 10 −12 µ B while for N R = 1 the future CMB-S4 sensitivity can set µ ν < 3.7 × 10 −12 µ B . We see from Fig. 6 that all these cosmological bounds are stronger than the laboratory ones from XENONnT, LZ, and Borexino, and comparable to astrophysical ones. Note that the strongest astrophysical bound cuts the N R = 3 curve at ∆N eff = 0.143 while the curve becomes flat at 0.141. So there is the possibility for upcoming precision measurements of N eff to make a discovery. If a small ∆N eff between 0.143 and 0.141 is probed, it can be attributed to NMM smaller than the strongest astrophysical bounds. What would be more interesting is that future experiments could measure an even smaller ∆N eff below 0.141. In this case, it could probe the freeze-in regime which is sensitive to how the effective vertex opens up at high energies. We leave this possibility for future exploration. Discussions We would like to make a few comments on the underlying assumptions of our results. Large NMM arising from new physics models typically involve new energy scales (denoted by Λ), as well as new heavy particles that directly couple to ν R . At temperatures around or above Λ, the effective NMM vertex might be invalid in the calculations of collision terms and the new particles could contribute to g significantly. So our results rely on the assumption that the new particle masses and Λ are well above the temperatures relevant to our calculations. More specifically, for the bounds presented in Fig. 6, we are only concerned with ν R decoupling at temperatures around or below the electroweak scale. Therefore, it is reasonable to assume that the new physics scales are well above the decoupling temperature so that our results are independent of the UV theories of NMM. Let us consider NMM generated at the one-loop level, for which the magnitude of µ ν is roughly given by [12] µ ν ∼ m X 16π 2 Λ 2 , (4.7) where 16π 2 is the one-loop suppression factor, m X denotes the chirality-flipping fermion mass, and Λ corresponds to the heaviest particle mass in the loop. Taking the SM prediction µ ν = 3eG F m ν /(8 √ 2π 2 ) as an example, Λ corresponds to the electroweak scale G −1/2 F and m X corresponds to m ν because the chirality has to flip only via the neutrino mass termchirality-flipping via a charged-lepton mass is not feasible because ν R does not participate in the SM gauge interactions. In left-right symmetric models, due to the presence of gauge interactions with ν R , chirality-flipping can be achieved by the charged fermion running in the loop [4]. In this case, we would have m X = m with = e, µ, or τ , leading to a great enhancement of the NMM. Taking µ ν ∼ 10 −12 µ B in Eq. (4.7), we have Λ ∼ 4.6 TeV · m X /GeV, which implies that for m X = m τ the new physics scale would be well above TeV. For new heavy fermions playing the role of chirality flipping, Λ could be even higher. Conclusion In this paper, we investigate cosmological constraints on Dirac NMM from current and future precision measurements of the effective number of neutrinos, N eff . As has been realized in previous studies, a straightforward calculation of the ν R production rate using tree-level scattering amplitudes is IR divergent. Therefore, in this work we compute the rate in a thermal QFT approach which, by taking into account the effects of modified photon dispersion relations in the ν R self-energy, is free from the IR divergence. 
Furthermore, this approach automatically includes the contributions of the s-and t-channel scattering processes and plasmon decay. Using the refined ν R production rate, we obtain accurate relations between µ ν and N eff , in which the precision measurements of the latter can be recast to constrain the former. The main results we have obtained are presented in Fig. 6 and summarized in Tab. 2. For three ν R being thermalized via flavor-universal NMM, the combination of current CMB and BBN measurements of N eff puts a strong bound, µ ν < 2.7 × 10 −12 µ B at 2σ C.L. This is better than the latest laboratory bounds from XENONnT and LZ. Future measurements from SO/SPT-3G and CMB-S4 will even be able to exclude scenarios with three or two ν R being thermalized. Our results are applicable to a broad class of NMM models in which the new physic scale is much higher than the electroweak scale. A Weldon's formula with chiral fermions. In this appendix, we briefly review Weldon's formula [64] which relates the gain/loss rate of a particle in the thermal bath to its self-energy computed in finite-temperature field theories, as we have applied in Eq. (3.2). Although the formula has been elaborated in great detail in the original paper, when applying to chiral fermions, there could be a subtle issue regarding factors of two. Hence we would like to take this appendix to clear potential issues. Let us first consider a non-chiral toy model: where ψ 1,2 are two Dirac fermions and φ is a real scalar. For simplicity, we ignore the masses of ψ 1 and φ, and concentrate on the production and decay of ψ 2 in the thermal bath. In this context, Weldon's formula (see Eq. (2.31) in Ref. [64]) reads where u 2 (u 2 ) denotes the initial (final) state of the ψ 2 particle, Σ denotes its self-energy shown in Fig. 7, E p and p are the particle energy and momentum, and Γ tot ≡ Γ gain + Γ loss with Γ gain and Γ loss defined similar to those in Eq. where the momenta p, k, q have been specified in Fig. 7, S ψ 1 and D φ denote the propagators of ψ 1 and φ at a finite temperature, T . Following Ref. [64], here we use the imaginary-time formalism, in which the energy take discrete values i2πT b where b is an integer for a boson or a half-integer for a fermion. For this reason, the energy integral dk 0 /(2π) has been discretized to the summation of b. After computing the summation, one can get where "· · · " represents other terms proportional to δ( We refer to Eq. (2.22) in Ref. [64] for their explicit forms. Now let us turn to the right-hand side of Eq. (A.2). Recall that the collision term in the Boltzmann equation for f ψ 2 is given by where s 1 denotes the spin of ψ 1 and dΠ x ≡ d 3 p x / (2π) 3 2E x does not include internal degrees of freedom. Instead, we have s 1 to sum over the spin of ψ 1 explicitly. The squared amplitudes read After the spin summation, we have which is identical to the part between δ( where in the second step we have integrated out dΠ φ with a part of the delta function. By comparing Eq. (A.8) with Eq. (A.4), we see that Weldon's formula is explicitly verified, except for the "· · · " part in Eq. (A.4) accounting for contributions of other processes. Note that so far we have not summed the spin of ψ 2 , nor have we included its internal degrees of freedom in the calculation. Hence both sides of Eq. (A.2) should be interpreted as quantities for only one of its degrees of freedom (if there are many). 
For the non-chiral model considered above, since the results for the two possible spin polarizations of a ψ 2 particle are equal, one can average over the spin polarizations: Now let us revisit the calculation for the following chiral toy model: i.e., we assume that only the left-handed part of ψ 2 participates in the yukawa interaction. For this model, one can still explicitly verify Weldon's formula with a few modifications as follows. First, the two vertices in Fig. 7 are accompanied with the chiral projectors P L and P R . So we have S ψ 1 (k) → P R S ψ 1 (k)P L in Eq. (A.3) and u 2 / ku 2 → u 2 P R / kP L u 2 in Eq. (A.4). The squared amplitude is also modified: |M| 2 → y 2 |u 1 P L u 2 | 2 = y 2 |u 2 P R u 1 | 2 . (A.11) Here the projectors automatically guarantee that only left-handed ψ 2 and right-handed ψ 1 are involved in the amplitude. So one can still sum over the spin because the contribution of the wrong spin polarization automatically vanish due to P L/R . With the above details being noted, we can see that Eq. (A.2) for chiral fermions still holds. However, due to the absence of right-handed ψ 2 , Eq. (A.9) should be modified as where in principle the projectors P L/R should be included in Σ but here we prefer to write them out explicitly. For our application to neutrinos in Sec. 3, the projectors have been included implicitly in Eq. (3.2).
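As a closing numerical illustration of the one-loop scale estimate in Eq. (4.7) from the Discussions section: inverting μ_ν ∼ m_X/(16π²Λ²) and converting μ_ν = 10⁻¹² μ_B with μ_B = e/(2m_e) reproduces the quoted "Λ ∼ 4.6 TeV" normalization at m_X = 1 GeV, with Λ growing as the square root of m_X. The electric-charge bookkeeping here is our assumption.

```python
# Lambda = sqrt(m_X / (16 pi^2 mu_nu)), with mu_nu converted from Bohr magnetons.
import math

ALPHA = 1 / 137.036
MU_B = math.sqrt(4 * math.pi * ALPHA) / (2 * 0.511e-3)  # GeV^-1

def lam_new_physics(m_x_gev, mu_nu_in_mu_b=1e-12):
    mu_nu = mu_nu_in_mu_b * MU_B
    return math.sqrt(m_x_gev / (16 * math.pi**2 * mu_nu))  # GeV

print(f"{lam_new_physics(1.0) / 1e3:.1f} TeV")    # ~4.6 TeV for m_X = 1 GeV
print(f"{lam_new_physics(1.777) / 1e3:.1f} TeV")  # m_X = m_tau: ~6.2 TeV, well above a TeV
```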
9,423.4
2022-11-09T00:00:00.000
[ "Physics" ]
Processing of Boron Nitride Nanotubes Reinforced Aluminum Matrix Composite. Aluminum and its alloys are among the most favored metal-based materials for engineering applications that require lightweight materials. Composites, meanwhile, have recently become increasingly preferred for many kinds of applications. Boron nitride nanotubes (BNNTs) are an excellent reinforcement material for aluminum and its alloys. To enhance the mechanical properties of aluminum, BNNTs can be added by different processes. BNNT reinforced aluminum matrix composites also demonstrate extraordinary radiation shielding properties. This study concerns the production of BNNT reinforced aluminum matrix composites by the casting method. Since the poor wetting of BNNT in liquid aluminum is an obstacle for casting, various casting techniques were applied to distribute it homogeneously in the liquid aluminum. Different methods of incorporating BNNT into the liquid metal as reinforcement were investigated. It was found that UTS was increased by about 16% and elongation at fracture was increased by 175% when BNNT was preheated at 800 °C for 30 minutes.

Introduction

Aluminum is one of the important engineering metals thanks to its favorable properties. Since aluminum has a lower density than most other metals suitable for industry, it is mostly used in lightweight applications. The term "composite" implies at least one reinforcement material embedded in a continuous phase called the matrix. An aluminum matrix composite (AMC) is a material based on an aluminum matrix that exhibits enhanced mechanical properties. Metal matrix composites such as AMCs contain at least one metal and one reinforcement material such as an oxide, fiber, particle, or compound. Since aluminum is used as the main structure in AMC materials, they exhibit high specific strength, high stiffness and good wear resistance. These are the important features of AMC structures that make them suitable for advanced aviation and space applications. Recent research shows that AMCs reinforced with Al2O3 and SiC are the most common applications. All of these ceramics used as reinforcement improve different features of the material. In addition to these reinforcements, B4C is also used to obtain other enhanced properties in AMCs [1]. Ceramics such as SiC and Al2O3 have higher hardness than aluminum; however, because of their higher density, these ceramic materials weigh more than aluminum. This difference brings important restrictions for industrial applications. Since AMC materials have ceramic reinforcement in an aluminum structure, they exhibit high strength values while retaining lightweight properties. Thus, AMC materials combine the advantageous properties of both ceramics and aluminum [1].

There are several properties that are improved by reinforcing aluminum with materials such as ceramics, compounds or oxides. AMC structures exhibit high strength, high hardness and high stiffness. These composites also perform well even at elevated temperatures. In addition to these well-known improvements, reinforcement of AMCs with ceramic particles provides an improved coefficient of thermal expansion and better wear resistance [2]. AMC structures also exhibit good electrical and thermal conductivity, which provides enhanced properties at elevated temperatures [3].
Production techniques have a great effect on the final properties of composites. For metal matrix composites (MMCs), there are several techniques providing high production quality. However, large-scale manufacturing of aluminum matrix composites is possible with three main families of techniques, among them deposition (liquid-solid) processes. The distribution of reinforcements in composites should be homogeneous to obtain the desired properties. Therefore, obtaining the most suitable particle size and shape is crucial for applying proper manufacturing techniques. Inhomogeneity of reinforcements in composites is one of the most challenging topics for metal matrix composites [4].

Boron nitride nanotubes (BNNTs) are materials that promise to be an alternative to carbon nanotubes (CNTs) in many areas, especially in space technology. The main reason BNNT can serve as an alternative to CNT is that the BNNT structure, consisting of a co-axial hexagonal boron nitride (h-BN) network [5], is analogous to the CNT structure. Other similar properties of these two materials are due to the peculiar properties, structure and polymorphism of BNNT [6]. In spite of the limited studies on BNNTs compared with CNTs, owing to the challenging production of large amounts of BNNTs, interest in theoretical studies of BNNT has been increasing in recent years.

The excellent mechanical properties of BNNTs, namely a high stiffness with a Young's modulus of 1.2 TPa and a density of 1.3-1.4 g/cm³ [7], make them proper lightweight reinforcements for metallic, polymeric and ceramic composites [5]. According to Bettinger et al. [8], even though the modulus of elasticity of single-layered BNNT is slightly lower than that of CNT, BNNT stands out due to its yield resistance and thermal stability [8]. Another property that makes BNNT surpass CNT is its high oxidation resistance. CNT can be easily oxidized in air at 400 °C [9] and burns completely under an effective oxygen source at 700 °C due to the low oxidation resistance caused by its large surface area and the defects formed at the tips of the tubes [10]. Chen et al. [11] reported that BNNTs, on the other hand, remain stable up to 700 °C in any structure, while they resist up to 900 °C in a cylindrical structure. In another experimental study, it was observed that BNNT remains stable up to 850 °C without any deterioration in its morphology. Besides all this, the most extraordinary property of BNNT, which also distinguishes it from CNT, comes from its ionic bonding nature. The dipole moment between H2 and the nanotube structure, induced by the ionic B-N bonds, provides a unique hydrogen storage capacity [5]. This property is quite important in radiation shielding applications, since the charge-to-mass ratio of hydrogen, the highest of any element, offers the best shielding capability [7]. Neutron radiation is the main type of radiation that is dangerous for spacecraft. The elements B and N are extremely suitable for shielding against neutron radiation due to their larger neutron absorption cross-sections compared with C and other elements [7]. Moreover, due to their ionic bonding nature, BNNTs are wide band gap materials with a bandgap of 5.5 eV [12][13], which makes them electrical insulators, and the electrical properties of BNNT do not depend on tube diameter, chirality or morphology as they do for CNTs [12].
In the production of aluminum matrix CNT composites, a reaction occurs between CNT and aluminum at the interfaces, and the resulting Al4C3 (aluminum carbide) structure leads to very low mechanical properties [14]. As a result of research aimed at solving this serious problem, the use of BNNTs, which have been proposed as reinforcement in composite structures owing to their properties, has come to the fore.

One of the most important factors determining the strength of the bond formed at the interface is the orientation relationship between Al and the reaction product. AlN provides a stable and strong interface bond by forming a low-energy, coherent surface with Al, allowing BNNT to bond to the matrix. The opposite holds for Al and AlB2: since the matching between their planes is much weaker, only a partially coherent structure is obtained. For this reason, AlB2 provides a much lower bonding strength than AlN [15]. In BNNT reinforced Al composites, thanks to the diffusion process mentioned above, the formation of the more coherent AlN produces a stronger interface bond. The interface between Al and BNNT is shown schematically in Figure 1.

Fig. 1. Schematic representation of the Al-BNNT interface [16]

In the light of all the above information, and as can be seen in the figure, the coherent AlN structure that forms predominantly at the interface provides a strong interface bond. On the basis of the published research and analyses, it was determined that BNNT can be used as reinforcement in an Al metal matrix.

Cong et al. [17] investigated the effect of BNNT volume fraction and size on mechanical behavior. In that study, the response of the composite to tensile stress was simulated using Molecular Dynamics. Samples with BNNT added in different sizes and volume fractions were subjected to tensile loading at standard ambient conditions and a strain rate of 0.0005 /ps, and a different stress-strain curve was obtained for each sample. The interesting point is that in all of these samples, two distinct peaks were observed at certain strain values. It was observed that the Al matrix breaks under tensile loading at a certain strain value, but the BNNT does not rupture and provides a secondary strength to the composite; the second stress drop occurs when the BNNT reinforcement also breaks.

Many studies have been conducted in which the production of BNNT reinforced Al composites by spark plasma sintering was tested and the resulting mechanical properties and strengthening mechanisms were examined. One of the most enlightening studies on the subject was done by Lahiri [16]. Within the scope of that study, powders containing 2% and 5% BNNT by volume fraction, as well as pure Al powder, were prepared. According to the reported values, a 54% increase in yield strength was observed with the addition of 5% BNNT compared to pure Al. In addition, the BNNT bridging structure and the "sword-in-sheath" structure observed on the fracture surface of the sample with 5% BNNT addition clearly demonstrate the strengthening mechanism in BNNT reinforced Al matrix composites [16]. While an increase of 21% is observed in the hardness of pure Al due to strain hardening, this increase is 59% in the composite structure [16].
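To put the stiffness benefit of such volume fractions in perspective, a simple rule-of-mixtures estimate is sketched below. This is an idealized upper bound assuming perfectly aligned, well-bonded tubes, not a prediction for the cast composites studied here; the BNNT modulus is taken from the text, while the aluminum modulus is a typical handbook value assumed for illustration.

```python
# Idealized rule-of-mixtures for the axial Young's modulus of a BNNT/Al
# composite: E_c = Vf * E_BNNT + (1 - Vf) * E_Al.
E_BNNT = 1200.0  # GPa, from the text (1.2 TPa)
E_AL = 70.0      # GPa, typical handbook value for aluminum (assumption)

def rule_of_mixtures(vf):
    return vf * E_BNNT + (1.0 - vf) * E_AL

for vf in (0.02, 0.05):
    print(f"Vf = {vf:.0%}: E_c ~ {rule_of_mixtures(vf):.0f} GPa")
# Vf = 2%: ~93 GPa; Vf = 5%: ~126 GPa
```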
In this study, the production of boron nitride nanotube reinforced aluminum matrix composites by the casting method was attempted. Different methods were investigated, and the mechanical properties were evaluated.

Experimental work

Aluminum A356 alloy was used as the matrix material in the composite production. The powder used as reinforcement contained 52 wt% boron nitride with 99.8% purity and 48 wt% CNT with 97% purity. The reinforcement ratio was targeted at 0.1 wt% in all casting trials. In order to characterize the effect of BNNT reinforcement on the A356 alloy, a series of castings were made into a sand mould that produced 10 cylindrical bars. The bars were 8.5 mm in diameter and 160 mm in height. The melt was prepared in an ICS induction furnace A50 SiC crucible at 750 °C. Degassing was carried out with a Pyrotek ceramic lance with nitrogen gas purging for 12 minutes.

After the castings were complete, all samples were subjected to X-ray porosity analysis using a YXLON MU-2000. Then, T6 heat treatment was applied to all samples: 6 hours of solution heat treatment at 540 °C, quenching at 80 °C and 4 hours of artificial aging at 160 °C. The tensile test samples were machined to the ASTM E8 standard and tests were carried out on a ZWICK 250 kN tensile testing machine. Reduced pressure test (RPT) samples were collected to ensure that the melt cleanliness was high enough. Bifilm index [18][19][20] measurements were made for each casting. For the casting trials, different ways of introducing BNNT into the liquid aluminum were investigated. A summary of the test setup is given in Table 1. In the first casting set (Method 1), base A356 was cast with no reinforcement addition as a reference. In Method 2, powders were placed in the pouring basin with the aim of being carried along by the flowing liquid metal. In Method 3, powders were placed in the runner. In Method 4, powders were added to the liquid metal during pouring towards the down sprue with a vibration plate. In Method 5, powders were added to the liquid metal in the crucible. In Method 6, the powders were added through the degassing shaft together with a carrier gas (nitrogen) during degassing. In Methods 5 and 6, the BNNT was preheated to 800 °C for 30 minutes before being added to the melt.

Method | Condition
(1) | Base alloy - A356, no addition of BNNT
(2) | BNNT was placed at the pouring basin
(3) | BNNT was placed at the runner
(4) | BNNT was added to the sprue by vibrating plate during filling
(5) | BNNT was preheated at 800 °C for 30 minutes and added to the melt in the crucible prior to casting
(6) | BNNT was preheated at 800 °C for 30 minutes and added to the melt through the degassing lance during the degassing operation

Results and Discussion

The melt quality was measured using the reduced pressure test. Cross-sections of RPT samples are given in Fig 3. For Methods 5 and 6 (Table 1), the powders were heated to 800 °C and held for 30 minutes with the aim of removing the CNT and also increasing the wettability of the powders. XRD analysis was carried out and the result is given in Fig 5. The change from wide to narrow, sharp peaks in the XRD diagram indicated that CNT was removed from the BNNT. It is important to note that the color of the powders changed from grey to white after 30 minutes of holding at 800 °C.

In the tensile test, it was found that the average yield strength of the base A356 alloy was 231.6 MPa. The yield strengths of the BNNT reinforced castings were found to be very close to that of the base alloy: there was only a 3.8% increase in yield strength, corresponding to about 9 MPa overall.
Among the methods investigated in this work, the standard deviation of the yield strength values was quite low for Methods 4, 5 and 6. It can be said that in these methods, homogeneous mixing and distribution of the powder was achieved. A similar observation was made for the UTS values.

In the computational research by Rohmann et al. [21], it was reported that the tensile strength of a BNNT reinforced aluminum matrix composite produced by powder metallurgy can only reach 300 MPa with the addition of 3 to 5 wt% powder [17]. The ultimate tensile test results (Fig 7) vary between 250-350 MPa. Method 5 gives the highest UTS value of 334.9 MPa, while Method 3 has the highest scatter. The increase in UTS compared to the base alloy was around 16% in Method 5, followed by a 14.3% increase in Method 4 with a value of 329.8 MPa. It is concluded that a higher UTS can be achieved by the casting method than by powder metallurgy, with an even smaller amount of reinforcement addition.

For the elongation at fracture values (Fig 8), there is a significant difference between the methods. While Methods 1-3 are in the range of 2% elongation, Methods 4-6 show an average of approximately 5% elongation at fracture. The casting route providing the highest increase in elongation, 175%, is Method 5, where the pre-heated powder was added to the crucible and mixed with the molten alloy prior to casting.

Conclusions

The main objective of this study was to observe the effects of BNNT reinforcement in an aluminum matrix and to produce BNNT reinforced aluminum matrix composites by the casting method. Different methods were applied. Among the trials, the highest mechanical properties were achieved when the BNNT was heated to 800 °C and held for 30 minutes. UTS was increased by approximately 16%, from 288.5 MPa in the base alloy to 334.9 MPa in the composite. Elongation at fracture was increased by 175% compared to the base alloy, from 2% to 5.50% in the composite. Similarly, toughness was increased threefold, from 500 J/m³ to 1500 J/m³. Weibull analysis showed that the scatter of the results was very low; thus, the reproducibility and reliability of the castings were found to be high, with survivability plots showing the same trend as the base alloy. Addition of BNNT through the vibrating table into the sprue, mixing the BNNT in the crucible, and purging BNNT through the degassing lance gave the highest tensile properties.

Fig. 3. Cross section of RPT samples.

Before degassing, the bifilm index was measured as 188.3 mm (Fig 3a). After degassing, the bifilm index was measured to be 12.5, 47.1 and 27.9 mm (Fig 3b, c and d, respectively), which lies in the category of good melt quality [18-20]. Therefore, it was determined that the tensile test results were not affected by the melt cleanliness level. Additionally, the YXLON MU-2000 X-ray machine was used to check the porosity levels in the cast cylinders. As seen in Fig 4, no porosity was observed in the cast bars.
Fig. 5. XRD analysis of a) untreated powders with a composition of 52% BN and 48% CNT, b) heat-exposed powders.

Tensile test results are summarised in Figs 6-9 as bar charts for the different methods studied in this work. The yield strengths appear to be close to each other (Fig 6), with Method 6 having the lowest value of 205.8 MPa and Method 4 giving the highest, 240.6 MPa, in which the BNNT was added with vibration towards the sprue during filling. Compared to the base alloy with no reinforcement, the increase in yield strength is approximately 5% for most of the methods.

Fig. 6. Yield strength change of composites with regard to the method by which they were produced.
Fig. 7. Ultimate tensile strength change of composites with regard to the method by which they were produced.
Fig. 8. Elongation at fracture change of composites with regard to the method by which they were produced.
Fig. 9. Toughness change of composites with regard to the method by which they were produced.
Fig. 10. Survivability plot of yield stress.

Based on Figure 11, Methods 2 and 4 show the least reliable UTS values, giving high scatter. For example, Method 2 has the potential to give 485 MPa, but it also has the potential to give as little as 135 MPa. The upper and lower limit values show how reproducible the parameter is. Thus, Method 5, lying on the far right with the steepest line, reveals the highest UTS with the most reliable values, ranging only between 290-330 MPa, i.e. within about ±20 MPa.
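A quick arithmetic check of the reported improvements, together with a toy two-parameter Weibull survivability curve of the kind shown in Figs 10-11. The Weibull modulus m and scale σ₀ below are hypothetical placeholders chosen only to illustrate the shape of a steep, reliable distribution; the paper reports only the ~290-330 MPa spread for Method 5, not the fitted parameters.

```python
# Verify the percentage improvements quoted in the conclusions, then sketch
# a Weibull survival probability P(sigma) = exp[-(sigma/sigma_0)^m].
import math

uts_base, uts_m5 = 288.5, 334.9  # MPa, from the text
print(f"UTS increase: {(uts_m5 - uts_base) / uts_base:.1%}")  # ~16.1%
print(f"Elongation increase: {(5.50 - 2.0) / 2.0:.0%}")       # 175%
print(f"Toughness ratio: {1500 / 500:.0f}x")                  # 3x

def survivability(sigma, sigma_0=315.0, m=30.0):
    """Toy Weibull survival probability; sigma_0 and m are hypothetical."""
    return math.exp(-((sigma / sigma_0) ** m))

for s in (290, 310, 330):
    print(f"P(survive {s} MPa) = {survivability(s):.2f}")  # ~0.92, ~0.54, ~0.02
```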
4,023.6
2023-05-08T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Horava-Lifshitz Gravity From Dynamical Newton-Cartan Geometry Recently it has been established that torsional Newton-Cartan (TNC) geometry is the appropriate geometrical framework to which non-relativistic field theories couple. We show that when these geometries are made dynamical they give rise to Horava-Lifshitz (HL) gravity. Projectable HL gravity corresponds to dynamical Newton-Cartan (NC) geometry without torsion and non-projectable HL gravity corresponds to dynamical NC geometry with twistless torsion (hypersurface orthogonal foliation). We build a precise dictionary relating all fields (including the scalar khronon), their transformations and other properties in both HL gravity and dynamical TNC geometry. We use TNC invariance to construct the effective action for dynamical twistless torsional Newton-Cartan geometries in 2+1 dimensions for dynamical exponent 1<z\le 2 and demonstrate that this exactly agrees with the most general forms of the HL actions constructed in the literature. Further, we identify the origin of the U(1) symmetry observed by Horava and Melby-Thompson as coming from the Bargmann extension of the local Galilean algebra that acts on the tangent space to TNC geometries. We argue that TNC geometry, which is manifestly diffeomorphism covariant, is a natural geometrical framework underlying HL gravity and discuss some of its implications. Introduction In the search for consistent theories of quantum gravity, Hořava-Lifshitz (HL) gravity [1,2] has appeared as a tantalizing possibility of a non-Lorentz invariant and renormalizable UV completion of gravity. While observational constraints and the matching to general relativity in the IR put severe limitations on the phenomenological viability of this proposal, HL gravity is of intrinsic theoretical interest as an example of gravity with anisotropic scaling between time and space. In particular, in the context of holography it holds the prospect of providing an alternative way [3,4] of constructing gravity duals for strongly coupled systems with non-relativistic scaling, including those of interest to condensed matter physics. More generally, one might expect that HL gravity has a natural embedding in the larger framework of string theory [5]. In parallel to this development, and with in part similar motivations, there has been considerable effort to extend the original AdS-setup in (conventional) relativistic gravity to space-times with non-relativistic scaling [6,7,8,9]. Such space-times typically exhibit a dynamical exponent z that characterizes the anisotropy between time and space on the boundary. This includes in particular holography for Lifshitz space-times, for which it was found that the boundary geometry is described by a novel extension of Newton-Cartan (NC) geometry 1 with a specific torsion tensor, called torsional Newton-Cartan (TNC) geometry. The aim of this paper is to construct the theory of dynamical TNC geometry and show that it exactly agrees with the most general forms of HL gravity. TNC geometry was first observed in [17,18] as the boundary geometry for a specific action supporting z = 2 Lifshitz geometries, and subsequently generalized to a large class of holographic Lifshitz models for arbitrary values of z in [19,20]. In parallel, it was shown in detail in [21] how TNC geometry arises by gauging the Schrödinger algebra, following the earlier work [22] on obtaining NC geometry from gauging the Bargmann algebra. 
In this paper we will show that TNC geometry can also be obtained by generalizing directly the work of [22] to include torsion without using the Schrödinger algebra. In its broadest sense the results of [19,20] imply that Lifshitz holography describes a dual version of field theories on TNC backgrounds. In [23] it was shown that the Lifshitz vacuum (in Poincaré type coordinates) exhibits the same symmetry properties as a flat NC space-time. In particular it was found that the conformal Killing vectors of flat NC space-time span the Lifshitz algebra. In order to understand the properties of field theories on TNC backgrounds some simple scale invariant scalar field models on flat NC space-time were studied in [24,23]. It was shown that two scenarios can occur: i). either the theory has an internal local U(1) symmetry related to particle number or ii). it does not. In case i). there is a mechanism that enhances the global Lifshitz symmetries to include particle number and Galilean boosts (and possibly even special conformal transformations) whereas in the other case no such symmetry enhancement can take place. This means that the notion of global symmetries depends on the type of matter fields one considers on such a background. In support of this it was demonstrated in Ref. [23] that one can define probe scalars on a Lifshitz background that have a global Schrödinger invariance. The field-theoretic perspective of coupling Galilean invariant field theories to TNC 2 was independently considered in [29]. The relevant geometric fields in TNC are a time-like vielbein τ µ , an inverse spatial metric h µν and a vector field M µ = m µ − ∂ µ χ where χ is a Stückelberg scalar whose role in TNC geometry will be elucidated in section 6. The torsion in TNC geometry is always proportional to ∂ µ τ ν − ∂ ν τ µ where τ µ defines the local flow of time. The amount of torsion depends on the properties of τ µ and we distinguish the three cases: 3 • Newton-Cartan (NC) geometry • twistless torsional (TTNC) geometry • torsional Newton-Cartan (TNC) geometry where the first possibility has no torsion and the latter option has general torsion with the twistless case being an important in-between situation. More specifically, in the first case the time-like vielbein of the geometry is closed and defines an absolute time. In the second case the time-like vielbein is hypersurface orthogonal and thereby allows for a foliation of equal time spatial surfaces described by Riemannian (i.e. torsion free) geometry. In the third, most general, case there is no constraint on τ µ . As is clear from holographic studies of the boundary energy-momentum tensor as for example in [30,17,18,19,23] the addition of torsion to the NC geometry is crucial in order to be able to calculate the energy density and energy flux of the theory. This is because they are the response to varying τ µ (see also [29]). Hence in order to be able to compute these quantities τ µ better be unconstrained, i.e. one should allow for arbitrary torsion. If we work with TTNC geometry one can only compute the energy density and the divergence of the energy current [20] because in that case τ µ = ψ∂ µ τ where one has to vary ψ and τ with ψ sourcing the energy density and τ sourcing the divergence (after partial integration) of the energy current. In any case the point is that, contrary to the relativistic setting, adding torsion is a very natural thing to do in NC geometry. 
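Stated as equations, the three torsion classes above correspond to the following conditions on the time-like vielbein; this is simply a compact restatement of the definitions given in the text.

```latex
\text{NC:}\qquad d\tau = 0 \quad (\tau_\mu \ \text{closed: absolute time}),\\
\text{TTNC:}\qquad \tau \wedge d\tau = 0 \;\Longleftrightarrow\; \tau_\mu = \psi\,\partial_\mu \tau
\quad (\text{hypersurface orthogonal}),\\
\text{TNC:}\qquad \tau_\mu \ \text{unconstrained}, \qquad
\text{torsion} \ \propto\ \partial_\mu\tau_\nu - \partial_\nu\tau_\mu\,.
```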
Moreover, as will be shown later, the torsion is not something one can freely pick and is actually fixed by the formalism. In all of these works the TNC geometry appears as a fixed background and is hence not dynamical. The purpose of this paper is to consider what theory of gravity appears when letting the TNC geometry fluctuate. We find, perhaps not entirely unexpected 4 , that depending on the amount of torsion the resulting theories include HL gravity and all of its known extensions. Our focus in this paper will be mainly on the first two of the three cases listed above, leaving the details of the dynamics of the most general case (TNC gravity) for future work. In particular, we will show that: • dynamical NC geometry = projectable HL gravity • dynamical TTNC geometry = non-projectable HL gravity. The khronon field introduced by [31] (to make HL gravity generally covariant whereby making manifest the presence of an extra scalar mode) naturally appears (see also [32]) in our formulation. We furthermore show that the U(1) extension of [33] (see also [34,35]) emerges as well in a natural fashion. The essential identification between the covariant 5 NC-type geometric structures and those appearing in the ADM parametrization that forms the starting point of HL gravity is as follows τ µ ∼ lapse ,ĥ µν ∼ spatial metric , m µ ∼ shift + Newtonian potential , where the fieldsĥ µν and m µ are defined in section 4. We will show that the effective action for the TTNC fields leads to two kinetic terms for the metricĥ µν (giving rise to the λ parameter of HL gravity [1,2]) including the potential terms computed in Refs. [37,31,35]. Furthermore the Stückelberg scalar χ entering in the TNC quantity M µ = m µ − ∂ µ χ (see [17,18,19,24,21,23]) will be directly related to the Newtonian prepotential introduced in [33]. The relation to TTNC geometry will, however, provide a new perspective on the nature of the U(1) symmetry studied in the context of HL gravity. As a further confirmation that TNC geometry is a natural framework for HL gravity we will demonstrate in this paper that when we include dilatation symmetry (local Schrödinger invariance) one obtains conformal HL gravity. As we will review in this paper, the various versions of TNC geometry defined above arise by gauging non-relativistic symmetry algebras (Galilean, Bargmann, Schrödinger). In particular, in this procedure the internal symmetries are made into local symmetries, and translations are turned into diffeomorphisms. This is in the same way that Riemannian geometry comes from gauging the Poincaré algebra, thereby imposing local Lorentz symmetry and turning translations into space-time diffeomorphisms. Thus HL gravity theories (and more generally TNC gravity) can be seen as the most general 4 A HL-type action in TNC covariant form was already observed in [18] where the anisotropic Weylanomaly in a specific z = 2 holographic four-dimensional bulk Lifshitz model was obtained via null Scherk-Schwarz reduction of the AdS 5 conformal anomaly of gravity coupled to an axion. 5 Note that in e.g. Ref. [36] there is also a type of covariantization of HL gravity (see also eq. (3.9) of [3]), but there is still inherently a Lorentzian metric structure present. This only works up to second order in derivatives so that it only captures the IR limit of HL gravity. 
gravity theories for which the Einstein equivalence principle (that locally space-time is described by flat Minkowski space-time) is applied to local non-relativistic (Galilean) symmetries, rather than to the local Lorentz symmetry that one has in special relativity. We point out that in general relativity (GR) the global symmetries (Killing vectors) of Minkowski space-time (the Poincaré algebra) form the same algebra from which upon gauging (and replacing local space-time translations by diffeomorphisms as explained in appendix A) we obtain the geometrical framework of GR. On the other hand the Killing vectors of flat NC space-time only involve space and time translations and spatial rotations [23] while the local tangent space group that we gauge in order to obtain the TNC geometrical framework is the Galilean algebra (where again we also replace local time and space translations by diffeomorphisms), which also contains Galilean boosts and is thus not the same algebra as the algebra of Killing vectors of flat NC space-time. We bring this up to highlight the fact that the local tangent space symmetries and the Killing vectors of flat space-time are in general two very different concepts that are often mistakenly assumed to be the same. Basically this happens because the M µ vector allows for the construction of a new set of vielbeins (defined in section 4) that are invariant under G transformations and that only see diffeomorphisms and local rotations which agrees with the Killing vectors of flat NC space-time. Nevertheless the fact that M µ is one of the background fields to which we can couple a field theory can, under special circumstances, lead to additional symmetries such as G and N (and even special conformal symmetries) [23]. Our results on dynamical TNC geometry and its relation to HL gravity provide a new perspective on these theories of gravity. For one thing, the vacuum of HL gravity (without a cosmological constant) has so far been taken to be Minkowski space-time, but since the underlying geometry appears to be TNC geometry, it seems more natural to take this as flat NC space-time [24,23]. Thus it would seem worthwhile to reexamine HL gravity and the various issues 6 that have been raised following its introduction. As another application, we emphasize that, independent of a possible UV completion of gravity, our results on dynamical TNC geometry are of relevance to constructing IR effective field theories of non-relativistic systems following the recent developments of applying this to condensed matter systems. For these kinds of applications, the question whether HL gravity flows to a theory with local Lorentz invariance (λ = 1) in the IR is of no concern. Finally, from a broader perspective our results might be useful towards a proper description of the non-relativistic quantum gravity corner of the "( , G N , 1/c)cube", perhaps aiding the formulation of a well-defined perturbative 1/c expansion around such a theory. Outline of the paper The first part of the paper (sections 2 to 7) is devoted to setting up the geometrical framework for torsional Newton-Cartan geometry, presented in such a way that the subsequent connection to HL gravity is most clearly displayed. We thus take a pedagogical approach that introduces the relevant ingredients in a step-by-step way. To this end we begin in section 2 with the geometry that is obtained by gauging the Galilean algebra, extending the original work of [22] to include torsion. 
We exhibit the transformation properties of the relevant geometrical fields under space-time diffeomorphisms and the internal transformations, consisting of Galilean boosts (G) and spatial rotations (J). We also discuss the vielbein postulates and curvatures entering the field strength of the gauge field. We point out that the only G, J invariants are the time-like vielbein τ µ and the inverse spatial metric h µν . In section 3 we then present the most general affine connection that satisfies the property that the latter quantities are covariantly conserved. In section 4, we go one step further and add the central element (N) to the Galilean algebra, and consider the gauging of the resulting Bargmann algebra (as also considered in [22] for the case with no torsion). We show that the extra gauge field m µ that enters in this description, does not alter the transformation properties of the objects considered in section 2, but allows for the introduction of further useful G, J, invariants, namely an inverse time-like vielbeinv µ , a spatial metrich µν (orĥ µν ) and a "Newtonian potential" Φ. We then return to the construction of the affine connection in section 5 and employ the geometric quantities of section 2 and 4 to construct the most general connection that can be built out of the invariants. We discuss two special choices of affine connections with particular properties, one of them being especially convenient for the comparison with HL gravity. We point out that, in the case of non-vanishing torsion, there is no choice of affine connection that is also N-invariant, but that one can formally remedy this by introducing a Stückelberg scalar χ (defining M µ = µ µ − ∂ µ χ) to the setup that cancels this non-invariance. This has the advantage that one can deal simultaneously with theories that have a local U(1) symmetry and those that do not have this, and further it will prove useful when comparing to HL gravity (especially [33,34,35]). We also show how the TNC invariants can be used to build a non-degenerate symmetric rank 2 tensor with Lorentzian signature, which will later be used to make contact with the ADM decomposition that enters HL gravity. In section 6 we discuss the specific form of the torsion tensor that emerges from gauging the Bargmann algebra and introduce the three relevant cases for torsion (NC, TTNC and TNC) that were already mentioned above. We also introduce a vector a µ that describes the TTNC torsion, which will turn out to be very useful in order to make contact with the literature on non-projectable HL gravity. Further we will identify the khronon field of [31]. Then in section 7 we give some basic properties of the curvatures (extrinsic curvature and Ricci tensor for TTNC) that will be useful when constructing HL actions. In section 8 we relate the TNC invariants introduced in the previous sections to those appearing in the corresponding ADM parameterization employed in HL gravity. This identification and the match of the properties and number of components and local symmetries in the case of NC and TTNC already strongly suggest that dynamical (TT)NC is expected to be the same as (non)-projectable HL gravity. We then proceed in section 9 by showing that the generic action that describes dynamical TTNC geometries agrees on the nose with the most general HL actions appearing in the literature. For simplicity we treat the case of 2 spatial dimensions with 1 < z ≤ 2 and organize the terms in the action according to their dilatation weight. 
In particular, we construct all G, J invariant terms that are relevant or marginal, using as building blocks the TNC invariants (including the torsion tensor and curvature tensor) and covariant derivatives. The resulting action is written in (9.18), (9.19) and gives the HL kinetic terms [1,2], while the potential is exactly the same as the 3D version of the potential given in [37,31,35]. We then proceed in section 10 to consider the extension of the action to include invariance under the central extension N, leading to HL actions with local Bargmann invariance. This can be achieved by including couplings to Φ̂, which did not yet appear in section 9. Importantly, in the projectable case with the HL coupling constant λ = 1 we reproduce the U(1) invariant action of [33]. When we consider the non-projectable version, or λ ≠ 1, we need additional terms to make the theory U(1) invariant, which is precisely achieved by adding the Stückelberg field χ that we introduced in section 5 (see also [23]). We can then write a Bargmann invariant action that precisely reproduces the actions considered in the literature, where in particular the χ-dependent pieces agree with those in [35]. This comes about in part via coupling to the natural TNC Newton potential Φ̂_χ, which is the Bargmann invariant generalization of Φ̂, and the simple covariant form of the action (10.14) is one of our central results. We emphasize that adding the χ field to the action means that we have trivialized the U(1) symmetry by Stückelberging it, or in other words we have removed the U(1) transformations altogether. We further expand on this fact in section 11, commenting on statements in the literature regarding the relevance of the U(1) invariance (which is not there unless we have zero torsion and λ = 1) in relation to the elimination of a scalar degree of freedom. In particular, we will present a different mechanism that accomplishes this and which involves a constraint equation obtained by varying the TNC potential Φ̂_χ. Finally, in section 12 we consider the case where we add dilatations to the Bargmann algebra, i.e. we consider the dynamics we get from a geometry that is locally Schrödinger invariant. We will show that the resulting theory is conformal HL gravity, providing further evidence for our claim that TNC geometry is the underlying geometry of HL gravity. In particular, employing the local Schrödinger algebra we will arrive at the invariant z = d action (12.50) for conformal HL gravity in d + 1 dimensions. We end in section 13 with our conclusions and discuss a large variety of possible open directions. For comparison to general relativity, and as an introduction to the logic followed in sections 2 to 7, we have included appendix A, which discusses the gauging of the Poincaré algebra leading to Riemannian geometry (possibly with torsion added).

Local Galilean Transformations

The present section until section 7 is devoted to setting up the general geometrical framework for torsional Newton-Cartan geometry. We will follow an approach that is very similar to what in general relativity is known as the gauging of the Poincaré algebra. This provides us in a very efficient manner with all basic geometrical objects used in the formulation of general relativity (and higher curvature modifications thereof). For the interested reader unfamiliar with this method we give a short summary of it in appendix A.
To obtain torsional Newton-Cartan geometry we follow the same logic as in appendix A for the case of the Galilean algebra and its central extension known as the Bargmann algebra. This was first considered in [22] for the case without torsion. Here we generalize this interesting work to the case with torsion. Adding torsion to Newton-Cartan geometry can also be done by making it locally scale invariant, i.e. gauging the Schrödinger algebra as in [21]. However, upon gauging the Schrödinger algebra the resulting geometric objects are all dilatation covariant, which is useful for the construction of conformal HL gravity, as we will study in section 12, but it is less useful for the study of general non-conformally invariant HL actions, which is why we start our analysis by adding torsion to the analysis of [22]. Consider the Galilean algebra whose generators are denoted by H, P_a, G_a, J_ab and whose commutation relations are the standard ones (a reconstruction is given below). Let us consider a connection A_µ taking values in the Galilean algebra,

  A_µ = H τ_µ + P_a e^a_µ + G_a Ω_µ^a + ½ J_ab Ω_µ^ab .

This connection transforms in the adjoint as δA_µ = ∂_µΛ + [A_µ, Λ]. With this transformation we can associate another transformation, denoted by δ̄, as follows. Write (without loss of generality) Λ = ξ^µ A_µ + Σ, where

  Σ = G_a λ^a + ½ J_ab λ^ab , (2.5)

is chosen to only include the internal symmetries G and J. We define δ̄A_µ as

  δ̄A_µ = δA_µ − ξ^ν F_µν , (2.6)

where F_µν is the curvature

  F_µν = ∂_µ A_ν − ∂_ν A_µ + [A_µ, A_ν] . (2.7)

Often in works on gauging space-time symmetry groups it is suggested that diffeomorphisms can only be obtained once specific curvature constraints are imposed. We emphasize that the transformation δ̄A_µ exists no matter what we choose for the curvature F_µν. If we write in components what (2.6) states, we obtain the transformation properties of τ_µ, e^a_µ, Ω_µ^a and Ω_µ^ab, in which L_ξ is the Lie derivative along ξ^µ and λ^a, λ^ab are the parameters of the internal G, J transformations, respectively. We can now write down covariant derivatives that transform covariantly under these transformations. They are the derivatives (2.12) and (2.13), where Γ^ρ_µν is an affine connection transforming as in (2.14). It is in particular inert under the G and J transformations. The form of the covariant derivatives is completely fixed by the local transformations δ̄A_µ. However, any tensor redefinition of the connections Γ^ρ_µν, Ω_µ^a and Ω_µ^ab that leaves the covariant derivatives form-invariant leads to an allowed set of connections with the exact same transformation properties. We impose the vielbein postulates, which allows us to express Γ^ρ_µν in terms of Ω_µ^a and Ω_µ^ab via (2.16), where we define inverse vielbeins v^µ and e^µ_a via v^µ τ_µ = −1, v^µ e^a_µ = 0, e^µ_a τ_µ = 0 and e^µ_a e^b_µ = δ^b_a. The vielbein postulates for the inverses read analogously. Using that Ω_µ^ab is antisymmetric we find the corresponding identities. The components of the field strength F_µν in (2.7) are the curvatures associated with H, P_a, G_a and J_ab. The first two appear in the antisymmetric part of the covariant derivatives D_µ τ_ν and D_µ e^a_ν; more precisely we have (2.28). In other words, they are equal to the torsion tensor. The other two curvature tensors can be found by computing the Riemann tensor, which together with (2.17) tells us how they are expressed in terms of Γ^ρ_µν. So far all components of A_µ are independent or, what is the same, τ_µ, e^a_µ and Γ^ρ_µν (obeying (2.21) and (2.22)) are all independent. The inverse vielbeins v^µ and e^µ_a transform under δ̄ accordingly. There are only two invariants, i.e. tensors invariant under G and J transformations, that we can build out of the vielbeins. These are τ_µ and h^µν = δ^ab e^µ_a e^ν_b. This is not enough to construct an affine connection that transforms as (2.14).
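For reference, the Galilean commutation relations referred to above can be reconstructed as follows; this is our rendering of the standard algebra and the sign conventions may differ from the paper's:

  [J_ab, J_cd] = δ_ac J_bd − δ_ad J_bc − δ_bc J_ad + δ_bd J_ac ,
  [J_ab, P_c] = δ_ac P_b − δ_bc P_a ,   [J_ab, G_c] = δ_ac G_b − δ_bc G_a ,
  [H, G_a] = P_a ,

with all other brackets vanishing; the Bargmann extension discussed in section 4 adds the central element N via [P_a, G_b] = δ_ab N.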
The reason we cannot build any other invariants is because v^µ and h_µν = δ_ab e^a_µ e^b_ν undergo shift transformations under local Galilean boosts λ^a (also known as Milne boosts [29]).

The Affine Connection: Part 1

The most general Γ^ρ_µν obeying (2.21) and (2.22) can be parameterized in terms of a tensor Y_σµν. It follows that Y_σµν can be written as in (3.4), where X^1_µν, X^2_σµ and X^3_σµν = −X^3_νµσ are arbitrary. We write X^2_σµ in terms of a tensor L_σµν = −L_νµσ. Since Y_σµν enters only as h^ρσ Y_σµν, we can drop the part in (3.4) that is proportional to τ_σ. We thus find the form of the connection Γ^ρ_µν given in (3.6). The variation of Γ^ρ_µν under local Galilean boosts yields an expression in which λ_µ = λ_a e^a_µ. In section 5 we will choose K_µν and L_σµν such that δ_G Γ^ρ_µν = 0.

Local Bargmann Transformations

It is well known that the Galilean algebra admits a central extension with central element N, called the Bargmann algebra. This latter element appears via the commutator [P_a, G_b] = δ_ab N. We denote the associated gauge connection by m_µ. Following the same recipe as in section 2, with δ̄ defined in the same way as in (2.6), we note that we now have an extra parameter σ associated with the N transformation. Because N is central, all results of the previous section remain unaffected. Our primary focus in this section is local Galilean boost invariance. The new field m_µ is shifted under the λ^a transformation, and so in combinations such as (4.5)-(4.7) (reconstructed below) the Galilean boost parameter λ^a is cancelled. However, we now have two other things to worry about. First of all, the new field m_µ also transforms under a local U(1) transformation with parameter σ, and secondly we have introduced more than is strictly necessary to have local Galilean invariance. This is because the component Φ̂ is G invariant (and of course also J invariant). In previous works we have introduced another background field χ, a Stückelberg scalar, transforming as δ̄χ = L_ξχ + σ, so that the combination M_µ = m_µ − ∂_µχ is invariant under the local N transformation, and replaced everywhere m_µ by M_µ. Here it will prove convenient, for the sake of comparison with work on HL gravity, to postpone this step until later. Hence for now we will work with m_µ as opposed to M_µ. We introduce a new set of Galilean invariant vielbeins τ_µ, ê^a_µ, whose inverses are v̂^µ and e^µ_a, where ê^a_µ = e^a_µ − m^a τ_µ with m^a = e^µa m_µ. They satisfy the analogues of the relations obeyed by the unhatted vielbeins, and we also have the completeness relation e^µ_a ê^a_ν = δ^µ_ν + v̂^µ τ_ν. The introduction of m^a thus leads to the G, J invariants v̂^µ and h̄_µν, where h̄_µν is given in (4.6). The part of m_µ that is responsible for the Galilean boost invariance is m^a, which transforms as in (4.10) (ignoring the σ transformation). We can decompose m_µ accordingly, where the last term is an invariant.

The Affine Connection: Part 2

In section 2 we realized the Galilean algebra on the fields τ_µ, e^a_µ, Ω_µ^a and Ω_µ^ab or, what is the same, on τ_µ, e^a_µ and Γ^ρ_µν, where the affine connection obeys (2.21) and (2.22). Now that we have introduced a new field m_µ transforming as in (4.4), we will see that we can realize the Galilean algebra on a smaller set of fields, namely τ_µ, e^a_µ and m_µ. We can also realize the Galilean algebra on τ_µ, e^a_µ and m_a, with m_a transforming as in (4.10), i.e. no dependence on Φ̂, or realize it on τ_µ, e^a_µ, m_a and Φ̂, which is another way of writing the dependence on τ_µ, e^a_µ and m_µ. These different options lead to different choices for the affine connection, as we will now discuss.
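For the reader's convenience we record the explicit form of these Bargmann invariants. The expressions below are our reconstruction of (4.5)-(4.7), consistent with the conventions of the TNC literature [19,23], and should be read as such:

  v̂^µ = v^µ − h^µν m_ν ,
  h̄_µν = h_µν − τ_µ m_ν − τ_ν m_µ ,
  Φ̂ = −v^µ m_µ + ½ h^µν m_µ m_ν .

One checks that the boost shifts of v^µ, h_µν and m_µ cancel in each combination, while v̂^µ and h̄_µν (but not Φ̂) still transform under the σ transformation.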
The most straightforward way of constructing a Γ^ρ_µν that is made out of vielbeins and either i) m_µ or ii) m_a, that obeys (2.21) and (2.22) and transforms as in (2.14), is to use the invariants τ_µ, h̄_µν, v̂^µ, h^µν and Φ̂.

(Footnote 9: In previous work [19,24,21,23] we denoted by v̂^µ, h̄_µν and Φ̂ the invariants with m_µ replaced by M_µ. Here we temporarily work with the forms (4.5)-(4.7) for reasons that will become clear as we go on. We return to our notation from previous works in section 12. We also point out that, compared to [19,24,21,23], we denote by m_µ here what was referred to as m̃_µ in these papers, and vice versa we denote by m̃_µ here what was denoted by m_µ in these respective works.)

The most general connection we can build out of these invariants reads as in (5.1) [23], where H_µν is an α-dependent combination of the invariants and α is any constant. If we want the connection to depend linearly on m_µ, which is a special case of case i) above, we should take α = 0. If we wish the connection to be independent of Φ̂, as in case ii), we should take α = 2, because of the identity (4.9), so that H_µν = ĥ_µν, where ĥ_µν only depends on m_a. For the general case i), i.e. general dependence on m_a and Φ̂, we can take any α. For case i) with a linear dependence on m_µ we denote Γ^ρ_µν by Γ̄^ρ_µν, which is given by (5.3). This form of Γ^ρ_µν has been used in [19,24,21,29,46]. The form of Γ̄^ρ_µν corresponds to taking in (3.6) the choices for K_µν and L_σµν given in (5.5). For case ii) we denote Γ^ρ_µν by Γ̂^ρ_µν, which reads as in (5.6). The two connections Γ̄^ρ_µν and Γ̂^ρ_µν differ by a tensor, as follows from (5.7). In this work it will prove most convenient to use the connection (5.6), as this eases comparison with HL gravity. We stress though that in principle one can take any of the above choices, i.e. any value for α, and that the final form of the effective action for HL gravity will take the same form regardless of which Γ^ρ_µν one chooses, as all dependence on α drops out when forming the scalar terms appearing in the action.

(Footnote 10: This statement can be made more precise in the following way. The Hořava-Lifshitz actions of section 9, such as (9.18), take exactly the same form when written in terms of Γ̂^ρ_µν as when expressed in terms of Γ̄^ρ_µν. To show this one needs to use the fact that in section 9 it is assumed that τ_µ is hypersurface orthogonal, which is something that we do not yet impose at this stage. This is because the difference between covariant derivatives using either one or the other connection involves terms proportional to τ_µ, and since the scalars in the action are formed by using inverse spatial metrics h^µν those terms drop out. The same comments apply when using the general α of (5.1), i.e. there is no dependence on α.)

The reader familiar with the literature on NC geometry without torsion might wonder which of these connections relates to the one of NC geometry (as written for example in [22] and references therein). The usual NC connection is obtained by taking (5.3) with K_µν as given in (5.5) and L_σµν = 0, which follows from (5.5) and the fact that for NC geometry we have ∂_µτ_ν − ∂_ντ_µ = 0. The possibility of modifying these connections by terms proportional to α was never considered before, probably because this breaks manifest local N invariance of the NC connection, which depends on m_µ only via its curl. In the presence of torsion, the fact that L_σµν is given by (5.5) tells us that we have no manifest N invariance of the connection. Further, for no value of α can we find such an invariance.
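A hedged reconstruction of the one-parameter family of connections (our guess at the structure of (5.1) and (5.3), consistent with the α discussion above and with the TNC literature, but not quoted verbatim from the paper):

  Γ^ρ_µν = −v̂^ρ ∂_µ τ_ν + ½ h^ρσ ( ∂_µ H_νσ + ∂_ν H_µσ − ∂_σ H_µν ) ,   H_µν = h̄_µν + α Φ̂ τ_µ τ_ν ,

so that α = 0 gives Γ̄^ρ_µν (linear in m_µ, since h̄_µν is) and α = 2 gives Γ̂^ρ_µν with H_µν = ĥ_µν. For no value of α is this connection invariant under the N transformation once ∂_µτ_ν − ∂_ντ_µ ≠ 0.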
This can be formally solved by adding a new field to the formalism, a Stückelberg scalar χ, that cancels the non-invariance. This will be discussed in the next section. One can also take the point of view, as in [29], that we should just accept the fact that Γ̄^ρ_µν is not N invariant and organize couplings to these geometries, and to fields living on them, in such a way that the action is N invariant. This is certainly a viable point of view and agrees with our approach in all those cases where the dependence on χ can be removed from the theory by field redefinition, or simply because it drops out when one tries to make its appearance explicit. If one includes χ, there is the benefit that one can also deal with theories that do not have a local U(1) symmetry (because there is an explicit dependence on χ, so that the U(1) invariance disappears in the Stückelberg coupling between m_µ and χ). This is what allows us to use fixed TNC background geometries for both Lifshitz field theories (explicit dependence on χ) as well as Schrödinger field theories (no dependence on χ), as discussed in [24,23]. The χ field also allows us, as we will see in section 10, to construct two types of HL actions: those that have a local U(1) symmetry without any dependence on χ, and those that have no local U(1) because m_µ always appears as M_µ = m_µ − ∂_µχ. From now on we will work with (5.6) and simply denote it by Γ^ρ_µν unless specifically stated otherwise. With this realization of Γ^ρ_µν the other connections Ω_µ^ab and Ω_µ^a are fixed by the vielbein postulates. For an invariant such as v̂^µ the covariant derivatives ∇_µ and D_µ are the same, so we can express ∇_µ v̂^ν in terms of D_µ m^a, where we used (2.19) and (2.20) and where D_µ m^a is the G, J covariant derivative of m^a. In this section we focussed on making the affine connection G invariant (J invariance is automatic). It is so far not N invariant. This will be fixed in the next section. We could have made the connection N- but not G-invariant by taking K_µν as in (5.5) and L_σµν = 0. However, in this case we are not achieving anything, as the connection without K_µν is also N invariant, and so imposing N invariance does not constrain Γ^ρ_µν. Furthermore, since in the transformation of m_µ the G boost parameter λ^a appears without a derivative, whereas the N transformation parameter σ appears with a derivative, it is more natural to use m_µ to make various tensors G invariant. Using the invariants τ_µ, h^µν, v̂^µ, ĥ_µν we can build a non-degenerate symmetric rank 2 tensor g_µν with Lorentzian signature that in the case of a relativistic theory we would refer to as a Lorentzian metric. The metric g_µν and its inverse g^µν are given by

  g_µν = −τ_µ τ_ν + ĥ_µν ,   g^µν = −v̂^µ v̂^ν + h^µν ,

for which we have g^µρ g_ρν = δ^µ_ν. However, the natural Galilean metric structures are τ_µ and h^µν. For example, as we will see in section 9, g_µν does not transform homogeneously under local scale transformations, and so it is not on the same footing as the Riemannian metric in GR.

Torsion and the Stückelberg Scalar

In the case of gauging the Poincaré algebra (appendix A) the torsion is the part of Γ^ρ_µν that is not fixed by the vielbein postulates. In the case of the Bargmann algebra we see, on the other hand, that it is the torsion that is fixed, namely it is given by the antisymmetric part of (5.1), which reads as in (6.1). It follows that the curvature (2.28) obeys (6.2), and we see that the right hand side of (6.2) transforms in exactly the same way as the left hand side (ignoring the central extension N). The right hand side of (6.2) can be made to transform correctly under the N transformation by adding the Stückelberg scalar χ, i.e.
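A brief check of the torsion statement just made, assuming the connection reconstruction sketched above: since H_µν is symmetric, the antisymmetric part of the connection collapses to the τ-gradient term,

  Γ^ρ_[µν] = −½ v̂^ρ ( ∂_µ τ_ν − ∂_ν τ_µ ) ,

independently of α, which is consistent with the claim that the curvature constraint (6.2) is the same for all affine connections (5.1). The right hand side is not N invariant because v̂^ρ contains m_µ, which shifts by ∂_µσ.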
by replacing m_a by M_a = e^µ_a(m_µ − ∂_µχ). This explains why, in the presence of torsion, i.e. when ∂_µτ_ν − ∂_ντ_µ ≠ 0, we need the scalar χ. In section 10 we will see that there is a similar field in HL gravity whose couplings are precisely obtained by replacing everywhere m_µ by M_µ = m_µ − ∂_µχ. From a purely geometrical point of view, χ is needed whenever we have torsion, i.e. when the right hand side of (6.2) is nonzero, to ensure correct transformations under the N generator. This does not automatically mean that any field theory coupled to such a background has a nontrivial χ dependence. There are important cases where the χ field can be removed by a field redefinition, or where it simply drops out of the action once one tries to make its appearance explicit. We refer to [23] for field theory examples of the first possibility of removing χ by field redefinition, and to section 10 for a HL action that exhibits the second property, namely that χ drops out. The χ field also allows us to make the curvature R_µν(N) appearing in (4.3), which so far played no role, visible. This goes via the commutator of covariant derivatives acting on χ. We note that by covariance D_µ D_ν χ involves the Galilean boost connection Ω_µ^a. Using the general form of Γ^ρ_µν given in (3.6), as well as the vielbein postulate (2.16) to express Ω_µ^a in terms of Γ^ρ_µν, we obtain an expression for R_µν(N). For the choice Γ^ρ_µν = Γ̄^ρ_µν of (5.3), i.e. for K_µν and L_σµν as in (5.5), we find a curvature constraint that is in agreement with the curvature constraint (6.2), because it obeys the transformation rule for the curvatures under Galilean boosts, which according to (2.9) and (2.10) reads δ_G R_µν(N) = λ^a R_µνa(P). Again, in order that R_µν(N) remains inert under N transformations in the presence of torsion, we need to replace in Γ̄^ρ_µν (more precisely in L_σµν as given in (5.5)) m_µ by M_µ = m_µ − ∂_µχ. The field χ is an essential part of NC geometry with torsion. The curvature constraints derived here by using the approach of section 2 agree with [22], where the torsionless case was studied. The analysis of sections 2-6 can thus be viewed as adding torsion to the gauging of the Bargmann algebra (without adding dilatations as in [21]). By employing the relation (5.7) between Γ̄^ρ_µν and Γ̂^ρ_µν we can find the curvature constraint for R_µν(N) that relates to this choice of affine connection. The curvature constraint (6.2) is the same for all affine connections (5.1). Depending on the constraint imposed on τ_µ we distinguish three cases: 1. No torsion, ∂_µτ_ν − ∂_ντ_µ = 0, which is Newton-Cartan (NC) geometry. 2. Twistless torsion, i.e. τ_µ hypersurface orthogonal, which is twistless torsional (TTNC) geometry. 3. No constraint on τ_µ, which is a novel extension of Newton-Cartan (TNC) geometry. TTNC geometry goes back to [16], but in that work a conformal rescaling was done to go to a frame in which there is no torsion. The benefit of adding torsion to the formalism was first considered in [17,18], including the case with no constraint on τ_µ. We will see below that making NC and TTNC geometries dynamical corresponds to projectable and non-projectable HL gravity. In this work we will always assume that we are dealing with TTNC geometry, which contains NC geometry as a special case. For twistless torsional Newton-Cartan (TTNC) geometry we have by definition (6.8), i.e. h^µρ h^νσ (∂_µτ_ν − ∂_ντ_µ) = 0. This implies that the geometry induced on the slices to which τ_µ is hypersurface orthogonal is described by (torsion free) Riemannian geometry. To make contact with the HL literature concerning non-projectable HL gravity it will prove convenient to define a vector a_µ as follows:

  a_µ = L_v̂ τ_µ . (6.9)
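The equivalence between the twistless condition (6.8) and hypersurface orthogonality, used repeatedly below, is Frobenius' theorem; in the present notation (our summary, with ψ and τ as in section 8):

  h^µρ h^νσ ( ∂_ρ τ_σ − ∂_σ τ_ρ ) = 0   ⟺   τ_[µ ∂_ν τ_ρ] = 0   ⟺   τ_µ = ψ ∂_µ τ  (locally),

so that the constant-τ slices define the preferred foliation, with ψ playing the role of the lapse once we choose coordinates with τ = t.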
In section 8 we will exhibit a coordinate parameterization of a_µ (see equations (8.15) and (8.16)) that will appear more familiar in the context of HL gravity, where it becomes the acceleration of the unit vector field orthogonal to equal time slices. For TTNC we have the useful identities

  h^µρ h^νσ ( ∂_ρ τ_σ − ∂_σ τ_ρ ) = 0 , (6.10)   ∂_µ τ_ν − ∂_ν τ_µ = a_µ τ_ν − a_ν τ_µ . (6.11)

The first of these two identities tells us that the twist tensor (the left hand side) vanishes, which is why we refer to the geometry as twistless torsional NC geometry. The last identity tells us that a_µ describes the TTNC torsion. We will thus refer to it as the torsion vector.

Curvatures

We start by giving some basic properties of the Riemann tensor (2.31) with connection (5.6). Using that Γ^ρ_µρ = e^{-1} ∂_µ e, where e = det(τ_µ, e^a_µ), we obtain R_µνρ^ρ = 0. Note that because of torsion the usual symmetry properties are modified. From the definition of the Riemann tensor and our choice of connection we can derive the identity (7.4), whose trace gives us the antisymmetric part of the Ricci tensor R_µν = R_µρν^ρ. The covariant derivative of v̂^µ is essentially the extrinsic curvature. Using the connection (5.6) we find the identity (7.5), in which the extrinsic curvature K_µν appears. For TTNC geometries the antisymmetric part of the Ricci tensor is obtained by using (6.11) and (7.4). We can also derive a TTNC Bianchi identity; contracting λ and κ, and the remaining indices with v̂^µ h^νσ, leads to an identity in which we used (7.3) and (7.5). Since we will mostly work in 2+1 dimensions, we focus on what happens in that case. Using (2.32) we find the corresponding expression, where we used that in 2 spatial dimensions R_abcd(J) = ½ (δ_ac δ_bd − δ_ad δ_bc) R.

Coordinate (ADM) Parametrizations

Even though we treat the NC fields τ_µ and ĥ_µν as independent, we can parametrize them in such a way that g_µν in (5.10) is written in an ADM decomposition. Writing g_µν in the standard ADM form, with lapse N, shift N^i and spatial metric γ_ij, leads to expressions for ĥ_µν; for the inverse metric (5.11) the ADM decomposition reads correspondingly. From this we conclude the identifications below. The choice (6.8) implies that τ_µ is hypersurface orthogonal, i.e. τ_µ = ψ ∂_µ τ (8.11). If we fix our choice of coordinates such that τ = t, we obtain τ_t = ψ and τ_i = 0 (8.12). Using that τ_µ h^µν = 0 and (8.12), we obtain h^tt = h^ti = 0, as well as ĥ_ti = γ_ij N^j and ĥ_ij = γ_ij. Further using that h^µρ ĥ_νρ = δ^µ_ν + v̂^µ τ_ν, we find h^ij = γ^ij. This in turn tells us that v̂^i = N^i N^{-1}, so that h^tt = h^ti = 0 leads to v̂^t = −N^{-1}. Since v̂^µ τ_µ = −1 we also obtain τ_t = ψ = N, so that ĥ_tt = γ_ij N^i N^j. Since h^tt = h^ti = 0 we also have v̂^t = v^t = −N^{-1}, which in turn tells us that h^ti = h^tt = 0, so that we recover the inverse ADM form. Furthermore, we have h_ij = γ_ij and v^i = 0. For the time component of m_µ we obtain an expression in which we used (4.11), or alternatively (4.9) and (4.6). In general τ_t = N = N(t, x), so that we are dealing with non-projectable HL gravity. Projectable HL gravity corresponds to N = N(t), which is precisely what we get when we impose ∂_µτ_ν − ∂_ντ_µ = 0. In these coordinates the torsion vector (6.9) reduces to a_i = ∂_i ln N (8.16), which contains no time derivatives. The determinant e in this parametrization is given by N√γ, where γ is the determinant of γ_ij, so that using (7.3) we find Γ^ρ_ρi = ∂_i log√γ, making an object such as ∇_µ(h^µν X_ν) a γ-covariant spatial divergence. The number of components in g_µν in d + 1 space-time dimensions is (d+1)(d+2)/2, whereas the total number of components in τ_µ and ĥ_µν is (d+1)(d+2)/2 + d + 1 − 1, where the extra d + 1 originate from τ_µ and the −1 comes from the fact that ĥ_µν = δ_ab ê^a_µ ê^b_ν, so that it has zero determinant.
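Collecting the identifications just derived, the TNC/ADM dictionary in these coordinates reads (a summary in our notation, using the standard ADM labels):

  τ_µ dx^µ = N dt ,   ĥ_ij = γ_ij ,   ĥ_ti = γ_ij N^j ,   ĥ_tt = γ_ij N^i N^j ,
  v̂^t = −1/N ,   v̂^i = N^i/N ,   h^ij = γ^ij ,   a_i = ∂_i ln N ,   e = N√γ .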
If we furthermore use the fact that τ_µ is hypersurface orthogonal, i.e. τ_µ = ψ ∂_µ τ, we can remove another d − 1 components, ending up with (d+1)(d+2)/2 + 1, which is one component more than we have in g_µν. If we next restrict to coordinate systems for which τ = t, we obtain the same number of components in the ADM decomposition as we have for our TTNC geometry without Φ̂. Later we will see what the scalars Φ̂ and the Stückelberg scalar χ (mentioned below (4.7)) correspond to in the context of HL gravity. This counting exercise also shows that in general, for arbitrary τ_µ, TNC gravity is much more general than HL gravity. We leave the study of this more general case for future research. Here we restrict to a hypersurface orthogonal τ_µ. We thus see that the field τ_µ describes many properties that we are familiar with from the HL literature. For example, the TTNC form of τ_µ in (8.11) agrees with the khronon field of [31]. More precisely, the khronon field ϕ of [31] corresponds to what we call τ, and what is called u_µ in [31] corresponds to what we call τ_µ. Further, the torsion field a_i that we defined via (6.9), and that has the parametrization (8.16), agrees with the same field appearing in [31], where it is referred to as the acceleration vector. We will now show that the generic action describing dynamical TTNC geometries agrees on the nose with the most general HL actions appearing in the literature.

Hořava-Lifshitz Actions

We will consider the dynamics of geometries described by τ_µ, e^a_µ and m_a (in the next section we will add Φ̂ and χ) by ensuring manifest G and J invariance and by constructing in a systematic manner (essentially a derivative expansion) an action for these fields. Since we demand manifest G and J invariance, the generic theory will be described by the independent fields τ_µ and ĥ_µν and derivatives thereof. For simplicity we will work with twistless torsion and in 2 spatial dimensions with 1 < z ≤ 2. It is straightforward to consider higher dimensions. We will do this in section 12, where we treat the conformal case. A convenient way to organize the terms in the action is according to their dilatation weight. The dilatation weights of the invariants are given in table 1 (a reconstruction is given at the end of this section), where e is the determinant of the matrix (τ_µ, e^a_µ). The assignment of these dilatation weights to the TNC fields is consistent with the fact that adding dilatations to the Bargmann algebra leads to the Schrödinger algebra for general z [19,21]. These assignments agree with [2]. If we choose the foliation as in the previous section, with τ_i = 0, and assign the length dimensions z and 1 to the coordinates t and x^i respectively, we obtain that [τ_t] = [N] = L^0, [N^i] = L^{1-z} and [γ_ij] = L^0. Note that in table 1 we do not assign any dilatation weights to the coordinates. In the last two columns we have added the scalars Φ̂ and χ, which will not be used in this section but will appear in the following sections. Even though the fields transform in representations of the Schrödinger algebra, this does not mean that this is a local symmetry of the action. That case will be studied in section 12, leading to conformal HL actions. There are three ways of building derivative terms, namely i) employing the torsion tensor (6.1), ii) taking covariant derivatives of τ_µ and ĥ_µν as well as covariant derivatives of the torsion tensor, and iii) building scalars out of the G, J (and later N) invariants and the curvature tensor R_µνσ^ρ.
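Before classifying these terms, we record a reconstruction of the dilatation weight assignments of table 1. These values are our inference from the weights quoted elsewhere in the text (m_µ and χ have weight z − 2, Φ̂ has weight 2(z − 1), a_µ has weight 0, and e has weight −(z + 2) for d = 2):

  field:   τ_µ   e^a_µ   v^µ   e^µ_a   h^µν   ĥ_µν   m_µ   χ     Φ̂        a_µ   e
  weight:  −z    −1      z     1       2      −2     z−2   z−2   2(z−1)   0     −(d+z)

One checks that these assignments are mutually consistent, e.g. h^µν = δ^ab e^µ_a e^ν_b has weight 2 and Φ̂ = −v^µ m_µ + ½ h^µν m_µ m_ν has weight z + (z − 2) = 2(z − 1).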
Option one amounts to using the combination ∂_µτ_ν − ∂_ντ_µ, which because of our choice (6.8) means that the only relevant component is the one obtained by contracting ∂_µτ_ν − ∂_ντ_µ with v̂^µ, which equals the Lie derivative of τ_ν along v̂^µ. In other words, we can employ the vector a_µ defined in (6.9). Option two reduces to just the covariant derivative of ĥ_µν and a_µ, because of what was just said about the torsion tensor and the fact that ∇_µτ_ν = 0. If we contract ∇_ρ ĥ_µν with h^λµ h^κν we obtain zero, because of the fact that ∇_ρ h^λκ = 0. This means that the only relevant part of ∇_ρ ĥ_µν is obtained by contracting it with one v̂^µ (two would give zero). Since we have v̂^µ ∇_ρ ĥ_µν = −ĥ_µν ∇_ρ v̂^µ, we can reduce option two to taking covariant derivatives of v̂^µ and h^µν a_ν (note that v̂^µ a_µ = 0). Because of the identity (7.5), the extrinsic curvature can be viewed as the covariant derivative of v̂^µ. Options one and two thus amount to taking the vectors h^µν a_ν and v̂^µ, as well as products thereof, and forming scalar invariants by acting on these tensors with covariant derivatives and/or (products of) a_µ. We will now first classify these terms before discussing option three. We will classify all terms that are at most second order in time derivatives and that have no dilatation weight higher than z + 2 (which is the negative of the dilatation weight of e). In other words, we only consider relevant and marginal couplings. The only terms containing time derivatives are extrinsic curvature terms which, as we observed, are covariant derivatives of v̂^µ. In the previous section we observed that a_µ does not contain any time derivatives, see equations (8.15) and (8.16). We start by writing down all products of v̂^µ and h^µν a_ν that have dilatation weight at most z + 2, taking into consideration that we restrict our attention to the range 1 < z ≤ 2. The possibilities, with their dilatation weights, are

  v̂^µ (weight z), h^µν a_ν (weight 2), v̂^µ v̂^ν (weight 2z), v̂^µ h^νρ a_ρ (weight z + 2), h^µρ a_ρ h^νσ a_σ (weight 4). (9.2)

Terms with weight 4 are only relevant for the case z = 2. We now hit these terms with ∇_µ and a_µ in all possible ways to form scalars. This does not change the dilatation weights, because both ∇_µ and a_µ have weight zero. Keeping in mind that v̂^µ a_µ = 0, the first two terms in (9.2) give rise to the scalars

  ∇_µ v̂^µ (weight z), ∇_µ(h^µν a_ν) (weight 2), h^µν a_µ a_ν (weight 2). (9.3)

It follows that the first term in (9.3) is a total derivative and the second equals minus the third up to a total derivative. Nevertheless these quantities will be useful, as they can be multiplied with a Ricci-type curvature scalar, as we will see later. We now focus on the last three terms in (9.2). There are two free indices, so we can contract them with a_µ a_ν, a_µ ∇_ν and ∇_µ ∇_ν. Using two a_µ's only leads to one possibility, which is

  (h^µν a_µ a_ν)² (weight 4). (9.5)

Contracting the term v̂^µ v̂^ν with a_µ ∇_ν always gives zero, because we have a_µ v̂^µ ∇_ν v̂^ν = 0 and a_µ (∇_ν v̂^µ) v̂^ν = 0, where the last identity follows from (7.5). Doing the same with the term v̂^µ h^νρ a_ρ in the list (9.2) we obtain three allowed scalars. However, because of the identity (9.7), the last of these three terms brings nothing new. Finally, the last term in the list (9.2), when contracted with one a_µ and one ∇_ν, provides two more scalars, namely

  h^µρ a_µ a_ρ ∇_ν(h^νσ a_σ) (weight 4), h^µρ a_ρ a_ν ∇_µ(h^νσ a_σ) (weight 4). (9.8)

The second term however brings nothing new, because of the identity

  h^µρ a_ρ a_ν ∇_µ(h^νσ a_σ) = h^µρ a_µ a_ρ ∇_ν(h^νσ a_σ) + tot. der. (9.9)
Finally, we can contract the last three terms in (9.2) with two ∇_µ's, leading to the set of scalars (9.10). There is one other set of scalar terms containing two covariant derivatives, which follows by acting with a_µ □, where □ = h^ρσ ∇_ρ ∇_σ is a dimension 2 operator, on the first two terms appearing in the list (9.2). Both of these however give nothing new, as can be shown by partial integration and upon using the TTNC identity (6.10). We are left with the possibility to add scalar curvature terms. To this end we first introduce a Ricci-type scalar curvature R, which has dilatation weight 2. Using the scalars (9.3) we can thus build the list of scalar terms

  R ∇_µ v̂^µ (weight z + 2), R ∇_µ(h^µν a_ν) (weight 4), R h^µν a_µ a_ν (weight 4). (9.13)

The last term in (9.13) makes it possible to remove ∇_µ(h^µρ a_ρ) ∇_ν(h^νσ a_σ) from the list (9.10). This is due to the identity (9.14), which relates it, up to total derivatives, to R h^ρσ a_ρ a_σ and the other listed terms, where we used (2.30), (2.32), (7.3), (7.11) and partial integrations. In d = 2 spatial dimensions there are no curvature invariants other than R. The reason is that all curvature invariants built out of the tensor R_µνσ^ρ only involve the spatial Riemann tensor R_µν^ab(J). The tensor R_abcd = e^µ_a e^ν_b R_µνcd(J) has the same symmetry properties as the Riemann tensor of a d-dimensional Riemannian geometry. Hence, since here d = 2, the only component is the Ricci scalar R. Any other term involving the curvature tensor contracted with v̂^µ or h^µν a_ν can be written as a combination of terms we already classified, using (2.30) and other identities. We thus conclude that for d = 2 and 1 < z ≤ 2 the scalar terms that can appear in the action are those collected in (9.15), among them h^µν a_µ a_ν (weight 2) and R h^µν a_µ a_ν (weight 4). Consequently, we arrive at the action (9.16). The coefficients c_1 and c_2 have mass dimension z and the coefficients c_3 and c_4 have mass dimension 2 − z. All the others are dimensionless. The terms with coefficients c_3 and c_4 are the kinetic terms, as exhibited in (9.17). The terms with coefficients c_1, c_2 and c_10 to c_15 only involve spatial derivatives and belong to the potential term V. They agree with the potential terms in [37,31,35], taking into consideration that we are in 2+1 dimensions. The terms with coefficients c_5 to c_9 involve mixed time and space derivatives and are in particular odd under time reversal. Hence, in order not to break time reversal invariance, we will set these coefficients equal to zero. All other terms are time reversal and parity preserving. We thus obtain the action (9.18), where the potential V is given in (9.19) and includes, besides a cosmological constant Λ, the terms R, h^µν a_µ a_ν and their weight 4 combinations such as R h^µν a_µ a_ν. The kinetic terms in (9.18) display the λ parameter of [1,2]. The potential is exactly the same as the 3D version of the potential given in [37,31,35]. We will not impose that V obeys the detailed balance condition. In the ADM parametrization of section 8 the extrinsic curvature terms in (9.17) take the standard form, with

  K_ij = (1/2N) ( ∂_t γ_ij − ∇_i N_j − ∇_j N_i ) ,

where ∇_i is the covariant derivative that is metric compatible with respect to γ_ij.

Local Bargmann Invariance of the HL Action: Local U(1) vs Stückelberg Coupling

The action (9.18) is by construction invariant under local Galilean transformations, because it depends only on the invariants τ_µ and ĥ_µν. So far we did not consider the possibility of adding Φ̂. The action (9.18) is not invariant under the central extension of the Galilean algebra. We will now study what happens when we vary m_µ in (9.18) as δm_µ = ∂_µσ.
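Before studying this variation, it is useful to record a schematic form of (9.18)-(9.19). The expression below is only a sketch (the coefficient names and normalization are ours, and the weight-4 part of the potential is abbreviated to the terms classified above):

  S = ∫ dt d²x N√γ [ c_3 K_ij K^ij + c_4 K² − V ] ,
  V ⊃ 2Λ − c_1 R − c_2 a_i a^i − ( weight-4 terms built from R, a_i a^i and ∇_i a^i ) ,

with λ = −c_4/c_3 in the conventions of [1,2]; for projectable HL gravity (a_i = 0) the a-dependent potential terms are absent.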
We have that the connection (5.6) transforms under the central element N of the Bargmann algebra with terms built from ∂_µσ, while h^µν a_ν is gauge invariant. Using these results it can be shown that the whole potential V in (9.19) is gauge invariant. What is left is to transform the kinetic terms under N. Using (7.10), we find that the variation consists of two pieces. The first piece can be cancelled by adding Φ̂R to the action, using the transformation of Φ̂, in which the second equality expresses the result in terms of the ADM parameterization of section 8. In [33] a U(1) transformation with parameter α was introduced, together with two new fields A and ν transforming appropriately, with ν called the Newtonian prepotential [33]. We see that the α transformation is none other than the Bargmann extension (the σ transformation here), as follows from the identification of m_i in (8.13). More precisely, we have α = −σ. We thus see that the A and ν fields can be identified with Φ̂ and χ as follows: A = −NΦ̂ and ν = χ. The term ∫d³x e RΦ̂ is what in [33] is denoted by ∫d³x √γ A R. If we work in the context of projectable HL gravity, for which a_µ = 0, the action (9.18) with λ = 1 can be made U(1) invariant by writing (10.10). However, if we work with the non-projectable version, or with λ ≠ 1, we still need to add additional terms to make the theory U(1) invariant. To see this we use the Stückelberg scalar χ that we already mentioned below (4.7) (see also [23]). Using the field χ, which transforms as δχ = σ, we can construct a gauge invariant action (the invariance is up to a total derivative) for λ = 1, given in (10.11). The χ dependent terms agree with the result of [35] (eq. (3.8) of that paper; see footnote 11). We thus see that when there is torsion, a_µ ≠ 0, we need to introduce a Stückelberg scalar χ to make the action U(1) invariant, while when there is no torsion we can use (10.10). This nicely agrees with the comments made below (6.2). In [33] the χ field is denoted by ν. This means that we have the invariance δ_N m_µ = ∂_µσ and δ_N χ = σ. As a consequence, we may simply replace everywhere m_µ by M_µ = m_µ − ∂_µχ. This is consistent with the observations made in [34] (see in particular eq. (20) of said paper). Essentially, adding the χ field to the action means that we have trivialized the U(1) symmetry by Stückelberging it or, in other words, we have removed the U(1) transformations altogether (see the next section). Let us define K^χ_µν as (7.6) with m_µ replaced by M_µ. It can be shown that the resulting kinetic terms are now by construction manifestly U(1) invariant. Similarly, we can write a manifestly U(1) invariant Φ̂_χ, obtained by replacing m_µ by M_µ in Φ̂. Instead of (10.11) we then write (10.14). It can be checked that this is, up to total derivative terms, the same as (10.11). It is now straightforward to generalize this to arbitrary λ and to add, for example, the ΩΦ̂ coupling considered in [33], leading to (10.15). If we isolate the part of the action that depends on χ, we find precisely the same answer as in eq. (3.12) of [35], specialized to 2+1 dimensions (see footnote 12). As a final confirmation that TNC geometry is a natural framework for HL gravity, we will show in section 12 that the conformal HL gravity theories can be obtained by adding dilatations to the Bargmann algebra, i.e. by considering the Schrödinger algebra.

(Footnote 11: To ease comparison it is useful to note that in the notation of [35] one has the identity G^ijkl (∇_i ∇_j ϕ) a_k ∇_l ϕ + ½ a_j a_k ∇_i ϕ ∇_l ϕ + tot. der., where in the notation of [35] the field ϕ is what we call χ. We also note that the coefficients of these terms are dimension independent.)

(Footnote 12: This simply means that we can take, in the notation of [35], G_ij = 0.)
A Constraint Equation

What we have learned is that unless the χ field drops out of the action, as in (10.10) for the case of projectable HL gravity with λ = 1, we no longer have a non-trivial local U(1) invariance. This is because we can express everything in terms of M_µ, which is inert under the U(1). Essentially, the fact that we had to introduce a Stückelberg scalar tells us that the U(1) was not there in the first place. There are several statements in the literature expressing that one can remove a scalar degree of freedom from the theory by employing the U(1) invariance, but since we have just established that unless we are dealing with (10.10) there is no U(1), these statements are not clear to us. What we will show is that there is a different mechanism, via a constraint equation obtained by varying Φ̂_χ in (10.15), that essentially accomplishes the same effect as the claims made in the literature. Since Φ̂_χ is a field like any other, we should, in order to be fully general, allow for arbitrary couplings to Φ̂_χ that do not lead to terms of dimension higher than z + 2. Put another way, the most general HL action can be obtained by writing down the most general action depending on τ_µ, ĥ_µν and Φ̂_χ containing terms up to order (dilatation weight) z + 2. The first thing to notice is that we typically cannot write down a kinetic term for Φ̂_χ, because the dilatation weight of (v̂^µ ∂_µ Φ̂_χ)² is 6z − 4, which is larger than z + 2 whenever z > 6/5. The same is true for K v̂^µ ∂_µ Φ̂_χ, while a term like v̂^µ ∂_µ Φ̂_χ or, what is the same upon partial integration, K Φ̂_χ breaks time reversal invariance. Let us assume that we have a z value larger than 6/5, so that we cannot write a kinetic term. This means that Φ̂_χ will appear as a non-propagating scalar field. Let us enumerate the possible allowed couplings to Φ̂_χ. Starting with the kinetic terms, we can have schematically Φ̂^α_χ K², where by K² we mean both ways of contracting the product of two extrinsic curvatures. In order for this term to have a dimension less than or equal to z + 2 we need α ≤ (2 − z)/(2(z − 1)). It follows that for z > 4/3 we need α < 1. Consider next a term of the form Φ̂^β_χ X, where X is any term of dimension 2. The condition that the weight does not exceed z + 2 gives us β ≤ z/(2(z − 1)), which means that if z > 4/3 we need β < 2. Finally, we can have terms of the form Φ̂^γ_χ, where γ ≤ (2 + z)/(2(z − 1)), so that for z > 8/5 we need γ < 3. In particular, it is allowed for all values of 1 < z ≤ 2 to add a term of the form Φ̂²_χ. Since for z > 6/5 we are not allowed to add a kinetic term for Φ̂_χ, we can integrate it out. We demand that the resulting action after integrating out Φ̂_χ is local. This puts constraints on what α, β and γ can be, since they influence the solution for Φ̂_χ. We assume here that α, β and γ are non-negative integers. We will be interested in values of z close to z = 2, so we assume that z > 8/5. In that case we have the following allowed non-negative integer values: α = 0, β = 0, 1 and γ = 0, 1, 2. In other words, we can add the Φ̂_χ dependent terms collected in (11.1). There are now two cases of interest: either d_5 ≠ 0 or d_5 = 0. When d_5 ≠ 0 we can solve for Φ̂_χ and substitute the result back into the action.
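As a quick check of the weight counting used here (with the weights quoted earlier: v̂^µ has weight z and Φ̂_χ has weight 2(z − 1)):

  weight[ (v̂^µ ∂_µ Φ̂_χ)² ] = 2 ( z + 2(z − 1) ) = 6z − 4 ,
  6z − 4 ≤ z + 2   ⟺   z ≤ 6/5 ,

so for z > 6/5 this kinetic term exceeds the marginal weight z + 2 and is excluded; likewise Φ̂²_χ has weight 4(z − 1) ≤ z + 2 exactly for z ≤ 2, so it is allowed throughout the range considered.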
The resulting action will be of the same form as (9.18), where all the terms originating from solving for Φ̂_χ and substituting the result back into the action can be absorbed into the potential terms by renaming the coefficients in V. The other possibility, d_5 = 0, leads to a rather different situation. In that case the equation of motion of Φ̂_χ leads to the constraint equation (11.2). The remaining equations of motion for τ_µ etc. will depend on Φ̂_χ, because there is no local symmetry (in particular no U(1)) that allows us to gauge fix this field to zero. Since there is no kinetic term for Φ̂_χ, and hence its value will not be determined dynamically, we fix it by means of a Lagrange multiplier term. Recall that for any value of z in the range 1 < z ≤ 2 it is allowed by the effective action approach to add a term proportional to Φ̂²_χ. Consider now the action (11.3), where crucially now λ is a field, i.e. a Lagrange multiplier, so that its equation of motion tells us that Φ̂_χ = 0, and further the equation of motion of Φ̂_χ will lead to the constraint equation (11.2), which is a more general version of the constraint equation used in [33] and related works. Since Φ̂_χ = 0, the Φ̂_χ dependent terms do not affect the remaining equations of motion. This essentially accomplishes that Φ̂_χ is not present in the theory and that we have the constraint equation (11.2). More generally, we should think of Φ̂_χ as a background field whose value can be set equal to some fixed function f. This is accomplished by writing, instead of (11.3), an action in which the multiplier couples to Φ̂_χ − f. The λ equation of motion enforces the background value Φ̂_χ = f, the equation of motion of Φ̂_χ leads again to (11.2), while the remaining equations of motion involve terms depending on f through the variation of terms linear in f.

Conformal HL Gravity from the Schrödinger Algebra

In this section we will work with an arbitrary number of spatial dimensions d. In order to study conformal HL actions we add dilatations to the Bargmann algebra of section 4 and study the various conformal invariants that one can build. To this end we use the connection A_µ that takes values in the Schrödinger algebra (where for z = 2 we leave out for now the special conformal transformations, which will be introduced later):

  A_µ = H τ_µ + P_a e^a_µ + G_a ω_µ^a + ½ J_ab ω_µ^ab + N m̃_µ + D b_µ , (12.1)

where the new connection b_µ is called the dilatation connection. The reason that we renamed the connections in (12.1) as compared to (4.1) is that the dilatation generator D is not central, so that it modifies the transformations under local D transformations as compared to how, say, Ω_µ^a and ω_µ^ab would transform using (2.13), (2.16) and (5.1). The transformation properties and curvatures of the various fields follow from the Schrödinger algebra. We perform the same steps as before (see (2.3) and onwards): namely, we consider the adjoint transformation of A_µ, where we write (without loss of generality) Λ = ξ^µ A_µ + Σ, with now

  Σ = G_a λ^a + ½ J_ab λ^ab + N σ + D Λ_D , (12.5)

and we define δ̄A_µ as in (12.6), where F_µν is the curvature (12.7); we put tildes on the curvatures to distinguish them from those given in sections 2 and 4. From this we obtain, among other things, that the dilatation connection b_µ transforms as

  δ̄b_µ = L_ξ b_µ + ∂_µ Λ_D . (12.8)

(Footnote 13: Compared to e.g. [19,21] we have interchanged the field m_µ appearing in front of N in the Bargmann algebra and the field m̃_µ appearing in front of N in the Schrödinger algebra, see also footnote 9. The following discussion closely follows section 4 of [21].)
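For orientation, the z-dependent gradings implicit in the Schrödinger algebra can be summarized as follows; this is our reconstruction in one common convention, and the signs may differ from the paper's:

  [D, H] = −z H ,   [D, P_a] = −P_a ,   [D, G_a] = (z − 1) G_a ,   [D, N] = (z − 2) N ,

which is consistent with the statement below that m_µ (the gauge field of N) and χ carry dilatation weight z − 2, and with the fact that N is central only within the Bargmann subalgebra.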
We will use this b_µ connection to rewrite the covariant derivatives (2.12) and (2.13) in a manifestly dilatation covariant manner. As a note on our notation, we remark that, now that we have learned that we should work with M_µ = m_µ − ∂_µχ, we take it for granted that we have replaced everywhere m_µ by M_µ, and we from now on suppress χ labels as in (10.12) and (10.13). The Schrödinger algebra for general z tells us that the dilatation weights of the fields are as in table 1, while m_µ and χ (and thus M_µ) have dilatation weight z − 2. This also agrees with the weights assigned to A and ν in [33]. Coming back to the introduction of b_µ: to make expressions dilatation covariant we take Γ̄^ρ_µν of equation (5.3) and replace ordinary derivatives by dilatation covariant ones, leading to a new connection Γ̃^ρ_µν that is invariant under the G_a, J_ab, N and D transformations and which is given in (12.9) [21]. For the most part of this section we will work with Γ̄^ρ_µν and its dilatation covariant generalization Γ̃^ρ_µν. The final scalars out of which we will build the HL action, i.e. for dynamical TTNC geometries, are such that it does not matter whether we use Γ̄^ρ_µν or Γ̂^ρ_µν, which are related via (5.7). With the help of b_µ and Γ̃^ρ_µν we can now rewrite the covariant derivatives (2.12) and (2.13) as in (12.10) and (12.11). The ω_µ^a and ω_µ^ab connections are such that they can be written in terms of Ω_µ^a and Ω_µ^ab together with b_µ dependent terms, such that all the b_µ terms drop out on the right hand side of (12.10) and (12.11) when expressed in terms of the connections Γ^ρ_µν, Ω_µ^a and Ω_µ^ab. The field M_µ = m_µ − ∂_µχ can be expressed in terms of the Schrödinger connection m̃_µ as follows. According to (12.2) and (12.6) the Schrödinger connection m̃_µ transforms as in (12.12). The Stückelberg scalar χ transforms as in (12.13). A Schrödinger covariant derivative D_µχ is given by (12.14). Defining M_µ = −D_µχ = m_µ − ∂_µχ, we see that M_µ transforms as in (12.15) and that

  m_µ = m̃_µ + (2 − z) b_µ χ . (12.16)

Hence the dilatation covariant derivative of M_µ reads as in (12.17). The torsion Γ̃^ρ_[µν] has to be a G, J, N and D invariant tensor. With our TTNC field content the only option is to take it zero, i.e. Γ̃^ρ_µν becomes torsionless [21]. This means that, different from the relativistic case, the b_µ connection is not entirely independent, but instead takes the form (12.18). Let X^ρ be a tensor with dilatation weight w. A dilatation covariant derivative is then given by (12.20), where ∇̃_ν is covariant with respect to Γ̃^ρ_νµ as given in (12.9). Let us compute the commutator of two such derivatives. The introduction of the b_µ connection has led to a new component v̂^µ b_µ, as visible in (12.18). We can introduce a special conformal transformation (denoted by K) that allows us to remove this component. Hence we assign a new transformation rule to b_µ, given in (12.23). Under special conformal transformations we have (12.24). In order that (∇_µ + w b_µ)(∇_ν + w b_ν) X^ρ transforms covariantly, we define the K-covariant derivative, where f_µ is a connection for local K transformations that transforms as in (12.26) [21]. In order not to introduce yet another independent field f_µ (recall that we are trying to remove v̂^µ b_µ), we demand that f_µ is a completely dependent connection that transforms as in (12.26). This is in part realized by setting the curvature of the dilatation connection b_µ equal to zero, i.e. by imposing Ř_µν(D) = 0. This fixes all but the v̂^µ f_µ component of f_µ. This latter component will be fixed later by equation (12.42).
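A minimal sketch of the dilatation covariant derivative just referred to, in the paper's own pattern (our rendering; the relative sign of the w b_µ term depends on the weight convention):

  δ_D X^ρ = w Λ_D X^ρ   ⟹   D̃_µ X^ρ = ( ∇̃_µ + w b_µ ) X^ρ ,

so that the inhomogeneous piece ∂_µ Λ_D in δ̄b_µ compensates the derivative acting on the weight factor, and D̃_µ X^ρ again transforms with weight w.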
The notation is such that a tilde refers to a curvature of the δ̄ transformation (12.6) without the K transformation, while a curvature with a check sign refers to a curvature that is covariant under the δ̄ transformations including the K transformation. We note that for the Schrödinger algebra, i.e. with the δ transformations (12.3), we can only add special conformal transformations when z = 2, while for the (different) group of transformations transforming under δ̄ we can define K transformations for any z [21]. Taking the commutator of (12.25) we find an expression in which the tensor Ř_µνλ^ρ appears. Under K transformations the curvature tensor Ř_µνλ^ρ transforms with a prescribed shift. Besides this property, the tensor Ř_µνλ^ρ is by construction invariant under D, G, N and J transformations. Using the vielbein postulates (12.10) and (12.11) we can write the curvature in terms of the connections; with this result we can derive

  Ř_µνσ^ρ = −e^ρd e^c_σ R̃_µνcd(J) + e^ρc τ_σ Ř_µνc(G) , (12.32)

where R̃_µνcd(J) and Ř_µνc(G) are given in (12.33). We next present some basic properties of Ř_µνσ^ρ, the first of which is a trace relation. We now turn to the question what v̂^σ v̂^ν Ř_σν should be equal to. Following [21], we will take this to be equal to the expression in (12.42), because the right hand side has the exact same transformation properties under all local symmetries as v̂^σ v̂^ν Ř_σν. The combination of Ř_µν(D) = 0 together with (12.42) fixes f_µ entirely in terms of τ_µ, e^a_µ, m_µ and χ, in such a way that it transforms as in (12.26). Given the expression in which K_µν is the extrinsic curvature, we see that the difference so defined is invariant under the K transformation, because the term v̂^µ b_µ cancels out of it. Another scalar quantity of interest is

  h^µν Ř_µν = −R^ab_ab(J) , (12.45)

which is K invariant and has dilatation weight 2. With these ingredients we can build a z = d conformally invariant Lagrangian. This is an example of a Lagrangian for non-projectable HL gravity that is conformally invariant. The quantity h^µν Ř_µν can be expressed in terms of R and the torsion vector a_µ defined in sections 6 and 7 as follows. Solving (12.11) for ω_µ^ab and using the relation between Γ̄^ρ_µν and Γ̃^ρ_µν, we obtain an expression in which we used that ω_µ^ab and Ω_µ^ab are related, as follows from the vielbein postulates (2.13), (2.16), and where we furthermore used that for TTNC Ω̄_µ^ab = Ω̂_µ^ab, as follows from (5.7) and the TTNC relation (6.10). In the relation Ω̄_µ^ab = Ω̂_µ^ab the connection Ω̄_µ^ab is found by employing the vielbein postulate expressed in terms of Γ̄^ρ_µν, and likewise Ω̂_µ^ab is obtained by using the vielbein postulate written in terms of Γ̂^ρ_µν. Then, using (12.33) and (12.48), we find the desired expression, where we used (12.18) and R^cd_cd(J) = R, which is merely a definition of R. By fully employing the local Schrödinger algebra we arrive at the conformally invariant z = d action (12.50) [1,33]. For z = d the dilatation weight of Φ̂ is given by 2(d − 1), so that the terms in (12.51) can be added to the action in a conformally invariant manner. Assuming b ≠ 0 we can integrate out Φ̂, which leads to the action (12.50) with a different constant B. The case with b = 0 can be viewed as a constrained system, as discussed in section 11. The integrand of (12.50) has been obtained in Lifshitz holography and field theory using different techniques and found to describe the Lifshitz scale anomaly [47,48,4,18,49], where A and B play the role of two central charges. In [18] it was shown that for d = z = 2 the integrand of (12.50), together with (12.51) for specific values of a and b, arises from the (Scherk-Schwarz) null reduction of the AdS₅ conformal anomaly of gravity coupled to an axion.
Discussion

We have shown that the dynamics of TTNC geometries, for which there is a hypersurface orthogonal foliation of constant time hypersurfaces, is precisely given by non-projectable Hořava-Lifshitz gravity. The projectable case corresponds to dynamical NC geometries without torsion. One can build a precise dictionary between properties of TNC and HL gravities, which we give in table 2:

Table 2. Dictionary between TNC gravity and HL gravity.
  twistless torsion: h^µρ h^νσ (∂_µτ_ν − ∂_ντ_µ) = 0  |  non-projectable
  no torsion: ∂_µτ_ν − ∂_ντ_µ = 0  |  projectable
  τ_µ = ψ ∂_µ τ  |  scalar khronon ϕ in u_µ [31]
  τ invariant under Galilean tangent space group  |  foliation breaks local Lorentz invariance
  torsion vector a_µ  |  acceleration a_µ [31]
  TNC invariant −τ_µ τ_ν + ĥ_µν  |  metric with Lorentz signature g_µν
  unconstrained τ_µ (TNC)  |  vector khronon extension

We conclude with some general comments about interesting future research directions. TNC geometries have appeared so far as fixed background geometries for non-relativistic field theories and hydrodynamics [25,26,50,51,29,24,52,23], as well as in holographic setups based on Lifshitz bulk space-times [17,18,19,21,23]. In all these cases the TNC geometry is treated as non-dynamical. This is a valid perspective provided the backreaction onto the geometry can be considered small: e.g. a small amount of energy or mass density should not lead to pathological behavior of the geometry when allowing it to backreact. This renders the question of the consistency of HL gravity in the limit of small fluctuations around flat space-time of crucial importance for applications of TNC geometry to the realm of non-relativistic physics. In this light we wish to point out that (in the absence of a cosmological constant) the ground state is not Minkowski space-time but flat NC space-time, which has different symmetries than Minkowski space-time, as worked out in detail in [23]. It would be interesting to work out the properties of perturbations of TTNC gravity around flat NC space-time. In particular, we have shown that generically there is no local U(1) symmetry in the problem, but rather that one can either integrate out Φ̂_χ without modifying the effective action in an essential way, or in such a way that it imposes a non-trivial constraint on the spatial part of the geometry. It would also be interesting to study the theory from a Hamiltonian perspective, derive the first and second class constraints, and compare the resulting counting of degrees of freedom with the linearized analysis. Since it is well understood how to couple matter to TNC geometries, the question of how to couple matter to HL gravity can be readily addressed in this framework. For example, it would be interesting to find Bianchi identities for the TTNC curvature tensor (as studied in section 7) in such a way that they are compatible with the on-shell diffeomorphism Ward identity for the energy-momentum tensor as defined in [19,24,23]. We emphasize once more that matter systems coupled to TNC geometries can have, but do not necessarily need to have, a particle number symmetry [19,23]. It would be important to study what the fate of particle number symmetry is once we make the geometry dynamical. In the matter sector, particle number symmetry comes about as a gauge transformation acting on M_µ in such a manner that the Stückelberg scalar χ can be removed from the matter action [19,23], making this formulation consistent with [29]. We have seen in section 10 that generically the χ field cannot be removed from the actions describing the dynamics of the TNC geometry.
Hence, it seems that the dynamics of the geometry breaks particle number symmetry, except when we use the model (10.10) for projectable HL gravity with λ = 1, in which case the central extension of the Bargmann algebra is a true local U(1) symmetry and the χ field does not appear in the HL action. Another interesting extension of this work is to consider the case of unconstrained torsion, i.e. TNC gravity, in which case τ_µ is no longer restricted to be hypersurface orthogonal. In table 2 we refer to this as the vector khronon extension in the last row. The main difference with TTNC geometry is that now the geometry orthogonal to τ_µ is no longer torsion free Riemannian geometry but becomes torsionful. This extra torsion is described by an object which we call the twist tensor (see e.g. [21]), denoted by T_µν and defined as

  T_µν = ½ (δ^ρ_µ + τ_µ v̂^ρ)(δ^σ_ν + τ_ν v̂^σ)(∂_ρ τ_σ − ∂_σ τ_ρ) . (13.1)

Therefore, apart from the fact that now the τ_µ appearing in the actions of sections 9-12 is no longer of the form ψ ∂_µ τ but completely free, we can also add additional terms containing the twist tensor T_µν. Another such tensor is T_(a)µν (see again [21], where it was denoted by T_(b)µν), which is defined as

  T_(a)µν = ½ (δ^ρ_µ + τ_µ v̂^ρ)(δ^σ_ν + τ_ν v̂^σ)(∂_ρ a_σ − ∂_σ a_ρ) . (13.2)

Hence we can add, for example, a term such as h^µρ h^νσ T_µν T_ρσ, which has weight 4 − 2z, so that it is relevant for z > 1. In fact, for z = 2 this term has weight zero, and so one can add an arbitrary function of the twist tensor squared. In the IR, however, the two-derivative term dominates. Another aspect that would be worthwhile examining using our results is whether one could learn more about non-relativistic field theories at finite temperature using holography for HL gravity [47,53,3,5,4,54]. Independently of whether HL gravity is UV complete, assuming it makes sense as a classical theory it may be a useful tool to compute properties such as correlation functions of the (non-relativistic) boundary field theory. In particular, this implies that there must exist bulk gravity duals to thermal states of the field theory, i.e. classical solutions of HL gravity that resemble black holes as we know them in general relativity. In light of this it would be interesting to re-examine the status of black hole solutions in HL gravity (see e.g. [55,56,57]). Moreover, it is expected that in a long-wavelength regime some version of the fluid/gravity correspondence should exist, enabling the computation of, for example, transport coefficients in finite temperature non-relativistic field theories on flat (or more generally curved) NC backgrounds. TNC geometry also appears in the context of WCFTs [58], as the geometry to which these SL(2) × U(1) invariant CFTs couple. This was called warped geometry and corresponds to TNC geometry in 1+1 dimensions with z = ∞ (or z = 0 if one interchanges the two coordinates). In that case there is no spatial curvature, so the entire dynamics is governed by torsion. It would be interesting to write down the map to the formulation in [58] and furthermore explicitly write the HL actions for that case. It would also be interesting to explore the relation of TNC gravity to Einstein-aether theory. It was shown in [36] that any solution of Einstein-aether theory with hypersurface orthogonal τ_µ is a solution of the IR limit of non-projectable HL gravity. It would thus be natural to expect that any solution of Einstein-aether theory with unconstrained τ_µ is a solution to the IR limit of TNC gravity.
In view of the relation [59,60] between causal dynamical triangulations (CDT) and HL quantum gravity, both involving a global time foliation, there may also be useful applications of TNC geometry in the context of CDT [61]. Finally, since HL gravity is connected to the mathematics of Ricci flow (see e.g. [62]), examining this from the TNC perspective presented in this paper could lead to novel insights.

The gauging procedure replaces local space-time translations by diffeomorphisms. This can be achieved as follows. We define a new set of local transformations that we denote by δ̄. The main step is to replace the parameters in Λ corresponding to local space-time translations, i.e. ζ^a, by a space-time vector ξ^µ via ζ^a = ξ^µ e^a_µ. This can be achieved by writing Λ as

Λ = ξ^µ A_µ + Σ , (A.6)

where

Σ = (1/2) σ^{ab} M_{ab} , (A.7)

with σ^{ab} = ξ^µ ω_µ^{ab} + λ^{ab}. Next we define δ̄A_µ as

δ̄A_µ = δA_µ − ξ^ν F_µν = £_ξ A_µ + ∂_µ Σ + [A_µ , Σ] ,

where the second equality is an identity and where F_µν is the curvature

F_µν = ∂_µ A_ν − ∂_ν A_µ + [A_µ , A_ν] .

Under the δ̄ transformations, the connection e^a_µ associated with the momenta P_a transforms as a vielbein, while the connection ω_µ^{ab} associated with the Lorentz generators M_{ab} becomes the spin connection. In order to define a covariant derivative on the space-time we first introduce a covariant derivative D_µ that is covariant under the local Lorentz (tangent space) transformations. We will now relate the properties of the curvatures R_µν^a(P) and R_µν^{ab}(M) to those of Γ^ρ_µν. This goes via the vielbein postulate, which reads

D_µ e^a_ν = ∂_µ e^a_ν − Γ^ρ_µν e^a_ρ − ω_µ^a_b e^b_ν = 0 , (A.14)

relating Γ^ρ_µν to ω_µ^{ab}. Taking the antisymmetric part of the vielbein postulate and moving Γ^ρ_[µν] to the other side we obtain

2Γ^ρ_[µν] e^a_ρ = ∂_µ e^a_ν − ∂_ν e^a_µ − ω_µ^a_b e^b_ν + ω_ν^a_b e^b_µ = R_µν^a(P) .

From this we conclude that the curvature R_µν^a(P) is the torsion tensor. To identify the other curvature tensor R_µν^{ab}(M) we compute the commutator of two covariant derivatives ∇_µ (containing only the connection Γ^ρ_µν), leading to (A.16), where R_µνρ^σ is the Riemann curvature tensor that is related to R_µν^{ab}(M) (as follows from the vielbein postulate) via

R_µνρ^σ = −e_{ρa} e^σ_b R_µν^{ab}(M) , (A.18)

so that R_µν^{ab}(M) is the Riemann curvature 2-form. The vielbein postulate, because of the fact that ω_µ^{ab} is antisymmetric in a and b, also tells us that the metric g_µν = η_{ab} e^a_µ e^b_ν, which is the unique Lorentz invariant tensor we can build out of the vielbeins, is covariantly constant, i.e.

∇_ρ g_µν = 0 . (A.19)

As is well known, this completely fixes the symmetric part of the connection, making it equal to the Levi-Cività connection plus torsion terms, which are left unfixed. The common choice in GR to work with torsion-free connections then implies that, from the gauging perspective, one imposes the curvature constraint R_µν^a(P) = 0. This in turn makes ω_µ^{ab} a fully dependent connection, expressible in terms of the vielbeins and their derivatives. Without fixing the torsion, e^a_µ and ω_µ^{ab} remain independent.
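As a concrete sanity check of the last two paragraphs, the sketch below builds Γ^ρ_µν from the vielbein postulate for an arbitrary antisymmetric spin connection and verifies ∇_ρ g_µν = 0 symbolically in two dimensions; the zweibein and connection components are hypothetical choices, not taken from the paper.

```python
# Illustrative check: with omega_{mu ab} antisymmetric, the Gamma defined by
# the vielbein postulate makes g_{mu nu} = eta_ab e^a_mu e^b_nu covariantly
# constant, whatever e and omega are.
import sympy as sp

t, x = sp.symbols('t x')
coords = (t, x)
eta = sp.diag(-1, 1)
eps = sp.Matrix([[0, 1], [-1, 0]])          # antisymmetric seed in 2d

e = sp.Matrix([[sp.exp(x), 0],
               [sp.sin(t), sp.cosh(x)]])    # e[a, mu], invertible
einv = e.inv()                               # einv[mu, a] = e^mu_a
lam = [sp.Function(f'l{m}')(t, x) for m in range(2)]
# omega_mu^a_b = eta^{ac} omega_{mu cb}, omega_{mu ab} = lam_mu * eps_ab
omega = [eta.inv() * (lam[m] * eps) for m in range(2)]

g = e.T * eta * e                            # g_{mu nu}

def Gamma(r, m, n):  # Gamma^rho_{mu nu} solving D_mu e^a_nu = 0
    return sum(einv[r, a] * (sp.diff(e[a, n], coords[m])
               - sum(omega[m][a, b] * e[b, n] for b in range(2)))
               for a in range(2))

for r in range(2):
    for m in range(2):
        for n in range(2):
            cov = sp.diff(g[m, n], coords[r]) \
                - sum(Gamma(s, r, m) * g[s, n] for s in range(2)) \
                - sum(Gamma(s, r, n) * g[m, s] for s in range(2))
            assert sp.simplify(sp.expand(cov)) == 0
print("nabla g = 0 for any antisymmetric spin connection")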
Histone 2B (H2B) Expression Is Confined to a Proper NAD+/NADH Redox Status

S-phase transcription of the histone 2B (H2B) gene is dependent on Octamer-binding factor 1 (Oct-1) and Oct-1 Co-Activator in S-phase (OCA-S), a protein complex comprising glyceraldehyde-3-phosphate dehydrogenase and lactate dehydrogenase (p38/GAPDH and p36/LDH) along with other components. H2B transcription in vitro is modulated by NAD(H), potentially linking the cellular redox status to histone expression. Here, we show that H2B transcription requires a proper NAD+/NADH redox status in vitro and in vivo; perturbing a properly balanced redox status therefore impairs H2B transcription. A redox-modulated direct p38/GAPDH-Oct-1 interaction nucleates the occupancy of the H2B promoter by the OCA-S complex, in which p36/LDH plays a critical role in the hierarchical organization of the complex. Like p38/GAPDH, p36/LDH is essential for the OCA-S function in vivo, and OCA-S-associated p36/LDH possesses an LDH enzyme activity that impacts H2B transcription. These studies suggest that the cellular redox status (metabolic states) can directly feed back to gene switching in higher eukaryotes, as is commonly observed in prokaryotes.

The transcription of the histone 2B (H2B) gene depends on Octamer-binding factor 1 (Oct-1) and Oct-1 Co-Activator in S-phase (OCA-S), a co-activator complex comprising the glycolytic enzymes p38/GAPDH and p36/LDH among other subunits (4, 5, 8-11). Oct-1 binds the essential octamer site in the H2B promoter throughout the interphase; however, OCA-S occupies the H2B promoter only in the S-phase (10). A direct Oct-1-p38/GAPDH interaction and H2B transcription are modulated by NAD(H) in vitro (10). This implicates a redox-modulated H2B expression in vivo and potentially links the redox status (metabolic states) of the cell to gene switching. While quite common in prokaryotes, such links have rarely been reported in higher eukaryotes (12). Here, we provide evidence that a proper NAD+/NADH redox status (ratio) is important for optimal H2B expression both in vitro and in vivo. The direct Oct-1-p38/GAPDH interaction plays a role nucleating the H2B promoter occupancy by OCA-S, which dictates the redox-modulated H2B transcription. p36/LDH is also essential for H2B expression in vivo and plays a critical role in the hierarchical organization of OCA-S. Finally, the OCA-S-associated p36/LDH possesses an intrinsic catalytic activity that exerts an impact upon H2B transcription. Our studies suggest that the nuclear moonlighting transcription functions of metabolic enzymes can be modulated by the cellular redox status, which directly links gene switching to the cellular metabolic states in higher eukaryotes.

An H2B promoter-luciferase (firefly) construct (2) served as the reporter. The in vitro transcription templates and procedures were as described (14).

Real-time PCR Assay—SYBR Green Core Reagents (Applied Biosystems) were used in the real-time PCR assays to quantify histone mRNA levels normalized to β-actin mRNA levels.
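The paper normalizes histone mRNA signals to β-actin but does not spell out the arithmetic; a minimal sketch of the widely used 2^(−ΔΔCt) (Livak) normalization, with hypothetical Ct values, is:

```python
# Sketch of relative mRNA quantification; the ddCt model and all numbers
# are illustrative assumptions, not taken from the paper.
def relative_expression(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
    """Fold change of target (e.g. H2B) in a sample vs. a control,
    each normalized to a reference gene (e.g. beta-actin)."""
    d_ct_sample  = ct_target_s - ct_ref_s
    d_ct_control = ct_target_c - ct_ref_c
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical cycle-threshold values: treated cells vs. untreated control
print(relative_expression(24.1, 17.9, 22.6, 18.0))  # ~0.33 => H2B down
```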
B Cell Nuclear Extract (NE)-based Transcription Mixture—A B cell NE (in buffer BC-100, 100 mM KCl) was passed through a P11 phosphocellulose column equilibrated in BC-100, and the column was washed with BC-100. General transcription factor IIA, purified from the flow-through fraction, was reconstituted with the BC-300, BC-500, and BC-1,000 (mM KCl) fractions (10). This procedure removed the pre-existing redox components, resulting in the B cell NE-based transcription mixture containing an intact set of transcription factors that support IgH and H2B transcription, and allowed the modulation of H2B transcription by (low level) exogenous NAD(H) in vitro (Ref. 10 and "Results").

Immunodepletion—Equilibrated in BC-500, the B cell NE-based transcription mixture was loaded onto a protein A-Sepharose column (mock) or a column coupled with the anti-p38/GAPDH, anti-p36/LDH, or anti-p60/Sti1 antibodies, equilibrated also in BC-500. After re-loading five times, the flow-through was dialyzed to BC-100 prior to assays.

Immobilized H2B Promoter Pull-down Assays—A DNA fragment corresponding to −90 to −17 of the H2B promoter (15) was biotin-labeled and immobilized to streptavidin-agarose beads (100 ng of DNA per 25 µl of beads). The beads were made into 50% slurries. Then, 10 µl of the B cell NE-based transcription mixture, 2 µg of poly(dI-dC)-poly(dI-dC), and NAD(H), when appropriate, were mixed in 25 µl and incubated with the immobilized DNA slurries. The beads were then extensively washed, and the bound proteins were further analyzed by immunoblot and enzyme assays.

GAPDH and LDH Enzyme Assays—1 µl of the B cell NE-based transcription mixture, or 1 milliunit of GAPDH or LDH, was mixed with streptavidin-agarose beads (25 µl in 50% slurries), as was the H2B promoter-bound OCA-S in the beads (see above). For the LDH enzyme reaction, slurries were mixed with 25 µl of transcription buffer with 2 mM each of NAD+, NADH, and sodium pyruvate; for the GAPDH enzyme reaction, with 2 mM each of NAD+, NADH, glyceraldehyde 3-phosphate and KH2AsO4. The generated or consumed NADH levels were recorded on a fluorometer (340 nm excitation, 460 nm emission).

Evaluations of Free Cellular NAD+/NADH Ratios (Redox Status) and Total NAD+, NADH, or ATP Levels—Free NAD+/NADH ratios were scored on the premise that the coenzymes are in equilibrium with real-time pyruvate and lactate levels (16-17). Thus, cells were lysed with 1 M perchloric acid to stop all metabolism, and the lysate was neutralized for enzymatic assays. A reaction for determining the pyruvate level contained 100 mM imidazole (pH 7.0), 40 µM NADH, 4 units/ml LDH, and a lysate titration. A reaction for determining the lactate level contained a lysate titration, 450 mM glycine (pH 9.5), 200 mM hydrazine sulfate, 2.5 mM EDTA, 120 µM NAD+, and 8 units/ml LDH. All reactions were carried out at 25°C for 30 min. All generated or consumed NADH levels were recorded on a fluorometer (340-nm excitation, 460-nm emission). The total cellular NAD+ or NADH levels were determined by established assays (18). Cellular ATP levels were scored by the ATPlite™ Luminescence Assay System from PerkinElmer.

BrdU-FACS Analyses—Cells grown in 6-well plates were treated with BrdU (10 µM) for 45 min before harvesting, fixed with 70% cold ethanol, treated with 3 N HCl, washed, and incubated with the mouse anti-BrdU monoclonal antibody for 30 min. After washing, cells were incubated with the Alexa Fluor 488 anti-mouse IgG (Invitrogen) for 30 min. Finally, cells were treated with propidium iodide and RNase A for 30 min and subjected to FACS analyses.
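The free-ratio evaluation described above can be summarized in a few lines. The sketch below assumes the classical LDH equilibrium constant of Williamson and co-workers (≈1.11 × 10^−11 M; the constant is not stated in the paper) and uses hypothetical metabolite readings:

```python
# Free cytosolic NAD+/NADH from pyruvate and lactate via the LDH
# equilibrium; constant and inputs are assumptions for illustration.
K_LDH = 1.11e-11      # [pyr][NADH][H+] / ([lac][NAD+]), in M
H_PLUS = 1e-7         # mol/L at pH 7.0

def free_nad_ratio(pyruvate, lactate, h_plus=H_PLUS, k=K_LDH):
    """Free [NAD+]/[NADH] from measured pyruvate and lactate (same units)."""
    return (pyruvate / lactate) * h_plus / k

print(free_nad_ratio(pyruvate=0.08, lactate=1.0))  # ~720 for a 1:12.5 ratio
```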
Luciferase Assays in Xenopus Oocytes and HeLa Cells—Increasing NAD+, p38/GAPDH (or bovine serum albumin), and an H2B promoter-luciferase (firefly) reporter (2) were co-injected into oocyte nuclei (10 nl per nucleus, 1.4 ng of protein). At 40 h, the oocytes were lysed to measure the luciferase activities. To assess the effects of redox perturbations on the H2B promoter in vivo, HeLa cells were transfected with the H2B promoter-luciferase reporter, treated with NaN3 overnight, and then harvested to measure the luciferase activities. In both assays, the internal reference was a Renilla luciferase gene driven by the SV40 promoter. The reference activities were largely constant, which further supports the H2B promoter specificity of the redox effects (see "Results").

RESULTS

Biphasic Response of H2B Transcription to Varied NAD+/NADH Redox Status—It was earlier shown that in an NE-based transcription system in which the pre-existing redox components were removed, the H2B transcription was stimulated by exogenous NAD+ and inhibited by NADH (tested up to 0.4 mM; Ref. 10). The current study used the B cell NE-based transcription mixture, in which the pre-existing redox components were also removed chromatographically ("Experimental Procedures"). We tested a broader NAD+ titration and observed a biphasic response: the H2B transcription was stimulated by NAD+ up to 1 mM but inhibited by NAD+ at higher dosages (Fig. 1A). NADH was always inhibitory (Fig. 1B). Mixing 1 mM NAD+ with additional NAD+ or with NADH repressed the NAD+-stimulated H2B transcription (Fig. 1C). In vivo, (free) NAD+/NADH ratios define the redox status, prompting us to study whether a range of NAD+/NADH ratios may modulate H2B transcription in vitro. Increasing NAD+ on top of 0.4 mM NADH stimulated H2B transcription (up to a 4:1 ratio; Fig. 1D, lane 8), but higher ratios had opposite effects. At a constant dosage of NAD(H) (2 mM), in vitro H2B transcription was enhanced by increasing the ratios up to 1:1 (Fig. 1E, lane 5), and higher ratios had opposite effects. OCA-S abets the role of an octamer-bound Oct-1 to regulate H2B transcription (10); thus, the transcriptional output should correlate with the H2B promoter occupancy by OCA-S (through a direct Oct-1-p38/GAPDH interaction; see Fig. 2). With the same set of NAD+/NADH ratios used for in vitro transcription (Fig. 1E), we tested the H2B promoter occupancy by the OCA-S complex in an immobilized H2B promoter pull-down assay. As seen (Fig. 1F), while the recruitment of Oct-1 to the H2B promoter was largely constant, that of the OCA-S components exhibited a biphasic pattern. This is in line with the biphasic H2B transcription pattern (Fig. 1E). Hence, the NAD+/NADH ratio (redox status) may modulate the H2B transcription via regulating the OCA-S recruitment to the H2B promoter through the octamer-bound Oct-1. p38/GAPDH, microinjected into Xenopus oocytes, was shown to activate an ectopic H2B gene (19). This offers a more physiological assay to examine an NAD+ response for Oct-1-dependent H2B promoter co-activation. We microinjected an H2B promoter-luciferase reporter, p38/GAPDH, and NAD+ into the oocytes, and recapitulated a biphasic luciferase reporter expression in response to increasing NAD+ (Fig. 1G). The results in Fig. 1 prompted us to conclude that optimal H2B transcription in vitro requires a proper NAD+/NADH ratio (redox status) and that maximal H2B transcription in Xenopus oocytes is ensured by a proper NAD+ level.
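The biphasic titration invites a simple quantitative description. As an illustration only (the functional form, parameter names and numbers below are our assumptions, not the paper's analysis), a product of an activating and an inhibitory hyperbola can be fit to data of this shape:

```python
# Toy bell-shaped dose-response of the kind seen in Fig. 1A; mock data.
import numpy as np
from scipy.optimize import curve_fit

def biphasic(nad, top, ka, ki):
    # activation saturating with Ka, multiplied by inhibition with Ki
    return top * (nad / (nad + ka)) * (ki / (nad + ki))

nad_mM = np.array([0.01, 0.05, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
signal = np.array([0.08, 0.30, 0.62, 0.85, 1.00, 0.80, 0.45, 0.25])
popt, _ = curve_fit(biphasic, nad_mM, signal, p0=(2.0, 0.2, 3.0))
print(dict(zip(("top", "Ka_mM", "Ki_mM"), popt.round(2))))
```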
A Nucleating Role of p38/GAPDH for the OCA-S Function—The role(s) of NAD(H) in H2B transcription in vitro is consistent with the co-existing p38/GAPDH and p36/LDH in OCA-S (Ref. 10; Fig. 1). p38/GAPDH directly interacts with Oct-1 (10) and may be central in regulating H2B transcription by sensing the redox changes that modulate an Oct-1-OCA-S interaction (Fig. 1, also see below). First, we immunodepleted p38/GAPDH from the B cell NE-based transcription mixture under 500 mM KCl high salt conditions, which allowed for depletion of one OCA-S component while keeping others intact in the depleted samples (Ref. 10 and Fig. 2A; also see below). Then we carried out a subsequent immobilized H2B promoter pull-down assay (Fig. 2B).

Figure 1 (legend, panels B-G): B, NADH-inhibited H2B transcription. C, H2B transcription stimulated by NAD+ (1 mM) was reduced by NADH or additional NAD+. D, H2B transcription in an NAD+ titration on top of 0.4 mM NADH. E and F, H2B transcription and H2B promoter occupancy by OCA-S (an immobilized H2B promoter pull-down assay) responded in a biphasic manner to an NADH/NAD+ ratio titration. In F, the bound proteins were examined by immunoblot with IgGs against Oct-1 or the OCA-S components with available antibodies: p38/GAPDH, p36/LDH, p60/Sti1 (stress inducible protein 1), p20/nm23-H1 (non-metastatic protein 23 in human, 1), and p18/nm23-H2 (non-metastatic protein 23 in human, 2). The three panels below the immunoblot images show the largely constant recruitment of Oct-1 and the varied recruitment of p38/GAPDH and p36/LDH representing the OCA-S complex, as quantified by densitometry of relevant signals. G, H2B promoter activity in oocytes. The H2B promoter-luciferase reporter gene was microinjected into oocytes with p38/GAPDH (+) or BSA (−), without (−) or with NAD+ (0.04, 0.2, 1, 5, 25 mM).

The binding of Oct-1 to the H2B promoter was constant (all lanes), and that of the OCA-S components was inhibited by NADH and stimulated by NAD+ (1 mM) in the control (lanes 1-3). In the p38/GAPDH-depleted B cell NE-based transcription mixture, however, the binding of other components was impeded (lanes 4-6). In line with the above promoter recruitment patterns, H2B transcription in vitro was similarly modulated by exogenous NAD(H) at the 1 mM level (Fig. 2C, top panel). Given that an octamer mutant (H2B Mut) template was not redox-sensitive (Fig. 2C, bottom panel), the NAD(H) redox effects did not target the basal H2B transcription. In addition, NAD(H) were not converted to other metabolites nor interconverted in the in vitro reactions (Fig. 2D). This suggests primary or direct, and NAD(H)-specific, redox effects. The above results suggest that, by sensing the NAD+/NADH redox, p38/GAPDH nucleates an Oct-1-OCA-S interaction as a key regulatory step for the H2B transcription in vitro. The redox effect is on an Oct-1-p38/GAPDH interaction, but not on an Oct-1-octamer interaction or the H2B basal transcription machinery. In living cells, the Oct-1-p38/GAPDH interaction is likely subject to direct redox-modulation as well, which determines the H2B transcription output in vivo (see below).

Essential and Redox-modulator Roles of p36/LDH for H2B Transcription—p38/GAPDH (10), p36/LDH (Fig. 3A) and other tested OCA-S components (not shown) are essential for H2B transcription in living cells. Given a cytoplasmic interaction between GAPDH and LDH (20), a similar p38/GAPDH-p36/LDH interaction within OCA-S may help tether an intact OCA-S complex to the H2B promoter. This might help explain the essential role of p36/LDH for H2B expression in vivo. To support the above notion, we carried out immobilized H2B promoter pull-down assays in the B cell NE-based transcription mixture.
In the p36/LDH-depleted sample, promoter occupancy by p20/nm23-H1, p18/nm23-H2, and p60/Sti1 was impeded, as opposed to the normal and NAD+-enhanced recruitment of OCA-S in the mock-depleted sample (Fig. 3B). In the p60/Sti1-depleted sample, the recruitment of p36/LDH and p38/GAPDH to the H2B promoter was normal, but the recruitment of p20/nm23-H1 and p18/nm23-H2 was impeded (Fig. 3C). Hence, p36/LDH is essential as an OCA-S component that helps tether an intact OCA-S complex to the H2B promoter. This is most likely through a direct interaction between p38/GAPDH and p36/LDH within OCA-S; however, the possibility that the interaction is mediated by other untested OCA-S component(s) is not yet excluded.

Figure 2 (legend excerpt): The exposure time to obtain the H2B Mut image was three times longer than that to obtain the H2B WT image. Thus, without an OCA-S function (lane 4) and reflecting a ~3-fold activation potential of Oct-1 (14), the H2B WT template was transcribed ~3 times more actively than the H2B Mut template, which was transcribed at a basal level. The transcription activities from the H2B WT template were quantified using a densitometer (bottom panel) and normalized to the largely constant transcription activities of the H2B Mut template. D, in vitro, exogenous NADH and NAD+ were not interconverted or converted to other metabolites. The coenzyme levels were measured after (+) or before (−) transcription reactions.

p38/GAPDH along with p36/LDH did not exercise glycolytic activities in the reactions in the B cell NE-based transcription mixture, because NAD(H) was not converted to other metabolites or interconverted (Fig. 2D); however, we cannot rule out the possibility that the two essential OCA-S components, while associated in OCA-S, might have catalytic activities that are able to feed back to H2B transcription if provided substrates. Under optimized conditions, immobilized H2B promoter pull-down assays allowed for 10-20% recruitment of OCA-S (Fig. 3D). We found that the H2B promoter-bound OCA-S exhibited an enzyme activity for p36/LDH, which was as potent as the input (the B cell NE-based transcription mixture); however, we failed to detect any intrinsic enzyme activity for p38/GAPDH, which was marginal even in the input (Fig. 3E). This suggests that the OCA-S intrinsic enzyme activity for p36/LDH, which uses NADH as a coenzyme (to be re-oxidized to NAD+) to convert pyruvate to lactate, can impact H2B transcription. Indeed, the H2B transcription in vitro inhibited by 1 mM NADH was relieved and stimulated by pyruvate in a dose-dependent manner (Fig. 3F, top panel), accompanied by the pyruvate-facilitated NADH to NAD+ re-oxidation (Fig. 3F, bottom panel).

Disrupting NAD+ Biosynthesis Inhibits H2B Expression in Living Cells—The observation that optimal H2B transcription in vitro requires a proper NAD+/NADH ratio (redox status) (Fig. 1) prompted studies on the roles of the NAD+/NADH redox status for H2B expression in living cells. ~80% of mammalian NAD+ is derived from a two-step biosynthesis (21): the cytoplasmic nicotinamide phosphoribosyltransferase (Nampt) converts precursor nicotinamide to nicotinamide mononucleotide, which is converted to NAD+ by the enzyme family of nicotinamide mononucleotide adenylyl-transferases (Nmnat, of which Nmnat-1 is a nuclear isozyme). Therefore, silencing the expression of Nampt or Nmnat-1 can deplete the cellular or nuclear NAD+ pool, allowing us to study a role of NAD+ for H2B expression in vivo. We used siRNAs (nos.
1 or 2) targeting two Nampt mRNA regions to silence its expression in HeLa cells via RNAi (13). Immunoblot analyses of the extracts of the siRNA-treated cells showed ~90% knock-down with either siRNA (Fig. 4A). RT-PCR and quantitative real-time PCR showed reduced H2B expression (Fig. 4B) in the Nampt-depleted cells, which exhibited a reduced free cellular NAD+/NADH ratio (Fig. 4C) and reduced total NAD(H) levels (Fig. 4D). These results link H2B expression in vivo to the NAD+/NADH ratio or the overall (total) NAD(H) levels. Because of certain technical limitations, there is no direct assessment of the free NAD+/NADH ratios, or the total NAD+ (or NADH) levels, in the nuclear compartment. The measured free cellular NAD+/NADH ratios and total NAD+ (or NADH) levels, therefore, might only partially reflect the nuclear ratios and levels. Nmnat-1-depleted cells might have more reduced free NAD+/NADH ratios in the nuclei than the measured cellular ratios (Fig. 4G), which might be comparable to the nuclear ratios in Nampt-depleted cells. Indeed, although the reduction of the free cellular NAD+/NADH ratios was less prominent in the Nmnat-1-depleted cells than in the Nampt-depleted cells (Fig. 4G versus Fig. 4C), the H2B expression defects in the former were similar to, or even slightly greater than, the defects in the latter (Fig. 4F, top panel versus Fig. 4B, top panel; real-time PCR quantifications). The results in Fig. 4, taken together, link an optimal H2B expression program in living cells to a proper cellular (nuclear) NAD+ level or a proper free cellular (nuclear) NAD+/NADH ratio.

Reducing the NAD+/NADH Ratio in Vivo Down-regulates H2B Expression—The free cellular NAD+/NADH ratios define the redox status, while the overall cellular NAD+ plus NADH level can be constant. We sought to establish an assay system with reduced free cellular NAD+/NADH ratios but an overall constant NAD+ plus NADH level, and found that cells treated with NaN3 or CoCl2 met the criterion (Ref. 24, also see below). HeLa cells treated with NaN3 showed a reduced free cellular NAD+/NADH ratio (Fig. 5A) and reduced H2B expression (Fig. 5B). Similar phenotypes were observed in cells treated with CoCl2 (Fig. 5, D and E). The histone expression is tightly coupled with S-phase progression (7). Thus, there could be a possibility that NaN3 and CoCl2 might adversely affect S-phase progression, which then fed back to H2B expression; however, the treatment with NaN3 or CoCl2 inhibited the H2B expression (Fig. 5, B and E) but did not grossly change cell cycle profiles (Fig. 5G). This implies that the observed H2B expression defects (Fig. 5, B and E) were most likely primary defects due to the redox changes, and not secondary defects caused by a potentially defective S-phase progression. This is in line with the primary or direct redox effects in vitro (Fig. 2). In vitro, p38/GAPDH nucleates an Oct-1-OCA-S interaction subject to redox-modulation to determine the H2B transcriptional outputs, in close correlation with the H2B promoter occupancy by p38/GAPDH (Figs. 1 and 2). Such an interaction in living cells should be reflected by H2B promoter occupancy by p38/GAPDH. Indeed, ChIP assays revealed that in the NaN3- or CoCl2-treated cells, H2B promoter occupancy by p38/GAPDH (OCA-S) was reduced (Fig. 5, C and F). This parallels the H2B expression defects in the NaN3- or CoCl2-treated cells (Fig. 5, B and E), which are (primarily) at the transcription level (see below).
The NAD+/NADH redox status reflects cellular metabolic states, hence raising the concern of whether the H2B expression defects, as a result of NaN3 and CoCl2 treatments or of disrupting NAD+ biosynthesis, were due to a severe shortage of ATP (an RNA building block) because of the changed NAD+/NADH ratios. We feel that this is highly unlikely for several reasons. First, the β-actin expression (internal control) was unaffected by the changed redox status (Figs. 4, B and F, and 5, B and E). Second, the overall ATP levels in control or redox-perturbed cells did not differ significantly. For example, NaN3 treatment led to only a slight decrease of ATP levels at ~16 h (Fig. 6A); however, H2B expression defects were already manifested within 2 h of the treatment with NaN3 (Fig. 5B), or even in a shorter period of time (see below, Fig. 6D). NaN3 and CoCl2 perturb the redox status with reduced free cellular NAD+/NADH ratios but sustain a constant NAD+ plus NADH level (24). To reaffirm this, we measured redox-related levels in the redox-perturbed cells; Fig. 6B offers an example with the NaN3 treatment (1 mM) in a 20-h time course. Obviously, the total (NAD+ plus NADH) levels did not significantly change over the 20-h time course, and the total NAD+/NADH ratios (calculated from the total NAD+ and NADH pools) were not significantly reduced at 2 or 4 h. However, the free cellular NAD+/NADH ratio was reduced by ~50% within 2 h of NaN3 treatment (Fig. 5A), with reduced H2B expression (Fig. 5B) and reduced H2B promoter occupancy by OCA-S (p38/GAPDH; Fig. 5C). The reduced free cellular NAD+/NADH ratios were also a common feature upon blocking the NAD+ biosynthesis, which also led to H2B expression defects (Fig. 4). Thus, the H2B expression in vivo is linked to a proper NAD+/NADH ratio, which defines a proper redox status; perturbing this status inhibits H2B expression (Fig. 5, B and E) by disrupting the Oct-1-p38/GAPDH interaction (Fig. 5, C and F). To provide support that the H2B expression defects in redox-perturbed cells are (primarily) at a transcriptional level, we transfected an H2B promoter-luciferase reporter into HeLa cells and found the expression of the ectopic gene to be much reduced in NaN3-treated cells (Fig. 6C). All of the above in vivo results are in line with the patterns of H2B transcription and the H2B promoter occupancy by p38/GAPDH (OCA-S) in vitro (Figs. 1 and 2).

Primary (or Direct) Redox Effects on H2B Transcription—It was shown that NAD(H) directly exerted redox effects on H2B transcription in vitro (Fig. 2). H2B expression defects in vivo upon NaN3 or CoCl2 treatment (Fig. 5, B and E) manifested without a change in cell cycle profiles (Fig. 5G). This is in agreement with a primary effect and argues against an indirect cell cycle effect. To further support that the redox effects on H2B transcription in vivo were primary, we studied the onset of H2B expression defects caused by NaN3 treatment and found the defects to manifest very swiftly (within 30 min of the NaN3 treatment, Fig. 6D). In parallel with this swift onset, reversing the redox perturbations by withdrawing NaN3 allowed swift H2B expression recovery (Fig. 6E). If the effects of redox perturbations on H2B transcription were not primary but secondary to the redox changes (e.g. through the cell cycle or other physiological changes), the above onset or recovery might not be as swift as observed.
The above results support primary (or direct) NAD(H) effects on H2B transcription in vitro and in vivo, and all of the results (Figs. 1-6) support the notion that redox changes are directly sensed by p38/GAPDH to impact upon an Oct-1-OCA-S (p38/GAPDH) interaction to determine the H2B transcriptional output (see "Discussion").

DISCUSSION

p38/GAPDH, essential for in vivo H2B transcription (10), represents the enzymes with coenzyme-modulated gene-switching functions (10-12, 25) and acts as a redox sensor (Fig. 2, B and C) to regulate an Oct-1-OCA-S interaction. This interaction is a key regulatory step determining the H2B transcriptional output. p36/LDH is essential for tethering an intact OCA-S complex to the H2B promoter in vitro, explaining its essential role for H2B expression in vivo (Fig. 3, A-C). The sum of the molecular masses of the OCA-S components approximates the native size (~300 kDa) of the OCA-S complex (10). This is consistent with each component being present as a monomer. Generally, monomeric dehydrogenases, while retaining full NAD(H) binding capacities, lack catalytic abilities (26); however, p36/LDH within OCA-S seems to be an exception (Fig. 3E) that circumvented the NADH-inhibited H2B transcription in vitro (Fig. 3F). A similar redox-modulator role might operate as part of an essential role for H2B expression in living cells (Fig. 3A) to circumvent physiological redox constraints. The coexistence of p38/GAPDH and p36/LDH in OCA-S might represent an advantage in which the NAD+/NADH redox status plays multiple roles. For instance, in addition to an Oct-1-p38/GAPDH interaction, a proper NAD+/NADH redox status might play a role in the assembly of the OCA-S complex by allowing p38/GAPDH or p36/LDH to assume proper conformation(s). Future structural studies may provide clues regarding this notion. The redox-modulated H2B expression is most likely a direct (or primary) response of H2B transcription to a varied NAD+/NADH redox status. First, in vitro, the H2B transcription and Oct-1-OCA-S interaction were redox-modulated (Figs. 1, A-F, 2, B-C, and 3B), in which the exogenous NAD(H) were not converted to other metabolites or interconverted (Fig. 2D), which otherwise might exert indirect effects. Second, the H2B promoter and Oct-1/OCA-S specificity (Figs. 1, A-E, and 2C) argue against a nonspecific redox effect. Third, in vivo, upon redox perturbations, H2B expression defects (Fig. 5, B and E) manifested without changing the cell cycle profiles (Fig. 5G). Fourth, the onset of H2B expression defects upon redox perturbations was very swift, and H2B expression recovered swiftly upon withdrawal of the redox perturbations (Fig. 6, D and E). Taken together, these results are in accord with the primary (or direct) NAD(H) redox effects on H2B transcription in vitro and in vivo, and with the notion that the redox target is the Oct-1-OCA-S interaction that determines the H2B transcriptional output. The biphasic responses (Fig. 1, A, C-E, and G), and the fact that H2B transcription was sensitive to NAD+ depletion (Fig. 4), suggest that H2B transcription favors higher NAD+ levels or higher NAD+/NADH ratios within a certain range that defines a proper redox status. DNA damage/repair generally consumes the nuclear NAD+ pool (27), which may render the H2B expression sensitive to NAD+ depletion in a fashion similar to the scenarios in Fig. 4. This, together with a coordination mechanism (10), may contribute to a global histone expression inhibition upon DNA damage, in addition to the activated cell cycle checkpoint (e.g. Ref. 28).
Figure 6 (legend): A, short-term NaN3 treatment did not reduce the overall cellular ATP levels in HeLa cells treated with 1 mM NaN3 for the indicated hours. B, short-term NaN3 treatment did not reduce the overall cellular total NAD+ plus NADH levels or the total NAD+/NADH ratios in HeLa cells treated with 1 mM NaN3 for the indicated hours. C, H2B promoter-luciferase activities in control or NaN3-treated (0.5 mM, overnight) HeLa cells. D, rapid onset of H2B expression defects in NaN3-treated cells. Cells were treated with 1 mM NaN3 for the indicated hours, and the H2B expression levels were quantified by real-time PCR. E, swift recovery of H2B expression upon NaN3 withdrawal. H2B expression levels in untreated cells, or cells treated with 1 mM NaN3 for 1 or 2 h, or for 1 h but allowed to recover for 1 h, were quantified by real-time PCR.

In view of the in vitro biphasic responses (see Fig. 1, A and C-E), physiological or pathological redox perturbations that can potentially elevate the NAD+/NADH ratios in vivo, if beyond a certain threshold, are expected to inhibit H2B expression. This is in line with H2B transcription requiring a proper cellular redox status (redox balance). Direct links between gene switching and the metabolic state of a cell are quite common in prokaryotes but rarely reported in eukaryotes (12). The current study shows that the activity of p38/GAPDH as an OCA-S component in histone transcription, and potentially other aspects of the OCA-S function, can be modulated by the redox status. Given that the cellular redox status reflects the cellular metabolic state, our study suggests a direct link between cellular metabolism and gene switching in higher eukaryotes. Because S-phase events are tightly coupled to ensure an orderly S-phase progression (1-7), the metabolic states of the cell might also feed back to other coupled S-phase events in addition to histone (H2B) expression.
Amino acids inhibit kynurenic acid formation via suppression of kynurenine uptake or kynurenic acid synthesis in rat brain in vitro

The tryptophan metabolite, kynurenic acid (KYNA), is a preferential antagonist of the α7 nicotinic acetylcholine receptor at endogenous brain concentrations. Recent studies have suggested that an increase in brain KYNA levels is involved in psychiatric disorders such as schizophrenia and depression. KYNA-producing enzymes have broad substrate specificity for amino acids, and brain uptake of kynurenine (KYN), the immediate precursor of KYNA, occurs via large neutral amino acid transporters (LAT). In the present study, to find amino acids with the potential to suppress KYNA production, we comprehensively investigated the effects of proteinogenic amino acids on KYNA formation and KYN uptake in rat brain in vitro. Cortical slices of rat brain were incubated for 2 h in Krebs-Ringer buffer containing a physiological concentration of KYN with individual amino acids. Ten out of 19 amino acids (specifically, leucine, isoleucine, phenylalanine, methionine, tyrosine, alanine, cysteine, glutamine, glutamate, and aspartate) significantly reduced KYNA formation at 1 mmol/L. These amino acids showed inhibitory effects in a dose-dependent manner, and partially inhibited KYNA production at physiological concentrations. Leucine, isoleucine, methionine, phenylalanine, and tyrosine, all LAT substrates, also reduced tissue KYN concentrations in a dose-dependent manner, with their inhibitory rates for KYN uptake significantly correlated with KYNA formation. These results suggest that the five LAT substrates inhibit KYNA formation via blockade of KYN transport, while the other amino acids act via blockade of the KYNA synthesis reaction in the brain. Amino acids can be a good tool to modulate brain function by manipulation of KYNA formation in the brain. This approach may be useful in the treatment and prevention of neurological and psychiatric diseases associated with increased KYNA levels.

Background

Tryptophan is mainly metabolized through the kynurenine (KYN) pathway in the mammalian brain. Kynurenic acid (KYNA), a product of this pathway, is a negative allosteric modulator of the α7 nicotinic acetylcholine receptor at endogenous concentrations, and a competitive antagonist of the glycine co-agonist site of the N-methyl-D-aspartic acid receptor (Kessler et al. 1989; Hilmas et al. 2001; Schwarcz and Pellicciari 2002). In particular, nanomolar increases in KYNA reduce dopaminergic and glutamatergic neurotransmission (Carpenedo et al. 2001; Rassoulpour et al. 2005), and contribute to cognitive dysfunction (Erhardt et al. 2004; Chess and Bucci 2006; Chess et al. 2007; Chess et al. 2009). Conversely, decreases in endogenous KYNA augment dopaminergic, acetylcholinergic and glutamatergic neurotransmission (Amori et al. 2009a; Zmarowski et al. 2009; Konradsson-Geuken et al. 2010), and lead to enhanced cognitive abilities (Potter et al. 2010; Kozak et al. 2014). In humans, patients with schizophrenia show higher KYNA levels in the prefrontal cortex and cerebrospinal fluid (Erhardt et al. 2001; Schwarcz et al. 2001; Linderholm et al. 2010). Based on these findings, it has been suggested that KYNA is involved in the pathophysiology of psychiatric disorders including schizophrenia (Erhardt et al. 2007; Erhardt et al. 2009), and thus suppression of KYNA production may contribute to the prevention or improvement of these disorders.
Astrocytes take up KYN, the immediate bioprecursor of KYNA, from the blood stream, and KYN is metabolized to KYNA. Two factors regulate KYNA production in the brain: kynurenine aminotransferase (KAT) activity and the availability of KYN (Turski et al. 1989). Four KATs have been identified in the mammalian brain; these KATs have broad substrate specificity for amino acids, and several amino acids competitively inhibit KATs for KYNA production (Okuno et al. 1991; Guidetti et al. 2007; Han et al. 2010). Hence, amino acids may suppress KYNA production via KAT inhibition in the brain. Astrocytes take up peripheral KYN from the blood stream via large neutral amino acid transporters (LATs). LATs are known to transport both branched chain amino acids (e.g., valine, leucine and isoleucine) and aromatic amino acids (e.g., tyrosine, phenylalanine, and tryptophan). Several findings show that LATs transport amino acids with higher affinity than KYN in tumor cell lines (Fukui et al. 1991; Speciale et al. 1989; Asai et al. 2008; Yanagida et al. 2001), and changes in physiological concentrations of these amino acids may affect KYN transport into the brain. In the present study, we comprehensively investigated the effects of amino acids on the suppression of KYNA production via inhibition of KYN uptake and KYNA synthesis in the brain. We used brain slices to determine the inhibitory effects on KYN uptake and KYNA synthesis at physiological KYN concentrations. Our findings demonstrate that amino acids are a good tool for modulating brain function by manipulating KYNA formation in the brain.

Animals

Male Wistar rats (7-10 weeks old) were obtained from CLEA Japan (Tokyo, Japan). Rats were allowed free access to food and water. The animal room was maintained at a temperature of 22°C with 60% humidity and a 12-h light/12-h dark cycle (light onset at 6:00 a.m.). Care and treatment of experimental animals conformed to the University of Shiga Prefecture guidelines for ethical treatment of laboratory animals (reference number: 24-9).

In vitro screening of amino acids regulating de novo KYNA formation

Routinely, seven tissue slices were placed in each culture well (seven slices per well, ~1 mg of total protein) containing a final volume of 1 mL of ice-cold KRB and a final concentration of 1 mmol/L of each amino acid for screening, or 3 µmol/L-3 mmol/L of each amino acid for dose-response assays. After a 10 min pre-incubation at 37°C in an oxygenated shaking water bath, a final physiological concentration of 2 μmol/L KYN was added to each well. After 2 h of incubation at 37°C, plates were placed on ice. Because assessment of the time course of the KYNA concentration in KRB showed linear increases up to 4 h of incubation (Turski et al. 1989), the biochemical viability of the brain tissue was maintained during the study period. The medium was rapidly separated from the tissue and acidified with 100 μL of 1 mol/L HCl for subsequent KYNA measurements. We describe the KYNA concentration in the incubation medium as KYNA production, because more than 90% of newly synthesized KYNA is readily liberated from tissue slices into the medium (Turski et al. 1989). The tissue slices were rapidly washed three times with 500 μL of KRB and sonicated using an ultrasonic cell breaker (Powersonic model 50; Yamato Kagaku, Tokyo, Japan) in 250 μL of distilled water. A 200-μL aliquot of the tissue slice suspension was acidified using 50 μL of 6% perchloric acid. After centrifugation (10 min, 12,000 × g, 4°C), the supernatant was used for KYN determination.
A 50-μL aliquot of the tissue slice suspension was used for protein determination using the Bradford assay (Bradford 1976).

KYNA and KYN determination

The KYNA concentration in the samples was determined by high-performance liquid chromatography with fluorescence detection (RF-20Axis; Shimadzu, Kyoto, Japan) at 344 nm excitation and 398 nm emission wavelengths (Shibata 1988). The KYN concentration was determined by high-performance liquid chromatography with ultraviolet detection (SPD-10AV; Shimadzu) at a 365 nm wavelength (Holmes 1988).

Statistical analysis

All data were expressed as mean ± SE. One-way analysis of variance with Dunnett's Multiple Comparison Test was used for comparisons of three or more groups. Sigmoid curves were generated by nonlinear regression analysis. We calculated the half-maximal inhibitory concentration (IC50 values in μmol/L) of each amino acid for KYNA production and KYN uptake using the equation "log (inhibitor) vs. response" in GraphPad Prism 5.0 (GraphPad Software, San Diego, CA, USA). Correlations between the inhibitory rates for KYNA production and tissue KYN concentration by amino acids were represented by linear regression of the data. Inhibitory rates for KYNA production and tissue KYN concentration at each added amino acid concentration were expressed as the percentage of control values. Correlations between KYNA production and KYN uptake at various KYN concentrations in the KRB were also represented by linear regression of the data. KYNA production and KYN uptake were expressed as the percentage of the 2 μmol/L KYN control values. Pearson correlation coefficients were calculated. A p-value < 0.05 was considered significant. GraphPad Prism 5.0 was used for all analyses.

Screening amino acids for suppression of KYNA production

To determine which amino acids suppress KYNA production in vitro, we comprehensively assessed 19 proteinogenic amino acids at 1 mmol/L using tissue slices from the rat cerebral cortex. Because commercially available tryptophan contains tryptophan metabolites (including KYN and KYNA), we excluded tryptophan from our initial list of 20 proteinogenic amino acids. The amount of KYNA in the extracellular medium was reduced by 40-60% by eight amino acids (leucine, isoleucine, methionine, alanine, tyrosine, glutamine, glutamate, and aspartate), and to approximately 25% by phenylalanine and cysteine (Figure 1a). Leucine, isoleucine, methionine, phenylalanine, and tyrosine also reduced tissue KYN concentrations to < 50% (Figure 1b). No significant difference was observed in tissue KYN concentrations for alanine, cysteine, glutamine, glutamate, and aspartate, which all reduced KYNA production. Although valine reduced tissue KYN concentrations to 50%, it did not suppress KYNA production.

Amino acid dose-dependent inhibition of KYNA production and KYN uptake

Since 10 of the 19 proteinogenic amino acids significantly reduced KYNA formation at 1 mmol/L, these amino acids were chosen for further investigation. To determine the precise capabilities of the 10 amino acids to suppress KYNA production, each amino acid was added to KRB at concentrations varying from 3 μmol/L to 3 mmol/L. All 10 amino acids reduced KYNA production in a dose-responsive manner (Figure 2). Five amino acids (leucine, isoleucine, phenylalanine, methionine, and tyrosine) also reduced tissue KYN concentration in a dose-responsive manner (Figure 3).
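The "log(inhibitor) vs. response" fit used for the IC50 estimates can be mirrored outside GraphPad with a four-parameter logistic; the sketch below uses hypothetical data points, not the study's measurements:

```python
# Four-parameter logistic IC50 fit; data are mock values for illustration.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, bottom, top, log_ic50, hill):
    return bottom + (top - bottom) / (1 + 10 ** ((log_conc - log_ic50) * hill))

conc_uM  = np.array([3, 10, 30, 100, 300, 1000, 3000], float)
kyna_pct = np.array([98, 90, 75, 52, 30, 18, 12], float)   # % of control
popt, _ = curve_fit(four_pl, np.log10(conc_uM), kyna_pct,
                    p0=(10, 100, 2, 1))
print(f"IC50 ~ {10 ** popt[2]:.0f} uM, Hill slope ~ {popt[3]:.2f}")
```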
The other amino acids (alanine, aspartate, cysteine, glutamine, and glutamate) did not affect tissue KYN concentrations. In addition, we determined the IC50 values for KYNA production and KYN uptake. The rank order of IC50 values for KYNA production was phenylalanine < leucine < isoleucine < glutamate < cysteine < alanine < methionine < aspartate < glutamine < tyrosine (Table 1), and that for KYN uptake was phenylalanine < leucine < isoleucine < methionine < tyrosine (Table 1).

Amino acid contribution of KYN uptake inhibition to KYNA production

To determine how inhibition of KYN uptake affects KYNA production, we selected the five amino acids that reduced not only KYNA production but also tissue KYN concentration, and examined the relationship between KYN uptake and KYNA production. Inhibitory rates of KYN uptake were significantly correlated with those of KYNA production for leucine (y = 1.04x − 5.6, r = 0.988; p < 0.0001), isoleucine (y = 0.904x + 5.9, r = 0.987; p < 0.0001), phenylalanine (y = 1.03x − 7.0, r = 0.945; p < 0.001), methionine (y = 1.25x − 28.0, r = 0.889; p < 0.01), and tyrosine (y = 0.921x − 7.7, r = 0.967; p < 0.0001) (Figure 4). Combining the data from the respective amino acids also showed a significantly high correlation (y = 0.981x − 1.1, r = 0.946; p < 0.0001) (Figure 4f). To determine the direct relationship between tissue KYN concentration and KYNA production, cortical slices were incubated in KRB containing 0.4-2 μmol/L KYN. KYNA production and tissue KYN level were expressed as the percentage of the 2 μmol/L KYN control. With increasing KYN concentrations, KYNA production and tissue KYN concentration increased linearly in a dose-dependent manner (Figure 5a and b). Moreover, KYNA production was strongly correlated with tissue KYN concentration (y = 0.978x − 4.5, r = 0.985; p < 0.01) (Figure 5c). This relationship (including the slope of the regression line) was the same as that observed for the inhibitory effects of all five amino acids. These results suggest that inhibition of KYN uptake, but not of KAT activity, contributes to the inhibitory effects of these five large neutral amino acids on KYNA production.

Discussion

Previous reports have shown that amino acids have the potential to suppress KYNA production via inhibition of KYN uptake and KYNA synthesis in the brain; we therefore comprehensively investigated the effects of proteinogenic amino acids on the regulation of KYNA production in rat brain in vitro. We show that 10 of 19 amino acids (specifically, leucine, isoleucine, phenylalanine, methionine, tyrosine, alanine, cysteine, glutamine, glutamate, and aspartate) significantly reduce KYNA production at the tissue level. Five (leucine, isoleucine, phenylalanine, methionine, and tyrosine) of these 10 amino acids also reduce tissue KYN concentration, with the inhibition of KYNA production reflecting these reductions in KYN uptake. Our results suggest that these five amino acids suppress KYNA production via blockade of KYN transport, while the other five amino acids (alanine, cysteine, glutamine, glutamate, and aspartate) act via blockade of KYNA synthesis in the brain.
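The slope-near-one, intercept-near-zero regressions reported above are plain least-squares fits; a sketch with hypothetical percentages standing in for the Figure 4 data:

```python
# Ordinary least-squares relation between inhibition of KYN uptake (x) and
# inhibition of KYNA production (y); the numbers are illustrative only.
import numpy as np
from scipy.stats import linregress

uptake_inhib_pct = np.array([5, 15, 30, 45, 60, 80])
kyna_inhib_pct   = np.array([2, 12, 27, 41, 57, 77])

fit = linregress(uptake_inhib_pct, kyna_inhib_pct)
print(f"y = {fit.slope:.2f}x + {fit.intercept:.1f}, "
      f"r = {fit.rvalue:.3f}, p = {fit.pvalue:.2g}")
# A slope near 1 through the origin is what implicates blocked KYN uptake,
# rather than KAT inhibition, as the mechanism for these five amino acids.
```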
KYN is transported into the brain via LATs, which are Na+-independent neutral amino acid transporters. There are two LATs, LAT 1 and LAT 2, with LAT 1 having a higher affinity for large neutral amino acids than LAT 2. LAT 1 exhibits high-affinity transport of large neutral amino acids, including branched chain and aromatic amino acids, while LAT 2 has broader substrate specificity (Kanai et al. 1998; Segawa et al. 1999). The Km value of LATs for KYN is ~160 μmol/L, 80 times higher than plasma KYN concentrations (Fukui et al. 1991; Speciale et al. 1989). In the present study, the amino acids that inhibited KYN uptake are consistent with substrate amino acids of LAT 1 rather than LAT 2, suggesting a critical role of LAT 1 in KYN uptake in the brain. The Km values of LAT 1 for leucine, isoleucine, methionine, phenylalanine, and tyrosine are 15-30 μmol/L, around physiological concentrations (Asai et al. 2008), indicating higher affinity than for KYN (Yanagida et al. 2001). The other amino acids (glutamine and aspartate) have low affinity for LAT 1 (Km = 1.5-2 mmol/L) and are not substrates of LAT, nor do they affect tissue KYN concentration. Histidine is also known to be a high-affinity substrate for LAT 1 (Km = 12.7 μmol/L), but did not reduce tissue KYN concentration in the present study. KYN is transported in either a Na-independent or a Na-dependent manner in tissue slice culture (Turski et al. 1989). In this study, only Na-independent LAT substrates such as leucine, but not Na-dependent transporter substrates such as glutamine, reduced KYN uptake. Na-independent transport, rather than Na-dependent transport, may therefore contribute to KYN uptake, and regulation of Na-independent LATs may be effective in modulating KYN uptake.

Figure 2 (legend): Dose-dependent inhibition of KYNA production in tissue slices from the cerebral cortex by (a) leucine (Leu), (b) isoleucine (Ile), (c) phenylalanine (Phe), (d) tyrosine (Tyr), (e) methionine (Met), (f) cysteine (Cys), (g) aspartate (Asp), (h) glutamine (Gln), (i) alanine (Ala), and (j) glutamate (Glu). Experiments were performed as described in the text using 2 μmol/L KYN. KYNA was measured in the incubation medium. Values are expressed as mean ± SE (n = 4-6). Sigmoid curves were generated by nonlinear regression analysis using GraphPad Prism 5.0.

The mammalian brain expresses four KATs: KAT I (glutamine transaminase K, GTK; EC 2.6.1.64), KAT II (2-aminoadipate aminotransferase, ADA; EC 2.6.1.7), KAT III (cysteine conjugate β-lyase 2, CCBL2; EC 4.4.1.13) and KAT IV (mitochondrial aspartate aminotransferase, ASAT; EC 2.6.1.1). A previous study determined the relative contributions of KAT I, II, and IV to total KAT activity, and found that rat and human brain contain the highest proportion of KAT II (~60%, with ~10 and 30% for KAT I and IV, respectively), suggesting a critical role for KAT II in KYNA synthesis in rat and human brain (Guidetti et al. 2007). The contribution of KAT III to brain KYNA synthesis remains to be determined. In the present study, glutamate, aspartate, cysteine, glutamine, and alanine suppressed KYNA production but not KYN uptake, suggesting that these amino acids inhibit the KYNA synthesis reaction. Glutamate and aspartate strongly inhibit KAT II (IC50: 2.1 and 1.2 mmol/L, respectively) and KAT IV (IC50: 0.9 and 0.3 mmol/L, respectively), while glutamine and cysteine show inhibitory effects on KAT I and III activities (Guidetti et al. 2007; Han et al. 2009; Han et al. 2010). Furthermore, cysteine sulfinate, the deoxygenated product of cysteine, acts as a KAT II inhibitor and inhibits rat brain KYNA production at physiological concentrations in vitro (Kocki et al. 2003). Our results cannot determine whether cysteine, cysteine sulfinate, or both inhibit the KYNA synthesis reaction. LAT 1 substrates (leucine, methionine, and phenylalanine) also inhibit KAT III activity (Han et al. 2009).
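These Km figures make the uptake blockade quantitatively plausible. A back-of-the-envelope competitive-inhibition estimate in the standard Michaelis-Menten form is sketched below; treating the competitor's Km as its Ki is our simplifying assumption, and the function is not from the paper:

```python
# Competitive inhibition of KYN uptake through LAT 1, using Km(KYN) ~ 160 uM,
# [KYN] = 2 uM and Ki ~ 20 uM for leucine (values quoted in the text).
def relative_uptake(s, km, i, ki):
    """Uptake with a competitive inhibitor, as a fraction of uninhibited."""
    return (km + s) / (km * (1 + i / ki) + s)

KM_KYN, KYN = 160.0, 2.0                       # uM
for leu in (0, 50, 150, 500):                  # leucine, uM (150 ~ plasma)
    print(leu, round(relative_uptake(KYN, KM_KYN, leu, ki=20.0), 2))
# At ~150 uM leucine, uptake falls to roughly 12% of control in this model.
```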
In the present study, the inhibition of KYNA production reflects reduced KYN uptake, and we did not observe additional effects of KAT inhibition. We investigated the effects of the 10 amino acids on KYNA production and KYN uptake at 3 μmol/L-3 mmol/L in detail. The physiological concentrations of leucine, isoleucine, phenylalanine, methionine, tyrosine, alanine, aspartate, cysteine, glutamine, and glutamate are approximately 150, 90, 60, 50, 70, 400, 10, 10, 700, and 80 μmol/L in rat plasma, respectively (Asai et al. 2008). Interestingly, all 10 amino acids partially inhibited KYNA production at physiological concentrations, with the IC50 values of most amino acids for KYNA production or KYN uptake around physiological levels. Although the LATs mediating KYN uptake at the blood-brain barrier are not completely identical to those in the tissue slices of the present study, it is expected that changes in the physiological concentrations of these amino acids may affect brain KYNA levels in vivo. In particular, increases in the plasma levels of these amino acids may lower brain KYNA levels. Enhancement of brain KYNA production can be achieved by pharmacological manipulation of KYN, such as through systemic KYN administration or a kynurenine 3-hydroxylase inhibitor in vivo (Swartz et al. 1990; Röver et al. 1997; Lombardi et al. 1994; Rassoulpour et al. 2005). Pharmacological manipulation of KAT suppresses KYNA production in the brain in vivo (Amori et al. 2009a; Amori et al. 2009b; Dounay et al. 2012; Kozak et al. 2014). In addition, recent studies have shown that diet also affects brain KYNA concentrations. High-tryptophan diets increase brain KYNA levels owing to increased peripheral KYN in a dose-dependent manner, and reduce dopamine release via enhancement of KYNA production in the rat striatum (Okuno et al. 2011). Long-term exposure to a high-fat and low-protein/carbohydrate ketogenic diet produces a several-fold increase in KYNA concentrations in the rat brain (Żarnowski et al. 2012).

Figure 3 (legend): Dose-dependent inhibition of tissue KYN concentration in tissue slices from the cerebral cortex by (a) leucine (Leu), (b) isoleucine (Ile), (c) phenylalanine (Phe), (d) tyrosine (Tyr), (e) methionine (Met), (f) cysteine (Cys), (g) aspartate (Asp), (h) glutamine (Gln), (i) alanine (Ala), and (j) glutamate (Glu). Experiments were performed as described in the text using 2 μmol/L KYN. KYN was measured in the tissue slice suspension. Values are expressed as mean ± SE (n = 4-6). Sigmoid curves were generated by nonlinear regression analysis using GraphPad Prism 5.0.

Because amino acids are nutritional factors and often used as supplements, their side effects, safe doses, and interactions have been well investigated. Thus, long-term administration of amino acids through the diet may be a good method to manipulate KYNA formation in the brain. Several studies suggest that dietary large neutral amino acids modulate neurotransmitter release via LAT. For example, ingestion of a diet containing α-lactalbumin, a tryptophan-rich protein, increases brain tryptophan content and serotonin synthesis and release (Choi et al. 2009; Orosco et al. 2004). Branched-chain amino acid ingestion causes a decline in tyrosine uptake and dopamine synthesis in the brain (Fernstrom 2013). We suggest that large neutral amino acids may enhance glutamate or acetylcholine release by suppressing KYNA production. In this study, we also show that the non-large neutral amino acids alanine, cysteine, glutamine and glutamate reduce KYNA production.
Our findings may highlight the importance of dietary amino-acid compositions in brain chemistry and function. It will be interesting to determine the amino acid composition of habitual diets in patients with psychiatric disorders.
Two symmetric and computationally efficient Gini correlations

Introduction

Measuring the strength of association and correlation between two random variables is of essential importance in many research fields. Many notions of correlation have been proposed and studied [16,21]. Perhaps the most commonly used one is Pearson's correlation coefficient, which measures the linear relationship between two random variables. Pearson's correlation is computationally efficient, with a computation cost of O(n) where n is the sample size. It is the most statistically efficient one for normal variables; however, it is very sensitive to outliers. Even a single outlier might have a large impact on the coefficient's value and its performance [36,37]. An important tool to study robustness is the influence function, which measures the effects of infinitesimal perturbations of the underlying distribution [13]. It has been proven that the Pearson correlation has an unbounded influence function, indicating its lack of robustness [5].

Alternatively, rank-based correlations such as Spearman's and Kendall's tau are robust to outliers. Kendall's tau is a similarity measure of the ranks of two random variables [17], and Spearman's correlation is the Pearson correlation coefficient evaluated on the ranks of the two variables [39]. Both are widely used for measuring monotonic relationships. They can be computed efficiently at a cost of O(n log n) [18], and their influence functions are bounded [3]. The trade-off for robustness is a loss of statistical efficiency in normal settings. For three representative values of the correlation parameter ρ in the normal distribution, the asymptotic relative efficiencies (ARE) of Kendall's tau to the Pearson correlation are about 91%, 89% and 84%, respectively, while the AREs of the Spearman correlation are even lower [3].

Standard Gini correlations [1] are based on the covariance between one variable and the rank of the other. More specifically, let H be the joint distribution of the random variables X and Y, and let F and G be the marginal distribution functions of X and Y, respectively. The standard Gini correlations are defined as

γ_1 = γ(X, Y) := cov(X, G(Y)) / cov(X, F(X)) and γ_2 = γ(Y, X) := cov(Y, F(X)) / cov(Y, G(Y)), (1)

reflecting the different roles of X and Y. The representation of the Gini correlations indicates that they have mixed properties of those of the Pearson and Spearman correlations [39]. As expected, the statistical efficiency and robustness of Gini correlations are between those of the Pearson and Spearman correlations. In terms of the balance between efficiency and robustness, Gini correlations play an important role in measuring association for variables from heavy-tailed distributions [43]. The Gini correlations are computationally efficient and can be computed at a cost of O(n log n) [31]. They are not symmetric in X and Y in general [31,32], i.e., γ(X, Y) ≠ γ(Y, X).
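A minimal sketch of the sample versions of these quantities follows, with F and G estimated by ranks (a standard estimator, not the authors' code); sorting via rankdata gives the O(n log n) cost, and the two symmetric combinations discussed next follow immediately (the sign convention attached to the geometric mean is our choice):

```python
# Sample standard Gini correlations and their symmetric combinations.
import numpy as np
from scipy.stats import rankdata

def gini_correlations(x, y):
    """Return (gamma(X,Y), gamma(Y,X)) from paired samples."""
    n = len(x)
    fx, gy = rankdata(x) / n, rankdata(y) / n        # empirical F(X), G(Y)
    g1 = np.cov(x, gy)[0, 1] / np.cov(x, fx)[0, 1]
    g2 = np.cov(y, fx)[0, 1] / np.cov(y, gy)[0, 1]
    return g1, g2

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=5000)
g1, g2 = gini_correlations(z[:, 0], z[:, 1])
r1 = (g1 + g2) / 2                                   # arithmetic mean
r2 = np.sign(r1) * np.sqrt(g1 * g2)                  # signed geometric mean
print(round(g1, 3), round(g2, 3), round(r1, 3), round(r2, 3))
```

Under bivariate normality both standard Gini correlations estimate the same parameter ρ, so all four printed values should be close to 0.6 here; for skewed distributions γ_1 and γ_2 differ, which is exactly what motivates the symmetric versions.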
In some applications, this asymmetry is natural and useful [9,12,33]. In other scenarios, symmetry is a desired property for dependence measures. Some researchers [21,27] even list symmetry as one of the axioms of association measures. A symmetric Gini correlation based on the joint rank function was proposed in [4,28]. It is more statistically efficient than the standard Gini correlations, but it is not computationally efficient, with O(n²) complexity, which makes it prohibitive for large n. Yitzhaki and Olkin [42] proposed two symmetric Gini correlations, the arithmetic mean and the geometric mean of the standard Gini correlations, respectively:

r_g^(1) = r_g^(1)(X, Y) := (γ₁ + γ₂)/2 and r_g^(2) = r_g^(2)(X, Y) := sqrt(γ₁ γ₂). (2)

Clearly these symmetric Gini correlations inherit the computational efficiency of O(n log n). However, they have not been well studied in the literature, except that Xu et al. [41] studied r_g^(1) under normal settings. In this paper, we systematically study the properties of these two symmetric Gini correlations and explore their statistical efficiency. Their robustness is studied by means of their influence functions. The limiting distributions of the sample symmetric Gini correlations are established. It is interesting to see that there are three kinds of asymptotic sampling distributions of the sample correlation r̂_g^(2), depending on different cases of r_g^(2). To the best of our knowledge, this is a novel result, and it can be applied to geometric-mean-type statistics such as the symmetrized information dependence measure defined in [26].

It is worthwhile to mention that the Gini correlations in (1) and the symmetric versions in (2) are quite different from the Gini gamma or Gini coefficient [10,24], although the names are very similar. The Gini correlation γ₁ in (1) is a natural bivariate extension of the univariate Gini mean difference (GMD) from the covariance representation GMD(F) = E|X₁ − X₂| = 4 Cov(X, F(X)), where X₁, X₂ are independent copies of X from F. The Gini gamma was proposed by Gini [11]. Related to the Spearman correlation in a different way, the Gini gamma is a concordance measure which is defined based on both ranks of X and Y. It is easy to check that the Gini gamma follows all axioms of concordance stated in [30]. However, neither r_g^(1) nor r_g^(2) is a concordance measure, and neither satisfies the coherence axiom.

The paper is organized as follows. In Section 2 we provide properties of r_g^(1) and r_g^(2). Their influence functions are presented in Section 3. The limiting distributions of the sample correlations are established in Section 4. The statistical efficiency and computational efficiency of various correlations are compared in Subsection 4.2, and their finite-sample performance is compared through a simulation study on elliptical distributions and an asymmetric bivariate log-normal distribution in Section 5. A real data application on the relationship between GDP per capita and suicide rate is presented in Section 6. Final remarks are provided in Section 7. Proofs are relegated to the Appendix.

Two symmetric Gini correlations Basic properties of the two symmetric Gini correlations r_g^(1) and r_g^(2) in (2) are explored. Their relationships with the linear correlation parameter ρ in bivariate elliptical distributions and log-normal distributions are presented. General properties Let X and Y be two random variables from F and G, respectively, with the joint distribution H.
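Building on the sketch above, the two symmetric correlations in (2) follow directly. The absolute value inside the square root is a small numerical guard introduced in this sketch (not part of definition (2)) against a slightly negative product arising from sampling noise near independence.

```python
# The two symmetric Gini correlations as the arithmetic and geometric means
# of gamma1 and gamma2, reusing gini_correlations() from the sketch above.
import numpy as np

def symmetric_gini(x, y):
    g1, g2 = gini_correlations(x, y)
    r1 = (g1 + g2) / 2.0            # arithmetic mean, r_g^(1)
    r2 = np.sqrt(abs(g1 * g2))      # geometric mean, r_g^(2); abs() is a guard
    return r1, r2
```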
Proposition 2.1. Assume that H is continuous and its first moment exists. Then, among other properties: if X and Y are statistically independent, then r_g^(1)(X, Y) = 0; and if Y is a monotonically increasing (decreasing) function of X, then r_g^(1)(X, Y) equals +1 (−1). The symmetry of r_g^(1) and r_g^(2) is obvious, noting the commutative property of addition and multiplication. The properties in the above two propositions follow simply from the properties of the original Gini correlations γ₁ and γ₂ shown by [31]. Property 5 states that the two symmetric Gini correlations describe a linear relationship between X and Y. Note that we assume a continuous H in Propositions 2.1 and 2.2. If H is not continuous, some revisions of the definitions of γ₁ and γ₂ are needed for the general properties to hold. For example, replacing F(x) with (F(x) + F(x−))/2 and G(y) with (G(y) + G(y−))/2 in (1) keeps γ₁ and γ₂ in the range [−1, 1]. For simplicity, a continuous distribution is assumed throughout the paper.

Before we study the symmetric Gini correlations in elliptical distributions and the lognormal distribution, we provide definitions of the other measures of association that will be used and compared in the paper. For H with a finite second moment, the Pearson correlation is r_p = cov(X, Y)/sqrt(var(X) var(Y)). The rank-based Spearman and Kendall's tau correlations do not need a moment condition. The Spearman correlation is defined as the Pearson correlation on the ranks of X and Y, that is, r_s = 12 E[F(X)G(Y)] − 3. Kendall's tau is r_τ = E[sign((X₁ − X₂)(Y₁ − Y₂))], where (X₁, Y₁)ᵀ and (X₂, Y₂)ᵀ are independently distributed from H. For Z = (X, Y)ᵀ from H with finite first moment, the joint-rank-based symmetric Gini correlation r_g^(s) [28] is defined through the spatial rank function R(z, H) := E[(z − Z)/‖z − Z‖], the spatial rank of z = (x, y)ᵀ with respect to H, where ‖·‖ is the Euclidean norm. These correlations have different properties and may take different values under the same distribution. It is preferable to consider their Fisher consistent versions so that they correspond to the same quantity or the same parameter [7]. For a distribution H with a parameter ρ, a correlation functional is Fisher consistent for ρ if its value at H equals ρ. We denote the Fisher consistent versions of the Pearson, Spearman and Kendall's tau correlations as ρ_p, ρ_s and ρ_τ, respectively. Next, the symmetric Gini correlations as well as each of the above-mentioned correlations are studied in elliptical distributions and the lognormal distribution.

Gini correlations in elliptical distributions A d-variate continuous random vector Z has an elliptical distribution H if its density function is of the form f(z) = |Σ|^(−1/2) g((z − µ)ᵀ Σ⁻¹ (z − µ)), where µ is the location parameter, the positive definite matrix Σ is the scatter parameter and the nonnegative function g is the density generating function. Conventionally, we write the parameters of bivariate elliptical distributions as (µ₁, µ₂, σ₁², σ₂², ρ). If the second moment of Z exists, then the covariance matrix exists and is equal to (E R²/d) Σ, where R is the nonnegative radius in the stochastic representation of Z. In this case, the Pearson correlation r_p is well defined and equal to the parameter ρ. For more details on the elliptical distribution family, refer to [6]. If σ₁ = σ₂, the joint-rank-based Gini correlation r_g^(s) proposed in [28] has an explicit relationship with ρ expressed through the complete elliptic integrals of the first and second kind. The Fisher consistent version of r_g^(s) is hard to obtain in explicit form, but a numerical solution is possible. For Kendall's tau, Blomqvist [2] proved that r_τ = (2/π) arcsin(ρ) in the normal case. Lindskog et al.
[20] proved that this relationship holds under all elliptical distributions in general. Hence the Fisher consistent version of Kendall's correlation is ρ_τ = sin(π r_τ / 2).

Gini correlations in the bivariate lognormal distribution The random vector (X, Y)ᵀ is said to have a bivariate lognormal distribution with parameters (µ₁, µ₂, σ₁², σ₂², ρ) if (log X, log Y)ᵀ follows a bivariate normal distribution with the same parameters. Clearly, Kendall's tau and the Spearman correlation are invariant under monotonically increasing transformations, so equations (6) and (7) still hold. For the Pearson correlation, it is easy to obtain r_p = (exp(ρσ₁σ₂) − 1)/sqrt((exp(σ₁²) − 1)(exp(σ₂²) − 1)), from which the Fisher consistent version of the Pearson correlation for the parameter ρ in the lognormal distribution follows. For the two new symmetric Gini correlations, we have derived the functional relationships below.

Proposition 2.4. Under the bivariate lognormal distribution with parameters (µ₁, µ₂, σ₁², σ₂², ρ), explicit expressions for r_g^(1) and r_g^(2) are available in terms of Φ, the cdf of the standard normal variable. Further, if σ₁ = σ₂ = σ, the Fisher consistent versions of the symmetric Gini correlations have closed forms. The proposition states that explicit forms of the Fisher consistent symmetric Gini correlations are only available in the homogeneous case. Also, (13) indicates that the Fisher consistent version of r_g^(2) requires information on the sign of ρ. If σ₁ ≠ σ₂, a numerical method is needed to approximate them.

The plots in Fig. 1 display the relationship of the various correlations to the parameter ρ in the lognormal distributions. In the left plot, with σ₁ = σ₂, we have r_g^(1) = r_g^(2) > ρ > r_s > r_p > r_τ for 0 < ρ < 1; otherwise they are equal, at 0 and 1. On the right, with σ₁ ≠ σ₂, if 0 < ρ < 1 then r_g^(1) > r_g^(2), though the differences between r_g^(1) and r_g^(2) are tiny and unnoticeable in the plot. Also we have r_g^(1) > r_s > r_τ > r_p. Note that the Pearson correlation r_p cannot reach 1 when σ₁ ≠ σ₂. The maximum value in the plot is 0.6642169, attained at ρ = 1. From Equation (8), it is easy to prove that r_p < 1 for ρ = 1 if σ₁ ≠ σ₂. In other words, for a normal random variable X and a positive constant a ≠ 1, r_p(exp(X), exp(aX)) < 1, meaning that the Pearson correlation is not suitable for describing nonlinear relationships.

Influence function The influence function (IF) introduced by Hampel [13] is now a standard tool which serves two purposes. The first is to measure local robustness, i.e. the effects on estimators of infinitesimal perturbations of distribution functions. The second is to derive limiting distributions and asymptotic variances. See also [14]. For a cdf H on R^d and a functional T, the influence function is IF(z; T, H) := lim_{ε→0} [T((1 − ε)H + ε δ_z) − T(H)]/ε, where δ_z denotes the point-mass distribution at z. Under regularity conditions on T (see [14,34] for details), we have E_H{IF(Z; T, H)} = 0 and the von Mises expansion T(H_n) = T(H) + (1/n) Σ_{i=1}^n IF(Z_i; T, H) + o_p(n^{−1/2}), where H_n denotes the empirical distribution based on a sample z₁, ..., z_n. This representation shows the connection between the IF and the robustness of T, observation by observation. Further, (14) yields the asymptotic m-variate normality of T(H_n). We first derive the influence functions of the standard Gini correlations γ₁ and γ₂, which are treated in the following proposition. Proposition 3.1. For any continuous bivariate distribution H with finite first moment, the influence functions of the traditional Gini correlations exist in closed form. The influence functions of the standard Gini correlations are approximately linear in u and v.
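The contamination definition of the influence function suggests a direct numerical check. The sketch below (again illustrative, not from the paper) approximates IF(z; T, H) at a point z by ε-contaminating the empirical distribution, imitating the point mass δ_z with replicated copies of z; it reuses gini_correlations from the earlier sketch.

```python
# Rough numerical approximation of the influence function:
# IF(z) ~ [T((1 - eps) Hn + eps * delta_z) - T(Hn)] / eps.
import numpy as np

def empirical_influence(T, x, y, z, eps=0.01):
    n = len(x)
    m = max(1, int(round(eps * n / (1.0 - eps))))  # copies so m/(n+m) ~ eps
    xz = np.concatenate([x, np.full(m, z[0])])
    yz = np.concatenate([y, np.full(m, z[1])])
    eps_eff = m / (n + m)
    return (T(xz, yz) - T(x, y)) / eps_eff

rng = np.random.default_rng(1)
s = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=5000)
T = lambda a, b: gini_correlations(a, b)[0]    # gamma1 from the earlier sketch
print(empirical_influence(T, s[:, 0], s[:, 1], z=(4.0, -4.0)))
```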
Compared with the quadratic effects in the influence function of the Pearson correlation coefficient [5], γ₁ and γ₂ are more robust than the Pearson correlation. However, they are not strictly robust, since their influence functions are unbounded. Kendall's tau r_τ and the Spearman correlation r_s have bounded influence functions [3]. In this sense, the standard Gini correlations are more robust than r_p but less robust than r_τ and r_s.

Proposition 3.2. For any continuous distribution H with finite first moment, the influence functions of r_g^(1) and r_g^(2) exist whenever r_g^(2) ≠ 0. Since the square-root function is not differentiable at zero, the influence function of r_g^(2) does not exist when r_g^(2) = 0. This brings difficulty in deriving the limiting distribution of the sample r̂_g^(2) when r_g^(2) = 0, as explained further in a later section. The influence function of r_g^(1) and that of a nonzero r_g^(2) are linear combinations of the influence functions of γ₁ and γ₂, and hence are approximately linear in u and v. The symmetric Gini correlation r_g^(s) proposed in [28] also has an approximately linear influence function. We expect that the newly studied Gini correlations and the symmetric one based on the joint rank perform similarly in terms of robustness and statistical efficiency.

In Figure 2, we demonstrate the influence functions of r_p, r_τ, r_g^(s), r_g^(1) and r_g^(2) under a bivariate normal distribution with equal means and variances and a fixed positive ρ. Since r_g^(1) = ρ and r_g^(2) = |ρ| for bivariate normal distributions, the influence functions of the two Gini correlations are identical for this positive ρ, and thus share the same plot in Figure 2. The same holds under a general elliptical distribution. Note that the scales of the values of the influence functions in the four plots are quite different.

Estimation Estimation of the two new symmetric Gini correlations can be done easily by plugging in estimators γ̂₁ and γ̂₂ of γ₁ and γ₂, respectively. Given a random sample Z = {Z₁, Z₂, ..., Z_n} with Z_i = (X_i, Y_i)ᵀ, the traditional Gini correlations γ₁ and γ₂ can each be estimated by a ratio of U-statistics. The authors of [31] applied the U-statistic theorem to establish consistency and asymptotic normality of γ̂₁ and γ̂₂. The same result can be reached through the influence function approach of Proposition 3.1. More specifically, for H with a finite second moment, γ̂₁ and γ̂₂ are asymptotically normal with asymptotic variances v_{γ₁} and v_{γ₂}. For a bivariate normal distribution, Xu et al. [41] provided explicit formulas. Direct evaluation of the U-statistics in (16) and (17) is time-intensive, with complexity O(n²). Rewriting U₁ and U₂ as linear combinations of order statistics reduces the computation to O(n log n) [31]; here X_(i) is the i-th order statistic of X₁, X₂, ..., X_n and X(Y_(i)) is the X value corresponding to the order statistic Y_(i). Similarly, U₃ and U₄ are linear combinations of order statistics. This provides computational efficiency for γ̂₁ and γ̂₂. Thus we have computationally efficient estimators for r_g^(1) and r_g^(2): r̂_g^(1) is the arithmetic mean of γ̂₁ and γ̂₂, while r̂_g^(2) is the geometric mean of γ̂₁ and γ̂₂ (18). Both are continuous functions of γ̂₁ and γ̂₂ and can be efficiently calculated in O(n log n) time. The strong consistency of r̂_g^(1) and r̂_g^(2) follows directly from the strong consistency of γ̂₁ and γ̂₂.

Proposition 4.1. Let Z₁, Z₂, ..., Z_n be a random sample from a continuous bivariate distribution H with finite first moment. Then r̂_g^(1) and r̂_g^(2) given in (18) converge almost surely to r_g^(1) and r_g^(2), respectively.
Limiting distributions To simplify the presentation, we introduce shorthand notation for the asymptotic variances. With the influence function derived in Proposition 3.2, we can easily obtain the asymptotic normality of r̂_g^(1). Under the lognormal distribution, the asymptotic normality of the Fisher consistent estimator ρ̂_g^(1) is obtained by the Delta method; its asymptotic variance is obtained from v_g through a transformation involving ψ and Φ, the pdf and cdf of the standard normal random variable, respectively.

To study the asymptotic behavior of r̂_g^(2), we have to overcome the difficulty brought about by the nonexistence of the influence function when r_g^(2) = 0. It is interesting to see that there are three different limiting distributions of r̂_g^(2), corresponding to three cases of r_g^(2). We present the results in the following two propositions. For r_g^(2) ≠ 0, the influence function of r_g^(2) exists and can be used to establish the asymptotic normality of r̂_g^(2) and to calculate its asymptotic variance. If r_g^(2) = 0, the influence function of r_g^(2) does not exist, and hence we have to rely on U-statistic theory to derive the limiting distributions of r̂_g^(2). There are two different cases resulting from r_g^(2) = 0, depending on whether or not both γ₁ and γ₂ are zero. Without loss of generality, we assume γ₁ = 0, and the two cases correspond to γ₂ = 0 and γ₂ ≠ 0, respectively.

Asymptotic relative efficiency We compare the asymptotic efficiency of the symmetric Gini correlations with other correlations under elliptical distributions and lognormal distributions. We consider Fisher consistent estimators. Note that the purpose here is not to estimate the parameter ρ, which is usually provided by likelihood inference. Rather, the Fisher consistent correlation coefficients estimate the same parameter, and hence their asymptotic variances and statistical efficiencies are comparable. Denote by ρ̂_g^(1), ρ̂_g^(2), ρ̂_g^(s), ρ̂_γ, ρ̂_τ and ρ̂_p the corresponding estimators based on the symmetric Gini, joint-rank Gini, standard Gini γ₁, Kendall's tau and Pearson correlations. The asymptotic variances of these estimators are derived by the Delta method.

We compute the asymptotic variances (ASV) of the Pearson estimator ρ̂_p and the asymptotic relative efficiencies (ARE) of the estimators ρ̂_g^(1), ρ̂_g^(2), ρ̂_g^(s), ρ̂_γ and ρ̂_τ relative to ρ̂_p, which are reported in the first part of Table 1. The asymptotic relative efficiency of one estimator ρ̂₁ with respect to another ρ̂₂ is defined by ARE(ρ̂₁, ρ̂₂) = ASV(ρ̂₂)/ASV(ρ̂₁). The second part of Table 1 lists the ASV of all correlations under the lognormal distribution with unequal σ₁ and σ₂. In this case, the Pearson correlation has extremely large asymptotic variances, a result agreeing well with [19,23]. The asymptotic variance of r̂_p involves the fourth moment and is given by Witting and Müller-Funk [40]. For the lognormal case considered, using the Delta method, the ASV of the Fisher consistent Pearson correlation ρ̂_p is v_p multiplied by 3.12.

Since we have yet to determine the relationship between ρ and r_g^(s) for the lognormal distribution, the asymptotic relative efficiencies of ρ̂_g^(s) under the lognormal distribution are not presented in this paper. Note that, by Remark 4.1, we have γ₁ = γ₂ and hence the ASVs of ρ̂_g^(1) and ρ̂_g^(2) are the same in all cases except the second setup of the lognormal distribution. In that case, one symmetric Gini estimator is 15%, 10% and 2% more efficient than the other for the three increasing values of ρ considered, respectively.
Table 1 shows that the asymptotic variances of ρ̂_p, ρ̂_g^(1), ρ̂_g^(2), ρ̂_g^(s), ρ̂_γ and ρ̂_τ all decrease as ρ increases in elliptical distributions. The asymptotic variances increase for t distributions as the degrees of freedom ν decrease. Under normal distributions, the Pearson correlation estimator is the maximum likelihood estimator of ρ and is thus asymptotically the most efficient. The two proposed symmetric Gini estimators ρ̂_g^(1) and ρ̂_g^(2) are both highly efficient, with AREs greater than 90 percent, and are thus more efficient than Kendall's estimator ρ̂_τ and the traditional Gini correlation estimator ρ̂_γ. For heavy-tailed elliptical distributions, the symmetric Gini estimators ρ̂_g^(1) and ρ̂_g^(2) are more efficient than Pearson's estimator ρ̂_p. They are also more efficient than the traditional Gini correlation in all elliptical distributions. The rank-based symmetric Gini correlation ρ̂_g^(s) has an efficiency similar to that of ρ̂_g^(1) and ρ̂_g^(2), with a slight advantage at the larger values of ρ (e.g. 0.9). Under the lognormal distribution with σ₁ = σ₂, ρ̂_g^(1) and ρ̂_g^(2) are competitive with Kendall's tau. In the case σ₁ ≠ σ₂, however, a large variation in Y degrades the performance of γ̂₂ and consequently of ρ̂_g^(1) and ρ̂_g^(2); Kendall's tau is the most efficient in this case.

Empirical Results We first conduct a small simulation to compare the computational efficiency of each correlation. Then we compare the finite-sample statistical efficiency of these methods.

Computational efficiency To study the computational efficiency of these methods for finite samples, we perform a small simulation comparing the calculation times of the two symmetric Gini correlation estimators r̂_g^(1) and r̂_g^(2) with those of the Kendall's tau r̂_τ, Spearman r̂_s and Pearson r̂_p correlation estimators, as well as the symmetric Gini correlation estimator r̂_g^(s). Samples of several sizes n were drawn from a bivariate normal distribution with fixed parameters. For each sample, the computation times of each correlation measure were recorded. The procedure was then repeated 30 times to obtain the mean and standard deviation of the computation times for each measure. In Table 2, we display the mean and standard deviation (in parentheses) of the calculation times for r̂_g^(1), r̂_g^(2), r̂_g^(s), r̂_τ, r̂_s and r̂_p. The values in Table 2 were obtained on a Windows PC with an Intel Core i7-9700K CPU @ 3.60 GHz, 8 cores. The R package "pcaPP" is used for fast computation of the Kendall's tau correlation.

Table 2: The mean and standard deviation (in parentheses) of calculation times for r̂_g^(1), r̂_g^(2), r̂_g^(s), r̂_τ, r̂_s and r̂_p under a bivariate normal distribution.

From the complexity study, we know that r̂_g^(1), r̂_g^(2), r̂_τ and r̂_s all have calculation times of O(n log n), r̂_g^(s) has a calculation time of O(n²), and r̂_p has a calculation time of O(n). In Table 2, we can see that r̂_p is the most computationally efficient, with r̂_g^(1), r̂_g^(2), r̂_τ and r̂_s being only slightly less efficient. It is clear from Table 2 that all of r̂_g^(1), r̂_g^(2), r̂_p, r̂_τ and r̂_s would perform well for almost all sample sizes; however, r̂_g^(s) would not perform well for large samples.

Finite sample efficiency In order to study the efficiency of these methods for finite samples, we conduct a small simulation comparing the two symmetric Gini correlations with the Kendall's τ, Spearman and Pearson correlation estimators. Samples of two sizes n were drawn from four t-distributions with degrees of freedom 1, 5, 15 and ∞, and from the Kotz and lognormal distributions, with a fixed location vector µ and scatter matrix Σ.
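The flavor of this finite-sample comparison can be reproduced with a short Monte Carlo like the following sketch. The estimator set, parameter values and replication counts are illustrative and much smaller than in the paper; the Kendall estimator is made Fisher consistent via sin(π r̂_τ/2), as in Section 2.

```python
# Sketch: sqrt(n) * RMSE of correlation estimators under a bivariate t
# distribution, built as a normal scale mixture. Illustrative settings only.
import numpy as np
from scipy import stats

def sample_bivariate_t(n, rho, df, rng):
    cov = np.array([[1.0, rho], [rho, 1.0]])
    g = rng.multivariate_normal([0, 0], cov, size=n)
    w = rng.chisquare(df, size=n) / df
    return g / np.sqrt(w)[:, None]            # heavier tails as df decreases

def sqrt_n_rmse(estimator, true_value, n, df, rho, M, rng):
    err2 = 0.0
    for _ in range(M):
        z = sample_bivariate_t(n, rho, df, rng)
        err2 += (estimator(z[:, 0], z[:, 1]) - true_value) ** 2
    return np.sqrt(n * err2 / M)

rng = np.random.default_rng(2)
rho, df, n, M = 0.5, 5, 200, 500
pearson = lambda a, b: np.corrcoef(a, b)[0, 1]
kendall = lambda a, b: np.sin(np.pi / 2 * stats.kendalltau(a, b)[0])  # consistent
for name, est in [("Pearson", pearson), ("Kendall (consistent)", kendall)]:
    print(name, sqrt_n_rmse(est, rho, n, df, rho, M, rng))
```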
The R package "mnormt" was used to generate data from the multivariate t distributions and the bivariate normal distribution; lognormal data were obtained by taking the exponential transformation of a bivariate normal random sample. We generate data from the Kotz distribution by first obtaining uniformly distributed random vectors on the unit circle, u = (cos θ, sin θ)ᵀ with θ in [0, 2π], and then generating r from a Gamma distribution with shape parameter α = 2 and scale parameter β = 1. Thus Σ^{1/2} r u + µ is a sample from a bivariate Kotz(µ, Σ) distribution.

An estimator ρ̂(m) is computed for the m-th sample, and the root mean squared error (RMSE), defined as RMSE = sqrt((1/M) Σ_{m=1}^M (ρ̂(m) − ρ)²), is used as the criterion for assessing estimators. In our experiment, M is set to 3000. The procedure is then repeated 30 times to obtain the mean and standard deviation of √n RMSE. In Table 3, we display the mean and standard deviation (in parentheses) of √n RMSE for ρ̂_g^(1), ρ̂_g^(2), ρ̂_τ, ρ̂_s and ρ̂_p. We notice a decreasing trend in √n RMSE as ρ increases for each sample size, and an increasing trend as the degrees of freedom ν decrease for t distributions. Under the normal distribution, the √n RMSEs of both proposed symmetric Gini estimators, ρ̂_g^(1) and ρ̂_g^(2), are highly competitive with the √n RMSE of ρ̂_p. For ρ = 0.1, the symmetric Gini estimator outperforms ρ̂_p in all distributions. We include the heavy-tailed distribution t(1) to demonstrate the behavior of the Pearson and Gini estimators when their asymptotic variances may not exist. We observe that, for the large sample size, the error of ρ̂_p is around twice as large as that of both ρ̂_g^(1) and ρ̂_g^(2). When the sample size is small and the degrees of freedom ν are large (15, ∞), the symmetric Gini estimators perform the best. For the lognormal distribution with small ρ, the symmetric Gini estimators again compare favorably.

Robustness We also conduct a simulation with contaminated data to demonstrate robustness and show how contamination affects the performance of each correlation. We generate contaminated data of two sample sizes from a mixture normal model with two contamination rates ε. The majority of the data is highly positively correlated, with contamination by a small portion of negatively correlated outliers. The same criterion, √n RMSE, is used to evaluate the difference between each correlation estimator and the true parameter value 0.9. M and the number of repetitions are the same as in the previous subsection: 3000 and 30, respectively. The results are listed in Table 4.

In each case above, the Pearson correlation has the highest RMSE. This indicates the Pearson correlation's sensitivity to contamination and the high level of degradation those outliers cause in its performance. The most robust correlation is Kendall's tau. The performance of the Gini correlations is between those of the Pearson and Kendall correlations. This result supports our findings from the derived influence functions in Section 3. The two symmetric Gini correlations ρ̂_g^(1) and ρ̂_g^(2) perform very similarly, but they are less robust than the joint-rank-based Gini correlation ρ̂_g^(s).

Real data analysis For the purpose of illustration, we apply the developed Gini correlations to the "GDP per capita and Suicide rates" data, which is available on Kaggle. Many factors (mental health issues, weather, culture, etc.)
affect suicide. We would like to explore whether or not an economic factor, such as GDP, relates to the suicide rate by measuring the correlation using several correlation coefficients. The data contain information from 160 countries around the world for the years 2000, 2005, 2010, 2015 and 2016. There are 2 missing values in the 2000 data and 5 missing values in the other years. We drop those countries with missing values and consider only the complete data for each year. We analyze how GDP and crude suicide rates are related and how the relationship changes through the years. The crude suicide rate (SR) is the number of suicide deaths in a year divided by the population and multiplied by 100,000. The countries with the highest suicide rates are Russia and Lithuania; their suicide rates range from 32 to 52 per 100,000 people. Luxembourg is the country with the highest GDP per capita, $48,736 in 2000 and $101,305 in 2016. Ethiopia, Burundi and Somalia are the countries with the lowest GDP, $124 in 2000 and $282 in 2016. There is a high degree of positive skewness in the distribution of GDP, hence we also consider the log transformation of the GDP data to handle the asymmetry. We draw the scatterplot between GDP per capita and SR, as well as the scatterplot between log(GDP) and SR, for each year in Figure 3. We also add a cubic smoothing spline fitting curve to each plot, using the default parameter values of smooth.spline in R. We can see that the fitted curves demonstrate a non-linear relationship between GDP per capita and SR, but almost linear relationships between log(GDP) and the suicide rate, except for the year 2010. We have calculated the symmetric Gini correlations for (GDP, SR) and (log(GDP), SR), as well as the other correlations, presented for comparison in Table 5. We utilize the jackknife method to estimate the variation of the sample correlations. Let r(−i) be the jackknife pseudo-value of a correlation estimator r based on the sample with the i-th observation deleted. Then the jackknife variance is v_jack = ((n − 1)/n) Σ_{i=1}^n (r(−i) − r(·))², where r(·) = (1/n) Σ_{i=1}^n r(−i). See [35] for more details. Table 5 lists the jackknife standard deviations in parentheses.

From Table 5, we observe that all the listed correlations between GDP per capita and SR are less than 0.5000, which indicates a weak or moderate association between GDP per capita and SR and is consistent with Figure 3. However, with each year we notice an increasing trend in the correlations between GDP and SR. The data suggest that the correlations between the two become more significant as time passes. The values of r̂_g^(1) and r̂_g^(2) are close to each other, but there is a visible difference between the regular Gini correlations γ̂₁ and γ̂₂. After the log transformation of GDP, the difference becomes less significant. The monotonic transformation does not change the ranks of the GDP values, so Kendall's τ and γ̂₂ should maintain the same values before and after the transformation, which agrees with the values shown in Table 5.

Table 6: Correlations between log(GDP) and SR for the complete data and the edited data in 2015 and 2016. The standard deviations are in parentheses.
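The delete-one jackknife used for Table 5 is straightforward to implement. A minimal sketch follows, with the Pearson estimator standing in for any of the correlation estimators; the data are synthetic.

```python
# Sketch of the delete-one jackknife standard deviation of a correlation
# estimator; `estimator` is any function of (x, y).
import numpy as np

def jackknife_sd(estimator, x, y):
    n = len(x)
    pseudo = np.array([
        estimator(np.delete(x, i), np.delete(y, i)) for i in range(n)
    ])
    # v_jack = (n - 1)/n * sum((r(-i) - r(.))^2), r(.) the mean pseudo-value
    var = (n - 1) / n * np.sum((pseudo - pseudo.mean()) ** 2)
    return np.sqrt(var)

rng = np.random.default_rng(3)
z = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], size=160)
pearson = lambda a, b: np.corrcoef(a, b)[0, 1]
print(jackknife_sd(pearson, z[:, 0], z[:, 1]))
```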
To demonstrate robustness, we delete some outliers and compare the differences of each correlation estimator between the complete data and the edited data. We expect the Pearson correlation to show the largest difference, the Kendall's τ correlation to show the smallest, and the Gini correlations to be somewhere in between. We consider the log(GDP) and SR data from 2015 and 2016 and delete all countries with SR above a fixed cutoff. The results listed in Table 6 confirm what we expect. In 2015, the Pearson correlation estimator changes from 0.295 to 0.353, while the symmetric Gini correlations only change slightly, from 0.321 to 0.335. The Kendall's tau correlation is the most stable. A similar conclusion can be drawn for the 2016 data. This experiment illustrates that the Pearson correlation is not robust and may not be a good measure of association, even though the cubic smoothing spline fitting lines in the scatter plots of Fig. 3 are almost linear in 2015 and 2016, which would otherwise suggest using the Pearson correlation; the other correlations are preferable in this example.

Conclusion We have systematically studied two symmetric Gini correlations, r_g^(1) and r_g^(2), which are the arithmetic and geometric means of the traditional Gini correlations γ₁ and γ₂. We studied the basic properties of r_g^(1) and r_g^(2), as well as their relationships to the correlation parameter in elliptical distributions and the log-normal distribution. Such relationships enable us to obtain Fisher consistent versions of each correlation. We derived their influence functions in order to gauge robustness; they are more robust than the Pearson correlation but less robust than the Kendall's tau and Spearman correlations. We established the asymptotic distributions of the sample correlations. The usual asymptotic normality holds for r̂_g^(1), as well as for r̂_g^(2) as long as r_g^(2) ≠ 0; their asymptotic variances are obtained through the influence function approach. For r_g^(2) = 0, r̂_g^(2) has two different limiting distributions, depending on whether or not both γ₁ and γ₂ equal 0. We compared their computational efficiency and statistical efficiency with those of the rank-based symmetric Gini, Kendall's tau and Pearson correlations. r̂_g^(1) and r̂_g^(2) can be efficiently calculated with a computational complexity of O(n log n). The asymptotic efficiency and finite-sample efficiency of each correlation were obtained under various elliptical distributions and asymmetric lognormal distributions. In summary, the two symmetric Gini correlations balance well among statistical efficiency, robustness and computational efficiency.

Continuations of this work could advance in several directions. The jackknife empirical likelihood (JEL) method proposed by Jing et al. [15] has been proven to be effective and reliable in dealing with U-statistics. Sang et al. [29] have applied JEL to the classical Gini correlations. It could be beneficial to develop JEL for the two symmetric Gini correlations. In the current work, comparisons among correlations are made in elliptical distributions and lognormal distributions. It would be worthwhile to explore the comparisons in wider families of bivariate distributions, such as the copula family and Farlie-Gumbel-Morgenstern models. Fontanari et al. [8] proposed new Archimedean copulas based on the Lorenz curve that are highly related to the Gini index and Gini correlations; it would be interesting to study correlations in this family. Dang et al. [4] extended the Gini mean difference in one dimension to the Gini covariance matrix (GCM) in high dimensions. However, its computation cost is O(n²). It would be worthwhile to study a GCM based on r_g^(1) or r_g^(2), which should be more computationally efficient.
Let (X, Y)ᵀ follow a lognormal distribution with parameters (µ₁, µ₂, σ₁², σ₂², ρ). Then the marginal distributions are F(x) = Φ((log x − µ₁)/σ₁) and G(y) = Φ((log y − µ₂)/σ₂), respectively; EX = exp(µ₁ + σ₁²/2) and E(X|Y) = exp(µ₁ + ρσ₁(log Y − µ₂)/σ₂ + σ₁²(1 − ρ²)/2) by [23]. The last equation is due to (22); the stated expressions then follow.

A proof of Proposition 4.1 follows directly from the strong consistency of the U-statistics U₁, U₂, U₃, U₄, by the U-statistic theorem [34], and the fact that r̂_g^(1) and r̂_g^(2) are continuous functions of U₁, U₂, U₃, U₄. By the continuous mapping theorem [34], the strong consistency of r̂_g^(1) and r̂_g^(2) holds.

Proof of Propositions 4.2 and 4.3. The asymptotic normality of r̂_g^(1), and that of r̂_g^(2) when r_g^(2) ≠ 0, are immediate results of the influence function approach [14], the U-statistic theorem and the continuous mapping theorem [34]. We need to explore the limiting distribution of |U₁U₂|. U₃ and U₄ are always positive, and the denominator sqrt(U₃U₄) converges to sqrt(∆₁∆₂) almost surely by the U-statistic theorem; then, by Slutsky's theorem [38], the limiting distribution of r̂_g^(2) follows. Now consider U₁U₂, the product of two U-statistics. We have U₁U₂ = U_n + R_n, where R_n = o_p(n⁻¹) and the symmetric kernel is g(z₁, z₂, z₃, z₄) = (1/4!) Σ_p h₁(z_{i₁}, z_{i₂}) h₂(z_{i₃}, z_{i₄}), with Σ_p denoting summation over the 4! permutations (i₁, i₂, i₃, i₄) of (1, 2, 3, 4). Define the new U-statistic U_n = C(n, 4)⁻¹ Σ_{1≤i<j<k<l≤n} g(Z_i, Z_j, Z_k, Z_l). It is easy to check that U₁U₂ is asymptotically equivalent to U_n. The first-order and second-order projections of the kernel g are then considered.

The symmetric Gini correlations are invariant under affine transformations: r_g(aX + c, bY + d) = r_g(X, Y) for any constants c, d and nonzero a, b. In the stochastic representation of an elliptical distribution, ‖·‖ is the Euclidean norm and U is uniformly distributed on the unit sphere. When d = 1, the class of elliptical distributions coincides with the location-scale class. For d = 2, let Z = (X, Y)ᵀ and Σ = [σ₁₁ σ₁₂; σ₁₂ σ₂₂]; then the corresponding linear correlation coefficient of X and Y is ρ = ρ(X, Y) := σ₁₂/sqrt(σ₁₁σ₂₂).

Remark 4.1. If γ₁ = γ₂ ≠ 0, we have v_g^(1) = v_g^(2), meaning that the two estimators r̂_g^(1) and r̂_g^(2) have the same statistical efficiency.

Proposition 4.4. Let Z₁, Z₂, ..., Z_n be a random sample from a 2-dimensional distribution H with finite second moment. When r_g^(2) = 0 (with γ₁ = 0, say), we have: 1. If γ₂ ≠ 0, a suitably normalized r̂_g^(2) converges to the square root of a folded normal random variable, that is, it converges in distribution to |Z|^{1/2}, where Z is a normal random variable with mean zero and variance given in the proof. 2. If γ₂ = 0, a suitably normalized r̂_g^(2) converges in distribution to the square root of |Σ_s λ_s(χ²_s − 1)|, where ∆₁ = 4Cov(X, F(X)) and ∆₂ = 4Cov(Y, G(Y)) are the Gini mean differences of F and G entering the normalization, the χ²_s (s = 1, 2, ...) are independent χ² variables and {λ_s} (s = 1, 2, ...) are coefficients given in the proof.

Figure 3: Scatter plots between GDP and suicide rate, and between log(GDP) and suicide rate, in different years. A cubic smoothing spline fitting curve is added in each plot.

Table 4: The mean and standard deviation (in parentheses) of √n RMSE of each correlation estimator for the contaminated normal data.

Table 5: All types of correlations for (GDP, SR) and (log(GDP), SR), respectively. The standard deviations are in parentheses.
9,161.6
2020-01-01T00:00:00.000
[ "Mathematics", "Computer Science" ]
Sampling Designs with Linear and Quadratic Probability Functions Fixed size without replacement sampling designs with probability functions that are linear or quadratic functions of the sampling indicators are defined and studied. Generality, simplicity, remarkable properties, and also somewhat restricted flexibility characterize these designs. It is shown that the families of linear and quadratic designs are closed with respect to sample complements and with respect to conditioning on sampling outcomes for specific units. Relations between inclusion probabilities and parameters of the probability functions are derived and sampling procedures are given. Introduction In the first part of the paper, we consider the most general fixed size without replacement sampling design with a probability function that is linear in the sampling inclusion indicators-the linear sampling design. The linear design, being an unequal probability design, is remarkable due to simple explicit relations between inclusion probabilities and parameters of the probability function. This enables sampling with desired inclusion probabilities, design-unbiased estimation and variance estimation. As special cases, the design covers the classical Midzuno [1] and the complementary Midzuno (see [2]) designs. It is shown that the linear design can be seen as a mixture of the two types of Midzuno designs. It is also shown that the family of linear designs is closed with respect to conditioning on sampling outcomes. This property, as well as the mixture representation, offers easy methods for sampling from the linear design. The family is also closed with respect to sample complements, i.e. the complement of a sample from a linear design is a sample from another linear design.

In the second part of the paper, the fixed size without replacement design with quadratic probability function-the quadratic sampling design-is defined and studied. It is the natural extension of the linear design. A classical design by Sinha [3] is a special case of the quadratic design. His design aimed at sampling with prescribed second-order inclusion probabilities. Together with Sinha's design, two other special quadratic designs are studied more closely. These three designs are easy to use for sampling. There are explicit expressions for the second-order inclusion probabilities of Sinha's design. In the general case, the formulas are more complicated. A lemma is proved which relates the second-order inclusion probabilities and the parameters of the quadratic design. Like the family of linear designs, the family of quadratic designs is closed with respect to sample complements and with respect to conditioning on sampling outcomes. The last property makes list-sequential sampling from these designs efficient.

The linear and the quadratic designs are simple but somewhat restricted when the aim is sampling with prescribed first- or second-order inclusion probabilities. They cannot be used for all such probabilities. In the final section of the paper, possible extensions are mentioned. Linear Sampling Designs We treat without replacement (WOR) sampling designs of size n from a population of size N. Let I = (I₁, I₂, ..., I_N)ᵀ be the binary random sampling inclusion vector. A sampling design is given by its probability function p(x) = Pr(I = x), x ∈ S, where S is the set of all possible "samples" x of size n. Definition 1. A sampling design of size n is linear if its probability function is proportional to Σ_k c_k x_k, x ∈ S. Of course, equal c_k's give SRSWOR. Below it is always assumed that the c_k's are normalized to have sum 1.
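For a toy population, the linear design of Definition 1 can be examined by brute force: enumerate all C(N, n) samples, weight each by Σ_k c_k x_k and normalize. The sketch below (with illustrative values of c) verifies that the probabilities sum to one and computes the first-order inclusion probabilities directly; they match relation (2) derived just below.

```python
# Brute-force check of the linear design on a small population.
from itertools import combinations

N, n = 6, 3
c = [0.05, 0.10, 0.15, 0.20, 0.25, 0.25]        # normalized to sum 1

samples = list(combinations(range(N), n))
weights = [sum(c[k] for k in s) for s in samples]
total = sum(weights)                             # equals C(N-1, n-1) * sum(c)
probs = [w / total for w in weights]
print(abs(sum(probs) - 1.0) < 1e-12)             # probabilities sum to one

# First-order inclusion probabilities by direct summation; these agree with
# pi_i = (n-1)/(N-1) + c_i*(N-n)/(N-1), cf. relation (2) below.
pi = [sum(p for s, p in zip(samples, probs) if i in s) for i in range(N)]
print([round(v, 4) for v in pi])
```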
With the c_k's normalized to sum 1, the probability function is p(x) = C(N−1, n−1)⁻¹ Σ_k c_k x_k, since each unit k belongs to exactly C(N−1, n−1) samples of size n. The inclusion probabilities π_i = Pr(I_i = 1) of the linear design are given by

π_i = (n − 1)/(N − 1) + c_i (N − n)/(N − 1). (2)

By similar algebra, the second-order inclusion probabilities are

π_ij = (n − 1)(π_i + π_j)/(N − 2) − n(n − 1)/((N − 1)(N − 2)), (3)

i.e. the π_ij's are linear in the first-order inclusion probabilities.

Without restriction we may assume that the c_k's are given in increasing order. Obviously, in order to get p(x) ≥ 0 for all samples, it is then necessary and sufficient that c₁ + c₂ + ... + c_n ≥ 0. We obtain the following theorem. Theorem 1. Let π₁, π₂, ..., π_N be given numbers with sum n. Then there is a linear sampling design with these numbers as inclusion probabilities if and only if the sum of the n smallest of them is at least n(n − 1)/(N − 1). By relation (2), the coefficients are then recovered as c_i = ((N − 1)π_i − (n − 1))/(N − n).

To sample from a linear sampling design, we may use the well-known acceptance-rejection (AR) technique. A constant A such that p(x) ≤ A p_SRS(x) for all x ∈ S is first found; assuming that the c_k's are in increasing order, it suffices to base A on the sum of the n largest coefficients. Then we generate a tentative sample x according to SRSWOR of size n and a random number u uniform on (0, 1). The sample is accepted as a sample from the linear design if u ≤ p(x)/(A p_SRS(x)). The full procedure is repeated until a sample is accepted. The acceptance rate equals 1/A. As will be seen later on, other sampling techniques also exist. There are two special linear sampling designs: the Midzuno design and the complementary Midzuno design.

The Midzuno design has the probability function p(x) = C(N−1, n−1)⁻¹ Σ_k a_k x_k with a_k ≥ 0 for all k and Σ_k a_k = 1. Apparently it is a linear design. We may sample from the design by selecting one unit according to the probabilities a_k and then n − 1 further units according to SRSWOR from the remaining N − 1 ones. The design was introduced in [1] (considering the a_k's as proportional to auxiliary variables). Tillé ([4], p. 117) generalizes it and gives many further references concerning the design. Brewer and Hanif ([5], p. 25) remark that the inequalities on the inclusion probabilities are restrictive.

The complementary Midzuno design (see [2]) has the probability function proportional to Σ_k b_k (1 − x_k), with b_k ≥ 0 and Σ_k b_k = 1. This design is considered in [6] with the b_k's proportional to auxiliary variables. We may sample from the design by removing one unit according to the probabilities b_k and sampling n units by SRSWOR among the remaining N − 1 ones. For the complementary Midzuno design, the formula for the second-order inclusion probabilities coincides with (3). Since Σ_k b_k(1 − x_k) = Σ_k (1/n − b_k) x_k for samples of size n, we also see that the complementary Midzuno design is a linear design with coefficients proportional to 1 − n b_k; if n b_k ≤ 1 for all k, it is even a Midzuno design, and hence the two families overlap.

Remark 1. The π_i's are inclusion probabilities for a Midzuno design or a complementary Midzuno design only under conditions more restrictive than the inequality of Theorem 1; in fact, assuming the contrary leads to a contradiction, as the sample size is n.

The family of linear designs can be considered as closed with respect to sample complements. More precisely, the complement of any sample of size n from a linear design is a sample of size N − n from another linear design. This follows from rewriting Σ_k c_k x_k in terms of the complementary indicators y_k = 1 − x_k, which again gives (after normalization) a linear function of y.

Another interesting property of the family of linear designs is that it is closed with respect to conditioning on the sampling outcomes for specific units. For instance, if we know that unit 1 is selected (i.e. x₁ = 1), then the probability function for the remaining sampling of size n − 1 among N − 1 units is still linear. In fact, given that π₁ > 0 and conditioning on I₁ = 1, the conditional probability function of (I₂, ..., I_N) is, by (1), again of linear form with updated coefficients for k = 2, ..., N, which are normalized to obtain the new design. If instead unit 1 is not selected (x₁ = 0), the new coefficients are simply the old c_k, k ≥ 2, renormalized.
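The acceptance-rejection scheme and the Midzuno draw described above are each a few lines of code. The following sketch assumes normalized nonnegative coefficients (the Midzuno case), so that both routines target the same design; it is illustrative, not survey-production code.

```python
# Two ways to draw one sample from a linear design with nonnegative,
# normalized coefficients c (i.e. a Midzuno design).
import numpy as np

def linear_design_ar(c, n, rng):
    """Acceptance-rejection against SRSWOR."""
    c = np.asarray(c, dtype=float)
    N = len(c)
    bound = np.sort(c)[-n:].sum()       # max of sum_{k in sample} c_k
    while True:
        s = rng.choice(N, size=n, replace=False)   # tentative SRSWOR sample
        if rng.uniform() <= c[s].sum() / bound:    # accept w.p. p(x)/(A*p_SRS)
            return np.sort(s)

def midzuno(a, n, rng):
    """One unit by the a_k's, then n-1 more by SRSWOR."""
    a = np.asarray(a, dtype=float)
    N = len(a)
    first = rng.choice(N, p=a)
    rest = rng.choice(np.setdiff1d(np.arange(N), first), size=n - 1,
                      replace=False)
    return np.sort(np.append(rest, first))

rng = np.random.default_rng(4)
c = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.25])
print(linear_design_ar(c, 3, rng), midzuno(c, 3, rng))
```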
It follows that samples from a linear design can easily be generated list-sequentially. The first unit in the population is sampled with probability π₁; given the outcome, the coefficients are updated as above, the second unit is sampled with the corresponding updated inclusion probability, and so on through the list.

A Mixture Result A linear design can also be called a mixed Midzuno design because of the following theorem. Theorem 2. A linear sampling design is a mixture of a Midzuno design and a complementary Midzuno design. The components of the mixture are not unique. The very last statement in the theorem follows from rewriting the probability function in the two forms given above. The full proof of the theorem is somewhat technical and is given in an appendix. The proof yields a procedure for finding suitable components of the mixture. It is assumed that the design is not a pure Midzuno design and that the units in the population are ordered so that the coefficients are increasing. A small numerical example illustrates the procedure; since the design in that example is also a pure complementary Midzuno design, the decomposition is especially simple. Theorem 2 implies another simple way of generating samples from the linear design: first it is decided by a random choice whether a Midzuno or a complementary Midzuno design should be used, and then one of these designs is applied. In Tillé's ([4], pp. 99-104) terminology, we can see this technique as a special splitting technique which after two steps ends with SRSWOR.

Quadratic Sampling Designs There is a natural extension of the linear designs. In this section we look at fixed size designs with quadratic probability functions. We could use ordinary quadratic forms; however, it is more appropriate to sum over sets of size 2 than over pairs with ordered elements. Definition 2. A sampling design of size n is quadratic if its probability function is proportional to Σ_{{i,j}} d_ij x_i x_j, where the sum runs over sets {i, j} of two distinct units. In particular, if n = 2 then d_ij is the probability of selecting the sample {i, j}. With the d_ij's summing to 1, the normalization constant can be shown to equal the reciprocal of C(N−2, n−2). It can also be shown that the parameters are subject to constraints guaranteeing nonnegative probabilities; however, this set of constraints is rather complicated if n is not very small.

For 2 ≤ n ≤ N − 2, a linear design can also be seen as a quadratic design, because Σ_k c_k x_k = (n − 1)⁻¹ Σ_{{i,j}} (c_i + c_j) x_i x_j for samples of size n.

There are three natural quadratic designs with their basic parameters symmetric and nonnegative; for the three cases the normalization constants are readily found. It is not immediately clear that the designs (b) and (c) are quadratic according to Definition 2, but this will become obvious later on. It will also follow that the complement of a sample from a quadratic design of size n can be seen as a sample from a quadratic design of size N − n. The design (a) corresponds to selecting one pair of units according to the probabilities a_ij and then n − 2 further ones by SRSWOR. The design (b) was considered by Sinha in [3]: remove one pair of units according to the probabilities b_ij, with sum 1, and then choose n units by SRSWOR among the remaining ones. The design (c) corresponds to selecting one pair according to the probabilities c_ij, keeping one of the units for the sample and removing the other one; then n − 1 units are selected by SRSWOR among the N − 2 non-selected units.

Remark 2. An extension of the design (c) would be to let the two units of the selected pair be treated asymmetrically, in which case the units i and j have different roles. In the symmetric case the new parameters c_ij equal the previous ones multiplied by 1/2. This extension is not considered further here.

Using elementary counting identities for fixed-size samples, it is not difficult to see that the designs (b) and (c) are also quadratic designs according to Definition 2, with parameters d_ij that are not necessarily nonnegative; explicit expressions for the d-parameters are available in each case.
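Sinha's design (b) is equally simple to sample from, as the description above indicates: remove one pair with probabilities b_ij, then draw the sample by SRSWOR among the remaining units. A sketch follows, with a uniform b chosen so that the result reduces to SRSWOR as a sanity check.

```python
# Sketch of Sinha's design (b): remove one pair {i, j} with probability
# b_ij (summing to 1 over pairs), then SRSWOR of size n from the rest.
import numpy as np

def sinha_design(b_pairs, N, n, rng):
    """b_pairs: dict mapping frozenset({i, j}) -> probability, summing to 1."""
    pairs = list(b_pairs.keys())
    probs = np.array([b_pairs[p] for p in pairs])
    removed = pairs[rng.choice(len(pairs), p=probs)]
    remaining = [u for u in range(N) if u not in removed]
    return np.sort(rng.choice(remaining, size=n, replace=False))

rng = np.random.default_rng(5)
N, n = 6, 3
all_pairs = [frozenset({i, j}) for i in range(N) for j in range(i + 1, N)]
b = {p: 1.0 / len(all_pairs) for p in all_pairs}   # uniform b gives SRSWOR
print(sinha_design(b, N, n, rng))
```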
This result is a consequence of formula (4), with the sample of size n replaced by its complement of size N − n. This gives us the following theorem. It can happen that a design is of all three types simultaneously; for example, SRSWOR is such a design.

In the linear case we saw that a general linear design can be represented as a mixture of a Midzuno and a complementary Midzuno design. It would be desirable for a general quadratic design to be representable as a mixture of the three designs (a), (b) and (c) (or its extension); however, such a result is not likely to hold.

For the design (b), the second-order inclusion probabilities π_ij (which determine the first-order ones, since (n − 1)π_i = Σ_{j≠i} π_ij) are easy to find, and it is also easy to find the b-parameters that correspond to prescribed π_ij's. This result is due to Sinha [3], but more generally it is also a special case of Lemma 1 below. Sinha used his result to find a method to sample with prescribed second-order inclusion probabilities. However, for the method to work with nonnegative b_ij, strong restrictions on the π_ij's are needed.

Lemma 1. Let q_ij, i ≠ j, be given numbers with q_ij = q_ji. Then the equations they generate in the unknown pair-parameters have an explicit solution; we only have to apply the substitution given by the lemma. The proof of Lemma 1 is given in the appendix.

For a design given by a general quadratic probability function p(x), the formulas for the second-order inclusion probabilities π_ij are somewhat complicated. However, we can first find the b_ij's from formula (5) and then use the π_ij formula above. The inverse procedure can also be used to find d-parameters corresponding to given second-order inclusion probabilities. The "mixed case" (c) can be handled in the same way. A computer program makes these procedures simple to use.

To sample from the general quadratic design, we could use an AR technique by dominating p(x) with a multiple of p_SRS(x). However, it is probably simpler to use list-sequential sampling. In fact, given I₁ = x₁ (0 or 1), the probability function for the remaining sample of size n or n − 1 is again quadratic, cf. Section 2. But we must recalculate the parameters (coefficients) and calculate the first-order inclusion probabilities. The first-order inclusion probability π₁ is given by an explicit expression (assuming that the d-sum is 1).

Additional Comments As has recently become more common in sampling articles, we have used sampling indicators and focused on their probability function. The names linear design and quadratic design become very natural for this reason.

There are several advantages of the linear design: there are simple relations between the parameters and the inclusion probabilities of first and second order, and it is also easy to sample from. Thus the design is easy to use, and for the Horvitz-Thompson estimate of a population total we can also readily find a variance estimate. The drawback of the design is its lack of sufficient flexibility: although much more general than SRSWOR, it is not able to cover all possible first-order inclusion probabilities.

The quadratic design is more complicated than the linear one. By using a quadratic design, it is possible to sample with prescribed second-order inclusion probabilities. However, it cannot be used for all such possible inclusion probabilities, and therefore this design is also not flexible enough.
To get more flexibility, the linear and quadratic functions need to be complemented by binomial factors. To sample with prescribed first-order inclusion probabilities, we may generally use a probability function of the form p(x) ∝ (Σ_k c_k x_k) Π_k p_k^{x_k} (1 − p_k)^{1−x_k}, where the parameters are suitably chosen. At least three well-known designs are of this form (with nonnegative c_k): the conditional Poisson design, the Sampford design, and the Pareto design, cf. [7]. For these three cases the linear factor takes different specific forms. Letting p_i = π_i (the desired first-order inclusion probabilities) and using Lemma 1, it is possible to calculate suitable parameters b_kl without much effort. Details will be presented in a planned forthcoming paper. This design is much more flexible than the quadratic design, but it does not have full flexibility. A fully flexible design uses a more general probability function, but it is not easy to find the parameters [8].

Appendix (proof sketches). For the mixture representation of Theorem 2, the coefficients are split into a Midzuno component, with a_k ≥ 0 summing to 1, and a complementary Midzuno component; it is shown that the desired representation (9) can be achieved by placing the split at a suitable index ν. Summing the resulting identities over i and over j, j ≠ i, shows that the solution must be of the form (8), and it is not difficult to check that (8) also is a solution. In Remark 1, by (2), given numbers are inclusion probabilities for a Midzuno design iff condition (a) holds; obviously (a) implies the less restrictive inequality in Theorem 1. The π_i's are inclusion probabilities of a complementary Midzuno design iff condition (b) holds. In Theorem 2, the parameters of the two components are defined so that the first component, the Midzuno one, is used with probability α and the second component, the complementary Midzuno one, is used with probability 1 − α. For quadratic designs of size n, the possible parameter sets correspond to d_ij's with sum 1, and the equations relating the b- and c-parameters may be solved; by (5) and (6), the corresponding b- and c-parameters are then given explicitly. In one of the worked cases the design is not of type (a) or (c) but of type (b). To sample with prescribed second-order inclusion probabilities, we may use a design for which the first condition in (10) is certainly satisfied with strict inequality; if also the second one is satisfied, we can stop and use the resulting design.
3,901.8
2014-03-28T00:00:00.000
[ "Mathematics" ]
Pump–probe capabilities at the SPB/SFX instrument of the European XFEL The pump–probe capabilities at the SPB/SFX instrument of the European XFEL are discussed. Introduction The advent of hard X-ray free-electron lasers (XFELs) with high peak intensities and femtosecond pulse durations has paved the way to study the interaction between matter and light at ultrafast time scales and to atomic resolution (Emma et al., 2010;Ishikawa et al., 2012;Kang et al., 2017;Decking et al., 2020;Prat et al., 2020). Experiments in which ultrafast dynamics are triggered and observed using an X-ray pulse or a secondary source of radiation, generally known as pump-probe (PP) experiments, are of increasing utility and have become a general method at XFEL facilities worldwide (Fukuzawa & Ueda, 2020;Jang et al., 2020;Pandey et al., 2020;Biasin et al., 2021). In most cases, the dynamics of interest are triggered or 'pumped' using an optical laser and observed or 'probed' by an XFEL (Zhang et al., 2014;Kim et al., 2015;Lemke et al., 2017;Jung et al., 2021). More recently, the development of superconducting accelerator technology has allowed some XFELs to operate at a high repetition rate (Feldhaus, 2010;Rossbach, 2020;Decking et al., 2020), and this offers the potential to reduce the data acquisition time by several orders of magnitude compared with non-superconducting free-electron lasers (Gisriel et al., 2019;Sobolev et al., 2020;Ayyer et al., 2021). The European XFEL (EuXFEL) is the only presently operating high-repetition-rate XFEL in the hard X-ray regime. However, EuXFEL operates in the so-called 'burst mode' with a burst duration of 600 µs at a repetition rate of 10 Hz. Each burst can contain anything from a single pulse to a train of pulses with an intra-burst repetition rate of up to 4.5 MHz. This demands more advanced technology to stabilize variations in beam properties, which are largely caused by the mechanical components involved in the generation, transport and conditioning of the pump and probe sources not being in thermal equilibrium. Here we report the pump-probe capabilities and parameters available to study a wide variety of scientific cases at the Single Particles, Clusters, and Biomolecules and Serial Femtosecond Crystallography (SPB/SFX) instrument of the European XFEL. Pump-probe experiments at SPB/SFX The SPB/SFX instrument is primarily focused on imaging as well as determining the structure of single particles and macromolecules using hard X-rays. Various experiments based on coherent diffraction imaging (Sobolev et al., 2020;Ayyer et al., 2021), serial femtosecond X-ray crystallography (Wiedorn et al., 2018;Grünbein et al., 2018;Yefanov et al., 2019) and X-ray imaging have been performed to successfully visualize structures across a variety of length scales, utilizing megahertz repetition rate X-ray pulses. The ultrafast dynamics of biochemical reactions and the time-resolved behavior of molecular structures are also being studied at SPB/SFX (Pandey et al., 2020). To facilitate such experiments at the instrument, different optical laser pump sources and diagnostic devices have been installed and commissioned. The SPB/SFX instrument has two interaction regions where sample interactions with X-rays and laser sources are measured, one termed interaction region upstream (IRU) and the other termed interaction region downstream (IRD). The general layout of the SPB/SFX instrument is shown in Fig. 1.
The X-rays propagate from the source to their focal point at IRU entirely through vacuum, arriving with a full width at half-maximum (FWHM) focused diameter of a few micrometres or a few hundred nanometres using either of two Kirkpatrick-Baez mirror systems (Bean et al., 2016). A set of compound refractive lens refocusing optics (providing a micrometre-sized focus) is installed after IRU to enable measurements at IRD in an air/helium or vacuum environment. Pump-probe experiments are feasible at both interaction regions. A MHz X-ray imaging setup is also partially installed and commissioned, including a fast camera [Shimadzu HPV-X2 (Shimadzu, 2021)] and an X-ray imaging unit around IRD; further developments are ongoing. This paper primarily describes the capability currently available at IRU, with a similar capability planned for IRD in the future. Fig. 2 shows a simplified drawing of a typical in-vacuum pump-probe experiment at IRU. The samples are delivered to the interaction region via a liquid jet or an aerosol injector (Schulz et al., 2019a;Bielecki et al., 2019;Vakili et al., 2022). The SPB/SFX instrument geometry is suited for off-axis excitation with a laser perpendicular to the X-ray beam. The focus and pointing of the laser beam can be adjusted using a focusing lens and a steering mirror on piezo stages. The typical optical laser beam size at the interaction point is 30-50 µm FWHM. The laser and X-ray foci, and their spatial overlap, are monitored with microscopes which provide orthogonal views. The diffraction pattern is recorded with the 1 megapixel adaptive gain integrating pixel detector (AGIPD), a fast, burst-mode compatible X-ray detector (Allahgholi et al., 2015).

Figure 1 Top view of the general layout of the SPB/SFX instrument showing the central laser hutch, instrument laser hutch and experiment hutch. The two interaction regions are highlighted by orange boxes, the PP laser beam pipes and beam directions are marked in red, and the X-ray beam, marked in violet, goes from left to right.

Figure 2 Schematic of a pump-probe experimental setup at IRU of the SPB/SFX instrument.

The main diagnostic devices at IRU are the side and inline microscopes. The inline microscope consists of an infinity-corrected 10× reflective objective (Edmund Optics, ReflX Objective, 89-724), a prism mirror, a tube lens (Thorlabs, AC254-200-AB-ML) and a CMOS camera (Basler ace acA2500-14gm). The secondary mirror of the reflective objective and the prism mirror have a 700 µm and a 2 mm hole, respectively, to let the X-ray beam through. The side microscope consists of an objective or lens combined with a tube lens (Thorlabs, TTL200) and an sCMOS camera (Andor, Zyla 5.5) in air. Up to three objectives/lenses can be mounted on a motorized stage to provide different magnifications of the interaction point. The most commonly used optic is 10× (10× Mitutoyo Plan Apo Infinity Corrected Objective, 378-803-3). The inline and side microscopes were designed to provide a working distance of > 30 mm to accommodate the requirements of the sample environment (Schulz et al., 2019a). Femtosecond PP laser system The principal pump-probe laser (PP laser) at SPB/SFX is a burst-mode laser developed by the laser group at the European XFEL (Pergament et al., 2016;Palmer et al., 2019), delivering ultrashort pulse durations with up to 4.5 MHz intra-burst repetition rate.
The PP laser is based on a non-collinear optical parametric amplification (NOPA) system which is able to provide various choices of pulse duration and wavelength tunability around 800 nm. The PP laser system consists of a femtosecond oscillator, a burst-mode fibre-based front-end and a power amplifier followed by up to three NOPA stages. The different NOPA stages can be set up according to the required output parameters at the instrument. The PP laser can be operated with the same pulse pattern as the XFEL; however, there is also a means to select only a subset of pulses within the burst. Therefore, any arbitrary pulse pattern derived from the intra-burst repetition rate can be generated. In addition, it is possible to use a sequence of varying pulse patterns for consecutive bursts, that is, different patterns for subsequent trains. The NOPA pump beam can also be used as an independent source, which gives options of uncompressed 400 ps and compressed 850 fs pulses at 1030 nm. The output parameters of the PP laser are tabulated in Table 1. The spectral and temporal profiles of the compressed 800 nm pulses for the 15 fs FWHM and 50 fs FWHM pulse duration set points are shown in Fig. 3. The PP laser is transported from the central laser hutch (CLH) to the SPB/SFX instrument laser hutch (ILH) through a 10.4 m-long pipe under vacuum. Relay imaging is used to maintain pointing stability over the long propagation distance. Both the temperature (21 ± 0.1 °C) and humidity (50 ± 2.5%) in the ILH are regulated to minimize the impact of environmental fluctuations. Additionally, the PP laser beam path and optics are kept in a laser-safe enclosure in order to reduce the effect of fluctuations in airflow in both the ILH and the experiment hutch. The laser beam size, pulse energy and optical delay are conditioned in the ILH before transport to IRU through a 1.4 m-long vacuum pipe without relay imaging optics. The total optical path length from the output of the NOPA to IRU is about 40 m. When the PP laser is operated at the 800 nm/15 fs set point, pulses are initially negatively chirped to a duration of about 300 fs and compressed to the desired pulse duration close to the interaction point using fused silica glass plates with an anti-reflection coating at 800 nm. For the 800 nm/50 fs set point, the pulse duration is optimized at the desired location by means of a Treacy compressor (Treacy, 1969). Currently, the PP laser is only available at IRU in SPB/SFX, and in general is limited to single-wavelength operation during a given experiment. The 800 nm branch of the PP laser is split into two in the ILH, with 99% transported to the interaction region and the remaining 1% used for diagnostics. This enables online monitoring not only of the beam profile, pointing stability, pulse-resolved intensity and spectrum but also of the arrival time relative to the X-ray pulses via a photon arrival time monitor (Sato et al., 2020;Letrun et al., 2020). The PP laser may also be used to stroboscopically image samples at the interaction region (Kay & Wheeless Jr, 1976;Schropp et al., 2015;Reich et al., 2018).

Table 1 Available optical laser parameters at SPB/SFX. For the PP laser, the repetition rate is the intra-burst repetition rate; the pulse energy depends on this repetition rate setting. SH - second harmonic. TH - third harmonic. FH - fourth harmonic.

Figure 3 Spectral (a) and temporal (b) profile of the PP laser at the 15 fs FWHM and 50 fs FWHM pulse duration set points. The insets show the beam profile in the near-field (c) and far-field (d).
The PP laser may also be used to stroboscopically image samples at the interaction region (Kay & Wheeless Jr, 1976; Schropp et al., 2015; Reich et al., 2018). Stroboscopic imaging is used to optimize the liquid jet alignment with respect to the X-rays and to determine the velocity of the liquid jet in situ (Vakili et al., 2022); it also enables imaging the dynamics of liquid jets and droplets exposed to intense X-ray pulses (Gorel et al., 2020). An example of PP laser stroboscopic imaging capturing the water jet explosion induced by an X-ray pulse is shown in Fig. 4.

Figure 4. Visualizing the water jet explosion induced by an intense X-ray pulse with the PP laser. The image was captured by the side microscope with a 20x objective (20x Mitutoyo Plan Apo SL Infinity Corrected Objective, 378-810-3).

Pulse structure/pattern of optical lasers

In a typical pump-probe experiment at SPB/SFX, dynamics can be studied over a wide range of timescales using a variety of pulse sequences. Pulse sequences are arranged by controlling the pulse pattern of the PP laser with respect to the X-rays. For the 800 nm output from the parametric amplifier of the PP laser, the pulse picking is realized with an acousto-optic modulator (AOM) installed before supercontinuum generation. For the 1030 nm output of the PP laser, the pulse pattern is controlled by a BBO Pockels cell placed behind the booster power amplifier of the pump beam for the parametric amplifier. The burst length of the NOPA pump beam is usually adjusted to cover the length of the applied laser pattern. No obvious pointing drift or beam profile changes were observed with different pump burst lengths.

Fig. 5(a) shows a schematic of a typical operation mode where the optical laser pulse train has the same structure as the XFEL pulse train. Here a burst-mode excitation with both megahertz optical laser excitation and megahertz X-ray probing is illustrated. The optical laser pulse structure within the burst can be tuned from single shot (10 Hz) to 4.5 MHz. This operation mode offers precise control over the temporal delay between optical laser and X-ray pulses, and this delay can be extended up to the pulse period, i.e. 220 ns in the case of a 4.5 MHz repetition rate. In burst-mode excitation, the optical laser pulses can also be picked for certain trains only, as shown in Fig. 5(b): alternate trains of optical laser pulses, with a megahertz burst mode for both the optical laser and X-ray pulses, result in X-ray pulse trains at 10 Hz and optical laser pulse trains at 5 Hz.

The pulse picking capability of the optical laser allows user-defined arbitrary pulse patterns derived from the intra-burst repetition rate of the optical laser. Fig. 5(c) shows an arbitrary pulse pattern for the optical laser and a periodic pulse pattern for the X-rays at a megahertz intra-burst repetition rate. Using the commercial nanosecond lasers (discussed in Section 7), samples can be pumped at 10 Hz with single- or multi-stage pump schemes, suitable for tracking relatively slow dynamics up to sub-millisecond timescales [shown in Fig. 5(d)]. In a more limited set of cases, the pulse picking capability of the PP laser can also be used for multi-stage pump schemes. A corresponding control of the X-ray pulse pattern is also possible (Obier et al., 2019).
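To make the pulse-picking logic above concrete, the toy sketch below builds boolean masks describing which pulses within a burst, and which bursts in a sequence, are selected. It models only the pattern arithmetic; the function names and the example numbers are illustrative and do not represent the facility's actual control system.

```python
import numpy as np

def burst_mask(n_pulses: int, keep_every: int = 1, offset: int = 0) -> np.ndarray:
    """Boolean mask selecting every `keep_every`-th pulse within one burst."""
    idx = np.arange(n_pulses)
    return (idx - offset) % keep_every == 0

def train_pattern(n_trains: int, n_pulses: int, laser_every_train: int = 2) -> np.ndarray:
    """(n_trains, n_pulses) mask: the laser fires only on every
    `laser_every_train`-th train, e.g. 5 Hz laser trains vs 10 Hz X-ray trains."""
    pattern = np.zeros((n_trains, n_pulses), dtype=bool)
    pattern[::laser_every_train, :] = burst_mask(n_pulses)
    return pattern

# Example: 352 pulses per train at the intra-burst rate, laser on alternate trains.
mask = train_pattern(n_trains=10, n_pulses=352, laser_every_train=2)
print("laser pulses per laser-on train:", mask[0].sum())
print("fraction of trains with laser  :", mask.any(axis=1).mean())
```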
Wavelength extension with optical parametric amplifier

A commercial optical parametric amplifier (OPA) was recently commissioned at SPB/SFX. The OPA is a TOPAS Prime (Traveling-Wave Optical Parametric Amplifier of White-Light Continuum) from Light Conversion (Light-Conversion, 2017) with additional modules, providing the possibility to tune the wavelength from 400 nm to 2600 nm. The TOPAS is located in the ILH and can be pumped by the PP laser with the following parameters: 800 nm, 50-60 fs and an intra-burst repetition rate of 100 kHz to 1.1 MHz. The pulse energy measured at the output of the TOPAS over the deliverable tuning range is shown in Fig. 6.

Figure 6. Output of the TOPAS at different wavelengths. For this measurement, the TOPAS was pumped by the PP laser with the following parameters: 800 nm, ≈54 fs, 215 µJ and 50 pulses per train at 1.1 MHz. (SH - second harmonic; SF - sum frequency; SIG - signal; IDL - idler.)

Stability of the PP laser system

The intensity and pointing stability of the PP laser and OPA output are important factors in the execution of successful pump-probe experiments combined with micrometre to sub-micrometre focused X-rays. The PP laser is routed and conditioned by a large number of optics from the CLH to IRU over a total path length of about 40 m. Transport over such a long distance can easily introduce fluctuations in laser beam pointing. Hence it is essential to investigate the positional and intensity stability at the interaction point.

The TOPAS output beam, at 640 nm and a 1.1 MHz intra-burst repetition rate, was focused to a spot smaller than 50 µm FWHM in diameter using a plano-convex lens (f = 300 mm) at IRU. The pointing of this beam was measured at the sample position in IRU by the side microscope at an acquisition rate of 10 Hz over a period of 12 h. The microscope image data were processed and fitted with a 1D Gaussian, along both the horizontal and vertical directions, to estimate the beam diameter and central position. Each acquired image from the microscope was integrated to obtain the intensity stability data for the TOPAS beam at IRU. The intensity stability of the PP laser fundamental was also measured in the CLH and the ILH over the same period by imaging the beam with a CCD. The intensity stability measurements in the ILH and CLH were based on the maximum pixel intensity in the CCD for each image acquired. Fig. 7 shows the intensity stability of the PP laser fundamental and the OPA. The normalized root-mean-square deviation (NRMSD) of the TOPAS output intensity over 12 h at IRU was 5.1%.

Figure 7. Intensity stability of the TOPAS output measured at IRU over 12 h with the output beam set at a wavelength of 640 nm. The three lines show the 10 min moving average of the intensity stability of the PP laser in the CLH, the ILH and the OPA output at IRU. Here the intensity is normalized to the mean intensity of each measurement.

Fig. 8 shows the pointing stability of the TOPAS measured over 12 h as a 2D histogram. The beam pointing is estimated from the beam center, obtained from processing the image, divided by the focal length of the lens used to focus the beam to the interaction region. The beam pointing jitter was 14.9 µrad r.m.s. and 11.4 µrad r.m.s. in the horizontal and vertical directions, respectively. This corresponds to 5.4% and 5% of the mean beam diameter, measured as 49 µm and 40 µm in the horizontal and vertical directions, respectively. Given this level of positional stability, no active beam stabilization setup was installed for the PP laser.
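The stability figures quoted above reduce to two simple estimators: a normalized root-mean-square deviation for intensity, and the centroid displacement divided by the focal length of the focusing lens for pointing. The sketch below demonstrates both on synthetic data; the array contents and the numbers are placeholders, not measured values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder time series standing in for processed microscope images:
# 10 Hz acquisition over 12 h = 432 000 samples.
intensity = 1.0 + 0.05 * rng.standard_normal(432_000)  # integrated counts
centroid_um = 2.5 * rng.standard_normal(432_000)       # fitted beam center (um)

# Intensity: normalized root-mean-square deviation (NRMSD).
nrmsd = intensity.std() / intensity.mean()

# Pointing: angular jitter = centroid displacement / focal length of the lens.
f_mm = 300.0
jitter_urad = (centroid_um - centroid_um.mean()).std() / (f_mm * 1e3) * 1e6

print(f"NRMSD          : {nrmsd * 100:.1f} %")
print(f"pointing jitter: {jitter_urad:.1f} urad r.m.s.")
```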
Nanosecond laser systems

In addition to the PP laser, with its femtosecond to picosecond pulse durations, it is also possible to pump or excite samples with longer pulses using the nanosecond laser systems at SPB/SFX. Two types are available: fixed wavelength and continuously tunable.

The tunable nanosecond lasers are commercial systems based on an optical parametric oscillator (OPO), Opollette HE 355 LD (OPOTEK), with a tuning range of 210 nm to 2200 nm. Three identical laser systems are installed; each has a repetition rate of up to 20 Hz and a pulse duration of 7-9 ns, and their beams are delivered via optical fiber to the interaction regions. These three lasers are synchronized with the 10 Hz trigger derived from the EuXFEL master clock and can be used simultaneously to excite or illuminate the samples with a desired pulse pattern. The maximum pulse energy for these laser systems is of the order of a few mJ.

The fixed-wavelength nanosecond lasers are commercial Nd:YAG lasers from Litron Lasers, Nano LG 150-10 and Nano LG 300-10, that generate pulses with maximum energies of 150 mJ and 300 mJ, respectively, at 10 Hz. The fundamental wavelength of these two lasers is 1064 nm, and their second and third harmonics are also available. These relatively high-power lasers can be placed close to the interaction regions, and their beams can be transported to the interaction region in free space. The pulses from either type of nanosecond laser can be delivered to both IRU and IRD.

These nanosecond lasers, like the PP laser, may also be used as a light source for stroboscopic imaging of samples (Kay & Wheeless Jr, 1976; Schropp et al., 2015; Reich et al., 2018; Vakili et al., 2022), as shown in Fig. 9. The high-power nanosecond lasers can be used to visualize particles of a few hundred nanometres in size or smaller in focused aerosol beams, by detecting the Rayleigh scattering from the particles (Hantke et al., 2018). This imaging technique is routinely used to align an aerosol sample's particle beam to the tightly focused X-ray beam. An example of aerosol particle beam imaging using Rayleigh scattering is shown in Fig. 10.

Synchronization and photon arrival time monitor

The EuXFEL uses an all-optical synchronization system to achieve high timing accuracy of all sub-systems throughout the facility (Schulz et al., 2019b). A radio-frequency (RF) synchronization system, with lower timing fidelity, is also provided for initial timing synchronization. The EuXFEL facility uses a main RF oscillator with a frequency of 1.3 GHz, synchronized to the main laser oscillator, which provides the reference optical signals for the entire optical synchronization system. The main laser oscillator is a passively mode-locked commercial laser with a wavelength of 1550 nm, a pulse duration of 200 fs and a repetition rate of 216.7 MHz (Schulz et al., 2019b). The timing signals from the main laser oscillator are transported via polarization-maintaining optical fibers, with the fiber lengths stabilized by temperature control. The PP laser is synchronized using a subsidiary laser oscillator that is locked to the main laser oscillator and is located in the timing station near the instruments. The typical temporal stability of the PP laser with respect to the all-optical synchronization system is 5 fs r.m.s. when optically synchronized.

The photon arrival time monitor (PAM) prototype was developed and installed in collaboration with the EuXFEL X-ray photon diagnostics group (Liu et al., 2017). It employs a spectral encoding method (Bionta et al., 2011) to measure the relative time delay between the PP laser and the X-rays (Harmand et al., 2013; Hartmann et al., 2014).
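Spectral encoding maps arrival time onto wavelength through the known chirp of the probe: the X-ray induced transmission change appears as an edge in the single-shot spectrum, and the edge position is converted into a relative delay. The sketch below illustrates that conversion on a synthetic spectrum with an assumed linear chirp calibration; it is not the instrument's analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic single-shot transmission spectrum with an X-ray induced edge.
pixels = np.arange(1024)
edge_true = 412.3                                   # where the X-rays "arrived"
spectrum = 1.0 - 0.3 / (1.0 + np.exp(-(pixels - edge_true) / 4.0))
spectrum += 0.01 * rng.standard_normal(pixels.size)

# Locate the edge as the minimum (steepest drop) of the smoothed derivative.
smooth = np.convolve(spectrum, np.ones(9) / 9.0, mode="same")
edge_px = int(np.argmin(np.gradient(smooth)))

# Convert pixel position to time using an assumed linear chirp calibration
# (fs per pixel, e.g. obtained from a delay scan) and a reference position.
fs_per_pixel = 2.4
reference_px = 400                                  # assumed time-zero edge position
delay_fs = (edge_px - reference_px) * fs_per_pixel

print(f"edge at pixel {edge_px} -> relative arrival time {delay_fs:+.1f} fs")
```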
Figure 9. Double-exposure stroboscopic imaging of droplets with two nanosecond lasers, with a delay of hundreds of nanoseconds between the two pulses. A zoomed view of one of the imaged droplets is shown in the inset. The image was collected using the side microscope with a 10x objective and an Andor Zyla 5.5 camera.

Figure 10. An aerosol particle beam with particles of a few tens of nanometres in size, visualized by Rayleigh scattering imaging using (a) the inline microscope with a 10x objective and (b) the side microscope with a 2x objective.

The PAM is located between the two interaction regions of the SPB/SFX instrument [Fig. 11(a)]. The PP laser optical path lengths of the IRU and PAM branches are set such that the timing is coincident with the X-ray pulses at both IRU and the PAM. Although the 800 nm, 15 fs PP laser may be used directly in the PAM, in practice a supercontinuum is generated in a sapphire plate to increase the spectral bandwidth and to obtain a quasi-flat-top spectral profile. The PAM sample holder is capable of mounting up to 16 materials of various thicknesses. This enables the choice of the optimal material for the timing measurement at a given X-ray photon energy and fluence. The presence of spares also ensures that the material can be quickly exchanged if it is damaged by either the laser or the X-ray beam. In general, the spatial overlap between X-ray and laser pulses is optimized with a sample that exhibits intense luminescence upon X-ray irradiation, such as Ce:YAG. Coarse timing overlap and monitoring are achieved with a fast photodiode (Hamamatsu, G4176-03) installed on top of the PAM sample holder in such a way as to monitor elastically scattered X-rays from a material on the PAM sample holder. A schematic of the sample holder is shown in Fig. 11(b).

For sub-picosecond timing overlap and monitoring, the X-ray beam pumps a thin dielectric material, e.g. Ce:YAG or Si3N4, and the X-ray induced change of the optical properties of the material is then probed by the supercontinuum laser pulses. The laser pulses transmitted through the sample are coupled into a commercial spectrometer (Andor, Shamrock 193i) with dual output ports. Each port is equipped with a linear GOTTHARD detector (Mozzanica et al., 2012) that is capable of recording at up to 800 kHz in its burst-mode setting (Mozzanica et al., 2012; Zhang et al., 2014). Here, the GOTTHARD detectors are set up at 564 kHz each, with interleaved acquisition windows, to allow acquisition at a repetition rate of up to 1.13 MHz (Letrun et al., 2020).

The temporal jitter between X-ray and laser pulses with optical synchronization was measured at a 1.13 MHz repetition rate at the SPB/SFX instrument using the PAM. The r.m.s. jitter from train to train, for a given pulse, was measured to be < 25 fs over a period of 10 min (Letrun et al., 2020). Fig. 12 shows a plot of the temporal jitter between X-ray and laser pulses measured over 15 min.

Alignment procedure for pump-probe experiments at SPB/SFX

The general alignment procedure for pump-probe experiments at SPB/SFX requires optimization of both the spatial and the temporal overlap of the X-ray and laser pulses. Typically, a Ce:YAG scintillator of 20 µm thickness is used to visualize the X-ray position and ensure the spatial overlap of the laser and X-rays. The experimental geometry is similar to the pump-probe geometry shown in Fig. 2, with the liquid sample jet replaced by the Ce:YAG scintillator.
The X-ray beam size and position are usually visualized and measured using the inline microscope with the Ce:YAG scintillator oriented normal to the incident X-ray beam. For the determination of the spatial and temporal overlap of X-ray and laser pulses, the scintillator is rotated to increase the angle of incidence of the X-ray beam to 70-75°, with the two surfaces of the scintillator then facing the incident laser beam and the side microscope optics, respectively. A coarse temporal overlap, or so-called 'time zero', between X-ray and laser pulses on the picosecond timescale is achieved using a fast photodiode. A finer temporal overlap between the X-ray and laser beams at the interaction region is determined using a method called spatial encoding (Harmand et al., 2013). The time point where the delay between laser and X-ray pulses is zero, or falls within the detection limit, defined as 'time zero' in this manuscript, is optimized using a mechanical optical delay line in the laser beam path. The time zero position can be determined from a decrease in the transmission of the laser pulse through a material induced by the X-ray pulse excitation, imaged by the side microscope. Fig. 13 shows a typical image taken by the side microscope to determine time zero at the interaction region. This method relies on the same phenomenon as the spectral encoding used in the PAM, but with the timing information encoded in the spatial rather than the spectral domain (Maltezopoulos et al., 2008; Harmand et al., 2013).

Figure 11. Schematic of (a) the PAM setup and (b) the PAM sample holder.

Figure 12. Plot of the temporal jitter measured over 15 min. The darker blue line shows the rolling mean over 5 s.

The PAM can then be aligned to the time zero at the interaction region and used for measuring and monitoring the temporal jitter and drift during a pump-probe experiment. After the optimization of the spatial and temporal overlap of X-ray and laser pulses, the sample of interest is aligned to the optimized spatial overlap position at the interaction point to within a few micrometres. The samples are then excited with optical laser pulses and probed with X-ray pulses with the required time delay between the two. Jitter between the optical and X-ray pulses can be measured and recorded with the PAM on a shot-to-shot basis.

Conclusion

The pump-probe capabilities and the general alignment procedure for a pump-probe experiment at the SPB/SFX instrument of the European XFEL have been presented here. The available optical laser parameters are tabulated in Table 1. The temporal jitter at the instrument between the X-rays and the PP laser is of the order of a few tens of femtoseconds, and the femtosecond output of the PP laser at IRU shows around 5% fluctuation in both intensity and pointing over 12 h. The SPB/SFX instrument provides high-stability optical pump sources with a variety of wavelengths at up to megahertz repetition rates. At SPB/SFX, the temporal delay between X-ray and laser pulses can be measured and monitored with the PAM. With both high timing precision and a high data rate, the SPB/SFX instrument is a versatile tool for studying ultrafast phenomena. The unique pulse structure of both the X-ray and laser sources at the European XFEL, and the possibility to change these pulse patterns independently, make the instrument an ideal place to study longer-timescale dynamics in addition to ultrafast timescales.
Optical Properties of Concentric Nanorings of Quantum Emitters

A ring of sub-wavelength spaced dipole-coupled quantum emitters features extraordinary optical properties when compared to a one-dimensional chain or a random collection of emitters. One finds the emergence of extremely subradiant collective eigenmodes similar to those of an optical resonator, which feature strong 3D sub-wavelength field confinement near the ring. Motivated by structures commonly appearing in natural light-harvesting complexes (LHCs), we extend these studies to stacked multi-ring geometries. We predict that using double rings allows us to engineer significantly darker and better confined collective excitations over a broader energy band compared to the single-ring case. These enhance weak-field absorption and low-loss excitation energy transport. For the specific geometry of the three rings appearing in the natural LH2 light-harvesting antenna, we show that the coupling between the lower double-ring structure and the higher-energy blue-shifted single ring is very close to a critical value for the actual size of the molecule. This creates collective excitations with contributions from all three rings, which is a vital ingredient for efficient and fast coherent inter-ring transport. This geometry thus should also prove useful for the design of sub-wavelength weak-field antennae.

Introduction

The optical properties of a quantum emitter, such as its excitation lifetime and transition frequency, are strongly modified when it is placed close to a second emitter, due to vacuum fluctuations that mediate dipole-dipole interactions between them. As a remarkable example, the decay rate of a collection of emitters separated by subwavelength distances can be enhanced or suppressed, leading to the well-known phenomena of superradiance or subradiance, respectively [1][2][3][4]. These phenomena are expected to be strongly enhanced in ordered subwavelength arrays of emitters, where maximal interference of the scattered fields can be observed. Moreover, subradiant and superradiant exciton states can be leveraged for quantum technological applications, such as single-photon quantum memories [17,28], single-photon switches [29,30], the generation of non-classical states of light [31,32], or quantum metrology [33,34].

Among the different array geometries, a ring-shaped structure formed by regularly placed emitters has very special optical properties. It has been shown before [6,9,17] that a linear chain of emitters whose inter-particle distance is smaller than half of the light wavelength supports collective modes that can guide light and are extremely subradiant, with the excitation lifetime increasing polynomially with the atom number. The lifetime limitation arises from photon scattering off the ends of the chain. Remarkably, by joining the ends of the chain to form a closed ring, the lifetime can be increased exponentially with atom number [17,35,36]. Such extraordinary optical properties can be exploited for applications including efficient energy transfer, single-photon sources, or light-harvesting [37,38]. We have previously shown [35,36] that tailoring the geometry, orientation, and distance between two such nanorings allows for lossless and high-fidelity transport of subradiant excitations, as if the two rings were two coupled nanoscale ring resonators. Low-loss excitation transfer is an essential process for quantum communication and quantum computing.
In addition to subradiant states confining and guiding light, these nanorings also feature radiant modes whose corresponding electromagnetic field is strongly focused at the ring's center. By placing an extra emitter at the center, these modes can be exploited to create a nanoscale coherent light source with a spectral linewidth that is strongly suppressed compared to the single-atom decay rate [39]. In this case, the collective optical modes of the ring play the role of the cavity modes, and the central atom acts as the gain medium when incoherently pumped. Furthermore, if the central emitter is absorptive, the system can be tailored to achieve an absorption cross-section far beyond the single-atom case, while the outer ring behaves as a parabolic mirror when illuminated externally by a coherent light field [40].

In this work, we analyse in detail how the optical properties of two or more nanorings are modified when they are stacked concentrically. Note that this system is radically different from the previously studied case of two rings coupled side by side [35,36], as it preserves some rotational symmetry. The study of this geometry is strongly motivated by the abundant presence in nature of highly efficient photosynthetic complexes sharing a similar stacked structure [41,42]. In particular, the active core photosynthetic apparatus of certain bacteria is formed by chromophores, featuring an optical dipole transition, which are arranged symmetrically, forming a complex structure of stacked concentric coupled nanorings. Some of these units are specialized in transforming the absorbed energy into chemical energy (LH1), whereas a larger number of them (LH2 and LH3) do not have a reaction center but efficiently capture and funnel light towards the LH1 units. In this system, coherence effects between the chromophores have already been shown to play a crucial role in energy transfer and light-harvesting [37,43,44]. A natural question is whether collective decay, i.e., superradiance and subradiance, plays an essential role in this process, and whether nature chooses a particular geometry to optimize its effects. In this work, we aim to shed light on this question by analysing the optical properties and exciton dynamics in realistic structures. Furthermore, similar mechanisms could, in principle, be exploited for artificial light-harvesting [45]. Proving these concepts may already be possible using state-of-the-art experimental setups, such as neutral atoms trapped in optical lattices [46][47][48][49], optical tweezer arrays [50][51][52][53][54], microwave-coupled superconducting qubits [55][56][57], or solid-state quantum dots [58,59].

The paper is organized as follows. We first introduce the theoretical framework to describe a system of dipole-dipole interacting quantum emitters and demonstrate that a structure of coupled symmetric nanorings can be described in a particularly simple form in terms of Bloch eigenmodes. Next, we summarize the optical properties of single nanorings, which can exhibit special radiating properties. We then move on to study the case of two coupled nanorings, displaying two energy bands. Thereafter, we apply a similar analysis to elucidate the radiating properties of a realistic natural light-harvesting complex (LH2), which contains a close double-ring structure with a shifted third ring at higher resonance frequencies.
Studying this geometry, we find that the rings' geometry and size are critically close to the point where the energy bands of all rings overlap to form common superradiant exciton states.

Theoretical Framework: Bloch Eigenmodes

Let us first consider a ring-shaped array (or regular polygon) of N identical two-level quantum emitters with minimum inter-particle distance d. The emitters possess a single narrow optical dipole transition around the frequency ω_0, with dipole orientation ℘_i = sin θ cos φ ê_φ,i + sin θ sin φ ê_r,i + cos θ ê_z (i = 1, ..., N), where ê_z and ê_r,i (ê_φ,i) denote unit vectors along the vertical and radial (tangential) directions defined with respect to emitter i (see Figure 1a). In this work, we will then consider a configuration where two or more of these rings are stacked concentrically around the ẑ-axis (see Figure 1b). As will be explained in Section 3.3, this structure can model light-harvesting complexes, replacing the molecular dipoles with a generic open-system model based on two-level quantum emitters. Phonons are neglected for now but could be added [60].

All the emitters interact via dipole-dipole couplings mediated by the vacuum fluctuations of the electromagnetic field. After integrating out the optical degrees of freedom in the Born-Markov approximation [61], the reduced atomic density matrix is governed by a master equation of Lindblad form [16,17,62], built from the coherent (Ω_ij) and dissipative (Γ_ij) dipole-dipole couplings, with i and j running over all dipoles. Both couplings can be written in terms of the free-space Green's tensor G(r, ω_0), evaluated at the separation between the positions r_i and r_j of the ith and jth dipoles. Here, k_0 = ω_0/c = 2π/λ is the wavenumber associated with the atomic transition, λ the transition wavelength, and Γ_0 = |℘|²k_0³/(3πε_0ħ) the decay rate of a single emitter with dipole moment strength |℘|. Once the atomic coherences are known, the scattered electromagnetic field can also be retrieved from a generalized input-output relation [16,17].

Motivated by realistic conditions in natural light-harvesting complexes, this work focuses on the linear optical properties and the response of the system excited by light of low intensity. Therefore, we restrict our study to the case where at most a single excitation is present in the system. In this situation, the first term in the Lindblad operator, Equation (2) (also known as the recycling term), only modifies the ground-state population and is not relevant for the observables of interest (e.g., scattered fields or excitation population). The remaining terms in the equation can be recast as an effective non-Hermitian Hamiltonian H_eff, with Ω_ii = 0. The dynamics of the system can then be fully understood in terms of the collective modes defined by the eigenstates of H_eff. Each of these modes is associated with a complex eigenvalue, whose real and imaginary parts correspond to the collective mode's frequency shift and decay rate, respectively. As we will see next, these modes have a particularly simple form for a symmetric ring-shaped structure, as they correspond to Bloch functions.
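The display equations of this section did not survive extraction. As a hedged reconstruction, consistent with the definitions in the text and with the standard conventions of, e.g., Refs. [16,17], they read approximately as follows; the exact prefactor conventions should be checked against the published version.

```latex
% Master equation and Lindblad term (standard form; conventions assumed):
\dot{\rho} = -\frac{i}{\hbar}[H,\rho] + \mathcal{L}[\rho], \qquad
H = \hbar\sum_{i\neq j}\Omega_{ij}\,\hat\sigma_i^{eg}\hat\sigma_j^{ge}, \qquad
\mathcal{L}[\rho] = \sum_{i,j}\frac{\Gamma_{ij}}{2}
\bigl(2\hat\sigma_j^{ge}\rho\,\hat\sigma_i^{eg}
 - \hat\sigma_i^{eg}\hat\sigma_j^{ge}\rho
 - \rho\,\hat\sigma_i^{eg}\hat\sigma_j^{ge}\bigr).

% Couplings in terms of the free-space Green's tensor:
\Omega_{ij} = -\frac{3\pi\Gamma_0}{k_0}\,
  \hat\wp_i^{*}\cdot\mathrm{Re}\,G(\mathbf r_i-\mathbf r_j,\omega_0)\cdot\hat\wp_j,
\qquad
\Gamma_{ij} = \frac{6\pi\Gamma_0}{k_0}\,
  \hat\wp_i^{*}\cdot\mathrm{Im}\,G(\mathbf r_i-\mathbf r_j,\omega_0)\cdot\hat\wp_j.

% Free-space dyadic Green's tensor (r = |r|, \hat{r} = r/|r|):
G(\mathbf r,\omega_0) = \frac{e^{ik_0 r}}{4\pi r}
 \Bigl[\Bigl(1+\frac{i}{k_0 r}-\frac{1}{k_0^2 r^2}\Bigr)\mathbb{1}
 +\Bigl(-1-\frac{3i}{k_0 r}+\frac{3}{k_0^2 r^2}\Bigr)\hat{\mathbf r}\otimes\hat{\mathbf r}\Bigr].

% Effective non-Hermitian Hamiltonian in the single-excitation sector:
H_{\mathrm{eff}} = \hbar\sum_{i,j}\Bigl(\Omega_{ij}-\frac{i}{2}\Gamma_{ij}\Bigr)
 \hat\sigma_i^{eg}\hat\sigma_j^{ge}, \qquad \Omega_{ii}=0 .
```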
Bloch Eigenmodes in Rotationally Symmetric Ring Structures

We will consider here ring structures possessing an N-fold rotational symmetry, similar to those arising in certain natural light-harvesting complexes [41,42]. In this case, as we will see, the eigenmodes corresponding to the single-excitation manifold are of the Bloch form, i.e., delocalized states with well-defined angular momentum m.

The N-fold rotational symmetry enables one to define N different unit cells (for an example, see Figure 1), which will be denoted by j = 1, ..., N. Each cell contains, in general, d dipoles with given orientations ℘_jα, with α = 1, ..., d. We can then rewrite Equation (7) in terms of the projected couplings G^{αβ}_{ij} ≡ ℘*_{iα} · G(r_{iα} − r_{jβ}) · ℘_{jβ}. We note that a structure consisting of several coupled concentric rings with the same emitter number, each ring being rotationally symmetric, can also be described within this model. In this case, the unit cell contains one site of each of the rings, and it has as many components as rings.

In the following, we demonstrate that the eigenmodes of the coupled structure are of the Bloch form. The symmetry of the system imposes that the position and polarization vectors associated with dipole iα transform under a rotation U of angle 2π/N (around the ẑ-axis) according to r_{iα} → U r_{iα} = r_{i+1,α} and ℘_{iα} → U ℘_{iα} = ℘_{i+1,α}. Noting that G is a tensor containing terms proportional to the identity and to r_{iα} ⊗ r_{jβ}^T, and that it therefore transforms covariantly under the same rotation, one finds that G^{αβ}_{ij} = G^{αβ}_{i−j} is a periodic function depending only on the difference between the two indices i and j. This property allows us to write the Hamiltonian, Equation (8), in terms of Bloch modes, where G̃^{αβ}_m ≡ Σ_{ℓ=0}^{N−1} e^{i2πmℓ/N} G^{αβ}_ℓ, and we have defined the creation operator of a collective Bloch mode with well-defined angular momentum m as σ̂^{eg}_{mα} = (1/√N) Σ_{j=1}^{N} e^{i2πmj/N} σ̂^{eg}_{jα}. Here, the periodicity of the wavefunction under a 2π rotation imposes that m be an integer, and thus N linearly independent eigenstates can be constructed by choosing m = 0, ±1, ±2, ..., ±⌈(N − 1)/2⌉, where ⌈·⌉ is the ceiling function.

Equation (9) is not yet in its fully diagonal form (except if the unit cell contains a single dipole), but it already tells us that the angular momentum is a good quantum number. For each value of m, the eigenmodes consist, in general, of a superposition of the excited dipoles in the unit cell, and they can easily be found by diagonalizing the d × d matrix G̃_m. Here, Ω_{mλ} (Γ_{mλ}) is the real (imaginary) part of the eigenvalue associated with the Bloch mode (m, λ), and σ̂^{eg}_{mλ} is the corresponding creation operator.

Optical Properties of a Single Nano-Ring

Let us first summarize some of the most relevant optical properties of a single ring of N dipoles, i.e., the case where the unit cell contains just a single dipole. As previously shown in [35,36], the optical properties of the ring strongly depend on the size of the ring compared to the light wavelength and on the dipole orientations. In the following, we focus on two limiting regimes: a dense large ring (quasi-linear chain limit) and a small ring (Dicke limit).
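The eigenmode analysis described in this section is straightforward to reproduce numerically. The sketch below builds the effective non-Hermitian Hamiltonian of a single ring from the free-space Green's tensor and extracts the collective decay rates from its eigenvalues. It follows the standard prefactor conventions quoted above (an assumption), with lengths in units of λ and rates in units of Γ_0.

```python
import numpy as np

def green_tensor(r, k0):
    """Free-space dyadic Green's tensor G(r, w0) in the standard convention."""
    d = np.linalg.norm(r)
    rr = np.outer(r, r) / d**2
    kd = k0 * d
    pref = np.exp(1j * kd) / (4 * np.pi * d)
    return pref * ((1 + 1j / kd - 1 / kd**2) * np.eye(3)
                   + (-1 - 3j / kd + 3 / kd**2) * rr)

def ring_heff(N, d_over_lam, pol="z"):
    """H_eff / (hbar * Gamma0) for N emitters on a ring with spacing d."""
    k0 = 2 * np.pi                      # wavelength lambda = 1
    R = d_over_lam / (2 * np.sin(np.pi / N))   # ring radius giving spacing d
    phi = 2 * np.pi * np.arange(N) / N
    pos = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros(N)], axis=1)
    if pol == "z":                      # transverse polarization
        dip = np.tile([0.0, 0.0, 1.0], (N, 1))
    else:                               # tangential polarization
        dip = np.stack([-np.sin(phi), np.cos(phi), np.zeros(N)], axis=1)
    H = np.zeros((N, N), dtype=complex)
    for i in range(N):
        for j in range(N):
            if i == j:
                H[i, j] = -0.5j        # Omega_ii = 0, Gamma_ii = Gamma0
            else:
                g = dip[i] @ green_tensor(pos[i] - pos[j], k0) @ dip[j]
                omega = -(3 * np.pi / k0) * g.real
                gamma = (6 * np.pi / k0) * g.imag
                H[i, j] = omega - 0.5j * gamma
    return H

# Collective decay rates Gamma = -2 Im(eigenvalues), in units of Gamma0.
rates = -2 * np.linalg.eigvals(ring_heff(N=20, d_over_lam=0.05, pol="z")).imag
print("brightest mode :", rates.max())   # Dicke limit: ~ N * Gamma0
print("darkest mode   :", rates.min())   # strongly subradiant
```

Running it for the Dicke-regime parameters used in the text (N = 20, d/λ = 0.05, transverse polarization) yields a brightest mode near NΓ_0 and strongly subradiant dark modes, in line with the discussion below.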
Dense and Large Ring Case (Quasi-Linear Chain Limit)

A large ring with a large number of emitters locally resembles a linear array and can support optical modes that do not propagate into three-dimensional space but are instead confined to and guided along the array. These modes correspond to spin waves (Bloch modes) whose quasi-momentum along the chain is larger than the light wavenumber k_0, which leads to a field that is evanescent in the directions transverse to the array. For a very large ring, one can identify the linear momentum k_z ↔ 2πm/(Nd), and the condition k_z > k_0 sets the angular momenta of the guided subradiant modes to m ≳ m_0, with m_0 = Nd/λ associated with the light line. Moreover, such guided states can only exist if d < λ/2, as the maximum value of k_z (or equivalently m) is given by the boundary of the first Brillouin zone. Despite these similarities, a striking difference between linear and closed-ring configurations is the scaling of the subradiant decay rates with emitter number. Indeed, by closing the ends of the open chain into a ring structure, losses can be strongly reduced, leading to an exponential suppression of the decay rates with atom number, in contrast to the polynomial suppression for the linear chain [17,35].

On the other hand, the modes for which m ≲ m_0 are, in general, radiant. The angular momentum of the brightest state, however, strongly depends on the polarization direction of the atoms. In Figure 2a we plot the collective decay rates versus m for a ring of N = 100 emitters and the different polarization orientations ℘_i = ê_z, ê_r,i, ê_φ,i. For comparison, we also plot the result for an infinitely long linear chain with the same lattice constant (solid line). Clearly, in this regime, the decay rates for radial and transverse (tangential) polarization tend to those of the perpendicularly (longitudinally) polarized linear chain, with maximally bright modes close to the light line m = m_0 (m = 0).

In addition to the radiative properties, it is also interesting to analyse the sign of the frequency shifts of the collective modes arising from the dipole-dipole interactions. Figure 2c shows the frequency shifts corresponding to Figure 2a. We find that the symmetric m = 0 mode has a positive (negative) shift when the dipoles are aligned transversely (longitudinally). This is not so surprising when thinking of interacting classical static dipoles, which repel (attract) each other when aligned in parallel (in a head-to-tail configuration). Note also that in this regime the bright states are always energetically lower than the guided subradiant modes.

Figure 2 (caption, partially recovered): ... orange are for transverse, radial, and tangential polarization, respectively. Left panels correspond to a large ring with d/λ = 1/3 and N = 100. For comparison, solid lines show the result for an infinite linear chain with transverse (blue) and longitudinal (orange) polarization. Right panels are for d/λ = 0.05 and N = 20 (Dicke regime). In this case, there is only one (two) bright mode(s) at m = 0 (m = ±1) for transverse (tangential and radial) polarization. For tangential polarization, the bright (dark) modes are energetically low (high), whereas the opposite behavior is found for radial and transverse polarization.

Small Ring Case (Dicke Limit)

We now focus on a different regime, where the ring diameter is small compared to the light wavelength, i.e., R ≪ λ/2 (Dicke limit). This regime will be relevant in the study of natural light-harvesting complexes, given their inter-particle distances of a few orders of magnitude below the light wavelength. In this case, the emitters radiate as if they were a single dipole with an effective moment ℘_{m,eff} = (1/√N) Σ_{j=1}^{N} e^{i2πmj/N} ℘_j and a decay rate set by its strength. From this expression, one can easily see that, for transverse polarization, only the mode with m = 0 has a non-vanishing effective dipole moment, ℘_{m,eff} = √N ê_z; it is thus bright and decays at rate Γ_{m=0} ∼ NΓ_0. Instead, for tangential or radial polarization, there are two bright modes m = ±1, with ℘_{m,eff} ∝ √(N/2)(ê_x ± iê_y) and Γ_{m=±1} ∼ NΓ_0/2. The remaining modes are dark, with vanishing effective dipole moment and Γ_m → 0.
Figure 2b shows the decay rates for a ring in this regime (d/λ = 0.05, N = 20) for the different polarization orientations ℘_i = ê_z, ê_r,i, ê_φ,i. Moreover, note that, in general, a ring with polarization ℘_i = cos θ cos φ ê_φ,i + cos θ sin φ ê_r,i + sin θ ê_z (i = 1, ..., N) will have three different bright modes m = 0, ±1, with decay rates Γ_{m=0} = NΓ_0 sin²θ and Γ_{±1} = (NΓ_0/2) cos²θ.

In this limit, the collective frequency shifts also acquire a particularly simple cosinusoidal form. Indeed, in this regime the interactions Ω_ij between nearest-neighboring sites dominate, and one can approximate the dispersion as Ω_m ≈ 2Ω_d cos(2πm/N), where we again use the discrete rotational symmetry of the ring. Here, the sign and strength of the nearest-neighbor coupling Ω_d strongly depend on the polarization direction. For the same general polarization as before, Ω_d = −(3Γ_0/(4k_0³d³))[cos²θ(3cos²φ − sin²(π/N)) − 1] [36]. Therefore, the bright modes will be energetically high (low) for transverse/radial (tangential) polarization, as shown in Figure 2d for the same parameters as before. Moreover, for polarization angles cos θ ≈ (1/√3) cos φ and a large number of emitters, a nearly degenerate flat band emerges, with frequency shifts that essentially vanish [36].

Finally, it is also possible to evaluate the electromagnetic field generated by one of these eigenmodes, using Equation (6). The result strongly depends on the angular momentum m, the polarization orientation, and the size of the ring. For the ring geometry, we find that strongly subradiant modes radiate with very low intensity, essentially along the ring plane, whereas the field is evanescent in the transverse direction, as shown in the top row of Figures 6 and 7 for a ring of N = 9 tangentially polarized emitters and m = 4. Instead, the brightest modes (which in this case correspond to m = ±1) exhibit a strong field at the center of the ring that also propagates transversally to the ring plane, as shown in the same figures.

Optical Properties of Two Coupled Nano-Rings

We now analyse the case of two rings of radius R_1 and R_2 that are arranged concentrically and separated by a vertical distance Z. In general, we also allow in the model a rotation by an angle δ ∈ [0, 2π/N) of one of the rings around the ẑ-axis (see Figure 1). In this case, the unit cell consists of only two dipoles (d = 2).

Coupled Identical Non-Rotated Rings (δ = 0)

We first focus on the case of two identical rings (R_1 = R_2) concentrically stacked on top of each other with no rotation angle (δ = 0). Because the two rings are identical and δ = 0, the matrix G̃^{αβ}_m is complex symmetric, and the eigenmodes of Equation (8) can be chosen as the symmetric and anti-symmetric superpositions of the Bloch states of each ring with well-defined angular momentum m, denoted |Ψ±_m⟩ = (|m,1⟩ ± |m,2⟩)/√2 (with |m,α⟩ ≡ σ̂^{eg}_{mα}|g⟩). The corresponding collective frequency shifts and decay rates are then simply Ω±_m = Ω_m ± Ω^inter_m and Γ±_m = Γ_m ± Γ^inter_m, where Ω_m and Γ_m are the frequency shift and decay rate of a single ring, and Ω^inter_m and Γ^inter_m are the dispersive and dissipative inter-ring couplings, respectively.
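The band construction just described can be checked numerically by stacking two identical rings and diagonalizing the joint effective Hamiltonian; the 2N eigenvalues then pair up into Ω_m ± Ω^inter_m and Γ_m ± Γ^inter_m. A compact sketch under the same conventions and caveats as the single-ring example above:

```python
import numpy as np

def green(r, k0=2 * np.pi):
    """Free-space dyadic Green's tensor (standard convention; lambda = 1)."""
    d = np.linalg.norm(r); rr = np.outer(r, r) / d**2; kd = k0 * d
    return (np.exp(1j * kd) / (4 * np.pi * d)) * (
        (1 + 1j / kd - 1 / kd**2) * np.eye(3)
        + (-1 - 3j / kd + 3 / kd**2) * rr)

def heff(pos, dip, k0=2 * np.pi):
    """Effective non-Hermitian Hamiltonian in units of hbar * Gamma0."""
    n = len(pos)
    H = np.full((n, n), -0.5j, dtype=complex)  # diagonal: Gamma_ii = Gamma0
    for i in range(n):
        for j in range(n):
            if i != j:
                g = dip[i] @ green(pos[i] - pos[j], k0) @ dip[j]
                H[i, j] = -(3 * np.pi / k0) * g.real \
                          - 0.5j * (6 * np.pi / k0) * g.imag
    return H

# Per-ring emitter number, ring radius and inter-ring distance Z = 0.5R (lambda units).
N, R, Z = 20, 0.05, 0.025
phi = 2 * np.pi * np.arange(N) / N
ring = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros(N)], axis=1)
dip = np.tile([0.0, 0.0, 1.0], (N, 1))   # transverse polarization

g_one = -2 * np.linalg.eigvals(heff(ring, dip)).imag
g_two = -2 * np.linalg.eigvals(
    heff(np.vstack([ring, ring + [0, 0, Z]]), np.vstack([dip, dip]))).imag

print(f"darkest single ring : {g_one.min():.3e} Gamma0")
print(f"darkest double ring : {g_two.min():.3e} Gamma0")  # anti-symmetric, darker
```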
In Figure 3 we plot, for two rings in the Dicke regime (R/λ = 0.05) separated by a vertical distance Z = 0.5R, the decay rates and frequency shifts of the two emerging bands: the symmetric |Ψ+_m⟩ (orange line) and the anti-symmetric |Ψ−_m⟩ (blue line). For comparison, we overlay the result for two independent rings (grey line). We find that, regardless of the emitters' polarization, the anti-symmetric solution is always more subradiant than the symmetric one. Moreover, the darkest state is |Ψ−_m⟩ at the maximal |m|, i.e., the anti-symmetric superposition of the darkest states of the individual rings. Looking at the frequency shifts, we find that the behavior with angular momentum m is similar to that of the single-ring case, but shifted in energy. In particular, the symmetric band is shifted to lower energies (higher energies) for transverse (tangential and radial) polarization of the emitters. This fundamental difference in the sign of the energy shift can be intuitively understood in analogy to the energy of two interacting static dipoles. For transverse polarization, the two nearest emitters from the two different rings are in a tail-to-head configuration, thus decreasing their total energy if they are in phase. Instead, for tangential and radial polarization, the emitters' dipoles are parallel, increasing the energy when they have the same phase.

In conclusion, these results show that the polarization of the emitters can fundamentally modify the optical properties of the emerging bands and determine the ordering of states in energy, something that is relevant for the excitation transfer between the different energy bands. In particular, the energy transfer in photosynthetic processes involving dipole-interacting chromophores is understood via H- and J-aggregation. In J-aggregates, neighboring chromophores are oriented in a head-to-tail arrangement, resulting in a negative coherent nearest-neighbor coupling Ω_d and the positioning of the optically allowed (m = 0) Bloch mode at the bottom of the energy band, whereas for H-aggregates the orientation is parallel and the symmetric (m = 0) mode is positioned at the top of the energy band.

Figure 3 (caption, partially recovered): ... For comparison, the single-ring solution for the same parameters is shown (grey solid line). The two rings are separated by the vertical distance Z = 0.5R, and the emitters have transverse, radial, or tangential polarization (left, middle, or right panels, respectively). For transverse (radial and tangential) polarization, the symmetric band is lower (higher) in energy.

Another interesting property of this system is the scaling of the decay rate of the most subradiant state with the atom number N. For a fixed inter-particle distance d/λ, we show in Figure 4a the decay rate of the most subradiant state of two coupled rings of N emitters each, compared to that of a single ring of N emitters (left panel). We observe that, in addition to a lower decay rate, the double-ring structure always shows a stronger exponential suppression with atom number than a single ring of the same size and inter-particle distance d. In Figure 4b, we also compare the double-ring result with a single ring of 2N emitters at the same density. We find that, in this case, for small inter-ring distances Z and ring atom number N, the coupling between the two rings is still strong enough to yield more subradiance than the single-ring case with the same total number of atoms. However, if N is too large, the single ring will always support the most subradiant state, as the curvature, and therefore the losses, experience a strong suppression as the system approaches an infinite linear chain, for which the decay rates are exactly zero. Beyond this threshold, the exponential suppression with N overcomes the coupling effect between the two rings.
Interestingly, the most subradiant decay rate does not show a monotonic behavior with the lattice constant d/λ or the inter-ring distance Z/λ. In Figure 5a, we plot the most subradiant decay rate versus these two ratios. We observe that the decay rate oscillates due to wave interference, and that subradiance can still exist beyond the values d/λ = 1/2 and Z/λ = 1/2. As previously discussed, such a subradiant state is always the anti-symmetric superposition of two Bloch waves of well-defined angular momentum m. For small rings such that d/λ < 1/2, the most subradiant state always corresponds to the superposition of the two most subradiant single-ring states, i.e., |m| = ⌈(N − 1)/2⌉. However, for d/λ > 1/2, the value of m that produces the most subradiant state varies periodically. This behavior is shown in Figure 5b, where we have plotted the overlap of the Bloch waves with a particular absolute value of the angular momentum. Additionally, the Bloch waves can be in a symmetric or anti-symmetric superposition, and even the symmetric superposition of the symmetric m = 0 modes can lead to subradiance at various distances.

We finally discuss the striking differences in the field patterns generated by the eigenmodes |Ψ±_m⟩, with m = 0, 1, 4. In Figures 6 and 7, we plot (middle and bottom rows) the field intensity as a function of real-space position for two identical coupled concentric rings of N = 9 emitters with tangential polarization, lattice constant d/λ = 0.1, separated by a vertical distance Z/λ = 0.2. For comparison, in the top row we have added the result for a single ring with the same parameters. We find that the symmetric superposition shows a pattern very similar to the single-ring case. The brightest mode (m = 1 in this case) shows an enhanced field intensity along the central axis of the rings. In the symmetric mode, the field is enhanced in the region between the two rings, whereas the anti-symmetric superposition shows a strikingly different pattern, with a suppressed field in the region between the two rings.

Coupled Unequal Rings with Rotation (δ ≠ 0)

We now consider the more general case where the two rings can have different radii and are rotated by an angle δ. Note that, in this case, the matrix G̃^{αβ}_m describing the single-excitation manifold is, in general, not complex symmetric. However, for the equal-radius case (R_1 = R_2) in the Dicke regime, the off-diagonal elements satisfy G̃^{αβ}_m = (G̃^{βα}_m)* (α ≠ β). This leads to eigenmodes of the form |Ψ±_m⟩ = (|m,1⟩ ± e^{iη}|m,2⟩)/√2, with η = arctan(Im G̃^{12}_m / Re G̃^{12}_m).

The behavior of the eigenmodes and eigenvalues with the rotation angle δ is not trivial and strongly depends on the polarization orientation and the inter-particle distances. For transverse polarization and a small vertical separation between the rings (Z = 0.1R, R/λ = 0.05), we find a value δ_c ∼ 0.15 at which the frequencies of the two eigenmodes with m = ⌈(N − 1)/2⌉ feature an avoided level crossing. Interestingly, at this point the nature of the state changes: whereas for δ < δ_c the highest-energy state is radiant, with η ∼ 0, for δ > δ_c the highest-energy state becomes subradiant, with η ∼ π. These features are shown in Figure 8 (top panels) and disappear for too small a value of Z. Similar results can be found for other values of m. Moreover, the decay rate of the most subradiant state presents a broad minimum around δ ∼ π/N, with η ∼ π/2, i.e., when the sites of the second ring lie exactly in between those of the first ring.
At this point, since the inter-particle distances are larger, the frequency shifts are also smaller. Similar results can be found for other polarization orientations and also when varying the relative radius of the two rings. As an example, we show in Figure 8 (bottom panels) the same analysis for two co-planar rings (Z = 0) with tangential polarization and R_1 = 0.9R_2. As can be seen in the figure, in this case there is also an avoided level crossing (inset) at a value δ_c ∼ 0.07, where the highest-energy state changes to being subradiant. As in the previous case, we also find the broad minimum around δ ∼ π/N, where the frequency shifts almost vanish. It is worth noting that in the natural light-harvesting complex LH2 (see next section) the dipoles of the B850 band are arranged in a similar configuration, with rotation angle δ ∼ π/N. An intriguing question is whether this is an accidental coincidence or whether the broad minimum emerging in the decay rate, which is robust against small fluctuations in the positions of the emitters, can play a relevant role in the energy transfer and light-harvesting processes.

B850 and B800 Bands in LH2

As already anticipated, the study of the optical properties of two (or more) coupled nanorings is motivated by the existence of similar structures in nature that enable efficient light-harvesting and energy transfer [42,43,45,[63][64][65][66][67][68][69][70][71]. Indeed, whereas most biological systems are soft and disordered, photosynthetic complexes in certain purple bacteria exhibit crystalline order. The complexes are composed of antenna units that show an n-fold symmetry [65] and that, in turn, are arranged to form a maximally packed hexagonal pattern [72]. Purple bacteria are among the oldest living organisms and are most efficient in turning sunlight into chemically usable energy. One of the most common species (Rhodopseudomonas acidophila) contains two well-differentiated types of complexes: a larger one containing the reaction center, where the energy conversion takes place (LH1), and a second one (LH2) that is more abundant and whose main role is the absorption of photons and the efficient subsequent energy transfer towards the LH1 units.

The two complexes are formed by the same light-absorbing pigments: carotenoids (absorbing wavelengths ranging from 400 to 550 nanometers) and bacteriochlorophyll-a (BChla, absorbing in the red and infrared). The BChla features a two-level optical dipole transition around 800-875 nanometers (depending on the complex). These pigments are held by a hollow cylinder of apoproteins whose diameter is a few tens of angstroms. Here we focus on the LH2 complex and the optical properties displayed by the BChla. Early X-ray crystallography data [41], together with subsequent molecular dynamics simulations [73], suggest a ring structure with 9-fold symmetry. This structure consists of a ring of 9 emitters maximally absorbing at 800 nm (the so-called B800 band), concentrically arranged and coupled to another two-component ring with 9-fold symmetry (with a total of 18 emitters) maximally absorbing at 850 nm (the so-called B850 band). The dipole orientations also preserve the 9-fold rotational symmetry and are mostly contained in the plane of the ring, except for a small vertical component (see inset in Figure 9). Therefore, the whole structure can be regarded as a ring of 9 unit cells with three components each (denoted by purple, blue, and yellow in the figure).
In the following, we analyse the eigenmodes and collective optical properties of the two bands (B800 and B850) using the parameters extracted from [65]. This analysis is relevant for understanding the efficient energy transfer between the B800 and B850 bands, but also for the energy transfer between the LH2 and LH1 units. Taking into account that the lifetime of the excited state of the BChla is of the order of nanoseconds, the energy transfer process is expected to occur on a much faster timescale.

Figure 9 shows and compares the decay rates and frequency shifts of the collective eigenmodes as a function of the angular momentum quantum number m, considering the rings as uncoupled (left) or coupled (right). The dispersive couplings between the two components of the B850 band (denoted by yellow and blue in the figure) are very large due to the small inter-particle distances, of the order of 10^6 Γ (with Γ ∼ 25 MHz the estimated decay rate of the excited state of the dipole transition). This leads to the emergence of a two-band structure with a large frequency splitting, in which the two components of the B850 ring strongly hybridize: a higher-energy band that is mostly subradiant, and a lower-energy band containing only two bright modes at m = ±1. For completeness, we show in Figure 10 the excited-state population of each of the components for the eigenmodes of the coupled system. The excitation is clearly delocalized over the two components of the ring. In the inset of Figure 10, we show the small contributions of the lower double-ring configuration to the excited-state population of the third band. A similar behavior emerges in the case of the first and second bands, where the B800 ring gives a non-vanishing contribution to their populations. In contrast, the coupling between the B850 and B800 bands (indicated by purple in the plot) is ten times smaller (of the order of 10^5 Γ), whereas the energy difference between the transitions is of the order of 10^7 Γ; therefore, the B800 band remains mostly decoupled. It is worth noting, however, that once the two B850 components are coupled, the higher-energy band lies close to the B800 band.

Finally, let us point out a very special property of the naturally occurring geometry. Indeed, it can be seen that the actual geometry is very close to the critical transition point where the up-shifted eigenstate energies of the lower double ring just overlap with the upper-ring energies. To show this, in Figure 11 we plot the corresponding exciton energies as a function of the overall size of the molecule, considering only small size variations R_{α,i} = αR_i around the actual measured size. We see that just below the value α = 1 the energy bands cross, and eigenstates appear that possess similar contributions from all three rings. Close to this resonance condition, any excitation in one of the rings is thus coherently transported to the other rings in a short time. Interestingly, the crossing point depends on the angular index m, shifting further away from α = 1 with growing m. From this sensitivity, one could expect tunability of the ring properties via the local refractive index or small deformations of the complex.
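The statement that the B800 ring remains mostly decoupled can be sanity-checked with first-order perturbation theory: for a coupling V between modes detuned by Δ, the admixture amplitude scales as V/Δ. A one-line estimate using the orders of magnitude quoted above:

```python
# First-order estimate of the B800 admixture into the B850 bands, using the
# orders of magnitude quoted in the text (couplings in units of Gamma).
V = 1e5        # inter-band coupling, ~10^5 Gamma
delta = 1e7    # transition energy difference, ~10^7 Gamma

amplitude = V / delta        # perturbative mixing amplitude
population = amplitude**2    # admixed population fraction

print(f"mixing amplitude ~ {amplitude:.0e}, population fraction ~ {population:.0e}")
```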
Figure 9 (caption, partially recovered): (a,b) Collective decay rates and (c,d) frequency shifts as a function of the angular momentum index m for the LH2 structure (B800 and B850 bands), parameterized according to [65]. Left and right panels correspond to uncoupled and coupled rings, respectively. The B850 band consists of a two-component unit-cell ring with 9-fold symmetry (denoted by blue and orange), whereas the B800 band is a single-component ring with 9-fold symmetry (denoted by violet). The B800 ring is far in energy and thus couples only very weakly to the B850 rings. However, the two components of the B850 band are strongly coupled, due to the reduced inter-particle distance, which leads to a broad dispersion in the frequency shifts. Two bands emerge: a darker band that is higher in energy and close to the B800 band (denoted by cyan), and a brighter band (with two bright modes corresponding to m = ±1) that is lower in energy (denoted by green). This band structure is relevant for the excitation energy transfer occurring between the B800 and B850 bands.

Figure 10 (caption, partially recovered): ... (same color code as in Figure 9), whereas violet is the occupation of the B800 ring. Each panel is a different eigenmode, indicated with the same color code as in Figure 9.

Figure 11 (caption, partially recovered): Depending on the mode m, the second and third bands, as well as the excited-state populations, cross at α_c < 1. For systems with α < α_c, the third band is occupied by the B850 ring, whereas for α > α_c it is occupied by the B800 ring.

Conclusions

Our calculations show that structures involving multiple concentric rings exhibit strongly modified exciton properties and, in particular, feature extremely subradiant states with subwavelength-confined fields. For two identical rings at close enough distances, we find that the anti-symmetric superposition of the individual rings' radiative modes, which inherits the angular symmetry of the setup, is always more subradiant than the corresponding symmetric combination. In particular, the most subradiant states are obtained by choosing the individual rings' darkest eigenmodes. We have shown that the spontaneous emission rate of such states falls off faster with emitter number than in the single-ring case. Moreover, important radiative properties, such as the ordering in frequency of the optical modes, can be controlled via a relative rotation or a size difference between two otherwise identical rings. For instance, we find that by modifying these parameters, the highest energy level changes from being subradiant to superradiant.

When we apply our model to the specific geometry of the triple-ring LH2 structure, including the natural distances, energy shifts, and dipole polarizations, we find that most of the collective modes are extremely dark. Most interestingly, the collective energy shift from the lower double B850 ring structure, for which the inter-particle distances are very small, is of the order of the 50 nm energy shift of the upper ring, so that the energy spectrum spans almost the full gap between the rings. More specifically, two bands emerge due to the strong coupling between the two B850 components: a subradiant band that is higher in energy and close to the B800 band, and a brighter band that is much lower in energy. The realistic dipole orientations and distances lead to only two bright modes, corresponding to a quasi-symmetric superposition of the angular momentum m = 1 and m = −1 modes. This emerging band structure could be helpful for phonon-induced collective energy transfer processes, which are beyond our model here but which we plan to explore in future work.

Data Availability Statement: All plots were generated directly from the formulas within the paper using Julia. The datasets can be produced with the help of https://doi.org/10.5281/zenodo.7682056.
Conflicts of Interest: The authors declare no conflict of interest.
Neutron cross sections for carbon and oxygen from new R-matrix analyses of the 13,14C and 17O systems

We report the latest results from R-matrix analyses of reactions in the 13,14C and 17O systems that are of interest in reactor applications and nuclear astrophysics. These were done in order to provide separate cross sections for the stable isotopes (12,13C) of natural carbon, and to contribute improved cross sections for 16O to the CIELO project. Although particular attention was paid to the data in the standards region for n+16O, the analyses extend to several MeV neutron energy for all the systems. The fits to the included data are generally quite good, in keeping with the unitary constraints of R-matrix theory. The cross sections for 12,13C give results for natural carbon that are close to the previous evaluation by Fu et al. at energies below 1 MeV. Above that energy, the deviations become larger, especially near the narrow resonances. The thermal cross section for 16O is at the upper end of the range of recommended values, in excellent agreement with a high-precision measurement by Schneider. At higher energies, the 17O analysis follows in great detail high-resolution measurements of the total cross section, and agrees quite well with the 13C(α,n)16O cross section measurement of Bair and Haas at roughly their original normalization scale. We discuss the implications of these new evaluations for critical benchmarks and astrophysical applications.

Introduction

Reactions in the 13,14C and 17O systems are of much interest in reactor applications and nuclear astrophysics. Since there are many oxides and carbides (as well as graphite) in reactor materials, the neutron cross sections for oxygen and carbon are of great importance. In astrophysics, the 13C(α,n)16O reaction is thought to be a source of neutrons for slow neutron capture (the s-process). The neutron capture cross sections for 12,13C and 16O are themselves not very well known at energies above thermal.

We have performed R-matrix analyses of data for these light systems in order to provide separate neutron cross sections for the stable isotopes (12,13C) of natural carbon, and to contribute improved cross sections for n+16O to the CIELO project. This was done using the versatile Los Alamos R-matrix code EDA [1], which implements standard R-matrix theory [2] without any approximations. In addition, it uses the Wolfenstein density-matrix formalism [3] to calculate the results of any possible measurement for two-body reactions, and relativistic kinematics throughout. The result is a description of the experimental data in terms of the usual R-matrix parameters (reduced-width amplitudes and eigenenergies) that ensures three basic properties of hadronic scattering theory: unitarity of the S-matrix, reciprocity (time-reversal invariance), and causality. These properties (especially unitarity) impose powerful constraints on the fits to the experimental data.

In the following sections, we give summaries of the data included and show the quality of the fits obtained for each of the analyses, starting with 17O. We then conclude with a discussion of the implications of the new cross sections for various applications.
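As a toy illustration of the R-matrix machinery referred to above (not the multichannel EDA analysis itself), the sketch below evaluates a single-level, single-channel, s-wave elastic cross section from the standard formulas: a hard-sphere phase shift plus a resonant phase from the one-level R-function R(E) = γ²/(E_λ − E). The level energy and reduced width are invented for illustration; only the channel radius echoes the value quoted later in Table 2 for n+12C.

```python
import numpy as np

# Single-level, single-channel, s-wave R-matrix toy model (illustrative only).
# For l = 0: penetrability P = k*a, shift factor S = 0, hard-sphere phase = -k*a,
# so the elastic phase shift is delta = -k*a + arctan(P * R(E)).
hbar_c = 197.327                           # MeV fm
mu = 931.5 * (1.0 * 12.0) / (1.0 + 12.0)   # approx. reduced mass of n + 12C (MeV/c^2)

a = 4.6          # channel radius (fm), value quoted in Table 2 for n+12C
E_lam = 2.1      # level eigenenergy (MeV), invented
gamma2 = 0.4     # reduced width gamma^2 (MeV), invented

E = np.linspace(0.05, 4.0, 2000)           # c.m. energy grid (MeV)
k = np.sqrt(2.0 * mu * E) / hbar_c         # wavenumber (fm^-1)

R = gamma2 / (E_lam - E)                   # one-level R-function
delta = -k * a + np.arctan(k * a * R)      # s-wave phase shift
sigma_b = (4.0 * np.pi / k**2) * np.sin(delta)**2 / 100.0  # barns (1 b = 100 fm^2)

i = int(np.argmax(sigma_b))
print(f"peak elastic cross section: {sigma_b[i]:.2f} b at E = {E[i]:.2f} MeV")
```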
The thermal cross section is lower than before, but still at the upper end of the range of recommended values, in excellent agreement with a high-precision measurement by Schneider [4]. At higher energies, as shown in Fig. 2, the 17O analysis follows in great detail the total cross section measurements of Ohkubo [5], Johnson [6], Fowler [7], and Cierjacks [8] with reasonable re-normalizations (−2% to +4%). It also agrees quite well with the 13C(α,n)16O cross section measurement of Bair and Haas [9] at roughly their original normalization scale (0.94), a consequence of the unitarity imposed by an R-matrix description.
Figure 2. n+16O total cross section compared to experimental data [5][6][7][8]. The insert shows the fit to the 13C(α,n) measurement of Bair and Haas [9], renormalized by 0.94.
The resulting 16O(n,α)13C cross sections are shown in Fig. 3. They agree with the measurements and evaluation done at IRMM by Giorginis [10], which are 30-40% higher than the ENDF/B VII.1 cross sections. 13,14C system analyses The 13C system analysis included reactions among the channels n+12C, n+12C*, and γ+13C. A summary of the channel configuration and data for the reactions included is given in Table 2.
Table 2. Channel configuration and data summary for the 13C system analysis.
channel        a_c (fm)   l_max
n+12C(0+)      4.6        4
n+12C*(2+)     5.0        1
γ+13C          50.        1
reaction        energy range (MeV)   # data points   observables
12C(n,n)12C     En = 0−6.45          6940            σ_T, σ(θ), A_n(θ)
12C(n,n')12C*   En = 5.3−6.45        443             σ_int, σ(θ)
12C(n,γ)13C     En = 0−0.2           7               σ_int
total                                7390
Although particular attention was paid to the data in the standards region (En < 2 MeV) for the carbon isotopes, the analyses extend to several MeV for both the 13,14C systems. The types of data used are mostly differential and integrated (total) cross sections, but some analyzing-power measurements are also included. The fits to the data are generally quite good, as can be seen in Figs. 4 and 5. Some changes were also made in the n+12C capture cross section, as can be seen in Fig. 6. The flat region between 0.2 and 7 MeV in ENDF/B VII.1 has been replaced with a more physically reasonable behavior when joined to the higher-energy data above 10 MeV. The n+13C (14C system) analysis has been done in two versions. The first was a single-channel analysis that went only up to 5 MeV. That analysis included total cross section data and differential cross sections for elastic scattering. A preliminary ENDF/B evaluation for 13C based on this analysis, which also included the capture cross section, was submitted in August of 2015. The second analysis was completed recently for inclusion in ENDF/B VIII.0. It is a six-channel analysis that goes up to 20 MeV, but includes experimental data only for the total cross section. The results from that analysis are shown in Fig. 7. The total cross section from that analysis is similar to the one obtained earlier at energies below 5 MeV. Natural carbon These cross sections for n+12,13C combine to give results for natural carbon that are very close to the previous evaluation (ENDF/B VII.1) by Fu et al. at energies below 1 MeV. Between 1 and 2 MeV, the deviations become larger, approaching 2% just below the first resonance, and are even larger at higher energies, especially near the narrow resonances. Earlier this year, Andrej Trkov (IAEA) merged the 2015 ENDF/B file for 13C with the TENDL file at energies above 5 MeV to produce an evaluation that extends to 150 MeV.
Using those cross sections with the new ones for 12C, he made a plot of the n+natC elastic scattering cross section that is shown in Fig. 8. The error bars on the measured data are large, but they tend to support the trend of the new 12C cross section toward higher values in the region below 2 MeV. On the other hand, a recent measurement of the total cross section by Danon (RPI) that was not included in the 13C analysis appears to support the lower cross sections in this energy region. Summary and conclusions The EDA analyses of reactions in the 13,14C and 17O systems described here give very good fits to all the data included, with values of chi-square per degree of freedom in the range 1.5-1.7. The neutron cross sections from these analyses are highly constrained by the unitarity property mentioned earlier. For the 17O system, the low-energy n+16O scattering cross sections are now in better agreement with high-precision measurements, and the (n,α0) cross section agrees with the data of Bair and Haas [9] and Giorginis [10]. A post-analysis check showed good agreement also with σ_T measurements done at RPI by Danon. The evaluated 16O file (CIELO 3/16 = ENDF/B VIII.0-β2) extends to 150 MeV, and is the same as ENDF/B VII.1 above 9 MeV (except for capture). The changes in the cross sections from ENDF/B VII.1 appear not to have made much difference in the benchmarks that have been tested so far. Changes in k_eff of −50 to −100 pcm have been reported, which is rather surprising, considering the ≈ 40% changes in the reaction cross section. For further discussion of the implications of this new evaluation for the CIELO project, please see the paper of Chadwick et al. [11] in the proceedings of this conference. The scale of the 13C(α,n)16O cross section at low energies has been fixed by the 17O analysis, which should lead to a better determination of this important s-process source reaction in astrophysics. We plan further improvements on the astrophysically important 16O(n,γ) reaction, by putting in the resonant structure of the capture cross section above the first resonance. The EDA analysis of the 13C system gave quite reasonable results for the n+12C cross sections at energies up to about 6.5 MeV, and resolved the large discrepancy between the experimental scales of two recent measurements of the inelastic cross section to the first level (see Fig. 5). The analysis also clarified the level structure of the 13C system in the region around E_x = 10.9 MeV. In recent work on the 14C system, more channels were added to the existing single-channel analysis in extending it to higher energies (20 MeV). Above that energy, we plan to merge with the existing evaluation in the TENDL file. The 12,13C(n,γ) cross sections have been improved, and give better agreement with the MACS in the KADoNiS database (J.-C. Sublet). However, there are questions from recent benchmarking work on graphite about the value of the thermal cross section for n+13C capture. The elastic scattering cross section for natural carbon becomes ≈ 2% larger than ENDF/B VII.1 around 2 MeV. That difference exceeds the maximum estimated uncertainty (0.6%) of the standard cross section at the upper end of its energy range (1.8 MeV), but may be in better agreement with the measurements. These changes in the cross sections for natural carbon produce about a 70 pcm reduction (in the right direction) in the reactivity of the HMI006 benchmark set, according to calculations done by Andrej Trkov at the IAEA.
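Two bookkeeping conventions behind the natural-carbon and benchmark statements above can be made explicit. A minimal sketch, assuming the standard isotopic abundances of carbon (98.93% 12C, 1.07% 13C, which the paper does not quote) and the usual definition 1 pcm = 1e-5 in reactivity; the cross-section and k_eff inputs below are hypothetical:

```python
# Sketch of two conventions used in the text, under the stated assumptions.

def natural_carbon_xs(sigma_c12, sigma_c13, a12=0.9893, a13=0.0107):
    """Abundance-weighted cross section for natural carbon (same units as inputs)."""
    return a12 * sigma_c12 + a13 * sigma_c13

def delta_rho_pcm(keff_old, keff_new):
    """Reactivity change between two k_eff values, expressed in pcm (1 pcm = 1e-5)."""
    rho = lambda k: (k - 1.0) / k
    return (rho(keff_new) - rho(keff_old)) * 1e5

# Hypothetical inputs: elastic cross sections in barns at some energy, and a
# benchmark k_eff shift of the size quoted for the natural-carbon changes.
print(natural_carbon_xs(4.95, 4.10))           # ~4.94 b, dominated by 12C
print(round(delta_rho_pcm(1.00000, 0.99930)))  # ~ -70 pcm
```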
2,576
2017-01-01T00:00:00.000
[ "Physics" ]
Breakthrough and Disruptive Innovation: A Theoretical Reflection The aim of this paper is a theoretical reflection that attempts to define the breakthrough and disruptive innovation phenomena. For many years, scholars have studied the different forms of innovation, provided new definitions and proposed different approaches to this topic. An issue discussed in this work is the difference between disruptive and breakthrough innovations, which can be perceived as similar but produce different effects on the industry and market structure, as well as on the product structure and the technological approach used to develop an innovation. This paper aims to develop a univocal meaning in the literature, differentiating the two phenomena, and provides a framework for the analysis of the different forms of technological change, studying their pervasive influence on our economy. Introduction Innovation is a wide field of research, one which many scholars have attempted to study from different points of view (economic, managerial, engineering and sociological). An innovation is something new, and in the literature 'newness' is considered under different perspectives: something new to the world (Kleinschmidt and Cooper, 1991; Lee and Na, 1994; Olson et al., 1995), new to the market (Lee and Na, 1994; Ali et al., 1995; Atuahene-Gima, 1995; Olson et al., 1995), new to the customer (Ali et al., 1995; Atuahene-Gima, 1995; Olson et al., 1995) and, finally, new to the firm and the industry (Kleinschmidt and Cooper, 1991; Green et al., 1995; Olson et al., 1995; O'Connor, 1998). In management and engineering, the different typologies of innovation are well analysed, and scholars have generated multiple approaches to the innovation field. Several typologies of innovation occur in the literature: architectural innovation, component innovation, discontinuous innovation, business model innovation, and competence-enhancing or competence-destroying innovations. Among all those categorizations, one particular aspect is relevant: the difference between radical and incremental innovation. These concepts seem clearly explained and well defined in the literature (e.g., Kline and Rosenberg, 1986; Pavitt, 2005). Moreover, the ability to define the depth of a radical innovation is linked to the concepts of breakthrough and disruptive innovation. Other scholars, such as Levinthal, Tripsas, O'Connor and Markides (among many others), have continued to contribute to the innovation literature building on these pillars. However, they did not always share a common terminology, and hence they explain different phenomena. Garcia and Calantone (2002), in their literature review, also maintained that scholars have created confusion among the different definitions of innovation typologies, and Markides (2006, 2010) called for an improvement of the theory related to disruptive innovation. In this work, the issue of radical, breakthrough and disruptive innovation is discussed in particular. The latter two phenomena appear very similar, but their use in different contexts, and the way in which scholars adopt these definitions, reflect the assumption that breakthrough innovation is something that enhances the competences of firms, refers to the technological dimension of a product, and is more planned than a serendipitous event.
Instead, disruptive innovation, as proposed by Christensen in 1997, refers more to a change in the market, a change in the competitive structure causing the failure of the incumbents, and a change in the business model adopted by firms. Here we propose a theoretical reflection on the two different phenomena, which are elsewhere interpreted as very similar. This comparison calls for a deeper understanding of the theory, inexorably leading to different understandings of the competitive position of firms within their environments. Moreover, it is argued that disruptive innovations, as defined by Christensen (1997, 2003), are a very rare phenomenon, mainly analysed in the context of semi-finished products. On the one hand, there is a reflection on the theory generated from 1997 until now about the role of disruptive innovation; on the other hand, we propose a different conceptualization of the disruption process created by technology push. Although scholars have often referred to these topics in an interchangeable way, we argue that a significant difference exists. While the disruptive innovation process refers to a change in the market structure, breakthrough innovation, in contrast, reflects a radical change in the technology or in the product range. So, even if we can assume that both processes are relatively radical at their core, they lead to two separate phenomena. Even if these two phenomena can intersect, they produce different outcomes. Following Wohlin (2014), to define the relevant literature we adopt a theoretical approach based on a snowball methodology. After searching the keywords "disruptive innovation" and "breakthrough innovation" in Google Scholar, the reading of several papers led the author to develop the present theoretical framework. To check the snowball effect, a co-citation analysis was performed. This paper is structured as follows. Section 2 summarises the typologies of innovation. Section 3 analyses the difference between incremental and radical innovation. In Section 4 an overview of breakthrough innovation is provided, while in Section 5 the definition of disruptive innovation is articulated. In Section 6, a discussion of the different meanings and effects of disruptive and breakthrough innovation is provided. Finally, Section 7 contains the main conclusions and limitations of this work, along with some future research proposals. Typologies of innovation In the literature there are different conceptualizations of innovation. Innovation can be architectural (Henderson and Clark, 1990; Magnusson et al., 2003). Architectural innovation is considered the reconfiguration of existing modular resources, also at the firm level (Galunic and Eisenhardt, 2001), to obtain a new process, product, or business model (Sanchez and Mahoney, 1996; Galunic and Rodan, 1998). In this case, according to Henderson and Clark (1990), architectural innovation "is the reconfiguration of an established system to link together existing components in a new way" (p. 5). It means that, also according to Sood and Tellis (2005), the core technology of an architectural innovation relies on the existing one. It concerns the changes needed to link the existing modules differently in order to obtain something slightly new. In a complementary way, we refer to component innovation when only a part changes within the same architecture (Sood and Tellis, 2005).
In the case of component innovation, the change occurs in the materials, parts, or new modules used to provide a new product. Innovation can be discontinuous. In fact, according to Robertson (1967), "A discontinuous innovation involves the establishment of a new product and the establishment of new behavior patterns" (p. 16). According to Christensen (1997), sustaining technologies can be both discontinuous and radical, but they nonetheless ameliorate the performance of already-existing products without threatening incumbent firms. In this context, customers can value the realised improvements. Veryzer (1998) also considers discontinuous innovation as the one that "involve the dramatic leaps in terms of customers' familiarity in use" (p. 305). Scholars agree on the fact that discontinuous innovation displaces customers, generating "entirely new product categories" (Rice et al., 2002, p. 330) and new behaviors, changing customer habits (as the microwave did). Scholars have also distinguished between competence-enhancing and competence-destroying innovation. Freeman (1982) and later Abernathy and Clark (1985) gave the first definitions of 'revolutionary innovation', such as an 'innovation that disrupts and renders established technical and production competences obsolete' (Freeman, 1982, p. 12). Tushman and Anderson (1987) also observed that, under certain conditions, major technological shifts could be 'competence-destroying'. At this point, new entrants would dominate the new industries that have been transformed by the introduction of radical new technologies. However, under other conditions, technologies were 'competence-enhancing', thereby strengthening the position of the existing incumbents. Importantly, this suggests that disruption is not always a 'changing of the guard' between existing incumbents and new entrants, as discussed by numerous studies (Bessant, 2008). Many other leading scholars have considered the importance of business model innovation. Chesbrough (2007, 2010), Zott (2001, 2012), Teece (2010), Chesbrough and Rosenbloom (2002) and Gambardella and McGahan (2010), among others, considered the importance of innovation in a firm's strategy, organisation, and market orientation. Johnson et al. (2008) identified when (at which stage of a sector's or firm's maturity) a new business model may emerge without disrupting incumbents. They identified five 'strategic circumstances' (p. 64) in which change could happen: (1) when there is the opportunity to create new markets (e.g., emerging markets) thanks to the introduction of new disruptive innovations, which could provide cheaper products for customers; (2) when there is the opportunity to reshape a new business organisation; (3) when there is the possibility of finalising and improving products or services and, in doing so, changing the current business model, focusing increasingly on the 'job-must-be-done'; (4) when there is the opportunity of changing low-end markets; and (5) when firms need to respond to market competition by introducing innovation. However, disruptive and breakthrough innovations are connected throughout the technology life cycle. Figure 1 presents the Tushman and Rosenkopf (1992) model, as adapted by Kaplan and Tripsas in 2008, which considers four 'standard stages' of the technology life cycle: 1. Technological discontinuity: The discovery of new technologies related to new scientific knowledge or a radical improvement in technological performance. 2.
Variation - Era of ferment: This phase is characterised by technical uncertainty, high variation in customers' needs and preferences, and ambiguous user preferences. 3. Selection - Dominant design emergence: The preferences of the customers are established, and the common architecture of the new technology becomes a standard used by the majority of firms in the market. 4. Retention - Era of incremental change: The dominant design is stable and difficult to displace; only marginal improvements are introduced. The focus of innovation activity is on incremental innovations. Breakthrough and disruptive innovations both arise in the era of technological discontinuity, but breakthrough innovation is more specific to the technological trajectory of the product. Thus, the focus is on the technological attributes of the product, and the firms' effort lies in finding the new-to-the-world product. Instead, disruptive innovation arises when markets are revolutionized. Therefore, when the new product reaches the era of ferment, the opportunity to generate a dominant design may come faster for a breakthrough innovation than for a disruptive one. This is underlined in the DIA (Discovery, Incubation, and Acceleration) system proposed by O'Connor (2009). In fact, the "acceleration" process refers to the opportunity to commercialize the new product, which can be considered the way to carry the product from the era of ferment to the dominant design. Incremental versus radical innovation The OECD (Oslo Manual, 2018) defines innovation as 'a new or improved product or process (or combination thereof) that differs significantly from the unit's previous products or processes and that has been made available to potential users (product) or brought into use by the unit (process)' (p. 32). This type of innovation is often complex, is more likely to involve technological changes, and mobilises different actors. Incremental innovation is defined by the OECD as "small improvements on existing products and processes" (OECD, 2010, p. 35). In a working paper by Hamdan-Livramento and Raffo (2016), incremental innovation is considered as the one that "usually involve a minor - but maybe significant - improvement on the existing technologies" (p. 4). Incremental innovation does not imply such large changes; instead, it represents only an improvement, in the technology, design, and use, of the product or process. Research and development (R&D) investment in incremental innovation requires less effort. Scholars agree on the idea that incremental innovation is an update to, or an amelioration of, something that already exists (Ettlie et al., 1984; Christensen, 1997; Lane, 2011, 2016; Utterback and Acee, 2005; Hargadon, 2003; Ruttan, 1959; Benner and Tripsas, 2012; Tripsas, 1997a, 1997b, 2009). The development of incremental innovation follows the path dependency highlighted by the emergence of a dominant design. Thus, scholars consider incremental innovation to be more readily relied upon by incumbents, because the lock-in of path dependency occurs more frequently in big organizations. Radical innovations, instead, according to Freeman (1992), represent a discontinuity in the industry: a change in the process, the product or the organisation. In fact, Freeman considered incremental innovation as a further development of an existing paradigm, in which firms reduce costs or prices, or search for better product designs.
In the literature, it is clear that radical innovation brings about a change in both firms and customers and, consequently, in the market as well. This change derives from new investments (or search strategies) made by firms to discover new products or processes. Sometimes, radical innovations can emerge thanks to lead users (von Hippel, 1986, 1988). A radical innovation changes the world, because it is something new to the firm, the customers, and the market. It is, in fact, new to the world. It happens when a technological discontinuity occurs and 'inaugurates an era of ferment' (p. 27) that promotes competition among the innovating firms to select the most appropriate form of innovation, which includes perfecting the design and improving the technical characteristics of the innovation itself (Tushman and Anderson, 1987). The concept of radical innovation has always been contrasted with that of incremental innovation. According to many scholars, the difference between radical and incremental innovation can be easily identified. Radical innovations are those that change the world, something new for all firms and markets. The main difference that Chandy and Tellis (1998) proposed between radical and incremental innovation refers to the roles of the technology and the market. The first dimension looks at the extent to which the technology adopted to obtain a new product is new; the second dimension considers the opportunity to satisfy new customers' needs with respect to previous products. They looked at the radicalness of an innovation as the shock that affects the S-curve of product diffusion, considering the changes needed to modify a product for the market. The more the core technology is modified to obtain a new product, the more radical the innovation. On the contrary, if the shock comes from the market, it generates a market change. Substantially, market demand shifts are less radical and involve small ameliorations and improvements of already existing products. Chandy and Tellis (1998, 2000) provide a good analysis of the capabilities and opportunities for firms to introduce an innovation. In their article of 2000, they focused on the size of the actors that introduce radical or incremental innovations in the market. They found that small or non-incumbent firms are more likely than big incumbents to introduce radical innovations, because (as underlined by, e.g., David, 1985; Liebowitz and Margolis, 1995; Tripsas, 1998; Coombs and Hull, 1998; Cavalcante et al., 2011) the path dependency process is strongly linked to consolidated firms' internal routines, knowledge, capabilities, and resources. These aspects are absent in small new entrant firms. Menguc et al. (2014) have shown that the design of a product is relevant in numerous cases of product innovation: "Firms with incremental product innovation capability have the competency to deliver product innovations that depart minimally from existing routines, operations, and knowledge. Firms that possess such capabilities produce products that are seen by customers as ones that enhance the consumption experience without significantly disrupting or deviating from customers' prior knowledge or requiring new learning" (p. 316). In contrast, the radical innovation process in products lies in the idea that there is a need for "unlearning and more cognitive effort" (p. 316) by customers in the use of the new product. Scholars agree on the fact that a radical innovation occurs less frequently than an incremental one.
Radical innovations are a supply-driven, not a demand-driven, phenomenon. Tripsas (2008) considered the fact that technological change is often driven by customer preferences along four dimensions: 1. Relevant attributes: customers' preferences about different given sets of attributes of the product; 2. Minimum performance requirement: a product must meet a minimum performance threshold before the customer will include it in the set of possible purchases; 3. Maximum value performance: in many cases, customers may be attracted to high performance, and they may be willing to spend more if the new product exhibits these characteristics; 4. Relative preference for specific attributes: all customers have different personal opinions about the value of a product, according to their particular tastes. Thus, even though Tripsas used the word 'radical' to define the technological discontinuities occurring in an industry, she considered the changing of customer preferences as the most important factor. She did not reflect on the radical changes that may occur in the structure of the market because of the introduction of a radical innovation. This approach is quite different from the one adopted by Christensen with his idea of market subversion due to the entry of a disruptive technology. According to David (2001), the path dependency concept "refers to a property of contingent, non-reversible dynamical processes, including a wide array of biological and social processes that can properly be described as 'evolutionary'" (p. 1). Considering the lock-in process, he looks at big firms that are unable to create radical innovations; they follow only the path of incremental innovation. Path dependency exists when big firms have an intrinsic inability to make a different set of choices in resource allocation. A large body of the literature has examined path dependency theory together with resource-based view theory, including the dynamic capabilities approach (Coombs and Hull, 1998; Danneels, 2002; Broring, 2010; DaSilva and Trkman, 2014). Tripsas (2009) also examined the lock-in process in firms. The scholar was interested in analysing firms' identities. Thus, she assessed the emergence of a disruptive innovation that forces firms to change their foci and, in doing so, their identities. She considered two types of identity: (1) internal, linked to the shared understanding of the employees regarding the firm's core business; and (2) external, related to the outside audience (firm stakeholders). In this contribution, she also argued that incumbent firms are less likely to change their core business because of their internal identity, which consequently leads to some missed opportunities in introducing new technologies into their markets (or, in an extreme case, even to the failure of the firm). Breakthrough innovation As presented in the introduction, in order to distinguish between breakthrough innovation and disruptive innovation, we searched Google Scholar for the most cited articles. 'Breakthrough' innovation is a term used in the literature in the 2000s, in the era of emerging ICT technologies, to define radical innovation; the term is used as a synonym of radical or discontinuous innovation. The term "breakthrough innovation" has been used frequently in a large body of literature (Rice et al., 1998; McDermott and O'Connor, 2002; O'Connor et al., 2002; O'Connor and Rice, 2013).
The concept of breakthrough innovation rose strongly among scholars after 2002, with an acceleration after 2012. Figure 2 shows the evolution of the number of papers written in the breakthrough innovation field. (The underlying search was run in the Scopus database, looking at all documents referring to "breakthrough innovation*" written in English between 1957 and 2019.) A snowball process was adopted (Wohlin, 2014; Lecy and Beatty, 2012); a minimal sketch of one snowballing iteration is given at the end of this paper. We start by commenting on the first search, which refers to O'Connor's contributions; these explore, for instance, the management of innovation projects in firms. She proposes a useful guideline for entrepreneurs and managers to successfully (and quickly) develop or absorb novel and radical innovations. Barnholt (1997) and Mascitelli (2000) used this terminology to classify revolutionary technological changes. In their vision, a breakthrough is an innovation that often occurs unexpectedly, due to a creative process emerging in an organisation. This innovation breaks the previous technological paradigm and creates a new trajectory. This conceptualisation is synonymous with that of revolutionary technical change, as discussed in the 1980s by the Sussex school and, particularly, by Freeman (1982). In particular, Mascitelli (2000) defines breakthrough innovations as the processes which "represent any creative and original action by individuals or project teams that enables firms to capture at least temporary monopoly profits or that results in a significant increase in market share" (p. 181). In contrast, in O'Connor's contributions (O'Connor and Rice, 2013; Roberson and O'Connor, 2013), the focus is placed on how to manage the development of new radical innovations. Hence, from her definition of breakthrough innovation, we can assume that a breakthrough innovation is an activity that is planned and well defined by the firm. In particular, O'Connor and Rice (2001) clarify what a breakthrough innovation is: a "radical or breakthrough innovation as the creation of a new line of business - new for both the firm and the market place. By new we mean a product or process either with unprecedented performance features" (p. 99). This contribution examines twelve case studies. O'Connor shares a similar view with Mascitelli (2000) and Barnholt (1997). In fact, for them, "as breakthrough innovation project proceeds, there is an increased commitment of financial and human capital" (p. 104), and "the capacity of the firm for opportunity recognition is related to the continuity of the informal networks of individuals engaged in the conversion of breakthrough innovations into new ventures" (p. 107). Thus, on the one hand, the "informal networks" represented by the individuals involved in the innovation projects are relevant. On the other hand, from all those contributions it emerges that breakthrough innovation is a planned activity, which needs the creativity of individuals. The innovation output and the commercialization are of course the last step of the process but, interestingly, these authors do not study the effects of the innovation on market dynamics. Moreover, in O'Connor and Rice (2001) and O'Connor et al. (2008), the examples used are built on large incumbents rather than on small new entrants. In any case, this literature is centred on incumbents which have the attitude, competence, and capability to manage a radical innovation process without being displaced by innovative entrants.
In fact, firms rely on projects where the innovation process is driven by already existing competences, and for this reason breakthrough innovation mirrors their dynamic capabilities. As is known, dynamic capabilities (Teece, 1994, 2014; Teece and Pisano, 1994; Teece et al., 1997; Martin, 2000; Winter, 2003; Drnevich and Kriauciunas, 2011; Wilden and Gudergan, 2015) are processes and routine practices aimed at supporting the firm's organization in order to generate competitive advantage. Hence, dynamic capabilities and routines seem to support breakthrough innovation. Therefore, we can consider the breakthrough innovation process as an activity that needs dynamic capabilities, but which may also be a competence-enhancing process (Kelley et al., 2009; O'Connor, 2009; Kelley et al., 2011). Other authors share this view, such as McDermott and Handfield (2000) and Dunlap-Hinkler et al. (2010): a breakthrough innovation happens a priori in the enterprise, through well-planned projects, as a result of technological exploration (March, 1991; O'Reilly and Tushman, 2004). They focused exclusively on large companies that had established processes in place and a "critical mass" of resources available to dedicate to their new product development efforts. However, a significant number of innovations that transform an industry do not originate from the industry's leaders (Henderson and Clark, 1990; Utterback, 1994). Incumbents may often prefer to follow an established trajectory instead of pursuing a new, more radical direction. Henderson (1993) suggested that, in some circumstances, extensive experience with a technology might carry a substantial disadvantage, creating resistance to change. Technological shifts open up a new fluid phase that may evolve in multiple directions. Incumbent firms may find it difficult to rely on new, radical and breakthrough technologies (Phillips et al., 2006). Disruptive innovation Because disruptive innovation generates new markets, displacing the incumbents, as pointed out by Christensen (1997), it is often developed by new firms. This approach is discussed in this section. Adopting the same methodology as the previous section, the search in Google Scholar found three important papers: 1. 'What is disruptive innovation' (Christensen et al., 2015; cited 1430 times); 2. 'Disruptive innovation for social change' (Christensen et al., 2006; cited 762 times); and 3. 'Disruptive innovation: in need of better theory' (Markides, 2006; cited 1313 times). Schumpeter (1942) originally proposed the first interpretation of 'creative destruction' in his theory of economic change. He considered the innovation process from an economic viewpoint, as a 'process of industrial mutation […] that incessantly revolutionises the economic structure from within, incessantly destroying the old one, incessantly creating a new one' (p. 83). His idea has been re-proposed by Christensen (1997) under the new terminology of 'disruptive innovation', as an innovation that displaces firms from the market. Specifically, following Christensen (1997), it can be noted that a disruptive technology is not necessarily a new product (or a new process) that performs better than the previous one, but rather a product that creates new markets and new opportunities and, as such, can destabilise the incumbents. An empirical example is e-commerce.
E-commerce is a branch of the Internet, so it is an extension of a previously existing technology, and it fulfils the need of customers to buy goods online. When e-commerce began to be adopted, mainly through amazon.com and other platforms, it widely changed the market structure, generating new opportunities and threats, and the entry of new firms. Whereas in his early model Christensen stressed disruption from below (in which a simpler and cheaper technology displaces an established but over-sophisticated one), in his later model he also included the case of 'new market' disruption, where the focus is on unmet or unimagined needs that create new market segments (Christensen and Raynor, 2003). Moreover, Christensen and Raynor (2003) argued that 'disruption is a process' (p. 69), which implies that disruption does not occur in just one given moment; it is a process that continues over time. Danneels (2004) agreed on this point, considering new innovative entrants into an industry as the possible cause of incumbents' displacement in their market. As discussed by Christensen, incumbents are often lazy: they do not want to invest in new market niches that do not have a significant size. The replacement of incumbents by new entrants in the market happens because (some) existing firms fail. According to Tellis (2006), Benner and Tripsas (2012), Tripsas (1997a, 1997b), Bower and Christensen (1995) and Adner and Zemsky (2003), incumbents fail due to the absence of investments in the new technology, or to weak capabilities in absorbing the new technology. This illustrates the case in which firms make the wrong set of choices to manage the discontinuous technology. Markides (2006) was clear in maintaining that business model innovation usually creates a new market niche without completely overtaking the traditional way of competing. In fact, e-commerce generated new threats but also new opportunities, not just for new entrants (such as amazon.com, e-bay, or yoox.com in the 1990s), but also for incumbents operating in the old model of selling through physical stores (e.g., Inditex Group, Benetton, or H&M). Several incumbents were able to absorb the innovation and to foster the changes occurring in the market, which affected their business models as well. In doing so, the incumbents demonstrated that they were able to develop a new business model by adopting the new technology. The important thing to stress is that, as also argued by Utterback and Acee (2005), disruptive technologies certainly create new market niches. Niches are typically small portions of the main market; they cannot guarantee good profitability or even the survival of the incumbents (assumed to be large organisations). Tellis (2006) has argued that the lack of investment in the new disruptive technology (or in related R&D activity) is motivated by the absence of leadership: firms often lack 'visionary leadership'. But Christensen's definition of disruptive innovation suffers from a problem of identification (Danneels, 2002, 2004): how can we define a disruptive technology a priori, before it has shown its potential? Tripsas' contribution to the innovation literature appears, in this theoretical context, to be very interesting. Tripsas (1997a, 1997b) analysed the history of the typesetting industry to show that there is no deterministic outcome deriving from the market entry of disruptive technologies.
Her argument can be somewhat related to Tellis' idea of visionary leadership. Her reasoning was further developed in Benner and Tripsas (2012) and Tripsas (2009), in relation to the history of the digital camera industry, considering the capability of firms to manage the introduction of digital technologies. The entry of disruptive innovations opens opportunities to develop new markets and new products, but these opportunities are open to both new firms and incumbents. Firms may miss these opportunities because they are incapable of changing their identity ('prior industry affiliation', p. 279). An interesting contribution was presented by Markides (2006), who distinguished between disruptive technologies (causing a potentially high impact on the industry) and disruptive innovations, which affect not only a specific technology applied to a novel product, but also the evolution of the market structure, or the introduction of a new business model. Markides (2010, 2012) and Markides and Oyon (2010) have defined the characteristics of new business models deriving from the introduction of radical changes and disruptive innovations. Disruptive innovations generate new markets and provide a better balance between costs and performance. Disruptive innovations can create important new market niches, as has occurred, for instance, in the case of low-cost airlines, and in the retail industry, where the entrance of new e-commerce modalities pushed incumbents to re-design their old business models. The omnichannel strategy is nowadays dominant; the click-and-collect modality is widespread in the retail industry, and this approach combines the in-store and online customer experience. Moreover, mobile-phone applications also represent a new opportunity to communicate with customers. Discussion. Breakthrough and disruptive innovation Considering the existence of the different types of radical innovation analysed in detail in the previous paragraphs, some main lines of reflection regarding the issue of breakthrough and disruptive innovation are reported here. On the one hand, these two types of radical innovation are similar, and both regard revolutionary changes. We can assume that both innovations lead to big changes occurring in the market (disruptive) or in the technology related to a product (breakthrough). According to Christensen (1997), disruptive innovations are not 'sustaining' innovations; they are mainly competence-destroying, because market niches are not sufficiently large for the growth of incumbent firms. In contrast, these niches are a good starting point for small firms that want to develop disruptive innovations, creating new needs for novel customers. Disruptive innovation is associated with the displacement of incumbents. In Markides' interpretation, disruptive innovations generate disruptive new business models, but their power of disruption is limited to a minor, though significant, share of the market. Thus, old and disruptive technologies tend to survive together in the medium or long term. Moreover, because of the small size of these firms (which can also be spin-offs of incumbent firms), the turnover coming from the niches is enough to guarantee their survival. Markides and Geroski (2005) also advocated this approach.
Established companies should not even attempt to create disruptive innovations but should instead leave this task to small start-up firms, which have the requisites to succeed in this game. Established firms should concentrate on what they are good at, that is, consolidating young markets into the big mass market. This means that small businesses are able to create disruptive innovations, which are cheaper to produce (because generally they require more creativity than strong efforts in R&D activity). Another line of reflection has been proposed by Tripsas (1997a, 1997b): incumbents are able to change their approach in order to manage and absorb the new innovation which, in the case of business model innovation, refers more to a process and a strategy than to a physical attribute of a product. We can reasonably assume that several large players are able to manage disruptive innovation, whereas small incumbents may suffer more. But in other cases, incumbents are not able to shift to the new technology and they exit the market. Finally, we can summarize by arguing that breakthrough innovations refer more to product innovation, and they are created both by large firms (incumbents in the market) and by small innovators. The term disruptive innovation refers to a different phenomenon, in which an innovation displaces the incumbents, shifting not just the technology but the industry structure; this suggests that the disruption is more related to the market structure than to the technological advancement of the product per se. An example is provided by the retail industry, with the spread of e-commerce. As discussed before, e-commerce represents a potentially disruptive innovation. Actually, only large retail chains have adopted it; in contrast, traditional small shops may be incapable of using this new technology. However, several small new start-ups that sell only online were created in the last decade. Conclusions The main idea of this work was to contribute to the development of a clear distinction between the definitions of breakthrough and disruptive innovation. Scholars have developed different definitions, such as disruptive and breakthrough innovations. The main differences between disruptive and breakthrough innovation are due to a few key points. A disruptive innovation disrupts the market and creates new market niches. It is an innovation that not only involves the product or the process, but can also affect the firm's business model and the processes of entry and firm shakeout. Considering its characteristics, it is a competence-destroying innovation. Disruptive innovations in product life cycles reflect the poorer performance of the product (or the excessive price of the previously used technology). Existing customers do not yet consider the new product, but novel customers are attracted. This radical innovation affects competition in the market, creating new products that satisfy new needs. In new market niches, innovative firms can grow and, at the end of the market evolution, they can displace the incumbents. When new technologies are introduced without changing the firms' rankings in the market, these innovations might just be considered technological breakthroughs.
A breakthrough innovation is a radical innovation that does not dramatically provoke the displacement of old firms from the market (it originates in large firms as well as in new small firms entering the market without the power of becoming dominant), allowing new actors to conquer only a small percentage of total industry sales. Breakthrough innovations refer only to new technologies and to the invention of new products; they do not pertain to new business models. Regarding the competencies necessary to create a breakthrough innovation, if this process originates in a large firm, it is more of a competence-enhancing process, because it relies on the development of new projects and well-planned activities internal to the existing know-how of the firm. Nevertheless, new inventors or, more infrequently, innovative start-ups can clearly create breakthrough innovations (Hervas-Oliver and Boix-Domenech, 2013). New business models can be disruptive or non-disruptive. They can create new niches in old markets, or new markets, increasing competition for both new and incumbent firms. The entire displacement of incumbents in an industry is a rare phenomenon. Thus, Christensen's theory has not found much empirical verification, despite its popularity. Moreover, it is interesting to observe the capacity of incumbents to absorb potentially 'disruptive' innovations. Possessing the dynamic capabilities to react fast and to recognise a disruptive innovation, in order to be able to absorb it, appears to be important in helping firms to define the best way to compete. This can spur incumbents to escape the lock-in process related to the existence of a strong internal identity. If an innovation is disruptive by nature, then firms must focus on it, finding the right managers capable of absorbing it, or at least of reacting quickly. The possession of dynamic capabilities, or in general the ability to seize an opportunity, and the courage to strengthen the firm's R&D activity, can help incumbents to survive in the market.
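As noted in the introduction, the literature base for this paper was assembled via keyword searches plus snowballing and a co-citation check. A minimal sketch of one backward/forward snowballing iteration in the spirit of Wohlin (2014); the citation graph, seed papers, and function name below are hypothetical illustrations, not data from this study:

```python
# One backward/forward snowballing iteration (after Wohlin, 2014).
# The citation graph and seed set are toy placeholders, not study data.

def snowball_iteration(seeds, references, citations):
    """Collect papers cited by (backward) and citing (forward) the seed papers."""
    candidates = set()
    for paper in seeds:
        candidates.update(references.get(paper, []))   # backward snowballing
        candidates.update(citations.get(paper, []))    # forward snowballing
    return candidates - set(seeds)                     # keep only newly found papers

# Toy data: two seed papers found via the keyword search.
references = {"Christensen1997": ["Schumpeter1942"],
              "OConnorRice2001": ["Freeman1982"]}
citations = {"Christensen1997": ["Markides2006", "Danneels2004"],
             "OConnorRice2001": ["OConnor2009"]}

new_papers = snowball_iteration({"Christensen1997", "OConnorRice2001"},
                                references, citations)
print(sorted(new_papers))  # screen these, add the relevant ones to the seed set,
                           # and iterate until no new papers appear; a co-citation
                           # count over the final set serves as the consistency check
```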
8,352.4
2020-12-01T00:00:00.000
[ "Business", "Economics", "Engineering" ]
Bioactive Ether Lipids: Primordial Modulators of Cellular Signaling The primacy of lipids as essential components of cellular membranes is conserved across taxonomic domains. In addition to this crucial role as a semi-permeable barrier, lipids are also increasingly recognized as important signaling molecules with diverse functional mechanisms ranging from cell surface receptor binding to the intracellular regulation of enzymatic cascades. In this review, we focus on ether lipids, an ancient family of lipids having ether-linked structures that chemically differ from their more prevalent acyl relatives. In particular, we examine ether lipid biosynthesis in the peroxisome of mammalian cells, the roles of selected glycerolipids and glycerophospholipids in signal transduction in both prokaryotes and eukaryotes, and finally, the potential therapeutic contributions of synthetic ether lipids to the treatment of cancer. Introduction Much like many other types of organic molecules, lipids are essential for life, even ancient life. The Bacteria and Archaea domains each synthesize a wide variety of structurally diverse lipids, including many of those conserved in eukaryotes such as phospholipids and sphingolipids [1]. All lipids, ranging from Archaea to Eukaryota, can be divided into eight categories: fatty acyls, glycerolipids, glycerophospholipids, sphingolipids, sterol lipids, prenol lipids, saccharolipids and polyketides [2]. Important categories like glycerophospholipids can be further subdivided into classes and subclasses according to various features, including the number of radyl chains, the type of polar headgroup and whether ester or ether bonds are present [3]. Many of these lipids are required for the formation of the physical barrier known as the lipid bilayer that separates primitive and more complex cells from their ever-changing environments. The crucial nature of the maintenance, composition and permeability of cellular membranes is not debatable, but the fact that many resident lipids in the bilayer also serve as substrates for various enzymes that generate potent molecular signals in multiprotein signaling cascades is not immediately apparent. Since the coining of the term "prostaglandin" by the Swedish scientist Ulf von Euler in the 1930s [4], a multitude of bioactive lipid mediators have been discovered that can act as signaling molecules, making this aspect of lipid function no longer foreign but increasingly recognized as a significant part of cellular behavior. Some of these signaling lipids, such as diacylglycerol (DAG) and lysophosphatidic acid (LPA), are endogenously produced by the action of many isoforms of phospholipase enzymes, while others are derived from the oxidation of essential, polyunsaturated fatty acids (PUFA), including arachidonic acid, ω-3 or ω-6 fatty acids. PUFA can be further oxidized into oxylipins, a large group of signaling lipids that regulate an expansive array of metazoan vascular responses such as platelet aggregation and clot resolution, innate immunity and inflammation [5][6][7]. Bacteria also utilize lipids for signaling purposes, but as with eukaryotes, ether lipids are relatively understudied [8][9][10][11]. In this review, the role of lipids as signaling molecules is analyzed through the lens of one particular class of lipid found in all domains of life, from the ancient Archaea to the more complex Eukarya: the understudied and enigmatic ether lipid.
Due to the presence of other timely and comprehensive reviews [12][13][14], however, we will focus mainly on the signaling properties of naturally derived glycerolipids and phosphoglycerolipids, their synthetic mimetics designed to interfere with key signaling networks in human disease and, briefly, their abundant membrane counterparts, the plasmalogens, about which much has been written already. Ether Lipid Biosynthesis Ether lipids or alkyl lipids are any lipids having an ether bond instead of the more prevalent ester bond found in the majority of lipid classes. Alkylglycerols and phosphorylated alkylglycerols are a special type of glycerolipid having an ether bond at the sn-1 position of the glycerol backbone, a hydroxyl group or acyl hydrocarbon chain at the sn-2 position, and a hydroxyl group (alkylglycerol) or a phosphoryl group at the terminal sn-3 position (e.g., 1-O-alkyl-sn-glycerol, Figure 1a, or 1-O-alkyl-sn-glycero-3-phosphate, Figure 1b). To avoid confusion caused by the many different nomenclatures for lipids and their metabolizing enzymes, common lipids or groups of lipids are described using their systematic names, whereas individual lipids with particular chain lengths and double bond configurations will be referred to by their numerical common name or LipidMaps nomenclature. Enzymes are referred to by their traditional substrate/activity names, coupled with their official gene symbol in parentheses when appropriate. In mammals, ether lipids constitute about 20% of the total phospholipid mass [15]. The early steps of ether lipid biosynthesis occur in the peroxisome, a small membrane-bound organelle that transiently interacts with the endoplasmic reticulum (ER) [12,14,16]. The process of ether lipid biosynthesis begins when glycerol 3-phosphate (G3P) is imported into the lumen of the peroxisome and converted to dihydroxyacetone phosphate (DHAP) by G3P dehydrogenase (GPD1). DHAP serves as the substrate for a DHAP acyltransferase known as glyceronephosphate O-acyltransferase (GNPAT/DHAPAT/DAPAT) that uses activated fatty acids in the form of acyl-CoA to create an acyl-DHAP product (Figure 2). Acyl-DHAP is the substrate for another peroxisomal enzyme, alkylglycerone phosphate synthase (AGPS), which exchanges the nascent acyl group with a fatty alcohol to form the characteristic ether linkage at the sn-1 position, as opposed to the more common ester linkage. The fatty alcohol used by AGPS is derived from the reduction of endogenous acyl-CoAs by a membrane-bound protein called fatty acyl-CoA reductase (FAR1/2) or from pools of fatty alcohols obtained from the diet. Interestingly, a pool of fatty acid synthase (FASN) localized to peroxisomes produces fatty acids (e.g., palmitate), which are subsequently activated by an acyl-CoA synthetase, reduced by FAR1/2, imported into the peroxisome and incorporated into ether lipids [17]. As discussed later in the oncogenesis section, genetic disruption of either the GNPAT or AGPS axis causes a major, albeit not complete, reduction in ether lipid production. These loss-of-function data offer powerful insights into the major contributions of both peroxisomes and the ether lipidome as a whole, but usually do not reveal causal mechanisms centered on individual signaling pathways or lipid species [18,19].
Figure 1 caption (recovered fragments): Both alkyl-LPA and acyl-LPA bind to and activate G protein-coupled receptors, which explains their physiological signaling roles.
(c) Plasmalogens contain a vinyl ether bond at sn-1, usually an unsaturated acyl group at sn-2, and either PE or PC at sn-3. Shown here is 1-(1Z-octadecenyl)-2-arachidonoyl-sn-glycero-3-phosphocholine. (d) Bacterial triglycerides often contain isoprenoid units of branched unsaturated hydrocarbon chains attached via ether bonds. Shown is free farnesol without ether bonds to glycerol. (e) Platelet activating factor (PAF) is a group of choline-containing ether lipids that mainly promote pro-inflammatory signaling pathways. The acetyl group at sn-2 required for this activity is added to alkyl-LPC by an acetyltransferase. (f) Hexadecylacetylglycerol (HAG) is a diradylglycerol (DG(O-16:0/2:0/0:0)) that is the acetylated form of HG (panel a, above). (g) HAG can be phosphorylated to generate 1-O-hexadecyl-2-acetyl-sn-glycero-3-phosphate (HAGP), shown here, which is thought to compete with DAG to regulate PKC signaling and/or membrane localization. (h) Edelfosine is a synthetic anticancer ether lipid that differs from alkyl-PC and PAF in its methyl group at sn-2 (1-O-octadecyl-2-O-methyl-sn-glycero-3-phosphocholine).
Figure 2 caption: (1) DHAP is formed within the lumen of the peroxisome and becomes acylated by glyceronephosphate acyltransferase (GNPAT) to yield acyl-DHAP. (2) Alkylglycerone phosphate synthase (AGPS) replaces the acyl moiety of acyl-DHAP with an alkyl group derived from fatty alcohols, which are products of fatty acyl-CoA reductase 1 and 2 (FAR1/2). This reaction produces the first ether lipid precursor, alkyl-DHAP. (3) A membrane-spanning enzyme, peroxisome reductase activating PPARγ (PexRAP), reduces alkyl-DHAP and acyl-DHAP to alkyl-LPA (Figure 1b) and acyl-LPA, respectively, before transporting these products into the ER for further processing. (4) Alkyl-LPA is dephosphorylated and acylated at sn-2 in the ER to form alkylacylglycerol or AAG. (5) The ethanolamine phosphotransferase EPT1 adds a phosphatidylethanolamine (PE) group to AAG using CDP-ethanolamine as the polar head group donor, which produces alkyl-PE or plasmanyl-PE. (6) Finally, TMEM189 desaturates the ether bond at sn-1 of alkyl-PE to produce the vinyl ether bond characteristic of plasmalogens. PC-containing plasmalogens are formed by exchange of PE for PC by other enzymes (not shown).
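The shorthand used in these captions and in the following sections (e.g., DG(O-16:0/2:0/0:0) for HAG, MG(O-16:0/0:0/0:0) for HG) follows the LipidMaps conventions mentioned earlier. A minimal parsing sketch; the function and field names are illustrative, not an official LipidMaps tool:

```python
import re

# Minimal parser for LipidMaps-style shorthand such as "MG(O-16:0/0:0/0:0)".
# "O-" marks an ether (alkyl) linkage; "carbons:double_bonds" describes each
# radyl chain, one descriptor per sn position. Illustrative only; branched
# descriptors such as the "i15" in TG(O-i15/i15/i15) would need extra handling.

def parse_shorthand(name):
    lipid_class, chains = re.fullmatch(r"(\w+)\((.+)\)", name).groups()
    parsed = []
    for chain in chains.split("/"):
        ether = chain.startswith("O-")
        carbons, double_bonds = chain.removeprefix("O-").split(":")
        parsed.append({"ether": ether,
                       "carbons": int(carbons),
                       "double_bonds": int(double_bonds)})
    return {"class": lipid_class, "chains": parsed}

print(parse_shorthand("MG(O-16:0/0:0/0:0)"))  # 1-O-hexadecyl-sn-glycerol (HG)
print(parse_shorthand("DG(O-16:0/2:0/0:0)"))  # hexadecylacetylglycerol (HAG)
```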
The products of AGPS, various alkyl-DHAPs, are simultaneously reduced and exported from the peroxisome en route to the ER as 1-O-alkyl-2-lyso-sn-glycero-3-phosphate or alkyl-lysophosphatidic acid (alkyl-LPA). Even though this pathway has been known for quite some time, the responsible reductase/lipid transporter, termed peroxisome reductase activating PPARγ (PexRAP or DHRS7B), was only recently identified in mice [20]. Acyl products of GNPAT that are not substrates for AGPS can also be reduced and exported from the peroxisome as LPA; however, most LPA is produced from lysophosphatidylcholine (LPC) in the blood by autotaxin, a circulating enzyme with phospholipase D-like activity [21,22]. Alkyl-LPA is further modified in the ER, undergoing acylation at the sn-2 position by an acyltransferase distinct from GNPAT and dephosphorylation at the sn-3 position by a phosphatidate phosphatase activity. This alkylacylglycerol (AAG) intermediate serves as a substrate for the enzyme ethanolamine phosphotransferase 1 (EPT1/SELENOI), which uses CDP-ethanolamine to add a phosphoethanolamine group to AAG, making alkyl-PE, otherwise known as plasmanylethanolamine (plasmanyl-PE) [23]. EPT1 is mutated in certain patients with neurological disorders who display abnormal myelination of neurons and abnormal brain development [23,24]. Skin fibroblasts isolated from these patients showed reduced long-chain acyl-PE and alkyl-PE synthesis. Interestingly, these fibroblasts synthesized normal levels of the major ester-linked lipids PC and PE, yet accumulated ether-linked PC (plasmanyl-PC), implying that AAG substrates for EPT1 are repurposed by choline phosphotransferases. Similar reductions in PE were detected in HeLa cells bioengineered to lack EPT1 [23] and in yeast cells expressing human EPT1 with an R122P point mutation, which showed greatly reduced enzymatic activity [24]. Thus, EPT1 is the dominant enzyme responsible for long-chain plasmanyl-PE biosynthesis, which appears to be essential for normal neurological development. Alkyl-PE can be subsequently oxidized by plasmanylethanolamine desaturase 1 (PEDS1 or TMEM189) to create a double bond at sn-1 (the vinyl ether bond) and the final product, 1-alkenyl-PE or plasmenylethanolamine (1-(1Z-alkenyl)-2-acylglycerophosphoethanolamine) [25,26]. Like EPT1, TMEM189 was only recently identified, which underscores the relative lack of attention to, and consequently knowledge of, ether lipid metabolism compared to the Kennedy pathway and other well-documented acyl-lipid pathways for phospholipid biosynthesis. The final lipid products of TMEM189 are known as plasmalogens (Figures 1c and 2), which represent the largest group of ether lipids known in eukaryotes, estimated at over 50% of the total phospholipid content of certain brain regions [15]. We will return to these important ether lipids to briefly evaluate their signaling potential at the end of this review. Because of the substrate specificity of TMEM189, plasmenyl-PC species are thought to be derived from plasmenyl-PE. The phosphoethanolamine group of plasmenyl-PE is removed by phospholipase C to yield 1-alkenyl-2-acyl-sn-glycerol, which is a substrate for choline phosphotransferase 1 (CHPT1 or CPT1) or a related enzyme with broader substrate usage such as choline/ethanolamine phosphotransferase 1 (CEPT1). Like EPT1, CEPT1 can also generate alkyl-PE from AAG, but EPT1 seems to be more clinically significant, since no disease-related mutations in CEPT1 have yet been identified [24,27].
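As a bookkeeping aid, the de novo route just described can be condensed into an ordered series of (substrate, enzyme, product, compartment) steps. The sketch below is purely illustrative: the step and enzyme names follow the text, but the linear chain abstracts away branching (e.g., the acyl-DHAP/acyl-LPA branch) and says nothing about rates or regulation.

```python
from collections import namedtuple

Step = namedtuple("Step", "substrate enzyme product compartment")

# De novo ether lipid biosynthesis as summarized in the text (illustrative only).
ETHER_LIPID_PATHWAY = [
    Step("G3P", "GPD1", "DHAP", "peroxisome"),
    Step("DHAP", "GNPAT", "acyl-DHAP", "peroxisome"),
    Step("acyl-DHAP", "AGPS", "alkyl-DHAP", "peroxisome"),            # ether bond formed here
    Step("alkyl-DHAP", "PexRAP/DHRS7B", "alkyl-LPA", "peroxisome->ER"),
    Step("alkyl-LPA", "acyltransferase + phosphatase", "AAG", "ER"),
    Step("AAG", "EPT1/SELENOI", "plasmanyl-PE", "ER"),
    Step("plasmanyl-PE", "PEDS1/TMEM189", "plasmenyl-PE", "ER"),      # vinyl ether bond
]

def trace(pathway, start):
    """Print the chain of intermediates reachable from `start`."""
    current = start
    for step in pathway:
        if step.substrate == current:
            print(f"{step.substrate} --[{step.enzyme}, {step.compartment}]--> {step.product}")
            current = step.product

trace(ETHER_LIPID_PATHWAY, "G3P")
```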
Ether Lipid Physical Characteristics Ether lipid biosynthesis is distinct between the Archaea and Bacteria domains. Archaeal membranes contain an abundance of unique ether lipids typified by isoprenoid chains (Figure 1d) linked to glycerol backbones via diether bonds in a 2,3-sn-glycerol stereochemistry (isoprenoid dialkylglycerol diether or isoDGD) [28]. Tetraether lipids also exist in Archaea, with two dialkyl glycerols covalently linked by extended isoprenoid units. Tetraethers and other alkylglycerols can be found in Bacteria as well, but these lipids follow a 1,2-sn-glycerol stereochemistry similar to eukaryotic glycerolipids [29]. These complex ether linkages are thought to confer reduced ion permeability, increased melting temperature and heat resistance to archaeal and bacterial membranes. Indeed, many thermophilic, halophilic and acidophilic species contain ether lipid-rich membranes, presumably as an adaptation mechanism. Since ether lipids are also found in mesophilic bacteria and other species not living in extreme environments, however, they may have attained more diverse functions beyond environmental adaptation. As in prokaryotes, eukaryotic ether lipids are more chemically stable than their acyl counterparts, in large part due to their resistance to lipases, which do not hydrolyze ether bonds but readily cleave the ester bonds found in most glycerolipids and phosphoglycerolipids. Even though eukaryotes typically do not inhabit extreme environments, the physico-chemical characteristics imparted by ether bonds may be conserved with respect to altering membrane fluidity and dynamics. For instance, certain ether lipid precursors such as 1-O-hexadecyl-sn-glycerol (Figure 1a; HG or MG(O-16:0/0:0/0:0)) may negatively impact retrograde vesicular transport from the Golgi to the ER, which could potentially reduce intracellular trafficking of endocytosed microbial toxins such as Shiga toxin [30]. This concept of ether lipids altering the physical chemistry of biomembranes may be a central theme for this class of lipids and has been reviewed in depth previously [14]. Signaling Ether Lipids in Bacteria In addition to their apparent function in the lipid bilayer, ether lipids in bacteria were presumed until recently to be static, storage forms of energy-producing fatty acyls [9]. However, bacteria produce a wide variety of ether-linked phospholipids and alkylglycerols that appear to play signaling roles in response to environmental stresses such as starvation. For example, the gram-negative soil-dwelling myxobacteria (Myxococcus xanthus and Stigmatella aurantiaca) produce fruiting bodies rich in ether lipids that aid in subsequent differentiation into spores. In M. xanthus, two distinct metabolic pathways converge on the synthesis of these ether lipids, starting with the formation of the saturated iso-branched fatty acid 13-methyltetradecanoic acid (FA(i15:0)), which is incorporated into unique branched plasmanyl and plasmenyl phosphatidylethanolamines, including 1-O-13-methyl-1-Z-tetradecenyl-2-(13-methyltetradecanoyl)-glycero-phosphatidylethanolamine, otherwise known as vinyl ether lipid phosphatidylethanolamine or VEPE [9,11]. According to Ring et al., VEPE is metabolized through several reactions to a monoalkyldiacyl triglyceride called TG1 (TG(O-i15/i15/i15) or 1-O-(13-methyltetradecyl)-2,3-di-(13-methyltetradecanoyl) glycerol) during nutrient deprivation experiments [11]. TG1 and other alkylglycerols accumulate under these starvation conditions in both M. xanthus and S.
aurantiaca, suggesting a conserved biosynthetic pathway [8][9][10][11]. Indeed, both species of myxobacteria contain a highly homologous cluster of "ether lipid biosynthesis" (elb) genes that appear to encode enzymes critical for synthesizing important ether lipids such as TG1 and other, less characterized alkylglycerols. Insertional mutagenesis of the elbD gene in both M. xanthus and S. aurantiaca showed that TG1 formation is completely abolished after 24 h of nutrient deprivation, which correlated with dramatically reduced fruiting body and spore development [10]. The time course of TG1 accumulation and decline was distinct compared to other triglycerides, which is consistent with a role for TG1 as a signaling lipid instead of an energy storage depot [31]. Further evidence for an essential role of TG1 in these developmental programs comes from experiments in which TG1 is exogenously added to bacteria under starvation conditions. TG1 rescued fruiting body formation in M. xanthus strains engineered to lack (1) key metabolic pathways for iC15:0 biosynthesis [11], (2) "E signals" involved in inducing sporulation [8] or (3) multiple elb enzymes [10]. Even though these data implicate ether lipids like TG1 as essential signaling molecules needed for the execution of complex myxobacterial life cycles, the complete metabolic pathways and molecular targets of these lipids are not yet clear. Discovery of ether lipid targets in this system may be broadly applicable to other cell types such as mammalian adipocytes and may therefore shed light on universal ether lipid signaling mechanisms, especially if those targets are conserved among other prokaryotes and eukaryotes. Mammalian Ether Lipid Signaling: PAF Arguably, the most prototypical ether lipid is platelet activating factor (PAF), a group of 1-O-alkyl-2-acetyl-sn-glycero-3-phosphocholines of varying hydrocarbon length at sn-1 and an invariable acetyl group at sn-2 (e.g., PC(O-16:0/2:0); Figure 1e). The term PAF was coined by Benveniste and colleagues when they discovered a factor derived from peroxidase-sensitized leukocytes that potently stimulated histamine release and aggregation of purified rabbit platelets [32]. Because the peroxidase-reactive antibody was of the IgE class and histological micrographs revealed massive basophil degranulation and physical association with platelets, the "platelet activating factor" was concluded to have originated from basophils stimulated by IgE-antigen complexes. Around the same time, other investigators were studying the potent antihypertensive effects of novel lipids termed antihypertensive polar renomedullary lipid and antihypertensive neutral renomedullary lipid (APRL and ANRL, respectively) [33,34]. When injected intravenously, these lipids caused acute and prolonged reductions in the blood pressure of rats at very low concentrations. APRL later turned out to be identical to PAF [35]. Since those seminal discoveries and the identification of the chemical structure of PAF in 1979, there have been over 14,500 papers published on the structure, signaling properties, pharmacology, physiology and pathophysiology of PAF in animal models and humans. As would be expected for such a vast research field, PAF and the enzymes involved in PAF biosynthesis and catabolism have been the topic of several excellent reviews [36][37][38][39]. PAF Biosynthesis and Signaling PAF was the first signaling ether lipid discovered and has several modes of action.
PAF synthesis begins in the peroxisome (as with all ether lipids) and ends in the ER. PAF is made in a variety of cells, including platelets, myeloid leukocytes such as basophils and monocytes, as well as endothelial cells, via two distinct pathways: a two-step remodeling pathway that features cytoplasmic phospholipase A2 (cPLA2)-mediated cleavage of PC followed by transacetylation by lyso-PAF acetyltransferase (LPCAT1; Figure 3a) [38] and a de novo pathway using alkylglycerol intermediates first described by Snyder and colleagues [40]. The remodeling pathway appears to predominate in leukocytes and endothelial cells, since genetic deficiency of cPLA2 in mice results in an almost complete loss of PAF biosynthesis in response to the phorbol ester phorbol myristate acetate (PMA) or the calcium ionophore A23187 [41,42]. Since cPLA2 specifically targets arachidonyl-containing PCs as substrates, lipid mediators derived from arachidonate (FA(20:4)) such as eicosanoids (e.g., prostaglandins, leukotrienes, etc.) were also nearly absent in these knockout mice, demonstrating the tight link between PAF and eicosanoid metabolism. In the remodeling pathway, PAF can be synthesized from either arachidonyl-plasmalogens or alkyl-arachidonyl-PC (plasmanylphosphocholine) precursors (Figure 3a). Each of these substrates is first cleaved by cPLA2 to generate the lyso species: lysoplasmalogens or lyso-PAF (lyso-alkyl-PC or alkyl-LPC), respectively. Lysoplasmalogens then undergo a reduction of the vinyl bond at sn-1 to form lyso-PAF, at which point the two biosynthetic routes converge and lyso-PAF becomes acetylated at sn-2 to form the final product, PAF. This route is thought to comprise the majority of PAF synthesis in response to inflammatory stimuli and likely accounts for detectable PAF bioactivity in a variety of physiological processes related to acute and chronic inflammation. Unlike the remodeling pathway, the de novo route of PAF biosynthesis was originally thought to be a constitutively active pathway that establishes basal levels of PAF in various tissues. PAF de novo synthesis is initiated by the acetylation of alkyl-LPA (Figure 1b) by an acetyl-CoA acetyltransferase to yield 1-alkyl-2-acetyl-sn-glycero-3-phosphate. This acetylated species is dephosphorylated by a phosphohydrolase to generate 1-alkyl-2-acetyl-sn-glycerol (Figure 1f), which is the substrate for a microsomal choline phosphotransferase [40]. This choline phosphotransferase is dithiothreitol (DTT)-insensitive, which biochemically distinguishes it from previously characterized choline phosphotransferases that are sensitive to DTT and that utilize DAG as a substrate [43]. Even though this DTT-insensitive choline phosphotransferase is considered the main regulatory enzyme of the de novo pathway [40,44], its identity is somewhat unclear, since both the CHPT1 and CEPT1 choline phosphotransferases can synthesize PAF in the presence of 0.1-1 mM DTT from a diradylglycerol substrate [45]. More recent evidence indicates that inflammation can enhance the de novo pathway and generate several ether lipid intermediates that may have signaling properties independent of PAF [46]. Since key enzymes of the remodeling pathway were also enhanced in parallel, however, the relative significance of the de novo PAF pathway remains obscure. After synthesis in endothelial cells and leukocytes, PAF is transported to the plasma membrane, where it remains largely cell-associated [47,48].
Once displayed on the surface, it interacts with a G protein-coupled receptor (GPCR) known as the PAF receptor (PAFR or PTAFR) on target cells such as neutrophils or platelets [49,50]. Neutrophils are "tethered" to endothelial cells via P-selectin-P-selectin glycoprotein ligand 1 (PSGL1 or SELPLG) interactions to facilitate PAF binding to PAFR. This novel mode of signaling has been termed juxtacrine signaling, which distinguishes cell-cell contact from other signaling mechanisms characterized by diffusible, secreted factors interacting with receptors on distant (endocrine), proximal (paracrine) or autologous (autocrine) cellular targets. PAF can also be secreted by monocytes and operate locally via autocrine/paracrine signaling or act over longer distances in an endocrine fashion (Figure 3b) [51][52][53]. Figure 3. (a). (1) This causes calcium-dependent activation of cPLA2, which cleaves sn-2 arachidonyl groups from ether-linked PC species (alkyl-PC) and produces alkyl-lyso-PC (alkyl-LPC), while simultaneously initiating eicosanoid synthesis through arachidonate (2). PAF can also be derived from PC-containing plasmalogens (not shown). (3) Finally, alkyl-LPC serves as a substrate for an acetyltransferase such as LPCAT to generate PAF. (b). PAF can signal through several mechanisms: in endothelial cells and certain leukocytes, PAF remains largely cell-associated and interacts with a GPCR (PAFR) on target cells to promote signaling downstream of the Gαq and Gαi heterotrimeric G proteins. Alternatively, PAF can be secreted and interact with PAFR locally (autocrine or paracrine) or distally (endocrine) to stimulate inflammation. PAFR is coupled to intracellular Gαq and Gαi heterotrimeric G proteins, which send distinct yet synergistic signals into target cells such as leukocytes and platelets [54][55][56][57]. Upon receptor engagement, Gαq exchanges GDP for GTP, dissociates from the heterotrimer and directly activates PLCβ, which hydrolyzes membrane lipids such as phosphatidylinositol bisphosphate (PIP2) into the two classical second messenger products: inositol trisphosphate (IP3) and the signaling lipid DAG.
IP3 activates calcium channels within the ER to transiently increase intracellular calcium levels that execute a multitude of functions such as triggering extracellular calcium entry into the cell (store-operated calcium entry or SOCE) [58]. In platelets, calcium activates a variety of enzymes including DAG-sensitive isoforms of protein kinase C (PKC) and CalDAG-GEFI/RasGRP2, a guanyl-nucleotide exchange factor for the small GTPase Rap1 [59][60][61]. Rap1 associates directly with talin, an abundant cytoskeletal protein that changes the conformation of β3 and β1 integrins on the cell surface of platelets to drive aggregation and subsequent downstream signals from ligated integrins [62]. In immune cells, SOCE activates calcium-sensitive lipases such as cPLA2, which is critical for eicosanoid formation and the acute inflammatory response that is the hallmark of PAF signaling. In addition to Gαq, PAFR stimulates GTP loading of Gαi, but unlike the positive signals initiated by Gαq, the predominant signal from Gαi is inhibitory. Gαi binds and inhibits adenylate cyclase, a 12 transmembrane-spanning integral membrane protein that catalyzes the intracellular formation of another second messenger, cyclic AMP (cAMP). Diminished cAMP levels prevent activation of cAMP-dependent signaling molecules such as protein kinase A (PKA) that block initial stimulatory signals coming from Gαq and other sources. Since PKA is reported to initiate anti-inflammatory signals, the PAFR activation of Gαi suppresses these anti-inflammatory signals while simultaneously inducing pro-inflammatory signals through Gαq [36,38]. HAG and HG HAG has been reported to affect the biological activity of many cell types, including porcine endothelial cells, human leukemia cells, smooth muscle cells and platelets [63][64][65][66]. Exogenous HAG monomers inhibit the ability of platelets to associate with each other in a process known as aggregation [63], an essential part of blood clot formation in vivo. HAG also blocks the platelet secretion of intracellular granules that contain secondary agonists such as ADP, which reinforce aggregation via a positive-feedback mechanism [67]. In platelets and ovarian carcinoma cells, HAG is metabolized to HG by an enzyme known as arylacetamide deacetylase-like 1 (AADACL1/NCEH1/KIAA1363). AADACL1 activity is upregulated in invasive breast and ovarian cancer cells and is positively associated with cell growth [68,69]. In addition, AADACL1 expression levels may be indicative of gastric and pancreatic cancers [70,71]. Unlike the addition of HAG or the inhibition of AADACL1 activity with the small molecule JW480 in platelets [72], HG does not appear to affect platelet aggregation or secretion. These data suggest that HAG but not HG is an endogenous inhibitory lipid that regulates platelet aggregation and vesicle secretion and that AADACL1 potentially regulates these important cellular events by metabolizing HAG to HG. The mechanism of action for HAG has been closely linked to PKC by many independent groups. In fact, HAG may interact directly with zinc finger domains called C1 domains within certain PKC isoforms. C1 domains bind phorbol esters and DAG, which is instrumental for PKC localization to cellular membranes and the multistep activation of PKC kinase activity [67,73,74].
HAG and related alkylglycerol binding to C1 domains typically inhibits PKC's ability to phosphorylate substrates in response to well-characterized activators such as phorbol esters, DAG or DAG mimetics like 1-oleoyl-2-acetyl-sn-glycerol (OAG) [75][76][77][78][79]. The ether bond at the sn-1 position of HAG is thought to be important for this inhibition [80], which presumably occurs through competition with PKC activators. Since C1 domain ligands are thought to disrupt critical autoinhibitory interactions that keep PKC in an inactive conformation, however, it is unclear how HAG or any other C1 domain ligand could compete for binding to PKC without relieving autoinhibition and promoting kinase activity. Nevertheless, HAG by itself, in the absence of PKC activators, does not stimulate PKC activity and is therefore not a DAG mimetic. Perhaps HAG interacts nonproductively with PKC C1 domains and prevents/reduces DAG binding to maintain PKC isoforms in a catalytically incompetent state. Another possibility is that ether lipids such as HAG block PKC kinase activity by preventing PKC translocation to cellular membranes [81,82]. For example, macrophages pretreated with HAG showed reduced GFP-PKCε localization to phagosomes in response to an Fcγ ligand [81]. Importantly, engagement of Fcγ produces DAG, which is known to be required for PKCε translocation, but this important step in the PKC activation cycle seems to be antagonized by HAG. Likewise, various species of alkylacylglycerols (AAG) purified from shark liver oil antagonize phorbol ester- and calcium ionophore-mediated permeability of albumin across the plasma membranes of cultured endothelial cells, which implies a competition with DAG at C1 domains on PKC [65]. The mechanism of HAG-mediated inhibition may be more complex than a straightforward competition with PKC activators, however, since HAG may also activate PKC, at least in the presence of other molecules such as DAG or phorbol esters [74,83]. Slater et al. showed that treatment of purified PKCα or PKCβI with HAG potentiates phorbol ester binding to a distinct site, resulting in lower calcium requirements and slightly enhanced kinase activity. Unlike DAG and OAG, however, HAG does not activate PKC directly in the absence of other molecules. Consistent with this, HAG prepared in vitro from phospholipase C-mediated cleavage of PAF stimulated a PKC-like activity in neuroblastoma cell homogenates in the presence of phosphatidylserine (another stimulatory lipid for PKC). It is not clear, however, which PKC isoforms were present in these samples or, perhaps more importantly, whether other kinases capable of phosphorylating the histone III substrate in the assay were present [84]. Surprisingly, this same group also found that pretreatment with HAG, DAG or OAG blocked phorbol ester binding to smooth muscle cells, suggesting that HAG effects may depend on additional, unidentified cellular factors [84]. In addition to its inhibitory potential, HAG may also have weak agonist properties in human platelets. Platelets pretreated with high concentrations of HAG exhibit modestly enhanced aggregation and calcium flux in the presence of agonists, whereas HAG by itself causes cytoskeletal rearrangement (shape change) in the absence of agonists [67]. One possible mechanism is that HAG acts as an intermediate in the de novo PAF biosynthesis pathway, leading to relatively low levels of PAF that do not strongly stimulate platelet activation.
HAGP Perhaps much of the seemingly discordant data surrounding HAG can be explained by a single HAG metabolite, HAGP (Figure 1g). In cancer cells, platelets and smooth muscle cells, HAG can be deacetylated to HG by AADACL1 [63,68,84]. In the absence of AADACL1 activity, however, HAG can be rapidly converted to its phosphorylated form, HAGP, by a DAG kinase-like enzyme (e.g., DGKα) in both human platelets and ovarian carcinoma cells [67,85]. HAGP was originally discovered in rabbit platelets by Snyder and colleagues as a major metabolite of HAG that appears within minutes of treatment with radiolabeled HAG [40]. More recently, however, the kinetics of HAG-to-HAGP conversion have been correlated with inhibition of aggregation, secretion and calcium flux in human platelets [67]. Inhibition of DGKα in these cells reversed the inhibitory effects of HAG on platelet aggregation, indicating that the conversion of HAG to HAGP is critical for HAG-mediated inhibition. When incorporated into lipid vesicles, HAGP directly bound to C1a domains from PKCα and PKCδ, and both HAGP monomers and HAGP-containing vesicles decreased phorbol ester-stimulated PKCα activity in vitro [67]. Interestingly, HAG does not appear to be phosphorylated in smooth muscle cells, which results in PAF-like stimulation of PKC and cell proliferation rather than PKC inhibition [84]. Although additional molecular mechanisms may exist, these intriguing data suggest that the expression level and/or activity of lipid kinases balance cellular pools of HAG and HAGP, which in turn directly modulate PKC activity and downstream cellular events such as aggregation/secretion in platelets and cell growth in smooth muscle cells, with increased HAGP/HAG ratios favoring reduced PKC activity. Therefore, HAGP and HAG likely represent novel regulatory nodes targeting PKC and perhaps other signaling molecules during critical cellular events (Figure 4). Figure 4. HAG modulates PKC signaling in distinct cell types. In platelets and certain cancer cells, exogenously added HAG is deacetylated by the serine hydrolase AADACL1 to yield its inactive metabolite HG, which may serve as a precursor for other ether lipids and/or contribute to membrane dynamics. In platelets and ovarian carcinoma cells, HAG can also be phosphorylated by a DGKα-like lipid kinase to generate HAGP, which inhibits PKC activity and platelet activation (red line), likely by interfering with DAG binding to PKC isoforms. In contrast, HAG may enhance PKC-dependent cell growth in smooth muscle cells through an unknown mechanism (dashed line). Adapted from Holly et al. [67]. Ether Lipids and Cancer Other PAF-like ether lipids may control important growth circuits during tumorigenesis and cancer pathophysiology, since ether lipids as a class are upregulated in several distinct types of cancer including brain, breast, ovarian and skin cancers [18,69,86,87]. Natural ether lipids may act as oncogenic ligands for GPCRs, while synthetic ether lipids may have multiple mechanisms related to apoptosis. Alkyl-LPA Alkyl-LPA is a family of ether-linked phospholipids with radyl chains of varying lengths and degrees of saturation that mainly differ from their more common acyl-LPA counterparts by their alkyl linkage at sn-1 of the glycerol backbone. As mentioned earlier, alkyl-LPA is an intermediate in the synthesis of ether lipids derived from the peroxisome, but it is also a potent agonist at GPCRs.
Much like PAF, for instance, alkyl-LPA can potently drive platelet aggregation through ligation of endothelial differentiation gene (EDG) receptors including LPA5 [88,89]. Since alkyl-LPA is present in the lipid core region of atherosclerotic plaques, which upon rupture can activate circulating platelets, it may play a pivotal role in thrombosis [90,91]. In addition to platelet activation, alkyl-LPA has also been implicated in various aspects of tumor progression related to cell movement [18,92]. To study the effect of ether-lipid depletion in carcinogenesis, Benjamin et al. dramatically lowered the expression levels of AGPS, the key enzyme that introduces the characteristic ether bond in the peroxisome and that is overexpressed in breast cancer, melanoma, primary human tumors and Ras-transformed cells.
Upon short hairpin RNA (shRNA)-mediated knockdown of AGPS in breast 231MFP and C8161 melanoma cells, the authors showed that both alkyl-LPA and its phospholipid metabolites (plasmanyl and plasmenyl species of PE and PC) were reduced in these cancer cell lines [18]. Unexpectedly, lack of AGPS activity decreased levels of non-ether lipids, such as arachidonate, FA(20:4), and palmitate, FA(16:0), as well as eicosanoids derived from arachidonate, such as prostaglandin E2 (PGE2). As a result of this decrease in both ether and acyl lipid pools, 231MFP cells were less invasive, less migratory and did not produce tumors when transplanted into living mice in xenograft assays. Addback of alkyl-LPA or PGE2 rescued these migration and invasion defects, whereas the addition of PAF or other alkyl-phospholipids did not. Conversely, the overexpression of AGPS converted benign cancer cells into more aggressive subtypes with respect to cell migration and serum-free survival [18]. These data link both alkyl-LPA and PGE2 to an aggressive, oncogenic phenotype, which is consistent with AGPS controlling an essential lipidomic node within neoplastic signaling networks. Synthetic Anticancer Ether Lipids Synthetic ether lipids inspired by the structure of lysophosphatidylcholine (LPC) have proven to be potent regulators of oncogenic cell growth [93,94]. In 1993, the synthetic ether lipid known as edelfosine (1-O-hexadecyl-2-O-methyl-sn-glycero-3-phosphocholine or ET-O-18) was shown by two groups to promote programmed cell death or apoptosis in leukemia cells [95,96]. Edelfosine later proved to be selective for cancerous cells compared to normal cells and thus became the prototype for the development of structurally similar drugs with ether-linked glycerol backbones designed to fight cancer progression, two of which are described below. Edelfosine Despite its structural similarity to PAF (methyl vs. acetyl at sn-2; Figure 1h), edelfosine does not appear to act as a ligand for cell surface receptors that regulate calcium flux [97]. Consequently, many signaling pathways that contribute to cancer cell death such as apoptosis have been proposed as intracellular targets of edelfosine. Gajate et al. first showed that edelfosine operates through pro-apoptotic mechanisms in HL-60 cells by demonstrating that edelfosine treatment disrupted the mitochondrial membrane potential (∆Ψm) and triggered subsequent activation of caspase-3, both components of the "intrinsic" apoptotic pathway [98]. These authors also observed cleavage of poly(ADP-ribose) polymerase (PARP), DNA fragmentation and cleavage of caspase-3 itself. Moreover, cell-permeable caspase inhibitors efficiently blocked edelfosine-mediated cell death, but did not disrupt ∆Ψm in these cells, suggesting that edelfosine's ability to alter mitochondrial membrane integrity lies upstream of caspase activation and intrinsic apoptosis. Similar pro-apoptotic results were reported for edelfosine and related ether lipids in multiple leukemic cell lines [99,100]. Edelfosine has been implicated in "extrinsic apoptosis" as well, which is distinguished from its mitochondria-centric intrinsic counterpart by the nature of the apoptotic stimuli. Extracellular ligands such as Fas ligand (FASLG) or tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) interact with "death receptors" such as the Fas receptor/CD95 (FAS) or TRAIL receptors, respectively, to trigger extrinsic apoptosis.
Edelfosine has been proposed to cluster FAS into detergent-resistant lipid rafts independently of the natural ligand FASLG to promote extrinsic apoptosis and cancer cell death [101][102][103]. This mechanism is somewhat controversial, however, since other groups have demonstrated FAS-independent yet edelfosine-dependent modes of cell death [104,105]. In addition to apoptotic pathways, edelfosine may target ion channels such as the calcium-gated potassium channel SK3/K(Ca)2.3, which has been implicated in cancer cell migration [106]. Acute treatment of the highly metastatic MDA-MB-435s cell line with 10 µM edelfosine reduced the calcium sensitivity of SK3/K(Ca)2.3 and subsequent cell migration without inhibiting the binding of apamin, a honeybee peptide known to block SK channels, indicating an allosteric mechanism of inhibition. The polar choline group of edelfosine is important for this activity, since neither 1-O-hexadecyl-2-O-methyl-sn-glycerol (HMG) nor HG had any effect on cell migration in HEK cells overexpressing SK3/K(Ca)2.3 [107]. Finally, edelfosine may disrupt signaling pathways that emanate from the most abundant phospholipid in mammalian membranes, phosphatidylcholine (PC), or an equally important group of signaling lipids, the phosphatidylinositols (PI) [108,109]. Edelfosine and other synthetic ether lipids incorporate into biological membranes and inhibit PC biosynthesis in the ER, which can trigger apoptosis [109,110]. Blockade of PC and/or PI synthesis may not only predispose cells to apoptosis, but may also decrease levels of pivotal signaling lipids, such as DAG and second messengers derived from lipase-dependent hydrolysis of these phospholipids. DAG activates the "conventional" and "novel" classes of PKC isoforms, which are thought to have opposing influences on apoptosis. In general, novel PKCs (nPKC) induce apoptosis, whereas conventional PKCs (cPKC) protect against apoptotic stimuli [111,112]. For example, PKCδ promotes the secretion of autocrine factors from prostate cancer cells that bind tumor necrosis factor (TNFα) and TRAIL receptors to stimulate extrinsic apoptosis [111]. Conversely, the stimulation of PKCα, a cPKC isoform, prevents apoptosis in bladder cancer and models of leukemia, whereas the inhibition of cPKC isoforms triggers apoptosis [113][114][115][116]. Evidence from our lab supports this antagonistic paradigm between cPKC and nPKC isoforms in cancer cells. In the leukemic megakaryoblastic cell line CMK11-5 [117,118], the bacterial metabolite and nonselective PKC inhibitor staurosporine caused caspase-dependent intrinsic apoptosis, as measured by proteolytic cleavage of the caspase-3 substrate PARP and genomic DNA fragmentation (Figure 5). Apoptosis was not reversed by the small molecule inhibitor Gö6976, which selectively targets the cPKC isoforms PKCα, PKCβI and PKCβII, nor was apoptosis blocked by the previously mentioned ether lipid HAG at 1-20 µM. HAG was rather inert in this system, since it neither caused nor protected against staurosporine-induced apoptosis. Interestingly, Gö6976 alone at the highest concentration tested (10 µM) induced DNA fragmentation/ladder formation comparable to that of staurosporine (Figure 5). These data suggest that conventional PKC isoforms tonically protect against caspase-dependent apoptosis in megakaryocytic cells, but since only small molecule inhibitors were used, this conclusion should be corroborated using more selective methods (e.g., genetics).
Perifosine Despite its anticancer efficacy, edelfosine caused gastrointestinal and hemolytic toxicity in clinical trials, which spurred the development of safer and more effective synthetic ether lipids [94]. One such ether lipid currently being evaluated in clinical trials targeting various solid tumors, leukemias and lymphomas is perifosine. The structure of perifosine differs quite significantly from that of edelfosine in that it lacks the entire three-carbon glycerol backbone and has a dimethyl piperidine group in place of the phosphocholine group of edelfosine. As might be expected from such a deviation from the original edelfosine template, the mechanism of action for perifosine is distinct from edelfosine and involves perturbation of the subcellular localization of the serine/threonine protein kinase, protein kinase B (PKB/AKT). Due to the ability of AKT to antagonize apoptosis and thus promote cell survival in cancer cells, it has been the focus of an intense anticancer strategy for many years [119]. AKT normally localizes to membranes via its pleckstrin homology (PH) domain, which binds specific phosphatidylinositol products such as PI(3,4)P2 and PI(3,4,5)P3, the downstream products of the lipid kinase phosphatidylinositol 3-kinase (PI3K). Membrane association facilitates activation of AKT by trans-phosphorylation on residue T308 by phosphoinositide-dependent kinase (PDK1/PDPK1) and on residue S473 by multiple kinases, including PDK2 and the mechanistic target of rapamycin complex (mTORC). Once activated, AKT dissociates from the membrane and phosphorylates numerous downstream effector proteins, such as the cell cycle protein p21WAF1/CIP1 (CDKN1A) and the mitochondrial proteins BCL2-associated agonist of cell death (BAD) and X-linked inhibitor of apoptosis (XIAP), thereby antagonizing apoptosis and promoting cell survival [120,121]. Perifosine treatment of prostate cancer cells results in subcellular mislocalization of AKT, which prevents AKT phosphorylation and subsequent phosphorylation of downstream substrates, all of which leads to cell growth arrest [120]. Expression of a myristoylated (FA(14:0)) AKT variant rescues AKT localization and activity, effectively bypassing perifosine-dependent inhibition of the cell cycle and growth arrest. Due to its ability to incorporate into membranes [109], perifosine presumably competes with PIP2 and PIP3 at the AKT PH domain and prevents proper AKT localization, resulting in reduced cell survival and increased cell death. Perifosine is currently being tested alone and in combination with various anticancer drugs in Phase I and II trials targeting leukemia, myeloma and various solid cancers [122].
Figure 5. Inhibition of conventional PKC isoforms does not reverse staurosporine-induced apoptosis and causes nucleosome formation in megakaryoblastic cells. CMK11-5 cells were grown in RPMI 1640 media (GIBCO) with 9% FBS at 37 °C in 5% CO2. Cells were treated with the indicated concentrations of staurosporine (STS) and/or Gö6976 for 24-48 h, lysed with RIPA buffer (25 mM Tris-HCl pH 7.6, 150 mM NaCl, 1% NP-40, 1% sodium deoxycholate and 0.1% SDS) and assayed for total protein concentration (BCA assay, Thermo). (a). Lysates were subjected to SDS-PAGE for Western blotting with a rabbit anti-cleaved PARP polyclonal antibody (#9541, Cell Signaling Technology) to measure apoptosis or a mouse anti-β-actin monoclonal antibody (#3700, Cell Signaling Technology) to measure protein loading. Primary antibodies were recognized by anti-rabbit or anti-mouse IgG-alkaline phosphatase (AP) secondary antibodies for PARP and actin visualization, respectively, with chromogenic AP substrates. Cleaved PARP runs at approximately 89 kDa, whereas β-actin runs at 45 kDa (n = 3). (b). Genomic DNA was extracted from the same lysates used in panel (a) and separated by electrophoresis on a 0.8% agarose gel. DNA was stained with Diamond Nucleic Acid Dye (Promega) according to the manufacturer's instructions. Note the DNA ladders present in STS- and Gö6976-treated samples (n = 2). Ether Lipid Metabolism and Other Lipid Pathways There are many examples of ether lipid biosynthesis pathways that impact seemingly unrelated lipid classes. For example, the alkylglycerol products of AADACL1 (e.g., HG) lead to the production of alkyl-LPA in cancer cells. Similar to disruption of the AGPS axis, reduction of AADACL1 protein expression and enzymatic activity via shRNA-mediated RNA interference lowered HG and downstream alkyl-LPA levels in ovarian carcinoma cells, which correlated with diminished tumor volume when AADACL1-deficient tumors were transplanted into nude mice [68]. This metabolic link may not be preserved in non-cancerous cells, however, since murine bone marrow-derived macrophages (BMDM) pretreated with the pharmacological AADACL1 inhibitor JW480 show increased, not decreased, levels of alkyl-LPA [123]. JW480 decreases inflammatory cytokine production (e.g., TNFα) at least partially through alkyl-LPA, which may be acting as an LPA receptor agonist.
Interestingly, JW480 treatment also upregulated sphingosine, plasmanyl-PC and plasmenyl-PC, suggesting that AADACL1-dependent ether lipid metabolism has a broad impact on the BMDM lipidome. Whether AADACL1 metabolizes unknown substrates besides HAG or whether HG synthesis impacts multiple lipid classes is not yet clear, nor are the mechanisms linking ether-linked phospholipids to cytokine production in pro-inflammatory cells. Another example of crosstalk between lipid classes comes from cells with reduced alkylglycerol monooxygenase (AGMO) activity. AGMO is expressed in monocytes and macrophages and is the only enzyme capable of breaking the ether bond in alkylglycerols [124]. The reduction of AGMO protein levels and activity in a murine macrophage cell line via RNA interference resulted in predictable increases in alkylglycerols known to be substrates of AGMO, such as HG [125]. Surprisingly, though, levels of unrelated lipids belonging to the glycosylated ceramide and cardiolipin classes were also found to be elevated, as were many more lipids that could not be precisely identified. Interestingly, prominent ether lipids thought to be AGMO substrates, including PAF and lyso-PAF, were not altered by AGMO knockdown. Collectively, these data suggest that lowering the activity of an enzyme specifically involved in ether lipid metabolism can have global effects on multiple lipid classes and reveal hidden interrelationships among various biosynthetic pathways of the cellular lipidome. Plasmalogens As mentioned earlier, plasmalogens are the most abundant class of ether lipids and perhaps the most clinically relevant. They have been implicated in a variety of neurological disorders including Alzheimer's disease, autism, epilepsy, multiple sclerosis, Parkinson's disease, psychiatric depression, schizophrenia and even ischemic stroke [15,126]. All of these disorders, with the exception of epilepsy, involve reduced levels of plasmalogens in patients, which implies a loss-of-function phenotype and a potentially protective role for these phospholipids under normal physiological conditions. Mechanisms for plasmalogens in the nervous system have mostly been gleaned from knockout mice that do not express proteins required for transport into the peroxisome (e.g., PEX7) or the critical enzymes required for ether lipid biosynthesis, AGPS or GNPAT. These mechanisms range from proper development of essential brain regions such as the cerebellum to myelin sheath formation in neurons [19,127,128,129]. Whether these phenotypes result specifically from plasmalogen deficiency or should be attributed to other types of ether lipids or to a single species of ether lipid is an open question. Plasmalogen contributions to signal transduction are even less clear than their influence on neuronal physiology, but may stem from the composition of their fatty acyl chains [13]. Both choline- and ethanolamine-containing plasmalogens are well known to be enriched in acyl-linked PUFA at their sn-2 positions, which provide an abundant source of fatty acid precursors upon liberation by cPLA2. Free PUFA are oxidized by several enzyme families, such as cyclooxygenases (COX), lipoxygenases (LOX) and cytochrome P450s (CYP), which produce soluble eicosanoids (e.g., thromboxanes, prostaglandins and other oxylipins) [7]. These oxylipins initiate a multitude of signaling pathways either as extracellular, high affinity ligands for GPCRs or as intracellular modulators of downstream effectors.
In addition to signaling, the oxidation of PUFA-containing ether lipids may promote iron-dependent cell death or ferroptosis in renal and ovarian cancer cells [130]. In parallel with PUFA derived from plasmalogens, the lysoplasmalogen backbone remaining after cPLA2-mediated hydrolysis has been implicated in immune cell functions such as phagocytosis [131] and "self" tolerance in the thymus [132]. Lysoplasmalogens can also be converted to major signaling lipids, namely alkyl-LPA and PAF. Even the alkyl chain at the sn-1 position of plasmalogens contributes to signaling, since liberation of this moiety by AGMO or a related enzyme generates fatty aldehydes that impact c-Jun N-terminal kinase (JNK) signaling and apoptosis [133]. Thus, the alkyl and acyl chains and the phospholipid backbone of plasmalogens ultimately serve as membrane storage precursors for signaling lipids released by lipase or monooxygenase activities, rather than as direct components of signal transduction cascades. Conclusions Despite their discovery almost one hundred years ago, the primary function of ether lipids remains in large part a mystery. It is clear that this class of lipids is ancient, as ether lipids can be found in the most primitive Archaea that dwell in some of the harshest environments on the planet. Their robust structure may well aid the ability of these organisms to resist environmental stress, modulate their membrane composition and adapt to inhospitable habitats. This cannot be their singular role, however, since they are synthesized in organisms that do not face extreme conditions, from soil-dwelling bacteria to metazoan eukaryotic cells. In eukaryotes, ether lipids populate many different lipid classes, including fatty acids, glycerolipids, glycerophospholipids, saccharolipids and GPI-linked lipids, performing unique functions in each. Perhaps not surprisingly, this taxonomic diversity coupled with representation in structurally unrelated lipid classes does not suggest a unified function for ether lipids, but rather points to a multifactorial utility. Even though ether lipids were initially dismissed as odd constituents of lipid bilayers that alter the physico-chemical properties of cellular membranes, it is now clear that ether lipids dynamically participate in signal transduction pathways, among other processes. Ether lipids such as PAF and alkyl-LPA are indisputable ligands for specific GPCRs and potently elicit biological effects during complex physiological events such as inflammation and coagulation. More recently, metabolic intermediates of PAF such as HAG and HG have gained attention as signaling ether lipids that modulate PKC activity independent of their contribution to PAF biosynthesis. This raises an interesting question as to whether lipid intermediates with dual functions are outliers or whether they represent a universal theme that has not yet been substantiated by more examples of ether lipids impacting common signaling pathways. This question seems addressable, since the momentum of ether lipid research appears to be increasing. In this context, the relatively recent discovery of the "ether lipid biosynthesis" (elb) genes in myxobacteria may be a landmark achievement, perhaps akin to the discovery of GNPAT and AGPS in the peroxisome. This area of investigation will likely open the door to more intriguing research in that domain, which may be applicable to other organisms and vice versa.
Even within the signaling arena, there are many more open questions than answers regarding ether lipids. For example, the metabolic fate of ether lipid precursors such as alkylglycerols and their incorporation into cellular lipidomes have not been comprehensively defined. Alkylglycerol metabolism has only been examined in a few cell types to date, with platelets and smooth muscle cells metabolizing this lipid differently. Therefore, fundamental knowledge about lipid biosynthetic pathways, which may well be cell type-specific, is still lacking for many cell types. This knowledge gap is especially evident when relationships between the ether lipidome and seemingly unrelated lipid classes are revealed by manipulation of peroxisomal enzymes and systems approaches. Lipidomics is essential for identifying these relationships and for generating targeted hypotheses focused on causal mechanisms and potential metabolic overlap. Another open signaling question is whether synthetic ether lipids can be effective therapeutics in human patients and provide protection against neoplasia or neurodegenerative diseases. Evidence from a multitude of studies demonstrates the efficacy of synthetic ether lipids in cell-based assays, xenografts and early stage clinical trials, but none have yet achieved Food and Drug Administration (FDA) approval. Finally, high-resolution systems approaches alongside hypothesis-driven strategies are poised to discover additional ether lipids and novel metabolic interrelationships between ester and ether lipidomes in diverse cell types. These approaches will likely clarify existing theories of ether lipids as dynamic membrane components, bioactive metabolites and signaling modulators, while simultaneously expanding the physiological relevance of these lipids beyond our current understanding.
Maximizing multiple influences and fair seed allocation on multilayer social networks The dissemination of information on networks involves many important practical issues, such as the spread and containment of rumors in social networks, the spread of infectious diseases among the population, commercial propaganda and promotion, the expansion of political influence and so on. One of the most important problems is the influence-maximization problem, which is to find the k most influential nodes under a given propagation mechanism. Since the problem was proposed in 2001, many works have focused on maximizing influence in a single network. It is an NP-hard problem, and the state-of-the-art algorithm IMM proposed by Youze Tang et al. achieves 63.2% (i.e., 1 − 1/e) of the optimum with nearly linear time complexity. In recent years, there have been some works on maximizing influence in multilayer networks, considering either a single influence or multiple influences. Most of them, however, study seed selection strategies that maximize a participant's own influence. In fact, the problem from the perspective of network owners is also worthy of attention. Because of privacy protection, corporate interests and similar constraints, network participants may have access to only part of the social network. Network owners, in contrast, can see the whole picture of their networks, and they need not only to maximize the overall influence but also to allocate seeds to their customers fairly, i.e., to solve the Fair Seed Allocation (FSA) problem. As far as we know, the FSA problem has been studied on a single network, but not yet on multilayer networks. From the perspective of network owners, we propose a multiple-influence diffusion model MMIC on multilayer networks and formulate its FSA problem. Two solutions of the FSA problem are given in this paper, and we prove theoretically that our seed allocation schemes are greedy. Subsequent experiments also validate the effectiveness of our approaches. Introduction There are all kinds of complex networks in our life, such as social networks, the Internet of Things and biological networks. The dynamic transmission of information in these networks is closely related to the networks' own topologies. Network scientists try to explain the dynamics of networks by studying the spread of computer and mobile phone viruses [1] [2], epidemic diseases [3], rumors [4] and so on. These authors attempt to understand all aspects of network dynamics, such as the subject of spreading, the pattern of spreading, the carrier of spreading and the efficiency of spreading. In this paper, we focus on the spreading efficiency of nodes. We strive to find the most influential nodes in multilayer networks and allocate them to the networks' customers. Next, we introduce related works in three categories: 1. influence maximization in a single network, 2. influence maximization in multilayer networks, and 3. the fair seed allocation problem. Influence maximization in a single network In recent years, with the rise of social media on the Internet, the communication between people has been enriched. Many social activities that used to be face-to-face are now available online, such as holding conferences, making friends, learning and counseling, etc. The growing demands for the content and form of social activities have led to the emergence of a large number of online social networks.
More and more people like to express opinions and share information on online social networks such as Weibo, Facebook and Twitter. In these social networks, information spreads by 'word-of-mouth'. Under this spreading mechanism, it is an important problem to find the most influential nodes in a social network. Domingos et al. [5] first put forward the influence-maximization problem, which is to find the k most influential nodes in a social network, that is, to find S* = argmax_{S⊆V, |S|=k} σ(S), where σ(S) is the influence spread of the k seeds in S and V is the node set. David Kempe et al. [6] proposed two basic models, the independent cascade (IC) model and the linear threshold (LT) model, to describe the influence diffusion mechanism in a single network. However, both models are too simple to precisely reflect the diffusion mechanism on real social networks. To make up for this deficiency, many scholars have focused on improving and modifying the models. Tim Carnes et al. [7] proposed two models for social networks with competitive influences, and dealt with the influence-maximization problem from the follower's perspective. Budak C. et al. [8] proposed a model named MCIC and studied the problem of limiting 'bad' influence. More models of multiple influences competing in a network have been considered by researchers [9][10][11][12][13][14]. Furthermore, sales strategies for viral marketing have also been studied [15]. Most previous works in the past decade focused on influence propagation within a single network. Since the influence-maximization problem is NP-hard [6], no polynomial-time algorithm is known that solves it exactly. The algorithms proposed in previous works are approximation algorithms; they aim to strike a balance between the degree of approximation and the time complexity. The state-of-the-art algorithm IMM [16] has approximately linear time complexity with a theoretical guarantee of approximation. Besides the IC and LT models, there are some other network diffusion models, such as the Susceptible-Infectious-Susceptible model (SIS) [17,18], the Susceptible-Infectious-Recovered model (SIR) [19,20], the voter model (VM) [3] and the contact process (CP) [3], etc. However, IC and LT are more widely used in the study of influence maximization. The diffusion model used in this paper is the IC model. Influence maximization in multilayer networks Nowadays people act as entities in multiple social networks, and it is normal for them to communicate and disseminate information in these networks synchronously. Therefore, compared with a single network, the influence-maximization problem in multilayer networks composed of multiple networks is more worthy of study. Fig 1 is an example of a multilayer network. Nodes marked with the same number represent the accounts of the same user in each layer of the network. Each user is called an entity; its account in each layer is called a representative of the entity. In recent years, there have been successive works on maximizing influence in multilayer networks. Ibrahima Gaye et al. [21] proposed a centralized measurement method called 'Multi-Diffusion Degree' to select seeds in multilayer networks to maximize the influence. Li Guoliang et al. [22] used the maximum propagation path to approximate the influence between nodes and obtained several solutions of the influence-maximization problem for multilayer networks. However, they did not introduce multiple influences into their models.
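Since the IC model is the diffusion mechanism adopted throughout this paper, a concrete illustration may help. The following is a minimal Python sketch of ours (not code from the paper): it runs independent-cascade simulations and estimates the influence spread σ(S) by Monte Carlo averaging; the adjacency representation and the function names are our own choices.

import random

def simulate_ic(graph, seeds):
    # One IC cascade. graph maps u -> list of (v, p_uv); returns the active set.
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        u = frontier.pop()
        for v, p in graph.get(u, []):
            # Each newly active node gets a single activation attempt per out-edge.
            if v not in active and random.random() < p:
                active.add(v)
                frontier.append(v)
    return active

def estimate_spread(graph, seeds, runs=10000):
    # Monte Carlo estimate of the influence spread sigma(S).
    return sum(len(simulate_ic(graph, seeds)) for _ in range(runs)) / runs

# Toy usage: a 4-node directed graph with uniform edge probability 0.5.
g = {1: [(2, 0.5), (3, 0.5)], 2: [(4, 0.5)], 3: [(4, 0.5)]}
print(estimate_spread(g, {1}, runs=2000))

The same skeleton, applied to the 'live-edge' samples ω introduced below, is what makes the Monte Carlo evaluation of σ(S) practical.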
In fact, there is often more than one kind of influence spreading in real-world multilayer networks. For example, commercial advertising, the spread of public opinions, and rumors and their suppression are all competing influences. Therefore, it is more meaningful to study the diffusion mechanism of multiple influences than that of a single influence. There are other works [23] on diffusion models for multilayer networks. They are mainly concerned with the spread of diseases, opinions and information among the population and how these processes interact with each other. Ting Liu et al. [24] constructed a bilayer network: one layer is the contact network of epidemic spreading, which adopts the Susceptible-Infected-Recovered (SIR) model to depict its spreading process; the other is the network of disease information, which adopts the IC model to depict its spreading process. Velásquez-Rojas et al. [3] used the voter model (VM) and the contact process model (CP) to simulate the information propagation network and the epidemic spreading network, respectively, and studied the interaction between the two models. Ken T. D. Eames [25] modeled a bilayer network containing a social network of parents and an epidemic infection network of children to study the influence of parents' social ties on whether children are vaccinated. In addition, there are some papers about bilayer networks using combinations of these diffusion models: Random Walk [26], Kuramoto [26], Voter [3], Contact [3], LACS [27] and GACS [28]. The above works are all aimed at bilayer networks, and only one kind of information is spread in each layer. What they are concerned with is not finding the most influential nodes, but the impact of propagation from some arbitrary nodes on the whole network. Unlike them, our work is not limited to bilayer networks: the number of influences can be arbitrary, they can spread freely across layers, and our concern is to find the k most influential nodes and allocate them fairly. Fair seed allocation in a single network and multilayer networks Whether for a single network or multilayer networks, previous studies [9,10,11,12,13,14] have considered multiple influences, but mostly from the perspective of participants, that is, finding the optimal seed strategy to maximize one's own influence on the premise that the opponents have already chosen their seeds. However, participants only have local information about the network, so it is more reasonable to study the problem using global information, from the perspective of network owners. By selling seeds to its customers, the network platform provides viral marketing services. The network platform must consider not only the selection of the most influential seeds, but also the balance of the seed allocation to different customers according to their budgets. The Fair Seed Allocation (FSA) problem originates from the question of how a network platform that carries out viral marketing should distribute seeds. With the in-depth study of influence maximization on single and multilayer networks, it has been realized that one should not only consider the maximization of influence, but also consider how to rationally distribute seeds to the customers involved in viral marketing, from the perspective of network platform owners, in order to maximize the customers' expectations.
That is to say, the network platform should not only find the most influential seeds, but also distribute them reasonably to its customers to make them satisfied. FSA was first introduced by Wei Lu et al. [29] under their 'K-LT' model. Let the budgets of the customers be γ_1, γ_2, . . ., γ_t, the total budget be γ = Σ_{i=1}^t γ_i, and the unit price be a constant F. The fair-seed-allocation problem is to find a total seed set S of minimum size k, s.t. σ(S) ≥ F · γ, and a partition S = {S_1, . . ., S_t} that maximizes the fairness f = min_i f_i / max_i f_i, where f_i = σ(S_i, S)/γ_i (2), and σ(S_i, S) is the influence spread of S_i when S is the total seed set. Ying Yu et al. [30] discussed the Fair Seed Allocation problem under their diffusion model 'TIC'. However, they only studied the FSA problem on a single network. Large Internet companies often own more than one online social network, and almost everyone is active in more than one social network. Finding the most influential seeds in multilayer networks and allocating them to customers in a balanced way therefore becomes an urgent need. In this work, we consider the FSA problem on multilayer networks. FSA has two phases: the first phase is to find the k most influential seeds, and the second phase is to allocate these seeds fairly to all customers. MMIC model and the influence-maximization problem In this section, we consider the first phase of FSA, i.e., finding the k most influential seeds in multilayer networks. First, we propose a diffusion model of multiple influences competing in multilayer networks, called MMIC. MMIC model In a multilayer network, the total seed set is denoted by S. {S_1, S_2, . . ., S_t} is a partition of S. The elements of S_i are seeds of influence i, where i = 1, 2, . . ., t. We call influence i color i for convenience, i.e., there are t colors of seeds. The status of a node is either inactive or active. Initially, S_i is a set of i-color seeds, in which each node is active with color i, i = 1, 2, . . ., t, and all non-seeds are inactive. When a node u is active with color i, it attempts to activate each of its non-seed out-neighbors v with color i. Each activation is attempted only once, with its own success probability; each node can receive i-activations more than once (i = 1, 2, . . ., t), and the inactive-to-active transition is irreversible. The propagation goes on until there are no more activations. An entity is activated when one of its representatives is active. When the activation process ends, each active entity v decides on a color i with probability P_v^i, where P_v^i is the proportion of i-color activations among all activations received by entity v. In particular, each entity that has seeds among its representatives decides on color i with probability equal to the proportion of i-color seeds among its colored representatives. This diffusion model is called MMIC. The influence spread of S_i is the expected number of i-color entities, denoted by σ(S_i, S). The total influence spread is the expected number of colored entities, denoted by σ(S). It is easy to verify that σ(S) = Σ_{i=1}^t σ(S_i, S). Since each active entity has to decide on a color eventually, assigning all seeds a uniform color does not change the total influence spread of S. Fig 2 gives an example of one propagation of multiple influences, i.e., colors, in a multilayer network G. Red and blue nodes are seeds; pink and pale blue nodes are nodes activated by them, respectively. Node D receives two types of activation. In the end, the entity 'Dd' can only decide on one color, i.e., red or blue.
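To make the fairness objective of Eq (2) tangible, here is a small Python sketch of ours (not the paper's code). It computes f = min_i f_i / max_i f_i for a given allocation, using the additive approximation f_i = Σ_{s∈S_i} σ(s, S)/γ_i that the allocation phase of the paper works with, and it implements the generic 'give the next seed to the color with the least f_i' rule analyzed later (Theorem 3); the function names are our own, and the numbers reproduce the worked example discussed later with Fig 7.

def fairness(alloc, spread, budgets):
    # f = min_i f_i / max_i f_i, with f_i = (sum of seed spreads in S_i) / budget_i.
    fs = [sum(spread[s] for s in S_i) / g for S_i, g in zip(alloc, budgets)]
    return min(fs) / max(fs) if max(fs) > 0 else 0.0

def greedy_allocate(seeds, spread, budgets, init, decreasing):
    # Assign each seed, in the given order, to the color with the least f_i.
    t = len(budgets)
    alloc = [[] for _ in range(t)]
    fs = list(init)
    for s in sorted(seeds, key=lambda x: spread[x], reverse=decreasing):
        i = min(range(t), key=lambda j: fs[j])
        alloc[i].append(s)
        fs[i] += spread[s] / budgets[i]
    return alloc

# Worked example from the allocation section: spreads 1,5,6,7,12,13; budgets 1:2:3.
spread = {1: 1, 2: 5, 3: 6, 4: 7, 5: 12, 6: 13}
budgets = [1, 2, 3]
alloc = greedy_allocate(spread, spread, budgets, [0.0, 0.0, 0.0], False)
print(alloc, fairness(alloc, spread, budgets))  # f is about 0.667

With init = [0, 0, 0] and ascending order this behaves like Fair1 below; with init = [1/γ_i] and descending order it behaves like Fair2. (Ties are broken deterministically here, whereas the paper breaks them at random.)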
Now we study how to calculate the influence spread of each color i. For each edge e in network G, we delete it with probability 1 − p(e). All possible outcomes of this process, with their probabilities, constitute a probability space Ω. Each element of Ω is called a sample from Ω, denoted by ω. For a given sample ω, a fixed set S ⊆ V and a given entity v_j, if there is a path τ from S to one of the representatives of entity v_j, we say that v_j is reachable from S, and the path τ is called an i-color live path from S to v_j. Each edge of path τ is called an i-color live edge. For example, 'A-C-D' is a red live path, or we say that entity 'Dd' is reachable from S = {A, C, D} in Fig 2(2). A live path from S to v_j indicates that seed set S can activate entity v_j. Let h_j^ω(S) be the indicator function of that event. In other words, σ(S) = Σ_{ω∈Ω} Pro(ω) Σ_j h_j^ω(S). However, this formula for the influence spread is not practical, because the probability Pro(ω) of a sample is difficult to calculate. We use the Monte Carlo method to calculate the influence spread σ(S). [Fig 2 caption: the uppercase and lowercase of one letter represent one entity; uppercase and lowercase nodes are the representatives of the entity in the two layers, respectively. Red node 'A', blue node 'a' and blue node 'B' are three seeds of two influences. Panel (2) is an outcome of G under MMIC, i.e., a sample ω. Pink and pale blue nodes are the nodes that have received red and blue activations, respectively. Because entity 'Aa' has one representative 'A' as a red seed and one representative 'a' as a blue seed, entity 'Aa' decides on each color with probability 1/2. Entity 'Bb' only owns a blue seed as its representative, so 'Bb' ultimately decides on blue. Entity 'Cc' only receives red activations, so 'Cc' ultimately decides on red. Entity 'Dd' receives one red and one blue activation, so it decides on each color with probability 1/2.] [Figure caption: Monte Carlo estimation with a single seed, S = {A}; the pink nodes are the nodes activated by the seed, and each subfigure is the outcome of one propagation. We set the number of iterations to 10000; the influence spread of the seed equals the average number of activated nodes.] Throughout the rest of the paper, G denotes a multilayer network. Many other notations used frequently in this paper are listed in Table 1. The influence-maximization problem of MMIC The first step of the FSA problem is to seek the k most influential seeds in G, so we first consider the case where all seeds belong to the same color. The influence-maximization problem of the IC model is a special case of that of MMIC when the weight of every cross-layer edge is 1 or the number of layers is 1. Since the influence-maximization problem of the IC model is NP-hard, so is that of MMIC; therefore, there is no known polynomial-time algorithm to solve this problem. According to [6], if σ(S) is monotone and submodular w.r.t. S, there is a greedy scheme of seed selection that provides a (1 − 1/e)-approximation. Although it can be proved that the influence function σ(S) under the MMIC model is monotone and submodular w.r.t. S, the time complexity of Algorithm 1 (Greedy), which repeatedly adds the node with the largest marginal gain in σ, is still high, and it is not suitable for large multilayer networks. Intuitively, nodes with large out-degree are generally influential. Based on this idea, we use the following method to select seeds: first we select the node with the largest out-degree, and then delete it from the network. After repeating this process k times we have k seeds; see Algorithm 2.
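Before the paper's pseudocode, here is a rough Python rendering of this degree heuristic, ours rather than the authors'; the dictionary-of-sets graph representation and the name degree_seeds are assumptions for illustration.

def degree_seeds(out_neighbors, k):
    # Repeatedly take the node with the largest out-degree, then delete it
    # (together with its incident edges) from the network.
    g = {u: set(vs) for u, vs in out_neighbors.items()}
    seeds = []
    for _ in range(k):
        u = max(g, key=lambda x: len(g[x]))
        seeds.append(u)
        del g[u]
        for vs in g.values():
            vs.discard(u)  # edges into u disappear along with u
    return seeds

print(degree_seeds({1: {2, 3}, 2: {3}, 3: {1}, 4: set()}, 2))  # [1, 2]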
Algorithm 2 Degree
1: S'' ← ∅
2: G' = G
3: for i = 1 to k do
4:   u = argmax_{u∈G'} outdegree(u);
5:   S'' ← S'' ∪ {u}
6:   G' = G' − u
7: end for
8: return S''
The advantage of the degree method is that it is very fast, but there is no theoretical guarantee on its performance, i.e., its performance varies across networks. Youze Tang et al. [16] proposed the algorithm IMM to solve the influence-maximization problem on a single network. It is the state-of-the-art algorithm among those we know of, with both a guarantee of approximation and nearly linear time complexity. The core idea of IMM is based on the concept of the reverse reachable (RR) set. An RR set of node u consists of all nodes from which u is reachable in sample ω, i.e., it contains all nodes that can directly or indirectly activate node u in sample ω. Furthermore, if node u is chosen uniformly at random from V, the RR set is called a random RR set. Suppose we have θ (large enough, for example, θ = 10000) random RR sets from θ random nodes. If a node x appears 9999 times in these 10000 RR sets, we say that node x overlaps 9999 RR sets. From the definition of an RR set, we can infer that x has a great influence spread, because as a seed it has the ability to activate many of these 10000 nodes. For a given node set S, overlapping more random RR sets indicates that S is capable of activating more nodes as the seed set. Therefore the influence-maximization problem of the IC model is transformed into seeking a node set S of size k that overlaps the most random RR sets. For our MMIC model, the influence spread is the expected number of finally active entities instead of nodes, so we need to define a reverse reachable set for an entity. Let v_j be an entity of multilayer network G, and ω a sample from Ω. Let R_j^ω be the set consisting of all nodes from which v_j is reachable. The set R_j^ω is called the reverse reachable set of entity v_j, RRE for short. If v_j is chosen uniformly at random from N, and ω is a random outcome of MMIC, the RRE is called a random RRE. Firstly, we calculate the number θ* of random RREs we need (the value of θ* is calculated by algorithm IMM from [16]). Secondly, we generate θ* random RREs. At last, we greedily add as seeds the nodes that overlap the most random RREs. The RRE method for the influence-maximization problem under the MMIC model is presented as Algorithm 3, where F_R(S) denotes the number of RREs in R that S overlaps.
Algorithm 3 RRE(θ*, R)
1: Compute θ* and R from algorithm 2 in [16]
2: while |R| ≤ θ* do
3:   Select an entity v_j from G uniformly at random;
4:   Generate a random RRE of v_j and insert it into R;
5: end while
6: S* ← ∅
7: for i = 1 to k do
8:   u = argmax_u (F_R(S* ∪ {u}) − F_R(S*));
9:   S* ← S* ∪ {u}
10: end for
11: return S*
Our Algorithm 3 extends the RR set of algorithm 2 in [16] to the RRE, which does not change the time complexity; so, according to [16], the time complexity of our Algorithm 3 is still O((k + ℓ)(n + m) log n / ε²), where n and m are the numbers of nodes and edges, respectively, in the multilayer network, and k is the number of seeds. The two parameters ℓ and ε are set to the same values as in [16]. Next, we will demonstrate the effectiveness and robustness of the RRE method relative to the degree method through simulation. Simulation. Experimental setup. We use the 'Stanford Large Network Dataset Collection' from [31]. Two real social networks are selected: the Wikipedia vote network and the Bitcoin Alpha trust weighted signed network. Wikipedia is a free encyclopedia written by volunteers from all over the world.
Anyone who wants to participate in managing the site needs to submit an application to the Wikipedia community for public discussion and voting. The Wiki-Vote network contains all the voting data of Wikipedia from its inception to January 2008. The nodes in Wiki-Vote represent Wikipedia users, and a directed edge from node i to node j indicates that user i voted for user j. Bitcoin Alpha is a Bitcoin trading platform. Since Bitcoin transactions are anonymous, users need to maintain their reputation to prevent transactions with users at risk of fraud. Each user grades the others from -10 (total distrust) to +10 (total trust). The nodes represent the users and the edges represent the trust relationships between them, which constitute the network Bitcoin-alpha. By twice randomly deleting ten percent of the edges of Wiki-Vote, we get two networks named Wiki-Vote1 and Wiki-Vote2. After adding cross-layer edges between them, we get the multilayer network Wiki-Vote0. The network Bitcoin-alpha0 is constructed in the same way. The way we assign weights is WC (weighted cascade): the weight of edge (u, v) is the reciprocal of v's in-degree. WC is one of the most commonly used weighting methods because it is a reasonably normalized measure; that is, when we do not know the probabilities with which a node v is activated by its in-neighbors, a reasonable way to define these probabilities is to assume they are all the same and sum to 1. The basic statistics of the datasets are shown in Table 2. Evaluation method. Our evaluation index is the influence spread of the k seeds selected by different methods. We run experiments on the networks Wiki-Vote0 and Bitcoin-alpha0 with three methods: random, degree and RRE. Because the time complexity of the greedy algorithm is too high, we do not use it in the experiments. The experimental results on influence maximization are shown in Fig 6. Nodes with large out-degrees often have great influence spread, so the degree method can sometimes achieve good results, as on Bitcoin-alpha0; see Fig 6(c). But for some multilayer networks it is not very effective, as on Wiki-Vote0; see Fig 6(a). The reason is that it ignores nodes with small degree that are associated with important nodes. The RRE method is more robust than the degree method because it reaches nearly (1 − 1/e) of the optimal influence spread in the worst case, for any multilayer network. From Fig 6, we can see that the RRE method outperforms the others on both Wiki-Vote0 and Bitcoin-alpha0. Considering time complexity and robustness, we use the RRE method to select the k most influential seeds for the FSA problem. Fair seed allocation problem of the MMIC model After finding the k most influential seeds by the RRE method, we consider the issue of fairly allocating the k seeds to t different customers (colors) from the perspective of the platform owner, which is the second phase of FSA. That is to say, we make sure that for each color the expected influence spread is proportional to its budget. We now allocate the k seeds to t different colors to maximize f in Eq (2). Before that, some preparatory definitions are needed. Given a sample ω, let α_ω(S_i, v) be the number of i-color live paths from the i-color seed set S_i to entity v. Let β_ω(S_i, S, v) be the probability that entity v is i-colored when S_i is the i-color seed set and S is the total seed set. Let c_{S_i} be the number of representatives of v in S_i, and c the number of representatives of v in S.
Let N_1 be the set of entities that have at least one seed as a representative. Then we have β_ω(S_i, S, v) = α_ω(S_i, v)/α_ω(S, v) for v ∉ N_1, and β_ω(S_i, S, v) = c_{S_i}/c for v ∈ N_1, so that σ(S_i, S) = Σ_{ω∈Ω} Pro(ω) Σ_v β_ω(S_i, S, v). If α_ω(S, v) = 0, i.e., there is no live path from S to v, we specify that β_ω(S_i, S, v) = 0. Theorem 1. σ(S_i, S) is monotone and submodular w.r.t. S_i. Proof. Since Pro(ω), α_ω(S, v) and c are all non-negative and independent of S_i, σ(S_i, S) is a non-negative linear combination of the α_ω(S_i, v) and c_{S_i}. All we need to do is to prove that α_ω(S_i, v) and c_{S_i} are both monotone and submodular w.r.t. S_i. (i) Monotonicity: given a sample ω, for arbitrary x ∈ V, adding x to the i-color seed set cannot decrease the number of i-color live paths from it to v. (ii) Submodularity: for arbitrary x ∈ V and S_i ⊆ S'_i ⊆ V, the marginal gain of x with respect to S_i is at least its marginal gain with respect to S'_i, in both possible cases. Similarly, it is easy to verify that c_{S_i} is monotone and submodular w.r.t. S_i. In summary, σ(S_i, S) is monotone and submodular w.r.t. S_i. By Theorems 1 and 2, the seed allocation phase of the FSA problem proceeds as follows. Firstly, compute σ(s, S) for every s ∈ S. Note that S is the set of k seeds from Algorithm 3. Secondly, allocate all k elements of S to t seed sets S_1, S_2, . . ., S_t so as to minimize the differences among the f_i = Σ_{s∈S_i} σ(s, S)/γ_i, i = 1, 2, . . ., t; in other words, so as to maximize f, where f_i and f are defined in Eq (2). We now propose two greedy methods for seed allocation, which are justified by the following theorem. Theorem 3. Assume that S is the total seed set obtained in the first phase of the FSA problem. Allocating the seeds s ∈ S one by one, each to the i-color seed set S_i that currently has the least f_i, is the greedy choice for maximizing f. Proof. We have to prove that allocating seed s to the color with the currently least f_i is the greedy choice for maximizing f; in other words, it is locally optimal at each step of the allocation. We argue by contradiction. Without loss of generality, assume that currently f_1 ≤ f_2 ≤ . . . ≤ f_t, but we allocate seed s to color j, j ≠ 1. Then f_1 remains the minimum while f_j, and hence possibly the maximum, only grows, so the resulting f = min_i f_i / max_i f_i is no larger than if s had been given to color 1; this contradicts the assumed optimality of the alternative choice. The order of seed allocation is specified in a way adapted from [30]. Firstly, we get k and S as the output of Algorithm 3. Secondly, we initialize all seed sets S_i to empty sets and initialize all their f_i to 0. At last, we allocate the s ∈ S one by one, by σ(s, S) non-decreasingly, into the S_i with the currently least f_i (by Theorem 3). In accordance with this idea, we propose Algorithm 4 to allocate seeds.
Algorithm 4 Fair1
1: Compute k and S by Algorithm 3;
2: Compute σ(s, S) for every s ∈ S;
3: S_i ← ∅ for i = 1, . . ., t;
4: f_i ← 0 for i = 1, . . ., t;
5: Sort the elements of S into {s_1, s_2, . . ., s_k} by their σ(s_j, S) non-decreasingly;
6: for j = 1 to k do
7:   i ← argmin_i f_i; S_i ← S_i ∪ {s_j}; update f_i;
8: end for
9: return {S_1, . . ., S_t}
Here is an example of how Algorithms 4 and 5 work. We assume there are three colors with budgets γ_1 : γ_2 : γ_3 = 1 : 2 : 3, and the number of seeds is k = 6. Suppose that after the first phase of FSA we obtain six seeds s_j, j = 1, 2, . . ., 6, with influence spreads σ(s_j, S) = 1, 5, 6, 7, 12, 13 for j = 1, 2, . . ., 6, calculated by Algorithm 3. Now we use Algorithm 4 for the seed allocation phase. The sizes of the three rectangles in Fig 7 represent the sizes of the budgets. In this phase, we should allocate the six seeds to three colors (S_1, S_2, S_3) to maximize f. We initialize all seed sets S_i to empty sets and set all initial values of f_i to 0. We allocate the s_j one by one (by their σ(s_j, S) non-decreasingly) into the S_i that currently has the least value of f_i (in red), and update the value of f_i. The final allocation of seeds is S_1 = {s_1, s_4}, S_2 = {s_2, s_6}, S_3 = {s_3, s_5}, and f ≈ 66.7%. Now we will propose another ordering for the allocations. Firstly, we get k and S as the output of Algorithm 3.
Secondly, we initialize all seed sets S_i to empty sets, and initialize each f_i to the reciprocal of its budget. At last, we allocate the s ∈ S one by one, by σ(s, S) non-increasingly, into the S_i with the currently least f_i (by Theorem 3). In this way, we obtain another method of seed allocation, i.e., Algorithm 5:
Algorithm 5 Fair2
1: Compute k and S by Algorithm 3;
2: Compute σ(s, S) for every s ∈ S;
3: S_i ← ∅ for i = 1, . . ., t;
4: f_i ← 1/γ_i for i = 1, . . ., t;
5: Sort the elements of S into {s_1, s_2, . . ., s_k} by their σ(s_j, S) non-increasingly;
6: for j = 1 to k do
7:   i ← argmin_i f_i; S_i ← S_i ∪ {s_j}; update f_i;
8: end for
9: return {S_1, . . ., S_t}
Now we use Algorithm 5 for the seed allocation phase instead of Algorithm 4. The process is shown in Fig 8. We set the initial value of each f_i to the reciprocal of its budget, that is, 1/γ_i, and allocate the s_j one by one (by their σ(s_j, S) non-increasingly) into the S_i with the currently least f_i. Discussion From the experimental results shown in Figs 10 and 11, Fair1 and Fair2 significantly outperform random seed allocation, and Fair2 has the best results under different budgets for different networks. According to Theorem 3, both Fair1 and Fair2 are greedy strategies to maximize the fairness f. The curves of Fair1 and Fair2 oscillate, but with the increase of the number of seeds k, the amplitude of the oscillation decreases. For both Fair1 and Fair2, which color the next seed is assigned to is based on the current optimal choice, i.e., allocating the coming seed to the color i with the minimum f_i (if more than one f_i equals the minimum, choose one of these colors at random). If the current f is close to 0, i.e., the gap between the minimum and the maximum of {f_1, f_2, . . ., f_t} is very large, then allocating a newly coming seed usually increases the value of f. On the contrary, if the current f is close to 1, allocating a newly coming seed usually breaks the balance among {f_1, f_2, . . ., f_t} and may even make f smaller. This is the potential reason for the oscillations. With the increase of k, i.e., the number of allocated seeds, the ratio of a newly coming seed's influence spread to the allocated seeds' total influence spread usually gets smaller, especially for Fair2, because the influence spread of the newly allocated seed is non-increasing. Therefore, allocating a newly coming seed has less effect on f as k increases. This is the potential reason for the amplitude decrement and the convergence of the curves with the increase of k. The difference between Fair1 and Fair2 is that Fair1 allocates seeds by their influence spread from small to large, while Fair2 allocates them from large to small. Recall that the allocation strategy common to Fair1 and Fair2 is to allocate the coming seed to the color i with the minimum f_i (breaking ties at random). For Fair1, the allocations of the first t seeds are all random, because the initial values of the f_i (i = 1, 2, . . ., t) are all equal to zero. For Fair2, since the initial values of the f_i (i = 1, 2, . . ., t) are the reciprocals of the respective budgets, they may differ from each other, i.e., the allocations of the first t seeds need not be random. Therefore, Fair2 is more reasonable than Fair1 for the first t allocations. On the other hand, as discussed above, with the increase of k the oscillation of Fair2 decreases faster than that of Fair1. Therefore, Fair2 is superior to Fair1 in almost all cases. Conclusion In this work, we propose a multi-influence diffusion model MMIC for multilayer social networks. Unlike traditional models of a single influence in a single network, the MMIC model considers not only multilayer networks, but also multiple competing influences propagating within them.
From the point of view of the network owner, we propose a Fair Seed Allocation problem for multilayer networks. Firstly, we propose the 'RRE method' as Algorithm 3 to find the k most influential seeds. Then we allocate the k seeds to t different colors (customers) according to Fair1 (Algorithm 4) or Fair2 (Algorithm 5), so that their influence spreads are proportional to their budgets. We proved theoretically that the two allocation strategies are greedy choices. Our experiments on real social networks show the effectiveness of our methods.
8,099
2020-03-12T00:00:00.000
[ "Computer Science" ]
Gaiotto duality for the twisted A2N −1 series We study 4D N = 2 superconformal theories that arise from the compactification of 6D N = (2, 0) theories of type A_{2N−1} on a Riemann surface C, in the presence of punctures twisted by a Z_2 outer automorphism. We describe how to do a complete classification of these SCFTs in terms of three-punctured spheres and cylinders, which we do explicitly for A_3, and provide tables of properties of twisted defects up through A_9. We find atypical degenerations of Riemann surfaces that do not lead to weakly-coupled gauge groups, but to a gauge coupling pinned at a point in the interior of moduli space. As applications, we study: i) 6D representations of 4D superconformal quivers in the shape of an affine/non-affine D_n Dynkin diagram, ii) S-duality of SU(4) and Sp(2) gauge theories with various combinations of fundamental and antisymmetric matter, and iii) realizations of all rank-one SCFTs predicted by Argyres and Wittig. Introduction and summary Considerable progress has been made recently in the program of understanding the 4D theories that arise from the compactification of 6D N = (2, 0) theories on a Riemann surface, C, possibly in the presence of codimension-two defects of the (2,0) theories, which correspond to punctures on C [1][2][3][4][5]. Much of the richness of the construction stems from the variety of available defects. When an N = (2, 0) theory of type J = A, D, or E has a nontrivial outer-automorphism group, there exists, in addition to untwisted defects, a sector of twisted defects equipped with an action of an element of the outer-automorphism group of J as one goes around the defect. The general local properties of twisted and untwisted defects were studied in [6]. In particular, the A_{2N−1} series has a sector of defects twisted by the Z_2 outer automorphism of A_{2N−1}. In this paper, we study the global properties of theories of type A_{2N−1} in the presence of such Z_2-twisted defects. Just as untwisted defects of the A_{2N−1} series are classified by embeddings of sl(2) in sl(2N), twisted defects in this series are classified by embeddings of sl(2) in so(2N + 1). Equivalently, untwisted defects are classified by partitions of 2N, while twisted defects are classified by certain partitions of 2N + 1, called B-partitions. So, for instance, the twisted sector contains a 'maximal' twisted puncture, denoted by the B-partition [1^{2N+1}] and with flavour group SO(2N + 1), and a 'minimal' twisted puncture, denoted [2N + 1] and with trivial flavour group. The local properties of these and other twisted punctures can be computed following [6]. In this paper we will provide some new, explicit algorithms to make these calculations easier. One especially interesting twisted defect is the one with B-partition [2N − 1, 1^2], which arises from the collision of a minimal untwisted and a minimal twisted defect. Such a defect is unique in that it can be continuously deformed into a pair of defects [2N + 1] and [2N − 1, 1].
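The combinatorial backbone of this classification is easy to automate. As a small illustration (ours, not from the paper), the following Python snippet tests the defining condition of a B-partition, every even part occurring with even multiplicity, on the two examples quoted later in the text; the analogous C-partition test just swaps the roles of even and odd parts.

from collections import Counter

def is_b_partition(parts):
    # B-partition: every even part has even multiplicity.
    # (C-partition: every odd part has even multiplicity.)
    counts = Counter(parts)
    return all(m % 2 == 0 for part, m in counts.items() if part % 2 == 0)

print(is_b_partition([4, 4, 3, 3, 3, 2, 2, 2, 2, 2, 2]))  # True:  [4^2, 3^3, 2^6]
print(is_b_partition([6, 5, 4, 4]))                        # False: [6, 5, 4^2]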
In the global picture, this property leads to a number of elements that were absent in the untwisted story: • three-punctured spheres that correspond to gauge theories fixed at a point in the interior (not a cusp) of their moduli space; • cylinders whose pinching leaves a gauge coupling at a point in the interior of moduli space; • cylinders whose pinching decouples a semisimple gauge group. These phenomena part ways with the usual understanding that a degeneration of a Riemann surface corresponds to the weakly-coupled limit of a simple gauge group in the 4D theory, and vice versa. We then study three applications of our constructions: • It is well known that Lagrangian 4D N = 2 superconformal quivers whose gauge group is a product of special-unitary groups can be constructed only in the shapes of (affine and non-affine) Dynkin diagrams of type A-D-E [7]. The A_n-shaped quivers were used originally by Gaiotto to deduce local properties of untwisted defects in the A_{N−1} series, and are realized as compactifications on untwisted spheres. In this paper we find a realization of the affine and non-affine D_n-shaped quivers as compactifications of the A_{2N−1} series on twisted spheres. An equivalent expression for the Seiberg-Witten curve of the affine D_n-shaped quivers was found long ago by Kapustin [8] from a IIA brane construction. (Quite recently, a uniform way to derive the Seiberg-Witten solutions for these ADE quiver theories was found by [9,10] using instanton calculus.) • We present a full tinkertoy representation of the twisted A_3 theory, and, as an application, study the S-dual frames of SU(4) and Sp(2) gauge theories with matter in the fundamental and antisymmetric representations. • We show how to construct the rank-one 4D SCFTs studied by Argyres and Wittig in [11,12]. These are SCFTs whose only Coulomb branch operator has scaling dimension ∆ = 3, 4, 6, respectively, and which are not the more familiar Minahan-Nemeschansky theories with E_{6,7,8} flavour symmetry. The ∆ = 3 theory will be found in the context of the twisted A_2 theory, although we leave a systematic analysis of the twisted A_{2N} series for later, due to the subtle issues pointed out in [13]. We will be able to pin down numerical invariants of this ∆ = 3 theory left undetermined in [11,12]. The rest of the paper is organized into two parts. The first part consists of sections 2, 3 and 4. In section 2, after recalling the general method for obtaining a 4D theory from a 6D N = (2, 0) theory on a Riemann surface, we describe the algorithms to compute the local properties of twisted punctures of type A_{2N−1}, elaborating on [6]. In section 3, we develop the method to identify the behaviour of the theory when two defects are brought together. In section 4, we study in detail atypical degenerations, where the degeneration of the Riemann surface does not correspond to the emergence of a weakly-coupled gauge group. The second part of the paper deals with applications. In section 5 we show how a D_n-shaped quiver gauge theory can be realized in terms of the 6D N = (2, 0) theory of type A_{2N−1} on a sphere with twisted punctures. In section 6 we study the S-duality properties of all superconformal SU(4) and Sp(2) gauge theories. In section 7 we discuss rank-one SCFTs and their realizations in terms of the 6D N = (2, 0) theory. In appendix A we list all twisted fixtures and cylinders for A_3, and tabulate the properties of twisted punctures for A_{5,7,9}.
4D theories and punctures In section 2.1 we recall the construction of 4D theories from the compactification of the 6D N = (2, 0) theory of type A_{2N−1} on Riemann surfaces with punctures. Sections 2.2 through 2.4 detail algorithms to compute the local properties of punctures. We show extensive tables of local properties in appendix A. After going over section 2.1, a busy reader can skip the rest of this section and continue directly to section 3. Punctures, the fields φ_k(z) and the Hitchin field Φ(z) Consider the 6D (2,0) theory of type A_{2N−1}, compactified on a Riemann surface C with a partial twist to preserve supersymmetry [1,14]. We allow for the possibility of having codimension-two defects of the (2,0) theory, localized at punctures on C. This construction leads at low energies to a 4D N = 2 SCFT. We are interested in classifying and characterizing the 4D SCFTs that arise for various choices of C and defects on it. Usually, the moduli space of the 4D SCFT can be identified with the complex-structure moduli space of C, so that cusps in the latter correspond to weakly-coupled limits of the theory, where a certain gauge group almost decouples. We will see in section 4 that there exist counterexamples to this statement when twisted punctures are included. The Seiberg-Witten curve Σ of the theory can be realized as a ramified cover of C. To describe Σ explicitly, we should consider the VEVs of certain protected operators in the 6D theory, which, upon compactification on C, give rise to meromorphic k-differentials φ_k on C, where the k are the degrees of the Casimirs of A_{2N−1}, i.e., k = 2, 3, . . ., 2N. The φ_k have poles at the positions of the punctures on C. We then have the following equation for Σ: 0 = λ^{2N} + φ_2 λ^{2N−2} + φ_3 λ^{2N−3} + . . . + φ_{2N} . (2.1) Here λ is the Seiberg-Witten differential, which is a meromorphic 1-form on Σ. In [6], we discussed a classification of codimension-two defects and how to compute their properties. Defects are grouped into sectors that are in 1-to-1 correspondence with the conjugacy classes of the outer-automorphism group of the simply-laced Lie algebra of the same type as the (2,0) theory that one chooses. In our case, this Lie algebra is A_{2N−1} = sl(2N), which has a Z_2 outer-automorphism group generated by an element o, whose action on the k-differentials is o : φ_k → (−1)^k φ_k . (2.2) Then, the sector of untwisted punctures is the one corresponding to the identity element, while the twisted sector corresponds to o. As one goes around a twisted puncture on C, (2.2) tells us that the k-differentials of odd degree k must change sign. Now, untwisted punctures are classified by sl(2) embeddings in sl(2N), whereas twisted punctures are classified by sl(2) embeddings in so(2N + 1). More practically, recall that sl(2)-embeddings in sl(2N) are in bijection with partitions of 2N. Similarly, sl(2)-embeddings in so(2N + 1) are in bijection with B-partitions of 2N + 1, which are defined as partitions of 2N + 1 where every even part has even multiplicity. For example, [4^2, 3^3, 2^6] is a B-partition, but [6, 5, 4^2] is not. If z is a local coordinate on C such that the puncture is at z = 0, the k-differentials near z = 0 behave as φ_k ∼ c^{(k)} z^{−p_k} (dz)^k. We call the set {p_k}, for k = 2, . . ., 2N, the pole structure of the puncture. For an untwisted puncture, all the p_k should be integer, while for a twisted one, the p_k for odd (even) k must be half-integer (integer) because of (2.2). Let us now relate the discussion to the Hitchin system.
Following [14], the classical integrable system associated to our 4D N = 2 theories is a Hitchin system on C with gauge group sl(2N). Let Φ be the Higgs field of the Hitchin system; i.e., Φ is an sl(2N)-valued meromorphic 1-form on C, in the adjoint representation of sl(2N). Then, the Seiberg-Witten curve Σ of this system, (2.1), is given by the spectral curve of the Hitchin system, 0 = det(λ − Φ(z)) . (2.4) Thus, comparing with (2.1), we see that the φ_k are polynomials in the trace invariants Tr(Φ^k) of the Higgs field. In terms of the Hitchin system, an untwisted defect on C corresponds to a local boundary condition for the Higgs field. Specifically, in local coordinates z on C, let the untwisted puncture be at z = 0. Then, we have Φ(z) = X/z + . . . , (2.5) where X is an element of sl(2N) specifying the puncture, and the ellipsis denotes a generic element of sl(2N). Since Φ is not gauge invariant, the defect is actually characterized by the conjugacy class of X, known as a (co)adjoint orbit in sl(2N). When the mass parameters of the puncture are set to zero, X is nilpotent, and the orbit is called a nilpotent orbit. Nilpotent orbits in sl(2N) are classified by sl(2) embeddings in sl(2N), or, equivalently, by partitions of 2N. If a puncture is labeled by a partition ρ, the nilpotent orbit that defines its boundary condition (2.5) is the one corresponding to the transpose partition ρ^T of 2N. The analogous boundary condition for a twisted puncture was given in [6]. First, decompose the sl(2N) Lie algebra as a direct sum of eigenspaces of the Z_2 outer automorphism, j = j_1 + j_{−1}, where j_1 ≃ sp(N) is the invariant subalgebra. Then, if the twisted defect is at z = 0, the local boundary condition for the Higgs field is Φ(z) = X/z + A/z^{1/2} + A′ + . . . . (2.6) Here, X is an element of a nilpotent orbit in sp(N), and A and A′ are generic elements of j_{−1} and sp(N), respectively. As before, nilpotent orbits in sp(N) are classified by sl(2) embeddings in sp(N), or, equivalently, by C-partitions of 2N, which are defined as partitions of 2N where every odd part has even multiplicity. (For example, [6^2, 3^4, 2] is a C-partition, but [5^2, 3, 1] is not.) Then, a twisted puncture in the A_{2N−1} series is labeled by a B-partition ρ of 2N + 1, but its Higgs-field boundary condition is given by a C-partition ρ′ of 2N. There is a map, called the Sommers-Achar map [15][16][17], a generalization of the Spaltenstein map on nilpotent orbits, which gives us the Hitchin-system data associated to ρ: ρ ↦ (ρ′, C(ρ)) . (2.7) Here, C(ρ) is a discrete group. Then, X is the nilpotent element ρ′(σ^+), seeing ρ′ as an sl(2) embedding in Sp(N). In [6], ρ′ was called the Hitchin pole of the puncture labeled by the Nahm pole ρ. ρ′ is given by the C-collapse of the reduction of the transpose of ρ (a partition of 2N + 1) to a partition of 2N; see section 2.2 and section 3.4.4 of [6]. Local properties of punctures The local properties of a twisted puncture that we can compute are: the pole structure {p_k}; the constraints among the leading coefficients of the φ_k; the contributions {n_k} to the graded Coulomb branch dimensions; the flavour symmetry group; and the contributions to n_h and n_v. A constraint refers either to a reduction of a pole order p_k, or to the fact that a leading coefficient can be expressed in terms of more basic gauge-invariants, as we will see in section 2.4. (In principle, subleading coefficients might have been constrained too, but it turns out that this does not occur.) Once the local form of the Higgs field Φ for a specific puncture is known, as in (2.5) and (2.6), one can find the local form of the k-differentials from (2.1) and (2.4), read off the pole structure {p_k}, find the constraints, and compute the {n_k}. However, carrying out this 'honest' procedure is quite tedious in practice.
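To illustrate the Nahm-to-Hitchin map just described, here is a rough Python sketch of ours. The transpose is standard; the reduction from a partition of 2N + 1 to one of 2N is implemented here by removing one box from the first row, and the C-collapse by repeatedly lowering the largest odd part of odd multiplicity and raising the largest strictly smaller part. Both of these procedural choices are our assumptions, paraphrasing rather than quoting [6], so the output should be checked against the tables in appendix A before being relied on.

def transpose(p):
    # Conjugate partition of a non-increasing list of positive parts.
    return [sum(1 for x in p if x > i) for i in range(p[0])]

def c_collapse(p):
    # Assumed collapse step: lower the largest odd part with odd multiplicity
    # by one, raise the largest strictly smaller part by one, repeat.
    p = sorted(p, reverse=True)
    while True:
        bad = [x for x in set(p) if x % 2 == 1 and p.count(x) % 2 == 1]
        if not bad:
            return p
        q = max(bad)
        i = p.index(q)
        p[i] = q - 1
        smaller = max(x for x in p if x < p[i])  # assumed to exist for even total
        p[p.index(smaller)] += 1
        p.sort(reverse=True)

def hitchin_pole(nahm_b_partition):
    # B-partition of 2N+1 -> transpose -> reduce to 2N (drop one box from the
    # first row; our assumption) -> C-collapse.
    q = transpose(sorted(nahm_b_partition, reverse=True))
    q[0] -= 1
    return c_collapse([x for x in q if x > 0])

# Maximal twisted puncture of A_3 (N = 2): [1^5] maps to [4], consistent with
# the regular nilpotent orbit one expects for the puncture of maximal flavour.
print(hitchin_pole([1, 1, 1, 1, 1]))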
In what follows, we describe algorithms to compute these properties directly from the B-partition, which we found after looking at a large number of examples. First, in section 2.3, we explain how to calculate the {n_k}, and then, in section 2.4, how to compute the constraints. Once these are known, the pole structure {p_k} can be easily reconstructed. We will see that the only twisted defect that gives rise to a Coulomb branch operator of dimension two is the one with B-partition [2N − 1, 1^2]. This occurs through a constraint of the form c^{(4)} = (a^{(2)})^2, which introduces a new parameter a^{(2)} of scaling dimension two. This particular puncture will play an important role in section 4. For untwisted punctures in the A series, it is well known that there are no constraints at all, and so the pole orders {p_k} are exactly the same as the {n_k}, for each k. (By contrast, untwisted punctures in the D series generically do exhibit constraints [5].) The Lie algebra of the global symmetry group G_flavour of a twisted puncture labelled by the embedding ρ : su(2) → so(2N + 1) is the commutant of ρ(su(2)) in so(2N + 1). It is easier to give a formula for G_flavour in terms of the B-partition p corresponding to ρ: G_flavour = Π_{l odd} SO(n_l) × Π_{l even} Sp(n_l/2), where l runs over the parts of the partition p, and n_l is the multiplicity of l in p. For even l, n_l must be even because p is a B-partition, so the formula above makes sense. (In our notation, Sp(1) ≃ SU(2).) As for the contributions to n_h and n_v (and thus to the central charges a and c), these can be easily computed from the embedding ρ : su(2) → so(2N + 1). The formulas were given in [6]. Here, ρ_A and ρ_B are the Weyl vectors of A_{2N−1} and B_N, respectively; h = ρ(σ_3) is the Cartan element of the embedded su(2), and we have decomposed g = so(2N + 1) = ⊕_r g_r, where g_r is the eigenspace of h with eigenvalue r. The contributions to n_h and n_v for the twisted sectors of the A_{3,5,7,9} theories are given in appendix A. Graded Coulomb branch dimensions Consider a twisted puncture in the A_{2N−1} theory, specified by a B-partition p of 2N + 1. We want to compute the contributions {n_k} to the dimensions of the graded Coulomb branch. The formula for the {n_k} is most easily expressed in terms of a number of auxiliary sequences, which we now define. Let q = p^t be the transpose partition of p. First, one defines an auxiliary sequence ν in terms of q. Next, let s be the sequence of partial sums of q, s_k = q_1 + q_2 + . . . + q_k . (2.11) Finally, define a sequence r of 'corrections' by r_k = 1 if k ≤ N and 2k ∉ s, and r_k = 0 otherwise. (2.12) Then, the contribution n_k for the twisted puncture with B-partition p is given by (2.14), a formula combining ν, s and r. Similarly, one finds the {n_k} for the full (or maximal) twisted puncture, p = [1^{2N+1}]. General structure of constraints The structure of constraints for twisted punctures in the A series is relatively simple. These constraints satisfy some properties: • They are polynomials in the leading coefficients c^{(k)}_l of the k-differentials at the puncture. • The polynomials f are built out of a few basic types; schematically: i) squares (c^{(k)}_l)^2; ii) cross-terms c^{(k′)}_{l′} c^{(k″)}_{l″}; iii) squares (a^{(k)}_l)^2; iv) cross-terms such as a^{(k)}_l c^{(k′)}_{l′}. The first and third examples are 'squares', while the second and fourth are 'cross-terms'. Also, the first and second examples involve only the c^{(k)}_l, while the third and fourth also involve new parameters a^{(k)}_l. We call constraints that do not introduce any new parameters, such as the first two examples above, c-constraints. A c-constraint of scaling dimension k (which necessarily has pole order l = p_k) tells us that the leading coefficient c^{(k)}_l is dependent on others, and so the local contribution to n_k should be reduced by one. By contrast, the third example, of the form c^{(2k)}_l = (a^{(k)}_{l/2})^2 + . . ., says that a leading coefficient is the square of another, more basic gauge-invariant parameter, a^{(k)}_{l/2}.
Thus, it effectively trades a parameter of scaling dimension 2k for a parameter of scaling dimension k; in other words, the contribution to n_{2k} is reduced by one, while the contribution to n_k is raised by one. We call this type of constraint, which introduces a new parameter, an a-constraint. Finally, the fourth example is a cross-term involving the parameter a^{(k)}_l. However, the a^{(k)}_l will already have been introduced by an a-constraint, as in the previous paragraph. Hence, this cross-term constraint should be taken to be a c-constraint, not an a-constraint. Generically, for every a^{(k)}_l, there will be exactly one a-constraint (a square in a^{(k)}_l). Number of constraints Now, for a given twisted puncture, let us explain the rule for finding at which scaling dimensions k there exists a constraint. Denote by p the B-partition of 2N + 1 that labels our twisted puncture. Consider, as before, the transpose partition, q = p^t, and let s be the sequence of partial sums of q, as in (2.11) above. We will see that s contains all the information about constraints. Let us first note that a B-partition always has an odd number of parts, so suppose our B-partition p has 2l + 1 parts, and let p_{2l+1} be the last part of p. Then, an a-constraint of scaling dimension k exists if and only if: 1. k belongs to s; 2. k is even; 3. k is not a multiple of 2l + 1. If k = 2m satisfies these conditions, the local contribution to n_{2m} should be reduced by one, and the contribution to n_m should be raised by one. Similarly, a c-constraint of scaling dimension k exists if and only if: 1. k belongs to s. 2. If k is odd, it must satisfy a 'cross-term' condition. Let j be the unique index such that k = s_j. Then: 1) s_j must not be the last element of s; 2) both s_{j−1} and s_{j+1} must be even, s_{j−1} = 2u and s_{j+1} = 2v; and 3) s_j = u + v. 3. If k is even, it must be a multiple of 2l + 1. 4. If k is a multiple of 2l + 1, that is, k = r(2l + 1) with r integer, then k must in addition belong to the subsequence s″ defined in the next subsection. If k satisfies these conditions, the local contribution to n_k should be reduced by one. (For instance, an a-constraint can introduce a parameter a^{(33)}_{61/2}, with the same dimension and pole order as the leading coefficient c^{(33)}_{61/2}; the two are nevertheless independent.) Explicit form of constraints Now, the rules described above (to compute the dimensions at which a- and c-constraints occur) are sufficient for most purposes; but if we want to know what the constraints look like more specifically, which we need in order to compute explicit Seiberg-Witten curves, we should study the constraint structure of twisted punctures a little more systematically. We do so below. Recall that our B-partition p has 2l + 1 parts, p = {p_1, . . ., p_{2l+1}}. Then, q must be of the form q = [(2l + 1)^{p_{2l+1}}, . . .], i.e., its first p_{2l+1} parts all equal 2l + 1. Hence, the first p_{2l+1} entries of the sequence of partial sums, s, are multiples of (2l + 1) in arithmetic progression: s = [(2l + 1), 2(2l + 1), 3(2l + 1), . . ., p_{2l+1}(2l + 1), . . .] (2.17) (By construction, the entry to the right of p_{2l+1}(2l + 1) cannot be a multiple of 2l + 1.) This block of multiples of 2l + 1 in s will be important, since it gives rise to a particular set of c-constraints for p. So, let us look at it in detail. Consider the first 2⌊p_{2l+1}/2⌋ multiples of 2l + 1 in s, split into two groups: s′, the first ⌊p_{2l+1}/2⌋ of them, and s″, the remaining ⌊p_{2l+1}/2⌋. For completeness, let us call s‴ the set of entries of s that are not in s′ or s″, so that s = s′ ∪ s″ ∪ s‴ is a disjoint union. Notice that if p_{2l+1} is odd, the term p_{2l+1}(2l + 1) is in s‴. This term never gives rise to a constraint. Entries in s′. None of the entries in s′ correspond to constraints.
Rather, they can be used to define certain quantities that make the c-constraints more transparent; see the example of the minimal puncture, p = [2N + 1], below. Entries in s″. Each entry in s″ can be interpreted as the dimension of a c-constraint. Let us look at these constraints in more detail. We study first the even entries. Let 2k be in s″. The corresponding c-constraint is, schematically, a square: c^{(2k)}_l = (f^{(k)}_{l/2})^2 + . . ., where f^{(k)}_{l/2}(c) is a polynomial in leading coefficients, of total dimension k and total pole order l/2, with l = p_{2k}. The ellipsis above (and in the rest of this subsection) stands for possible cross-terms, which are of the form 2 f^{(k′)}_{l′} f^{(k″)}_{l″} . (2.20) Such a term arises if and only if there exist c-constraints of dimensions 2k′ and 2k″, and if k′ + k″ = 2k and l′ + l″ = 2l. On the other hand, the odd entries, say 2k + 1, of s″ always yield c-constraints that are sums of cross-terms, as in (2.20), but with k′ + k″ = 2k + 1. Entries in s‴. Let us now study the constraints in s‴. Again, let us look at even and odd entries separately. Each even entry, 2k, in s‴ is the dimension of an a-constraint, c^{(2k)}_l = (a^{(k)}_{l/2})^2 + . . . . Finally, let us look at the odd entries, 2k + 1, in s‴. If 2k + 1 satisfies the requirements of section 2.4.2, it yields a cross-term c-constraint involving parameters introduced by a-constraints, c^{(2k+1)}_l = 2 a^{(u)}_{l′} a^{(v)}_{l″} + . . ., where u + v = 2k + 1 and l = l′ + l″. Also, if the first c-constraint dimension in s‴ is odd, the c-constraint will include a 'mixed' cross-term involving both the a-parameters and the quantities defined from s′. To write the c-constraints specifically, we define auxiliary quantities r_k, for 0 ≤ k ≤ N, by r_0 ≡ 1, r_1 ≡ 0, and the rest by requiring that c^{(k)}_{p_k} be the sum of all terms of the form (r_j)^2 or 2 r_j r_{j′}, with j, j′ < k, of total scaling dimension k and total pole order p_k. Then, expressing the c^{(k)}_{p_k} back in terms of the r_j, for 0 ≤ j ≤ k ≤ N, reveals a nice pattern of squares and cross-terms that should be completed, in a unique way, by the sought c-constraints. For instance, for N = 5, we define the r_j as in (2.24). Then, we can write, e.g., c^{(5)}_{5/2} = 2r_5 r_0 + 2r_3 r_2 + 2r_1 r_4 and c^{(7)}_{7/2} = 2r_3 r_4 + 2r_2 r_5, and so on. Here, the expressions of scaling dimension k for 0 ≤ k ≤ 5 are equivalent to (2.24), while those with 6 ≤ k ≤ 10 are the actual c-constraints. Thus, introducing the r_k makes clear what the c-constraints should be. Now, let us discuss the puncture p = [2N − 1, 1^2]. We find that there are a-constraints (c-constraints) for every even (odd) scaling dimension k in the range 4, 5, . . ., 2N. These constraints follow a pattern of squares and cross-terms in the a-parameters. This puncture is the only one with an a-constraint of scaling dimension four, that is, the only one that gives rise to an independent parameter a^{(2)} of scaling dimension two. Collisions of punctures In this section we study what happens when two or more punctures collide. We call this process the operator product expansion (OPE) of punctures. In sections 3.1 and 3.2, we discuss the overall strategy for analyzing the OPE, by first considering the OPE on an infinite plane, and then on a compact curve. We then describe an explicit algorithm to compute the OPE in section 3.3. OPE of punctures on a plane So far we have studied how to compute the properties of a single puncture. Let us now see what happens if two or more punctures come close together. First, we would like to study the simpler case of a non-compact Riemann surface, the complex plane. Consider a six-dimensional space of the form R^4 × C. We denote by z the coordinate on C, and consider k punctures of types p_1, p_2, . . .
, p_k to be localized, respectively, at z = z_1, z_2, . . ., z_k. Now, at very large |z|, the system looks as if it consists of: • a puncture q at z = 0, with flavour symmetry F; • a 4D N = 2 superconformal theory X, which depends on the types and positions of the k punctures, and such that a certain subgroup H of its global symmetry group, G_X, can be identified with a subgroup of F; and • a dynamical gauge multiplet for H, with coupling constant τ depending on the types and positions of the k punctures, which couples X to q. We call this process the operator product expansion (OPE) of the k punctures, and we call X the coefficient of the OPE. We represent the outcome of the OPE schematically as in (3.1). If q is the full puncture and H = G, the theory X is the same as the 4D theory obtained by compactifying the 6D theory on a sphere, with punctures of type p_i at z = z_i, and a full puncture at z = ∞. Otherwise, we say that the theory X is the 4D theory 'obtained by compactifying the 6D theory on a sphere, S, with punctures of type p_i at z = z_i, and an irregular puncture at z = ∞, determined by the choice of the p_i,' and say that 'the gauge group H arises from the cylinder connecting the irregular puncture with the regular puncture of type q.' We denote such an irregular puncture by the pair (q, H), and, if there are inequivalent embeddings H ↪ F, we add a label to distinguish which embedding we mean; see section 3.2.3 for an example. We call q the 'regular puncture conjugate to' the irregular puncture (q, H). While the detailed properties of the theory X depend on the punctures p_i (and the various cross-ratios of their positions), certain features are encoded purely in the pair (q, H). For instance, H, seen as a subgroup of the global symmetry group G_X of the theory X, has some level k ≥ 0. (k = 0 if and only if X is the empty theory.) This level is strictly determined [5] by demanding that the coupling of the H gauge theory on the cylinder not run. Similarly, the local contribution of the irregular puncture to n_h, n_v and to the graded Coulomb branch dimensions of X is determined by the pair (q, H) [5]. Degeneration of a curve via the OPE Let us now consider the OPE on a compact curve. Let C be a sphere, with k + k′ regular punctures of types p_1, . . ., p_k; p′_1, . . ., p′_{k′}. We assume that the punctures are such that all the graded Coulomb branch dimensions are non-negative. (Otherwise, the theory is 'bad', and taking the 4D limit is a more delicate issue.) Now consider the limit where C degenerates into two spheres, one carrying p_1, . . ., p_k and the other carrying p′_1, . . ., p′_{k′}. We would like to understand the behaviour of the 4D theory in this limit. We proceed as follows: • Replace the punctures p_1, . . ., p_k with their OPE, as in section 3.1, obtaining a regular puncture q, a gauged subgroup H of the flavour symmetry of q, and the 4D theory X, which is the 4D limit of a sphere with p_1, . . ., p_k plus an (ir)regular puncture (q, H); do the same for p′_1, . . ., p′_{k′}, obtaining q′, H′ and X′. • Then we have a system consisting of X, coupled via H to I(q, q′), coupled via H′ to X′, where I(q, q′) is a sphere with two regular punctures of type q and q′, respectively. • As explained in [18], a sphere with two regular punctures is a supersymmetric hyperkähler non-linear sigma model with global symmetry F × F′, where, in our case, we gauge the subgroup H × H′ ⊂ F × F′. Any point on the target space of the non-linear sigma model breaks F × F′ to a stabilizer subgroup, F″, and hence the gauge symmetry H × H′ is always Higgsed to H″ ⊂ F″.
• Sometimes, the D-term and F-term constraints for H″ force the theory X, coupled to q via H, to be Higgsed to a theory Y. Similarly, the theory X′ coupled to q′ may be Higgsed to a theory Y′. • In the end, we have a 4D system of the form Y, coupled via H″ to Y′. Now, a sphere with k regular punctures and an irregular puncture has a degeneration where we consecutively collide two punctures, so that the resulting 4D theory consists of several three-punctured spheres coupled to each other. These three-punctured spheres, which we call fixtures [4,5], contain either • three regular punctures, or • two regular punctures and one irregular puncture. A table of all possible fixtures makes finding the 4D description of an arbitrary degeneration a simple task. Let us illustrate these ideas with a few examples, all with untwisted punctures for simplicity. Example 1 Consider the A_{2N−1} theory compactified on a 4-punctured sphere. • The OPE of p_1 with p_2 is a full puncture, [1^{2N}], coupled via H = Sp(N) to the theory X, which is 4N free hypermultiplets transforming as 2 copies of the fundamental 2N-dimensional representation of Sp(N). Here the symbol ▭ stands for the 2(N − 1)-dimensional fundamental representation of Sp(N − 1). Example 2 Next, consider an example in the A_4 theory. • The OPE of p_3 and p_4 yields the full puncture, q′ = [1^5], coupled to the theory X′ involving R_{2,5}, a non-Lagrangian SCFT discussed in [4]. Note that, since q′ is the full puncture and H′ is G, the 3-punctured sphere corresponding to X′ contains three regular punctures. • In between, we have the 2-punctured sphere with one full puncture and one [2, 1^3] puncture. This Higgses H × H′ = SU(2) × SU(5) down to H″ = SU(2). However, in order to satisfy the F-term equations, the theory X′ is Higgsed down to a smaller theory. The end result is an SU(2) gauging of the (E_6)_6 SCFT, with an additional doublet and 5 free hypermultiplets. Note that, in this case, the cylinder connects an irregular puncture with its conjugate regular puncture. Example 3 Now, let us turn to an example from the D_4 theory. Consider the 4-punctured sphere whose punctures are given by 'very even' partitions. Here, each very even partition (e.g., [2^4]) corresponds to two nilpotent orbits in so(8), and our sphere includes one of each type (indicated by the red/blue colour); see [5]. When we take the OPE of p_1 with p_2, we obtain the full puncture, q = [1^8], coupled to X = the (E_7)_8 SCFT, via H = Spin(7) (and similarly for the OPE of p_3 with p_4). However, there are three inequivalent embeddings of Spin(7) ↪ Spin(8), depending on which of the three 8-dimensional irreducible representations of Spin(8) decomposes as 7 + 1. We can indicate this choice by putting a subscript on H, or (in the notation of [5]) by colouring the Young diagram corresponding to q: the two irregular punctures here are (q, Spin(7)_{8s}) and (q, Spin(7)_{8c}). In the notation of (3.2), we have H = Spin(7)_{8s} and H′ = Spin(7)_{8c}, and the two-punctured sphere, I(q, q), Higgses H × H′ = Spin(7)_{8s} × Spin(7)_{8c} down to the common subgroup, G_2. So the final 4D description of this limit involves a cylinder with gauge group G_2, where, as in section 3.2.1, we have a cylinder connecting two irregular punctures. Determining the OPE via the Higgs field In light of sections 3.1 and 3.2, we would like to study the basic problem of two punctures p_1 and p_2 colliding on a plane. We have seen that in the collision limit, an irregular puncture (q, H) arises, which is connected to a regular puncture q by a cylinder with gauge group H. Let us discuss how to find q and H.
To determine q, we construct a solution for the Higgs field on the plane that includes p 1 and p 2 , and compute the residue that arises in the collision limit. This residue provides the Higgs-field boundary condition for q. Thus, one can determine the Nahm pole for q, e.g., by looking at the degeneracy of the mass deformations in the residue. Also, the number of independent mass deformations is equal to the rank of H. To gather more information about H, we consider the k-differentials φ k , and take the limit where the punctures collide, which reveals the scaling dimensions of the Casimirs of H. Knowing these usually suffices to identify the gauge group. Only in a handful of cases, often to distinguish Sp(k) from SO(2k + 1), must one do further consistency checks, such as computing the matter representation for the fixture that arises in the degeneration limit, and corroborating that it provides the right contribution to the beta function of H. Because of these observations, in the next subsections we will study the Higgs field on a plane with two punctures, in the limit where these collide. Later, in section 3.3.4, we will do the same for k-differentials. But before doing this, let us briefly discuss a situation that will arise often. Consider C to be the complex plane or a sphere, with complex coordinate z, and put k punctures, p 1 , . . . , p k , on C. Let the positions of the punctures be λz 1 , . . . , λz m , z m+1 , . . . , z k , so that we can collide the first m of them by taking the limit λ → 0. Now consider a meromorphic k-differential on C of the form (3.4), where α, s, r 1 , . . . , r k are rational numbers and A is a coefficient. This is a typical term in a k-differential, including the case of the Higgs field (k = 1). In the λ → 0 limit, we get C in the presence of the k − m punctures p m+1 , . . . , p k , plus a new puncture, q, at z = 0. The λ → 0 limit may also be represented by the conformally equivalent picture of a sphere C ′ that bubbles off C, containing the m punctures p 1 , . . . , p m , plus the irregular puncture (q, H). Such a picture is obtained through the change of variables z = λ/z ′ . Then, requiring that (3.4) have a finite limit as λ → 0 in the z ′ coordinates puts a lower bound on α, given in (3.5). We will use this result quite often. If the bound is not saturated, the k-differential simply vanishes, in the λ → 0 limit, on both C and C ′ . On the other hand, if the bound is saturated, we have three possibilities when λ → 0 for where the coefficient A ends up. In most cases (such as in the following subsections), the coefficient A in (3.4) will represent a physical degree of freedom, and so it should not be lost in the λ → 0 limit; therefore, it will be desirable that the bound be saturated. Case 1 (respectively, case 2) corresponds to A being a degree of freedom for the theory on C ′ (respectively, C), whereas case 3 corresponds to A being a degree of freedom of the gauge group on the cylinder, which in the λ → 0 limit looks like a mass deformation on both C and C ′ . However, in a few cases where the coefficient A carries redundant information (because of local constraints), consistency will require that the bound not be saturated. We will see such cases in section 4. Untwisted-untwisted Consider now the Higgs field Φ on a plane with two untwisted punctures of type p 1 and p 2 at positions z = 0 and z = λ, respectively, where z is the coordinate on the plane. Let A and B be representatives of the (massless or mass-deformed) adjoint orbits in sl(2N ) corresponding to p 1 and p 2 , respectively.
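As an aside, the finiteness requirement behind the bound (3.5) can be checked mechanically. The following Python sketch (using sympy) does the bookkeeping for a hypothetical single-puncture term of the type (3.4), namely A λ^α z^(−s) (z − λ)^(−r) dz^k; this simplified stand-in is our assumption, since the display itself is not reproduced above:

import sympy as sp

alpha, s, r, k = sp.symbols("alpha s r k", positive=True)

# Under the change of variables z = lam/zp, each factor carries a power of lam:
#   lam**alpha             -> alpha
#   z**(-s)                -> -s      (since z = lam/zp)
#   (z - lam)**(-r)        -> -r      (since z - lam = lam*(1/zp - 1))
#   dz**k ~ (lam/zp**2)**k -> +k      (up to a sign)
net_lambda_power = alpha - s - r + k

# Finiteness of the lam -> 0 limit in the zp coordinates requires the net
# power of lam to be non-negative, which gives the lower bound on alpha:
alpha_min = sp.solve(sp.Eq(net_lambda_power, 0), alpha)[0]
print(alpha_min)   # prints -k + r + s, i.e. the bound alpha >= s + r - k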
Then, we can write an ansatz, (3.6), in which P (z) is a power series in z whose coefficients are generic elements of sl(2N ). P (z) simply represents the infinite number of degrees of freedom contained in the plane. At finite λ, the expansions of Φ(z) near p 1 and p 2 take the expected forms. In the limit λ → 0, a new untwisted puncture, q, arises at z = 0, and the expansion of Φ(z) near this point is again the expected expansion for an untwisted defect. Thus, the Higgs field boundary condition for the new puncture q is given by A + B. Notice that (3.6) saturates the bound of (3.5). In the λ → 0 limit, we have the complex plane in the presence of just the new puncture, q. The conformally-equivalent picture is that of a fixture that bubbles off the plane, containing the two original punctures, p 1 and p 2 , plus the irregular puncture, (q, H). The Higgs field for the fixture is obtained by using z = λ/z ′ in (3.6) and taking the λ → 0 limit. The punctures q, p 1 , p 2 are at z ′ = 0, ∞, 1, respectively. Notice that the Riemann-Roch theorem for a 3-punctured sphere requires only two coefficients, not three, for Φ(z ′ ). In other words, in a fixture, the choice of representatives, A and B, for the adjoint orbits for two punctures, p 1 and p 2 , completely determines the adjoint orbit (that is, its mass deformations), plus a representative of such orbit, for the third puncture, q. Twisted-twisted Let us now take two twisted punctures of type p 1 and p 2 at positions z = 0 and z = λ on a complex plane with coordinate z. Let A and B be representatives of the (massless or mass-deformed) adjoint orbits in Sp(N ) corresponding to p 1 and p 2 , respectively. Recall the decomposition of sl(2N ) into eigenspaces of the Z 2 outer automorphism, sl(2N ) ≃ sp(N ) ⊕ o −1 . Then, we can write an ansatz for the Higgs field in which D is a generic element of o −1 , and P (z) and Q(z) are power series in z whose coefficients are, respectively, generic elements of sp(N ) and o −1 . At finite λ, the expansions of Φ(z) near p 1 and p 2 are of the expected form for twisted defects. Now, in the collision limit λ → 0, a new untwisted puncture, q, arises at z = 0. The expansion of Φ near q is, again, the expected expansion for an untwisted defect. Thus, q has Higgs field residue A + B + D, with D generic in o −1 . For completeness, the Higgs field for the fixture in the conformally equivalent picture is given, as before, in (3.14). Twisted-untwisted Finally, consider a twisted and an untwisted puncture, of types p 1 and p 2 , respectively, at positions z = 0 and z = λ. Let A and B be representatives of the (massless or mass-deformed) adjoint orbits corresponding to p 1 and p 2 , respectively. Notice that A is in sp(N ), while B is in sl(2N ). Let us decompose B as B = B 1 + B −1 , where B 1 is in sp(N ) and B −1 is in o −1 . Then, we can write an ansatz for the Higgs field in which P (z) and Q(z) are power series in z whose coefficients are, respectively, generic elements of sp(N ) and o −1 . The expansions of Φ(z) near p 1 and p 2 , (3.16), again have the expected forms. Now, in the limit λ → 0, a new twisted puncture, q, arises at z = 0. The expansion of Φ(z) near q has the correct form for a twisted defect. Thus, the Higgs field boundary condition for q is given by A + B 1 , with B 1 the projection of B to sp(N ).
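A toy check of the untwisted-untwisted rule above (the merged pole carries residue A + B) can be done in sympy; the matrices below are hypothetical sl(4) representatives chosen only for illustration:

import sympy as sp

z, lam = sp.symbols("z lam")

A = sp.Matrix([[0, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 0, 0]])     # nilpotent representative (illustrative)
B = sp.diag(1, -1, 2, -2)         # traceless mass-deformed representative

Phi = A / z + B / (z - lam)       # simple-pole part of the ansatz; regular part omitted

# Total residue enclosed by a contour around both poles; this becomes the
# residue of the new puncture q when the poles collide (lam -> 0):
res = Phi.applyfunc(lambda f: sp.residue(f, z, 0) + sp.residue(f, z, lam))
assert res == A + B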
Finally, the Higgs field for the fixture in the conformally equivalent picture is given in (3.18). Degenerating k-differentials Let us discuss how to find the scaling dimensions of the VEVs for the gauge group H that arises when two punctures p 1 and p 2 on a plane collide. In most cases, these provide enough information to determine H. The natural way to find such VEVs is to use (2.4) to compute the k-differentials from the Higgs field residue of the new puncture, q. If q is at z = 0, we have, in principle, a gauge group VEV u k of scaling dimension k if the corresponding coefficient in the k-differential survives the collision limit. However, some of these u k may not be independent. If p 1 and p 2 have mass deformations, for instance, the u k might contain combinations of these masses. If a parameter u k vanishes when we turn off the masses, then such a u k is not an actual gauge group VEV. Also, the u k might depend on each other. Hence, it is convenient, for the purpose of finding the gauge group, to consider the OPE with massless p 1 and p 2 . It might also be useful to study the k-differentials before the collision, taking into account whichever c-constraints and a-constraints the two punctures have, and then take the degeneration limit. Let us also briefly discuss the basic problem of the degeneration of a sphere with n 1 + n 2 massless punctures, in which we collide the first n 1 punctures. Each k-differential contains 1 − 2k + Σ i p (i) k terms of the form (3.4), with the sum over all n 1 + n 2 punctures. We want to find conditions for a gauge group VEV term (3.19) to exist at every k. Note that if k is odd, and we are colliding an odd number of twisted punctures, there cannot be a gauge group term because the power s in (3.19) must be an integer. So, if we collide an odd number of twisted punctures, the discussion below is restricted to even k. If none of the punctures satisfy constraints, then, at every k, there will exist a gauge group term (3.19) if and only if t k ≡ −k + Σ i≤n 1 p (i) k ≥ 0 and t ′ k ≡ −k + Σ j>n 1 p (j) k ≥ 0. If there are c-constraints and a-constraints, these will act on the parameters of the k-differential on the side of the degeneration where they exist, and so may constrain the gauge group VEV. However, if t k ≥ n c + n a , where n c and n a are, respectively, the number of c-constraints and a-constraints of dimension k for the punctures on one side of the degeneration, and if a similar condition holds for the other side of the degeneration, then the gauge group term will not be affected by the constraints. But if these conditions do not hold, one needs to analyze the k-differential in detail to see how the gauge group VEVs are affected. Now we wish to illustrate our techniques with some examples. To write residues explicitly, we need to use a basis. We use the embedding of Sp(N ) in sl(2N ) of [19], in which Lie-algebra elements are written in terms of N × N complex blocks A, B and C, with B and C furthermore symmetric. We also take from [19] the nilpotent orbit representatives of sl(2N ) and Sp(N ). Example 1 Let us consider again the untwisted example of section 3.2.1. Example 2 Let us study the case of five minimal twisted punctures, [2N + 1], on a plane, and study the consecutive collisions of two punctures. We do this to illustrate the rules for combining residues. Colliding two minimal twisted punctures yields an untwisted puncture with residue equal to a generic element in o −1 . This can be diagonalized to an element of the form diag(m 1 , m 2 , . . . , m N ; m 1 , m 2 , . . . , m N ), (3.23) with m 1 + · · · + m N = 0.
Since the 2N eigenvalues are split into pairs of identical elements, this residue should be a mass-deformed representative of the untwisted [2 N ] puncture. This puncture has SU(N ) global symmetry group. The fact that there are N masses m i adding up to zero suggests that the whole SU(N ) group is gauged. This can be checked by computing the k-differentials. In fact, to leading order the k-differentials for k = 2, 3, . . . , 2N have coefficients u k , where the two colliding [2N + 1] punctures are at z = 0, λ. All of the u k survive the λ → 0 limit, and so each should become a Casimir of the gauge group. However, only the u k for k ≤ N are independent because of the constraints from the minimal twisted punctures. (Actually, since there are two [2N + 1] punctures, the subleading coefficients in the expansion above for k > N are also constrained.) Thus, the gauge group has only N − 1 Casimirs, of scaling dimensions 2, 3, . . . , N . Hence, it must be SU(N ). Now let us find the OPE of the [2 N ] puncture with the third [2N + 1] puncture. Our prescription tells us that the residue of the new puncture will be the Sp-projection of the residue of the [2 N ] puncture. The diagonalized form of the new residue is diag(r 1 , r 2 , . . . , r N ; −r 1 , −r 2 , . . . , −r N ), (3.25) with r 1 + · · · + r N = 0. For N = 2, this residue takes the more particular form diag(r, r, −r, −r). So, for N = 2, this is the [2 2 , 1] puncture, and for N ≥ 3, it is the [1 2N +1 ] puncture. To find the gauge group, we can either use three colliding [2N + 1] punctures, or the [2 N ] puncture colliding with a [2N + 1] puncture. Let us use the first. A [2N + 1] puncture has pole structure p k = k/2, and satisfies a c-constraint for k = N + 1, N + 2, . . . , 2N . Plus, for k = 2, . . . , N , we have t k = Σ i p (i) k − k = k/2 ≥ 0, so we have an unconstrained VEV for each even k in this range. On the other hand, at each k = N + 1, . . . , 2N , we have three c-constraints, so we have t k − n c = Σ i p (i) k − k − 3 = k/2 − 3, and we can be sure we have an unconstrained VEV for every even k in this range such that k ≥ 6. So, for N ≥ 5, the gauge group is just Sp(N ), while for N = 2, 3, 4, we need to check by hand. For N = 2, we have k = 4 < 6, so we only have one VEV of dimension 2, and the gauge group should be SU(2). For N = 3, we have k = 4 < 6, but k = 6 ≥ 6, so we have VEVs of dimensions 2 and 6, and the gauge group should be G 2 . For N = 4, we have k = 6 ≥ 6 and k = 8 ≥ 6, so we have VEVs of dimensions 2, 4, 6 and 8, and the gauge group should be Sp(4). Atypical degenerations The conventional understanding of Gaiotto duality is that one starts with a Riemann surface, C, with punctures, and, in any degeneration limit of C, a weakly-coupled gauge group, G, in the low-energy 4D theory arises. There is a specific connection between the plumbing-fixture parameter q of the degenerating cylinder and the weak gauge coupling τ for G, q = e 2πiτ (4.1) So, the pinch limit q → 0 of the surface corresponds to the weakly-coupled limit τ → i∞ for the gauge group. Besides, the gauge-invariant VEVs constructed from the scalars in the G-vector multiplets can be similarly assigned to the cylinder: upon degenerating the curve, the VEVs of G become mass-deformations of the new punctures that appear on both sides of the degeneration. Finally, upon complete degeneration, the G-vector multiplets completely decouple; they are not present on either side of the degeneration.
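Before moving on, note that the Casimir counting used in the example above (three colliding [2N + 1] punctures, pole structure p k = k/2, and three c-constraints for k = N + 1, . . . , 2N) is mechanical enough to script. A minimal Python sketch, using only the rules stated in the text, reproduces the conclusions for small N:

def casimir_dims(N):
    """Scaling dimensions of unconstrained VEVs for three colliding [2N+1]
    punctures, following the counting in the text (even k only)."""
    dims = []
    for k in range(2, 2 * N + 1, 2):
        t_k = 3 * (k / 2) - k            # three punctures with p_k = k/2
        n_c = 0 if k <= N else 3         # c-constraints act for k = N+1..2N
        if t_k - n_c >= 0:
            dims.append(k)
    return dims

for N in (2, 3, 4, 5):
    print(N, casimir_dims(N))
# 2 [2]               -> SU(2)
# 3 [2, 6]            -> G2
# 4 [2, 4, 6, 8]      -> Sp(4)
# 5 [2, 4, 6, 8, 10]  -> Sp(5)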
In this section we want to study certain degenerations in the twisted A 2N −1 series, involving curves with certain combinations of twisted and untwisted punctures, where the picture explained above relating cylinders and gauge groups does not hold. Our goal is to understand to what extent the conventional picture of the previous paragraph is still correct, and how it should be modified when our curve contains these "dangerous" combinations of punctures. Happily, "atypical" degenerations are actually rare. By "atypical" degenerations in the twisted A 2N −1 series we refer to either of the following situations: • A degeneration brings a certain gauge coupling to a point in the interior of the moduli space of couplings, instead of a weakly-coupled cusp. This interior point can either be a strongly-coupled point, or a point where two gauge group couplings become equal. • A degeneration brings not only the corresponding gauge group to its weakly-coupled cusp, but also other gauge groups (adjacent to, but not directly localized at the degenerating cylinder) to weakly-coupled cusps. Fortunately, these atypical degenerations seem to occur only when a sphere, containing certain combinations of punctures, bubbles off a generic surface. Furthermore, for the bubbling sphere to be "atypical", there is a bound on the number of punctures it may contain, as well as a restriction on the types of punctures. (These can only be minimal twisted or minimal untwisted.) Thus, these "atypical" spheres are easy to classify. Let us study these "atypical" spheres one by one. An a-constraint of the form c = a 2 in the [2N − 1, 1 2 ] puncture introduces a Coulomb branch VEV a of scaling dimension two. We want to argue that such a fixture represents a gauge theory whose gauge coupling τ is locked at the Z 2 -symmetric point of its moduli space, τ = i, and that the Z 2 action can be identified with the disconnected part of the flavour symmetry O(2) of [2N − 1, 1 2 ]. A 3-punctured sphere has no moduli, so the fact that a gauge coupling is frozen at a point in coupling-constant moduli space is the only way that we could have a gauge theory on a 3-punctured sphere. Now, it is not obvious that such a point in the moduli space should lie in its interior, or that it should take the value τ = i. We will verify these assertions below by studying the Seiberg-Witten curve for the fixture. Let the punctures [2N − 1, 1 2 ], U , and T be at the positions z = 0, 1, ∞, respectively. Since we are just interested in seeing at which point in gauge-coupling moduli space the theory is, we remove all the unnecessary degrees of freedom, such as mass deformations and Coulomb branch VEVs of scaling dimension different from two. Then, the only surviving k-differential is φ 4 , which includes the square of the Coulomb branch parameter a. Notice that the 2-differential φ 2 vanishes because we have only three massless punctures. So, the Seiberg-Witten curve for the reduced theory is given in (4.2). The factor y 2N −4 tells us that the original 2N -sheeted cover Σ of the fixture splits into 2N − 4 unramified branches plus a four-sheeted cover, Σ ′ . Let us dispose of the unramified branches. The Seiberg-Witten curve factorizes as in (4.3). This expression tells us that Σ ′ globally splits into two double covers which differ only by the choice of sign in a. Let us explore the first factor in (4.3). Consider the transformation z = t 2 , y = y ′ /2t, which preserves the Seiberg-Witten differential, λ = y dz = y ′ dt.
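The claim that this transformation preserves the Seiberg-Witten differential is a one-line check in sympy:

import sympy as sp

t, yp = sp.symbols("t yp")
z = t**2                 # double cover of the z-plane
y = yp / (2 * t)         # accompanying redefinition of y

# lambda = y dz = y' dt:
assert sp.simplify(y * sp.diff(z, t) - yp) == 0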
Then the first factor takes the form y ′2 − 4a t(t − 1)(t + 1) = 0. (4.4) Now, this is the Seiberg-Witten curve for a four-punctured sphere in the untwisted A 1 theory, with punctures at t = 0, 1, −1, ∞, which represents the SU(2), N f = 4 gauge theory. If the curve above had really arisen from a four-punctured sphere, we would identify the marginal coupling q of the SU(2) gauge group with the cross-ratio x of the four punctures. But here, since we started with a fixture, we are not allowed to vary the cross-ratio; the curve we obtain is fixed at the cross-ratio x = −1. This value corresponds to the Z 2 -orbifold point, τ = i, in gauge-coupling moduli space. What about the second factor in (4.3)? It clearly represents again the SU(2) gauge theory, but with the other choice of sign for a. The origin of this second factor is the a-constraint of the [2N − 1, 1 2 ] puncture, which does not fix the sign of a. This second copy is not a second, independent SU(2) gauge theory, since its degrees of freedom simply mirror those of the first factor up to a sign. So, the original fixture we started with represents a single copy of the SU(2) gauge theory at the τ = i point in moduli space. This suggests that the original, good A 2N −1 fixture can be interpreted to contain a gauge group G with gauge coupling τ = i. This interpretation is confirmed by computing the total number of hypers and vectors for this fixture, as well as the representations for the matter, and is consistent with S-duality in all examples we studied. This gauge-theory fixture can be connected, via an "empty" cylinder, to form the following four-punctured sphere. Gauge theory at the Z 2 -symmetric point In this degeneration, the cross-ratio x of this four-punctured sphere controls the perturbation of the gauge coupling τ away from the Z 2 -symmetric point, τ = i. Locating the punctures [2N + 1], T , [2N − 1, 1], U at z 1 , z 2 , z 3 , z 4 respectively, let x be the corresponding cross-ratio. We will find that the relation between the cross-ratio of our original twisted 4-punctured sphere and the SU(2) gauge coupling q is given by (4.5). At x = 0, the gauge coupling becomes q = −1, i.e., the Z 2 -orbifold point, which is consistent with what we found for the gauge-theory fixture alone. Note that before the degeneration, the flavour symmetry comes from the puncture [2N − 1, 1], which has U(1) symmetry. After the degeneration, the flavour symmetry comes from [2N − 1, 1 2 ], which has O(2) symmetry. This contains the original U(1) together with its outer automorphism. The appearance of a square root of x in (4.5) means that the marginal-coupling moduli space of the theory, which we will denote by X 4 , is not the complex-structure moduli space of the punctured sphere, M 0,4 , but a double cover of it. X 4 is parametrized by w (with w 2 = x), rather than by the cross-ratio x. This feature recurs later in the discussions in section 5. The Z 2 deck transformation for this cover, w → −w, implements the Z 2 S-duality transformation q → 1/q (4.6) on the family of gauge theories, of which the theory at q = −1 is a fixed point. Moreover, this Z 2 S-duality transformation is accompanied by a Z 2 outer automorphism that acts on the global symmetry group of the theory. In particular, it acts as a Z 2 automorphism of the SCFT at q = −1. Derivation Using the prescription of section 3.3.3, one can check that the OPE of the massless punctures of types [2N + 1] and [2N − 1, 1] yields the massless [2N − 1, 1 2 ] puncture. This is a remarkable property.
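Before going through the derivation, the Möbius structure implied by the statements above can be scripted. The explicit form q = (1 + w)/(w − 1) is an assumption for the unreproduced display (4.5); it is consistent with q(w = 0) = −1 and with the expression q ′ = (−1 + w)/(1 + w) = 1/q quoted in the next paragraph:

import sympy as sp

w = sp.symbols("w")
q = (1 + w) / (w - 1)        # assumed form of (4.5)
qp = (-1 + w) / (1 + w)      # coupling read off from the second factor in (4.9)

assert q.subs(w, 0) == -1                          # x = w**2 = 0 is the Z2 point
assert sp.simplify(qp - 1 / q) == 0                # q' = 1/q: S-dual couplings
assert sp.simplify(q.subs(w, -w) - 1 / q) == 0     # deck map w -> -w acts as q -> 1/q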
Usually, the OPE of two massless punctures yields a mass-deformed new puncture; these masses give rise to the VEVs for the gauge group on the cylinder. In general, the masses of the new puncture are encoded in the choice of representatives of the orbits for the two original punctures, as emphasized in footnote 5. In our case, the [2N + 1] puncture has only one possible representative, the zero element. However, one still has a choice of representative for the [2N − 1, 1] puncture. Let us now derive the relation (4.5) explicitly by studying the Seiberg-Witten curve of the 4-punctured sphere. We put the punctures of types [2N + 1], [2N − 1, 1], U , T , at the positions z = 0, x, 1, ∞, respectively, and consider the x → 0 limit. As before, we discard all parameters irrelevant to the problem, that is, all mass deformations and Coulomb branch VEVs of scaling dimension different from two. So, the only non-zero k-differentials will be φ 2 and φ 4 . In eliminating Coulomb branch VEVs, we have solved the constraints of the [2N + 1] puncture by imposing a relation among the coefficients (which is itself a constraint for N = 2) and setting to zero any other parameters. Thus, effectively, we have reduced a problem in the A 2N −1 theory to one in A 3 . The k-differentials are given in (4.7). The bound in (3.5) implies that γ ≥ 0 and β ≥ 0, while the constraint requires u 4 = −(u 2 ) 2 and β = 2γ − 1. Hence, the bounds are refined to γ ≥ 1/2 and β ≥ 0. But we cannot have γ > 1/2, β > 0, because then both u 2 and u 4 would disappear from either side of the degeneration when x → 0, and we would 'lose' a physical VEV. Thus, we must have γ = 1/2, β = 0. Hence, u 2 vanishes on both sides of the degeneration when x → 0, and u 4 survives only on the gauge-theory fixture side. There are no gauge group VEVs supported on the cylinder, as we expected. Also, the parameter u 4 is a square, which is consistent with the a-constraint of the [2N − 1, 1 2 ] puncture. The resulting curve can be written, as before, as the product of two global factors, (4.9). Let us pick the first factor in this expression, and use again the transformation z = t 2 , y = y ′ /2t. We get (4.10), where w 2 = x. This is again the Seiberg-Witten curve for the A 1 four-punctured-sphere representation of the SU(2), N f = 4 theory, at gauge coupling (4.5). What about the second factor in (4.9)? One arrives at a result similar to (4.10), but with u 2 and w traded for −u 2 and −w, respectively. So, this also represents the SU(2) gauge theory, with gauge coupling controlled by a cross-ratio q ′ = (−1 + w)/(1 + w). Notice that q ′ = 1/q, and since the points at q and 1/q are related by S-duality, both factors in (4.9) represent the gauge theory at the same point in gauge-coupling moduli space. Again, they differ only by the choice of sign in u 2 , which is left unfixed by the a-constraint. SU(N ) × SU(N ) cylinder How it arises Let us study a sphere with one minimal untwisted and two minimal twisted punctures bubbling off a plane. Here the maximal untwisted puncture, [1 2N ], at the right end of the cylinder has SU(2N ) flavour symmetry, but only an SU(N ) × SU(N ) subgroup is gauged. Counting hypers reveals that the sphere to the left must be empty. Both SU(N ) factors become weakly coupled when the cylinder degenerates. The second SU(N ) factor is underlined to indicate, as we will see, that there is a specific degeneration of the empty sphere that takes the SU(N ) gauge coupling to zero, but does not decouple the non-underlined SU(N ) factor.
Let us look at the degeneration where two [2N + 1] collide. As one would expect, counting hypers tells us that the empty sphere decomposes into two empty fixtures. The underlined SU(N ) gauge group is identified with the SU(N ) 2 on the cylinder on the left. One can make the SU(N ) 2 gauge group weakly coupled by degenerating either of the two cylinders shown. Completely degenerating the cylinder on the left turns off the SU(N ) 2 gauge coupling, and leaves just the SU(N ) 1 gauge group supported on the cylinder on the right. Instead, if we degenerate the cylinder on the right, both SU(N ) 1 and SU(N ) 2 factors decouple, and we are left with an empty four-punctured sphere on the left. Notice that the four-punctured sphere cannot be a conformal SU(N ) gauge theory because it contains no hypers. Let us look at this degeneration in more detail. We already saw that the collision of two minimal twisted punctures, [2N + 1], yields a mass-deformed [2 N ] puncture with residue (3.23) and an SU(N ) gauge group. As in (3.23), we take the m i (with m 1 + · · · + m N = 0) to be the mass deformations of the [2 N ] puncture. Similarly, we can consider the OPE of the [2 N ] puncture with the massless [2N − 1, 1] puncture, using the prescription of section 3.3.1. The resulting residue can be diagonalized to the form diag(m 1 , m 2 , . . . , m N ; r 1 , r 2 , . . . , r N ), (4.11) with r 1 + · · · + r N = 0. The r i are related to the m i , but do not vanish if the m i are set to zero. Since all terms in (4.11) are generically different, this boundary condition must correspond to an irregular version of the [1 2N ] puncture. In this case, we have two independent sl(N ) Lie algebras, with mass deformations m i and r i , embedded in the sl(2N ) global symmetry group of the [1 2N ] puncture. Clearly, the sl(N ) factor with masses m i is identified with the sl(N ) global symmetry of the [2 N ] puncture. In particular, if we turn off the m i , we are still left with an sl(N ) factor, with Coulomb branch VEVs r i . This is how the SU(N ) × SU(N ) cylinder arises. Consider also the degeneration where [2N + 1] and [2N − 1, 1] collide. We leave the discussion of the dependence of the gauge couplings on the cross-ratios to section 5, which treats an example covering all the cases of atypical degenerations. SU(2) × SU(2) cylinder Let us now look at a sphere with two minimal untwisted and one minimal twisted puncture, bubbling off a plane. This example is similar to the previous one, but it involves an SU(2) × SU(2) cylinder. Here, the full SU(2) × SU(2) flavour symmetry of the [2N − 3, 1 4 ] puncture is gauged. (For N = 2, the puncture is [1 5 ], and the global symmetry group is enhanced to Sp(2), but only an SU(2) × SU(2) subgroup is gauged.) The sphere to the left contains four free hypers. Degenerating the cylinder decouples both SU(2) factors. As in the previous example, a certain degeneration of the four-punctured sphere turns off only the underlined SU(2) factor. Such a degeneration is as follows. As before, the underlined SU(2) is identified with the SU(2) on the left-hand cylinder. Notice that each fixture contains two hypers charged under one of the two SU(2) gauge-group factors. The hypers in the left fixture are charged under SU(2) 2 . When the left cylinder decouples, the two hypers in the left fixture become free. If we instead degenerate the right cylinder, both SU(2) gauge groups decouple, and we get a 4-punctured sphere with four free hypers.
This is a mass-deformed form of the [2N − 3, 1 4 ] puncture. Here r depends on t, m and n, but does not vanish if these parameters are set to zero. Thus, r and t parametrize the Coulomb branches of the SU(2) 1 and SU(2) 2 gauge groups, respectively. The second degeneration is as follows. Here the middle fixture contains the four hypers, while the left fixture is empty. Degenerating the left cylinder does not completely decouple either of the SU(2) factors, but degenerating the right cylinder decouples both. In this latter degeneration, the resulting four-punctured sphere again contains four free hypers. As we did for the case of the SU(N ) × SU(N ) cylinder, we leave the discussion of the dependence of the gauge couplings on the cross-ratios to section 5. D-shaped quivers Consider the extended Dynkin diagrams for the simply-laced Lie algebras, given in figure 1, where we have indicated the Dynkin label of each node. It is well known that one obtains a conformally-invariant quiver gauge theory by assigning an SU(l i N ) gauge group to the i-th node (whose Dynkin label is l i ), and a hypermultiplet, in the bi-fundamental, to each link. It has not, however, been known whether all of these affine quiver gauge theories can be realized as compactifications of the (2,0) theory. The realization of the affine A n quivers is well-known: compactify the A N −1 theory on a torus with n simple punctures. In this section, we present the analogous six-dimensional realization of the affine D n quivers in the twisted A 2N −1 theory. This was first found by Kapustin [8] using a chain of string dualities. We will first present our construction using twisted punctures, and then compare it with Kapustin's. On the other hand, it is also known that any quiver gauge theory with SU gauge groups which is semiclassically conformal has its gauge groups arranged in the form of a non-affine Dynkin diagram, with SU(N i ) gauge groups on the i-th node and bifundamentals associated to the edges, together with some fundamental flavours at each of the nodes. The realization of the non-affine A n -shaped quivers is known: compactify the A N −1 theory on an untwisted sphere with two regular punctures and a number of simple punctures. At the end of this section, we show how an arbitrary non-affine D n -shaped quiver can be analogously realized in the twisted A 2N −1 theory. Affine D n -shaped quivers The affine D n -shaped quiver arises from the compactification of the twisted A 2N −1 theory on a sphere with four twisted punctures of type [2N + 1] and a number of untwisted simple punctures. Here we show only one of the two ends of the affine D n quiver, since the other end is identical. The SU(2N ) cylinders here represent the nodes with a label "2" in the affine D-series Dynkin diagram in the figure above, and the bifundamental fixtures represent the links. The non-trivial piece is the nodes with a "1" at each end of the Dynkin diagram, which correspond to SU(N ) gauge groups. In the sphere above, this piece is represented by the 5-punctured sphere at the left end of the figure. We have deliberately not degenerated the 5-punctured sphere at the end of the quiver, since the punctures there are in combinations that lead to atypical degenerations, as studied in the previous section. Let us then examine the degenerations of this 5-punctured sphere in detail. The Lagrangian field theory arises only in an "atypical" degeneration (in the nomenclature of the previous section). Degenerations of the 5-punctured sphere Degeneration A.
The only degeneration that can be understood in the usual, non-atypical sense is the following: the SU(2) and the SU(N ) gauge groups shown here become weakly coupled as their respective cylinders degenerate. The representations for the matter above are those for the product SU(2) × SU(N ) × SU(2N ). The next five degenerations are all atypical. Degeneration B. In this degeneration all of the dynamics is supported on the middle fixture, which is a gauge-theory fixture. Here, the underlined SU(N ) is identified with the SU(N ) cylinder on the right-hand side. It goes to zero gauge coupling when either cylinder pinches off. Degeneration D. In this degeneration, the two SU(N ) gauge couplings become equal when the cylinder on the right pinches off. The moduli space of coupling constants vs. the complex structure moduli space Let us locate the punctures at fixed positions z 1 , . . . , z 5 . The ring of meromorphic functions on M 0,5 consists of all rational functions of the two independent cross-ratios, s 1 and s 2 . Then the compactified moduli space of the 5-punctured sphere, M 0,5 , is obtained by blowing up the CP 1 × CP 1 described by s 1 and s 2 at 3 points; the result is a del Pezzo surface, dP 4 . The boundary divisor consists of 10 rational curves, D ij , 1 ≤ i < j ≤ 5, corresponding to the locus where the points z i and z j collide. Each D ij has self-intersection number −1, and intersects precisely three others: D ij · D kl = +1 for i, j, k, l all distinct. (5.3) The moduli space of the coupling constants, however, is not M 0,5 . Instead, it is a 4-sheeted branched cover X 5 → M 0,5 , branched over the compactification divisor. We will be more precise about the nature of the ramification below, but X 5 is most effectively described as the rational surface whose ring of meromorphic functions consists of all rational functions of y, w, defined in (5.4). The UV gauge couplings of our two decoupled gauge theories are given in (5.5), where q = 0, ∞ correspond to a weakly-coupled SU(N ) gauge group and q = 1 is the point where the dual SU(2) gauge group is weakly coupled. There is a natural action of the dihedral group, D 4 , on our family of gauge theories, generated by an element α acting on y and w. While they are easy to compute from (5.2), (5.4) and (5.5), we indicate, in table 1 (whose columns are the divisor, the number of sheets, and the values of (q 1 , q 2 )), the number of sheets over a generic point on the divisor and the behaviour of the gauge couplings (5.5) on the pre-images of each of the components of the compactification divisor. E.g., on one of the pre-images of D 34 , q 1 ≡ 1, while q 2 varies. On the other pre-image, q 1 varies, while q 2 ≡ 1. From these, we easily see that the behaviour of the gauge theory, at each of the degenerations discussed in the previous subsection, is as we claimed. For instance, on D 13 ∩ D 24 (or D 14 ∩ D 23 ), we have y = 0, w = ∞ (y = ∞, w = 0) and hence q 1 = q 2 = −1. Comparison to Kapustin's work In [8], Kapustin realized the affine D-shaped quiver in Type IIA string theory. As always, consider 2N D4-branes extending along directions 01236, and k NS5-branes extending along directions 012345. Furthermore, introduce a suitable orbifold whose action includes a spatial reflection. This particular orbifold is known to be magnetically charged under the B-field, like an NS5-brane. The M-theory lift of the configuration is then given by 2N M5-branes wrapped on a torus parametrized by z, together with the M-theory orientifold action z → −z, which has four fixed points. Each of the two orbifold planes lifts to a pair of fixed points, each pair having an M5-brane on top of it. We also have k M5-branes intersecting the torus.
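As a quick combinatorial check of the intersection pattern stated around (5.3) above: D ij meets D kl precisely when the index pairs are disjoint, so each of the 10 boundary divisors intersects exactly three others. A minimal Python verification:

import itertools

divisors = list(itertools.combinations(range(1, 6), 2))
assert len(divisors) == 10            # the 10 boundary curves D_ij

for d in divisors:
    partners = [e for e in divisors if set(d).isdisjoint(e)]
    assert len(partners) == 3         # each D_ij meets precisely three D_kl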
We can move the two M5-branes away from the fixed points, and the final configuration becomes 2N M5-branes on a torus with the orientifold action z → −z, plus k + 2 M5-branes. We can take the decoupling limit. Each of the M-theory orientifold fixed points becomes a twisted simple puncture of type [2N + 1]; the torus divided by z → −z is a sphere; and the intersection with an M5-brane is an untwisted simple puncture of type [2N − 1, 1]. Thus we have the 6D theory of type A 2N −1 on a sphere with four twisted punctures of type [2N + 1] and k + 2 untwisted simple punctures of type [2N − 1, 1], which reproduces our previous analysis. We saw above that the degeneration limit where an untwisted simple puncture collides with a twisted simple puncture corresponds to the point where the two gauge couplings of an SU(N ) × SU(N ) gauge group become equal. In the M-theory construction, this corresponds to the fact that to take the IIA limit, two of the fixed points need to be paired, with an M5-brane on top of them. The cylinder connecting the two full punctures has one side spontaneously broken to SU(n 1 ) × SU(n 2 ), and the other to SU(n 1 + n 2 ). So, in the 4D limit, it supports an SU(n 1 ) × SU(n 2 ) gauge group. The combined system hence realizes the non-affine D-shaped quiver (5.8). SU(4) and Sp(2) gauge theories As an application of the A 3 twisted theory, we study below the S-duality of the SU(4) and Sp(2) superconformal gauge theories with matter in all allowed combinations of the fundamental and antisymmetric representations. The full tables of twisted and untwisted punctures, cylinders and fixtures of the A 3 theory are shown in appendix A.1. Finally, we can take both fixtures from the twisted sector. This classic example of Argyres-Seiberg duality is realized in the untwisted sector of the A 3 theory. Degeneration A. • The new SCFT with ∆(u) = 4 is obtained from the A 3 theory on a sphere with an untwisted puncture of type [2, 1 2 ] and two twisted punctures of type [2 2 , 1]. This realization comes with a free half-hypermultiplet in the fundamental of the SU(2) flavour symmetry. This is the example we studied in section 6.2.3. On a new rank-1 SCFT with ∆(u) = 3 To obtain the new SCFT with ∆(u) = 3, we need to extend our analysis to the twisted A 2n theory. The Z 2 twist of the A 2n theory is particularly subtle, as emphasized in [13]. Hence, we prefer to postpone a systematic analysis of the twisted A 2n theory. Still, it is possible to show how to obtain the missing new SCFT from a 6D construction. In [11], the new theory with ∆(u) = 3 is introduced in the following way. Consider the SU(3) gauge theory with one hyper in the fundamental and one hyper in the symmetric tensor representation. The S-dual theory is an SU(2) gauging of the new SCFT, coupled to n half-hypermultiplets in the doublet. The field-theory arguments in [12] constrain n to be 0 or 2, and require the flavour symmetry h of the SCFT to satisfy k h = (8 − n)/I, where I is the index of the embedding of su(2) in h. Now, the tensor product of two fundamentals of SU(3) decomposes as the direct sum of a fundamental plus a symmetric representation. The tensor product can in turn be obtained from the bifundamental of a product group SU(3) 1 × SU(3) 2 , by taking a diagonal subgroup SU(3) diag , such that SU(3) diag is embedded in SU(3) 1 in the standard way, but embedded in SU(3) 2 with the action of complex conjugation, i.e., the nontrivial outer automorphism.
So, consider the A 2 theory on a fixture with a simple puncture and two full punctures, which by itself simply gives rise to a bifundamental. Then, the SU(3) gauge theory with matter in the 1(3) + 1(6) can be realized by connecting the full punctures in the fixture by a cylinder with a Z 2 twist line looping around it. In other words, we have the A 2 theory on a torus with one simple puncture and a Z 2 twist loop. See the left side of the figure below (labelled SU(3); the right side is labelled SU(2)). In the S-dual frame, shown on the right side of the figure, we have a fixture with an untwisted simple puncture and two twisted full punctures, and the full punctures are connected by a cylinder with a Z 2 twist line along the cylinder. Clearly, this gives a weakly-coupled SU(2) gauge field coupled to the fixture. Also, the flavour symmetry group of the fixture must contain the explicit SU(2) 2 × U(1) as a subgroup. Thus, based on our findings, the fixture may be: • an interacting SCFT with flavour symmetry group H ⊃ SU(2) 2 × U(1) (if n = 0), or • an interacting SCFT with flavour symmetry group H ⊃ SU(2) 2 × U(1), plus free hypers in the (2, 1) + (1, 2) of SU(2) 2 and neutral under U(1) (if n = 2). To see which of these two possibilities is the right one, recall [18] that when the Riemann surface has two twisted (or untwisted) full punctures with flavour symmetry G, the holomorphic moment maps µ 1 , µ 2 of the two G-actions on the Higgs branch must satisfy tr µ 1 2 = tr µ 2 2 . (7.1) Let us see what happens if n = 2. In this case, the Higgs branch is X × H 1 × H 2 , where X is the Higgs branch of the interacting SCFT with an action of SU(2) 1 × SU(2) 2 , and H 1,2 is the Higgs branch for the SU(2) 1,2 free hypers, respectively. Then we have tr µ i 2 = tr µ X,i 2 + tr µ H i 2 (7.2) for i = 1, 2. However, tr µ 1 2 depends on the position on H 1 , but not on the position on H 2 , while for tr µ 2 2 the opposite is true. Hence, n = 2 does not satisfy the condition (7.1). Then, we conclude that n = 0, and so the A 2 fixture with one untwisted simple puncture and two twisted full punctures contains just the new interacting rank-1 SCFT with ∆(u) = 3.
19,313
2015-05-01T00:00:00.000
[ "Mathematics", "Physics" ]
Bistable Dynamics Underlying Excitability of Ion Homeostasis in Neuron Models When neurons fire action potentials, dissipation of free energy is usually not directly considered, because the change in free energy is often negligible compared to the immense reservoir stored in neural transmembrane ion gradients and the long–term energy requirements are met through chemical energy, i.e., metabolism. However, these gradients can temporarily nearly vanish in neurological diseases, such as migraine and stroke, and in traumatic brain injury from concussions to severe injuries. We study biophysical neuron models based on the Hodgkin–Huxley (HH) formalism extended to include time–dependent ion concentrations inside and outside the cell and metabolic energy–driven pumps. We reveal the basic mechanism of a state of free energy–starvation (FES) with bifurcation analyses showing that ion dynamics is for a large range of pump rates bistable without contact to an ion bath. This is interpreted as a threshold reduction of a new fundamental mechanism of ionic excitability that causes a long–lasting but transient FES as observed in pathological states. We can in particular conclude that a coupling of extracellular ion concentrations to a large glial–vascular bath can take a role as an inhibitory mechanism crucial in ion homeostasis, while the pumps alone are insufficient to recover from FES. Our results provide the missing link between the HH formalism and activator–inhibitor models that have been successfully used for modeling migraine phenotypes, and therefore will allow us to validate the hypothesis that migraine symptoms are explained by disturbed function in ion channel subunits, pumps, and other proteins that regulate ion homeostasis. Introduction The Hodgkin–Huxley (HH) model is one of the most successful models in mathematical biology [1]. This formalism, i.e., an HH-type model, describes voltage changes across cell membranes that result in excitability. Not only neurons are excitable cells; also myocytes, pancreatic β-cells, and even a plant cell (Chara corallina) exhibit excitable dynamics [2][3][4]. The dynamic range of phenomena includes single action potentials (spikes), periodic spiking, and bursting (slow modulation of spiking). For example, in pancreatic β-cells bursting is induced by a calcium current [4,5]. A more complete treatment of this phenomenon, however, also requires the inclusion of Na+/K+ pumps [6]. The dynamics of ion pumps and ion concentrations is also crucial for cardiac alternans (periodic beat-to-beat variations) and higher-order rhythms in the ischemic ventricular muscle [7][8][9]. In the literature such augmented HH-type models are also called second-generation HH models [10]. In the context of certain pathologies of the brain, whose fundamental dynamic structure we study here, we prefer the simpler name 'ion-based' models. This indicates that ion concentrations are major dynamical, that is, time-dependent variables. Their dynamical role in neuron models goes beyond merely modulating spiking activity. Ion dynamics can lead to a completely new type of ionic excitability and bistability, that is, the phenomena of so-called 'spreading depolarizations' and 'anoxic depolarization', respectively. (Spreading depolarizations are also called 'spreading depression' and we will use both names interchangeably in this paper.)
These depolarized states of neurons are related to migraine, stroke, brain injury, and brain death, that is, to pathologies of the brain in which a transient or permanent break-down of the transmembrane potential occurs [11,12]. Another, even more characteristic, property of this 'twilight state close to death' [13] is the nearly complete flattening of the transmembrane ion gradients. The almost complete break-down of both the membrane potential and, due to reduced ion gradients, the Nernst potentials together causes a nearly complete release of the Gibbs free energy, that is, the thermodynamic potential that measures the energy available to the neurons for normal functioning. We hence refer to this state as a state of free energy-starvation (FES). We want to stress that such phenomena require the broader thermodynamical perspective, because they go beyond the HH description in terms of equivalent electrical circuits in membrane physiology (see discussion). The object of this study is to clarify quantitatively the detailed ion-based mechanisms, in particular the time-dependent potentials, leading to this condition. In fact, early ion-based models have been introduced in a different context to describe excitable myocytes and pancreatic β-cells with variable ion concentrations [14][15][16]. Neuronal ion-based models have been used to study spreading depolarizations (SD) [17][18][19][20][21][22] and anoxic depolarizations [21]. In these phenomenological studies the types of ion dynamics related to the pathologies have been reproduced, but not investigated in a bifurcation analysis. Hence the fundamental phase space structure of these high-dimensional models that underlies the ionic excitability characteristic of SD remains poorly understood. Furthermore, neuronal ion-based models have been used to study seizure activity [23,24] and spontaneous spiking patterns in myelinated axons with injury-like membrane damaging conditions (e.g., caused by concussions) [25,26]. In these models, the phase space structure was investigated, however, only with respect to the modulating effect of ion concentrations on the fast spiking dynamics (seizure activity, injuries), and with respect to spiking node-to-node transmission fidelity (myelinated axons). In this paper we present bifurcation analyses of several minimal biophysical ion-based models that reveal bistability of extremely different ion configurations (physiological conditions vs. free energy-starvation) for a large range of pump rates. In related models certain bistabilities have been explored before. For example, Fröhlich et al. [27][28][29] found coexistence of quiescence and bursting for certain fixed extracellular potassium concentrations and also bistability of a physiological and a strongly depolarized membrane state in a slow-fast analysis of calcium-gated channels. Bistability of similar fixed points has also been found for the variation of extracellular potassium [30] or, similarly, the potassium Nernst potential [31]. Also the effect of pump strength variation has been explored under fixed FES conditions [32]. In this paper, however, we do not treat slow variables as parameters and show bistability of fast dynamics, but instead we address the stability of ion concentrations themselves, which are subject to extremely slow dynamics. This allows us to find bistability of extremely different ion distributions, a feature that distinguishes these two states from the polarized and depolarized states studied in the aforementioned work.
A study that also had significantly different ion distributions was done by Cressman et al. [33]; however, the seizure-like phenomena discussed in their work are quite different (though clinically related) from those presented in this paper. Because of the occurrence of ion state bistability we conjecture that our model describes a threshold reduction of a mechanism that leads to ionic excitability in the form of spreading depolarizations. In other words, we conclude that an important inhibitory mechanism, such as glial buffering or diffusive regulation of extracellular ion concentrations, plays a crucial role in ion homeostasis, and that the Na+/K+ pumps alone are insufficient to recover from free energy-starved states. We show that when the extracellular K+ concentration is regulated by linearly coupling it to an infinite bath, the bistable system changes to an excitable system, which we call ionic excitability. The effect of turning off glial buffering and diffusion has been discussed in more detailed ion-based models [27,29] before, but has not been related to the fundamental phase space structure of the system. Our conclusions have been validated by demonstrating the robustness of the results in a large variety of minimal ion-based models, which all consistently show this insufficiency of Na+/K+ pumps, and also in a very detailed membrane model that has been used intensively for computational studies of spreading depolarizations and seizure-like activity [17,34]. Hodgkin-Huxley (HH) model and reductions A simple ion-based neuron model can be obtained as a natural extension of the Hodgkin-Huxley (HH) model [1]. We list the basic equations of HH that we used for the sake of completeness, and also comment on two often-used model reductions, of which one must be modified for our study. Furthermore leak currents are specified, which is necessary for the extension towards ion-based modeling. In the HH model, single neuron dynamics is described in terms of an electrically active membrane carrying an electric potential V , and the three gating variables n, m and h that render the system excitable. Ion species included are sodium, potassium, and an unspecified ion carrying a leak current, which can be attributed to chloride in our extended model. The rate equations read [1]: C m dV/dt = −I K − I Na − I l + I app together with dn/dt = (n ∞ − n)/t n , dh/dt = (h ∞ − h)/t h and dm/dt = (m ∞ − m)/t m . The top equation is simply Kirchhoff's current law for a membrane with capacitance C m and membrane potential V . I app is an externally applied current that may, for example, initiate voltage spikes. The gating variables n, h, and m are the potassium activator, sodium inactivator, and sodium activator, respectively. Their dynamics is defined by their voltage-dependent asymptotic values x ∞ and relaxation times t x (x = n, m, h). These are given by x ∞ = a x /(a x + b x ) and t x = 1/(w(a x + b x )) for x = n, m and h, where w is a common timescale parameter. Author Summary Theoretical neuroscience complements experimental and clinical neuroscience. Simulations and analytical insights help to interpret data and guide our principal understanding of the nervous systems in both health and disease. The Hodgkin-Huxley-formulation of action potentials is certainly one of the most successful models in mathematical biology. It describes an essential part of cell-to-cell communication in the brain. This model was in various ways extended to also describe when the brain's normal performance fails, such as in migraine hallucinations and acute stroke. However, the fundamental mechanism of these extensions remained poorly understood.
We study the structure of biophysical neuron models that starve from their 'free' energy, that is, the energy that can directly be converted to do work. Although neurons still have access to chemical energy, which needs to be converted by the metabolism to obtain free energy, their free energy-starvation can be more stable than expected, explaining pathological conditions in migraine and stroke. The Hodgkin-Huxley exponential functions a x (V ) and b x (V ) are the standard voltage-dependent rate functions. The individual ion currents read I K = (g l K + g g K n 4 )(V − E K ), I Na = (g l Na + g g Na m 3 h)(V − E Na ) and I Cl = g l Cl (V − E Cl ), with g l,g ion denoting leak and gated conductances. In fact, Hodgkin and Huxley set up their model with an unspecified leak current and non-leaking sodium and potassium channels. As long as ion dynamics is not considered this is mathematically equivalent to specifying the leak current as being partially sodium, potassium and chloride, but it is physically inconsistent because the reversal potentials for the ions differ. In an ion-based approach, however, the main task of the ion pumps under physiological conditions is to compensate for sodium and potassium leak currents (see next section) while gated currents are extremely small in the equilibrium. So at this point leak currents for all ion species are important. The Nernst potentials E ion are given in terms of the ion concentrations [ion] in the intracellular space (ICS) and the extracellular space (ECS), denoted by subscripts i and e, respectively: E ion = RT/(z ion F) ln([ion] e /[ion] i ) for ion = K, Na, and Cl, where z ion is the ion valence. All model parameters are listed in Table 1. The units chosen are those typically used and appropriate for the order of magnitude of the respective quantities. Time is measured in msec, potentials in mV, and ion concentrations in mmol/l. The units for conductance densities imply that ionic and pump current densities are in µA/cm 2 . For better readability we omit the square brackets on the ion concentrations and simply write K i/e , Na i/e , and Cl i/e . For I app = 0 this model is monostable with an equilibrium at V = −68 mV. Note that E Na ≠ V and E K ≠ V imply that under equilibrium conditions neither I Na nor I K vanishes, but only their sum does. Sufficiently strong current pulses can, depending on their duration, initiate single voltage spikes or spike trains. Constant applied currents can drive the system to a regime of stationary oscillations. The minimal current required for this is usually called the rheobase current. The HH model can be reduced to two dynamical variables in a way that preserves these dynamical features. One common simplification [35] is to eliminate the fastest gating variable m adiabatically and set m = m ∞ (V ). Second, there is an approximate functional relation between h and n that is usually realized as a linear fit [36]. The ion-based model presented in this article, however, contains a stable fixed point with large n, and a linear best fit would then lead to a negative h. Therefore we will use a sigmoidal fit h(n) to make sure h is non-negative. After this reduction the remaining dynamical variables are V and n. Minimal ion-based model While in the original HH model ion concentrations are model parameters, in ion-based modeling intra- and extracellular ion concentrations become dynamical variables, which causes the Nernst potentials to be dynamic. The model defined by the rate eqs. (1), (2) and constraint eqs. (15), (16) can straightforwardly be extended to make ion concentrations dynamic since currents induce ion fluxes.
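For concreteness, here is a minimal Python sketch of the gating machinery and the Nernst potentials described above; the rate functions are written in the common modern HH convention and RT/F is evaluated near body temperature, both of which are assumptions rather than the paper's exact Table 1 values:

import numpy as np

# Standard HH rate functions (common modern convention; assumed, not Table 1):
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

def x_inf(a, b, V):          # asymptotic gate value x_inf = a/(a + b)
    return a(V) / (a(V) + b(V))

def tau_x(a, b, V, w=1.0):   # relaxation time t_x = 1/(w*(a + b))
    return 1.0 / (w * (a(V) + b(V)))

def nernst(c_e, c_i, z_ion, RT_over_F=26.64):   # mV; RT/F at ~37 C (assumed)
    return (RT_over_F / z_ion) * np.log(c_e / c_i)

# Illustrative potassium Nernst potential for K_e = 4, K_i = 133 mmol/l:
print(nernst(4.0, 133.0, +1))   # about -93 mV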
However, under those equilibrium conditions found in HH neither I K nor I Na vanishes. Hence we need to include ion pumps [15] to make sure that the rate of change in ion concentration inside the cell (i) and extracellular (e) can vanish in the resting state (dNa i/e /dt = dK i/e /dt = dCl i/e /dt = 0). The rate equations for ion concentrations in the intracellular space (ICS), eqs. (17)-(19), are then obtained by converting the transmembrane currents into concentration change rates. The factor c converts currents to ion fluxes and depends on the membrane surface A m and Faraday's constant F : c = A m /F . Dividing the ion fluxes by the ICS volume v i gives the change rates for the ICS ion concentrations. The pump current I p represents the ATP-driven exchange of ICS sodium with potassium from the extracellular space (ECS) at a 3:2 ratio. It increases with the ICS sodium and the ECS potassium concentration. Chloride is not pumped. We are using the pump model from [23,24], in which r is the maximum pump current. As a consequence of mass conservation, ion concentrations in the ECS can be computed from those in the ICS [33], with the ECS volume v e ; superscript zero indicates initial values. Since all types of transmembrane currents, i.e., also the pumps, must be included in eq. (1) for the membrane potential, we have to add the net pump current I p , which yields eq. (23): dV/dt = −(I K + I Na + I Cl + I p )/C m . The rate equations for the ion-based model are thus given by eqs. (2), (17)-(19), (23). These rate equations are complemented by the gating constraints, eqs. (15), (16), and the mass conservation constraints expressing K e , Na e and Cl e through the ICS concentrations. Dynamic ion concentrations imply that the Nernst potentials in eqs. (11)-(13) are now dynamic (see eq. (14)). The additional parameters of the ion-based model are listed in Table 2. The morphological parameters A m and v i are taken from [17]. In cortical ion-based models, the extracellular volume fraction f = v e /(v e + v i ) ranges from 13% in [17] to 33% in [21]. In experimental studies, f is about 20%, a value that can increase, for example, in focal cortical dysplasias type II, a frequent cause of intractable epilepsy, to 27% [37] or during sleep to 32% (the latter only if we transfer the increase observed in mouse data to human) [38]. It is important to note that in experimental studies, the extracellular volume fraction refers to the fraction with respect to the whole tissue, which also includes the glial syncytium. Assuming equally sized neuronal and glial volume fractions of 40% each, an experimentally measured value of 20% would in our model, which does not directly include the volume of the glial syncytium, correspond to f = 0.33 or 33%. We choose an intermediate value of 25% for f , but address the influence of the volume ratio in Sec. Results. We prefer to give these morphological parameters in the commonly used units which are appropriate to their order of magnitude rather than unifying all parameters, e.g. the cell volume is given in µm 3 instead of l, to which ion concentrations are related. Consequently c from Table 2 must be multiplied by a factor of 10 to correctly convert currents to change rates for ion concentrations in the given units. Because of the extremely small value of c the membrane dynamics, i.e., the dynamics of V and n, is five orders of magnitude faster than the ion dynamics. A consequence of this large timescale separation is that the system will attain a Donnan equilibrium when the pumps break down. The Donnan equilibrium is a thermodynamic equilibrium state (not to be confused with merely a fixed point, though it is one) that is reached for ion exchange across a semipermeable membrane.
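A sketch of the pump current and the mass-conservation bookkeeping follows. The sigmoidal pump expression is a Cressman-type form (our assumption for the display taken from refs. [23,24]), and the numbers in the example are placeholders, not Table 2 values:

import numpy as np

def I_pump(Na_i, K_e, rho):
    """ATP-driven 3:2 Na/K exchange; grows with ICS Na and ECS K.
    Assumed Cressman-type sigmoidal form; rho is the maximum pump current."""
    return rho / ((1.0 + np.exp((25.0 - Na_i) / 3.0)) *
                  (1.0 + np.exp(5.5 - K_e)))

def ecs_from_ics(ion_i, ion_i0, ion_e0, v_i, v_e):
    """Mass conservation: whatever leaves the ICS ends up in the ECS."""
    return ion_e0 + (v_i / v_e) * (ion_i0 - ion_i)

# f = v_e/(v_e + v_i) = 25% corresponds to v_i/v_e = 3: a 10 mmol/l drop in
# ICS potassium then raises ECS potassium by 30 mmol/l.
print(ecs_from_ics(ion_i=123.0, ion_i0=133.0, ion_e0=4.0, v_i=3.0, v_e=1.0))  # 34.0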
Since we have not explicitly included large impermeable anions inside the cell, this is at first surprising. For no applied currents and I_p = 0, the ion rate equations imply that an equilibrium requires all ion currents to vanish. Since conductances are strictly positive, it follows that all Nernst potentials and the membrane potential must be equal. Ion concentrations will then adjust accordingly. However, eqs. (17)-(19) and (23) imply the following constraint on the ICS charge concentration Q_i = K_i + Na_i − Cl_i:

ΔQ_i = (c C_m / v_i) ΔV,

where Δ denotes the difference between the initial and final value of a variable. Since c is very small, changes in ion concentrations must practically satisfy electroneutrality. This condition, together with the equality of all Nernst potentials, defines the Donnan equilibrium, so we see that it is contained in our model as the limit case with no pumps and no applied currents. It should be noted that this observation provides a necessary condition for the correctness of biophysical models. In this extension of the HH model the ion dynamics makes Nernst potentials time-dependent. The simultaneous effect of a diffusive and an electrical force acting on a solution of ions is described more accurately by the Goldman-Hodgkin-Katz (GHK) equation, though. Nevertheless we prefer Nernst currents, because this formulation allows us to use well-established conductance parameters, so that the model is completely defined by empirically estimated parameters. In Sec. Results we will see how GHK currents can be modelled and that the qualitative dynamical behaviour of the system is not affected.

Phase space analysis of ion-based model

In the ion-based model introduced above current pulses can still initiate voltage spikes (not shown). However, extremely strong pulses, in fact comparable to those used in [17] to trigger spreading depolarizations, can drive the system away from the physiological equilibrium to a second stable fixed point that is strongly depolarized (see Fig. 1(a)). This is a new dynamical feature. The depolarized state can also be reached when the ion pumps are temporarily switched off (see Fig. 1(b)). Apart from the depolarization this state is characterized by almost vanishing ion gradients. This free energy-starvation (FES) is reminiscent of the Donnan equilibrium. Extracellular potassium is increased from 4 to more than 40 mmol/l, while the extracellular sodium concentration is reduced from 120 to less than 30 mmol/l. The gated ion channels are mostly open (potassium activation n is 60%), and it is no longer possible to initiate voltage spikes. In this section we will present a phase space analysis of the model and derive conditions for the observed bistability between a physiological equilibrium and a state of FES. Note that the transition from the physiological state to FES happens via ion accumulation due to spiking, and we will see in Sec. Results that indeed the membrane's ability to spike is a necessary condition for the bistability. Similar processes of ion accumulation were regarded as unphysiological in modelling of cardiac cells [8], but are familiar in cortical neurons, where ion accumulation is central to seizure-like activity [24,33] and spreading depression (SD) [17]. In fact, we will briefly demonstrate how the bistability relates to local SD dynamics.

Symmetry of the ion-based model. Prior to a bifurcation analysis we need to discuss a conservation law (symmetry) of eqs. (17)-(19), (23).
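To make the Donnan limit concrete, the conditions just described (equal Nernst potentials plus electroneutral concentration changes and mass conservation) can be solved numerically. The initial concentrations below are textbook-like placeholders, not the paper's Table 1 values; the volume ratio corresponds to f = 25%.

```python
from scipy.optimize import fsolve

# Illustrative initial concentrations (mmol/l), ICS:ECS volume ratio 3:1.
K_i0, Na_i0, Cl_i0 = 125.0, 27.0, 10.0
K_e0, Na_e0, Cl_e0 = 4.0, 120.0, 130.0
VI_OVER_VE = 3.0

def ecs(x_i, x_i0, x_e0):
    """Mass conservation: the ECS concentration follows from the ICS one."""
    return x_e0 + VI_OVER_VE * (x_i0 - x_i)

def donnan_conditions(x):
    K_i, Na_i, Cl_i = x
    K_e  = ecs(K_i,  K_i0,  K_e0)
    Na_e = ecs(Na_i, Na_i0, Na_e0)
    Cl_e = ecs(Cl_i, Cl_i0, Cl_e0)
    return [
        K_e / K_i - Na_e / Na_i,   # E_K = E_Na
        K_e / K_i - Cl_i / Cl_e,   # E_K = E_Cl (valence -1 flips the ratio)
        (K_i + Na_i - Cl_i) - (K_i0 + Na_i0 - Cl_i0),  # electroneutral change
    ]

K_i, Na_i, Cl_i = fsolve(donnan_conditions, x0=[100.0, 60.0, 30.0])
# With these placeholders K_e ends up above 40 mmol/l and Na_e below 30,
# in line with the FES/Donnan-like state described in the text.
```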
The direct extension of a membrane model to include ion dynamics as presented above naturally leads to a linear dependence of dynamical variables. In our case this is reflected by the following relation for I_app = 0:

V − (F v_i / (C_m A_m)) (K_i + Na_i − Cl_i) = const.

As a consequence the determinant of the Jacobian is always zero and the system is nowhere hyperbolic. For the continuation techniques used by software tools like AUTO [39], however, the inverse Jacobian plays a central role, so they cannot be applied to the system unless this degeneracy is resolved. Furthermore, the phase space structure of such nonhyperbolic systems can be changed with arbitrarily small perturbations, which is why they are called structurally unstable [40]. Note that the linear dependence can be avoided when the rate equation for V contains an additional current with a fixed reversal potential breaking the symmetry. Such, strictly speaking, unphysical currents are indeed often included in neuronal ion-based models [17,24,33,34], but we will rather make use of the symmetry and eliminate one linearly dependent variable. The physiological view on the instability should be as follows. Assume that the system is in its physiological equilibrium and then apply a constant current I_app to the voltage rate eq. (23). Then eqs. (17)-(19) and (23) imply that the equilibrium conditions dV/dt = 0 and dK_i/dt = dNa_i/dt = dCl_i/dt = 0 are contradictory, so the equilibrium will vanish even for arbitrarily small currents. In fact, for any constant and positive I_app the system will evolve in a highly nonphysiological manner, with K_i, Na_i and Cl_e slowly tending to zero. To avoid even the theoretical possibility of such behaviour we will now use eq. (28) to reduce the system and thereby make it structurally stable. We can, for example, eliminate V and express it in terms of the ICS ion concentrations rather than treating V as an independent dynamical variable:

V = V⁰ + (F v_i / (C_m A_m)) Δ(K_i + Na_i − Cl_i).

This was also done in Ref. [15]. The physiological meaning of this reduction is simply that the possibility of unspecified applied currents is ruled out. For instance, a perturbation of the sodium rate eq. (17) should be interpreted as a sodium current. The above constraint describes the simultaneous effect on V. It would be equivalent to apply perturbations to eq. (17) and eq. (23) consistently to model the full effect of an applied sodium current, so the additional constraint should be seen as a consistency condition. (The curves in Fig. 1(a) were computed for a sodium current pulse.) This consistency rule does not at all change the dynamics unless unspecified currents are applied, and even then it practically does not change the dynamics, because any deviation in ion concentrations scales with c and is hence negligible. The structural instability is thus a rather formal feature of the degenerate model, and we remark that its physiological equilibrium is nevertheless stationary. Instabilities that lead to an unphysiological drift of ion concentrations for very long simulation times have been reported and resolved in cardiac cell models [7,8]. Our case is different, though, because the physiological state is a stationary one and the response to moderate stimulation is physiologically realistic. For the bifurcation analyses presented in this paper we have eliminated Na_i rather than V, for numerical reasons. This is completely equivalent, because we only vary the pump rate and morphological parameters. So in our reduction we have replaced rate eq. (17) by the following constraint:

Na_i = Na_i⁰ + (C_m A_m / (F v_i)) (V − V⁰) − (K_i − K_i⁰) + (Cl_i − Cl_i⁰).

The model is then defined by the rate eqs.
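A sketch of the elimination of V, under assumed illustrative morphology and capacitance values (not the paper's Table 2):

```python
F   = 96485.0     # Faraday constant, C/mol
C_m = 1e-6        # specific membrane capacitance, F/cm^2 (typical, assumed)
A_m = 9.2e-6      # membrane surface, cm^2 (illustrative)
v_i = 2.2e-12     # ICS volume, l (illustrative)

V0 = -68e-3                    # reference potential, V
Q0 = 125.0 + 27.0 - 10.0       # reference K_i + Na_i - Cl_i, mmol/l (assumed)

def membrane_potential(K_i, Na_i, Cl_i):
    """Eliminate V: deviations of the ICS charge concentration from its
    reference value are converted to a potential by F*v_i/(C_m*A_m).
    The 1e-3 converts mmol/l to mol/l. With the numbers above the factor
    is roughly 20 V per mmol/l, which is why concentration changes must
    be almost perfectly electroneutral."""
    dQ = (K_i + Na_i - Cl_i) - Q0
    return V0 + F * v_i / (C_m * A_m) * 1e-3 * dQ
```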
(2), (18), (19) and (23) and the constraint eqs. (15), (16), (24)-(26) and (29).

Bifurcation analysis. We have used the continuation tool AUTO [39] to follow the polarized fixed point of the system under variation of the maximal pump rate r. Stability changes and the creation of stable or unstable limit cycles are detected by the software, which helps us to interpret the dynamical behaviour. For a better overview we will extend our bifurcation analysis even beyond the physiologically relevant range. The full bifurcation diagram is presented in Fig. 2. In the (r, V)-plane the fixed point continuation yields a smooth z-shaped curve where unstable sections are dashed. The physiological equilibrium is marked by a green square. For higher pump rates the equilibrium remains stable and becomes slightly hyperpolarized. If r is decreased, the physiological equilibrium collides with a saddle point at r_LP1 = 0.894006 µA/cm² in a saddle-node bifurcation (limit point, LP). In a LP the stability of a fixed point changes in one direction (zero-eigenvalue bifurcation). Thus after LP1 the fixed point is a saddle point with one unstable direction. In a Hopf bifurcation (HB) at r_HB1 = 29.2336 µA/cm² two more directions become unstable. Via another LP at r_LP2 = 34.5299 µA/cm² the last stable direction switches to unstable and the saddle becomes an unstable node. In HBs at r_HB2 = 33.7285 µA/cm² and r_HB3 = 24.6269 µA/cm² the fixed point becomes a saddle and a stable depolarized focus, respectively. The stability is indicated by the (n−, n+)-tuples along the fixed point curve, with n− and n+ denoting the number of stable and unstable directions. In every HB a limit cycle is created. Our model only contains unstable limit cycles that are created in subcritical HBs. In the diagram they are represented by their extremal V values. Such unstable limit cycles are not directly observable, but in the bistable regime they can play a role for the threshold behaviour of the transition from one fixed point to the other. All limit cycles in the model disappear in homoclinic bifurcations (HOM). In a HOM a limit cycle collides with a saddle. When it touches the saddle it becomes a homoclinic cycle of infinite period. After the bifurcation the limit cycle does not exist any more. The limit cycles created in HB1, HB2 and HB3 disappear in HOMs at r = 27.6463 µA/cm², r = 33.7027 µA/cm² and r = 1.024291 µA/cm², respectively. The limit cycle emanating from HB1 collides with the upper (i.e., less polarized) saddle; for the other two HOMs the situation is clear, because there is only one saddle available. Since the limit cycles are all unstable these bifurcation details are physiologically irrelevant, but mentioned for completeness. This bifurcation analysis shows that our model is bistable for a large range of pump rates r_LP1 < r < r_HB3. Strongly depolarized and electrically inactive states of neurons with nearly vanishing ion concentration gradients have been reported in pathological states [11,13], but in real systems such free energy-starvation (FES) is not stable. In the section below we show how this bistability can be resolved.

Ionic excitability. We will now briefly show how the above analyzed model can be modified such that the unphysiological bistability turns into excitability of ion dynamics. For this we follow [24,33] and include an additional regulation term for extracellular potassium. This means that K_e becomes an independent dynamical variable and the constraint eq.
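AUTO performs pseudo-arclength continuation, which can follow a branch around folds. A much simpler natural-parameter sweep, sketched below for a generic system, already illustrates the idea of tracking a fixed point branch and monitoring its linear stability; it is a sketch only and will fail at the limit points themselves, where pseudo-arclength methods are needed.

```python
import numpy as np
from scipy.optimize import fsolve

def jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x."""
    n = len(x)
    J = np.zeros((n, n))
    f0 = np.asarray(f(x))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps * max(1.0, abs(x[j]))
        J[:, j] = (np.asarray(f(x + dx)) - f0) / dx[j]
    return J

def continue_branch(rhs, x0, r_values):
    """Follow a fixed-point branch of x' = rhs(x, r) by sweeping the pump
    rate r and seeding each solve with the previous solution; returns the
    points together with their linear stability (all eigenvalues with
    negative real part)."""
    branch, x = [], np.asarray(x0, dtype=float)
    for r in r_values:
        x = fsolve(lambda y: rhs(y, r), x)
        lam = np.linalg.eigvals(jacobian(lambda y: rhs(y, r), x))
        branch.append((r, x.copy(), bool(np.all(lam.real < 0))))
    return branch
```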
(25) must be replaced by its rate equation, which contains the membrane flux term together with a regulation current I_reg. The regulation term I_reg can be interpreted as a diffusive coupling to an extracellular potassium bath or as a phenomenological buffering term. It takes the following form:

I_reg = λ (K_reg − K_e),

where K_reg is the potassium concentration of an infinite bath reservoir coupled to the neuron, or a characteristic parameter for glial buffering, and λ is a rate constant (values given in Table 3). K_reg takes values of physiological potassium concentrations and hence stabilizes the physiological equilibrium. This is how I_reg regulates ion homeostasis and destabilizes the energy-starved state. If we now stimulate the system with a current pulse or temporarily switch off the pump as we did in Fig. 1, the system no longer remains in the depolarized state, but repolarizes after a long transient state of FES (see Fig. 3). After the repolarization ion concentrations start to recover from FES. Full recovery to the initial physiological values is an asymptotic process which takes very long (about two hours), but the neuron is back to normal functioning already after nine to ten minutes. Similar dynamics is described in numerical [17,34] and experimental SD models. The bistable and excitable dynamics can be nicely compared in a projection of the respective trajectories onto the (Na_i, K_i)-plane. For the bistable model the conditions dK_i/dt = 0 and dNa_i/dt = 0 define three-dimensional hypersurfaces called nullclines. Adding the necessary fixed point conditions on the remaining dynamical variables, namely V = E_Cl and n = n_∞(V), allows us to specify curves that represent these nullclines and only depend on Na_i and K_i. In the buffered model K_e is another dynamical variable and its fixed point condition is K_e = K_reg. Electroneutrality and mass conservation imply that certain (Na_i, K_i)-combinations would lead to a negative Cl_i or K_e. In the plot, these unphysiological configurations are shaded. In Fig. 4 we see that the bistable model has three nullcline intersections, i.e., fixed points, while the buffering term deforms the nullclines so that only one stable fixed point remains. In the bistable case an initial current pulse stimulation (dashed part of trajectory) drives the system into the basin of attraction of the FES state, which it then asymptotically approaches (solid line). After the same stimulation the buffered system performs a large excursion in phase space with extremal ion concentrations comparable to FES, but eventually returns to the physiological equilibrium. This large excursion in the ionic variables characterizes what we refer to as ionic excitability or excitability of ion homeostasis. The simulations presented in this section support the hypothesis that it is caused by the bistability of the unbuffered model. Note that intersections of nullcline curves and trajectories do not have to be horizontal or vertical, since they may (and do) differ in the non-ionic variables. The main purpose of the nullcline curves is to indicate the existence and location of fixed points.

Robustness of results

The ion-based model we have analysed so far has been motivated as a natural extension of the Hodgkin-Huxley membrane model. However, there are different variants of ion-based models [17][18][19][20][21][22][23][24]32] that use different pump and current models, ion content, and ion channels. We will hence address the question how general our results are in this respect.
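In code, making K_e dynamical with the diffusive regulation term amounts to one extra rate equation. The λ and K_reg values below are placeholders for the paper's Table 3 parameters, and the units of λ follow whatever time base the model uses.

```python
def k_e_rate(flux_dK_i, K_e, vi_over_ve=3.0, lam=1.0, K_reg=4.0):
    """Rate of change of extracellular potassium: the membrane contribution
    mirrors the ICS flux term scaled by the volume ratio v_i/v_e, and the
    diffusive regulation I_reg = lam * (K_reg - K_e) pulls K_e toward the
    bath/glial reservoir value. lam and K_reg are placeholder parameters."""
    I_reg = lam * (K_reg - K_e)
    return -vi_over_ve * flux_dK_i + I_reg
```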
Furthermore, we vary the geometry-dependent parameters (membrane surface and extracellular volume fraction) continuously to test their effect on the phase space, too.

Model variants. As we noted before, transmembrane currents are more accurately described as Goldman-Hodgkin-Katz (GHK) rather than Nernst currents, even though we prefer the latter. It is hence important to check which difference the choice of current model makes. To generalize the Nernst currents in eqs. (11)-(13) to GHK currents we assume that both models have the same steady state currents under physiological equilibrium conditions. The GHK version of the sodium current is obtained by replacing the conductances with membrane permeabilities P^l_Na and P^g_Na. To compute these permeabilities we set the GHK current equal to its Nernstian counterpart for the equilibrium conditions given in Table 1. This leads to a common conversion factor from the conductances g^l_Na and g^g_Na to the permeabilities P^l_Na and P^g_Na. With this ansatz we obtain conversion factors for the three different ion species that lead to the conductances listed in Table 4.

[Fig. 3 caption: (a) For a current pulse stimulation as in Fig. 1(a), due to the additional potassium regulation the system returns to the physiological equilibrium after an approximately 60-sec-long FES and subsequent hyperpolarization. (b) Similar dynamics as in (a) is observed for a temporary pump switch-off like in Fig. 1(b).]

There is also a certain freedom in the choice of a pump model. It is a general feature of Na+/K+ pumps that their activity is enhanced by the elevation of ECS potassium and ICS sodium. Still, different models exist, and to investigate the role of the particular pump model we replace the pump from eq. (21), now referred to as I_p,A, with an alternative one from [17], referred to as I_p,B. In order to retain the equilibrium at V = −68 mV we have to set the maximum pump current to r_B = 5.72 µA/cm². This is slightly higher than the previous pump value (r_A = 5.25 µA/cm²), but in the same range. From the rate equations of the HH membrane model (see Sec. Model) it is obvious that the chloride leak current stabilizes the equilibrium membrane potential. To test its stabilizing effect in the context of ion-based modeling we compare models that either do or do not contain this current. We are further interested in the question whether membrane excitability and ion bistability are related. Therefore also the effect of in- and excluding active ion channels is tested. In this section we will only discuss fixed points and their stability, but not the unstable limit cycles belonging to HBs. The results are summarized in Fig. 6, which shows the parameter ranges for bistability and for monostability of a physiological state or FES. The most striking result of this bifurcation analysis is that this bistability occurs in all models with gated ion channels, but not in any model with only leak channels (grey-shaded graphs in the insets of Fig. 5). The comparison of any model with active gates and its leak-only counterpart shows that whenever the physiological equilibrium of the first one exists it is identical to the equilibrium of the latter one. While the physiological state disappears in a LP for all bistable models at small pumping, the fixed points of the leak models remain stable, but depolarize drastically for further decreasing pump rates until the Donnan equilibrium for r_A,B = 0 is reached. The absence of the second fixed point in leak-only models is plausible if we consider Fig. 1 again.
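The conductance-to-permeability conversion can be written down directly: evaluate the GHK current per unit permeability at the physiological equilibrium and divide it into the Nernstian current there. The sketch below is schematic about units; the same concentration and voltage conventions must be used in both calls, and the resulting permeability inherits its units from the conductance.

```python
import numpy as np

R, T, F = 8.314, 310.0, 96485.0

def ghk_per_unit_P(V, c_i, c_e, z):
    """GHK current density per unit permeability (constant-field form),
    with V in volts and u = zFV/RT."""
    u = z * F * V / (R * T)
    return z * F * u * (c_i - c_e * np.exp(-u)) / (1.0 - np.exp(-u))

def conductance_to_permeability(g, V0, E, c_i, c_e, z):
    """Match the GHK current to its Nernstian counterpart g*(V0 - E) at
    the physiological equilibrium V0; the ratio is the conversion factor
    applied identically to leak and gated conductances of one species."""
    return g * (V0 - E) / ghk_per_unit_P(V0, c_i, c_e, z)
```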
The depolarized state is characterized by large ion concentrations K_e and Na_i, which implies an increased pump current (see eq. (21)). Since the differences between the Nernst potentials and the membrane potential are even smaller in the depolarized state, higher, hence gated, conductances are required to compensate for the pump currents and maintain the depolarized state. Besides the requirement of active ion channels, the bistability is a very robust feature of these simple ion-based models. Let us now consider the effect of the different model features on the minimal physiological pump rate, i.e., the pump rate required for a stable physiological fixed point, and the recovery pump rate that destabilizes the depolarized state of FES and allows the neuron to return to physiological conditions. These rates are the lower and upper limits of the bistable regime, and low values are physiologically desirable. In Fig. 6 we see that pump model A, GHK currents and chloride each lead to a lower minimal physiological pump rate (see the inset) and a lower recovery pump rate than pump model B, Nernst currents and the exclusion of chloride. Quantitative differences should be noted, though. The inset of Fig. 6 shows that all models with pump A have lower minimal physiological pump rates than models with pump B. So the stability of the physiological equilibrium with respect to pump strength reduction depends mostly on the choice of pump. On the other hand, four of the five lowest recovery pump rates are from models that include chloride. In fact, it is only the combination of both the GHK current model and pump A that makes the recovery threshold of the chloride-excluding model 7 slightly lower than that of the chloride-including model 2. However, one should note that even the lowest recovery pump rate is as high as r = 14.3 µA/cm² (model 8). This is still an almost threefold increase of the normal rate. So even if we assume pump enhancement due to additional mechanisms, for example increased cerebral blood flow, the threshold for recovery from FES seems to be too high. Thus it is true for a large class of ion-based neuron models that realistic neuronal homeostasis cannot rely on the Na+/K+-ATPase alone, but rather on a combination of ion pumps and further regulation mechanisms like glial buffering. There is another effect of chloride to be pointed out. In Fig. 5 we see that it raises the Donnan equilibrium potential (see potentials at r = 0 µA/cm²) significantly. To understand this effect, note that without chloride electroneutrality forces the sum of ΔK_e and ΔNa_e to be zero, while the presence of the decreasing anion species Cl_e implies Δ(K_e + Na_e) < 0. According to eq. (14) this leads to lower Donnan equilibrium Nernst potentials E_K and E_Na, and consequently to a lower membrane potential. Since the conditions of FES for physiological pump rate values are very close to the Donnan equilibrium, this depolarized fixed point is shifted in the same way. The effect of the current model on the two characteristic pump rates is less pronounced than that of chloride or the pump choice. It lowers the minimal physiological rate more than chloride, but not as much as a pump change from model B to A. Its effect on the recovery threshold is the weakest. While above we describe and investigate minimal Hodgkin-Huxley model variants to obtain SD behavior in the simplest neuron model types, in the current literature biophysically much more detailed neuron models have been developed for this phenomenon.
We do not intend to investigate such detailed models thoroughly, but as an example that further demonstrates the robustness of our results, we also replicate our results with a much more detailed membrane model as first described by Kager et al. [17]. This detailed model contains five different gated ion channels (transient and persistent sodium, delayed rectifier and transient potassium, and NMDA receptor gated currents) and has been used intensively to study spreading depolarizations and seizure-like activity. In fact, one modification is required so that we can replicate our previous results. The detailed model contains an unphysiological so-called 'fixed leak' current that has a fixed reversal potential of −70 mV and no associated ion species. This current only enters the rate equation for the membrane potential V and thereby implies that V = −70 mV is a necessary fixed point condition. In other words, the type of depolarized fixed point that we have found in the simpler model is ruled out by this current. If we, however, replace this unphysical current with a chloride leak current as in our model (see eqs. (13), (19), (23)) and furthermore neglect the glial buffering model used in Ref. [17], we find the same type of bistability as in our model.

[Fig. 6 caption: Overview of the parameter regimes for bistability, polarized and depolarized stability for the different models (1)-(8).]

The fixed point continuation of this model (for a complete list of rate equations see [17,34]) in Fig. 7 shows that again FES conditions and a physiological state coexist for a large range of pump rates. This model has slightly different leak conductances and equilibrium ion concentrations, and consequently the characteristic pump rates also differ from ours. However, the only important thing to note here is that the recovery pump rate defined by the subcritical Hopf bifurcation of the upper fixed point branch is large compared to the physiological value (marked by the green square), so also in this rather different membrane model recovery from FES due to pump enhancement is practically impossible. The limit cycle continuation also bears a strong similarity to the one in Fig. 2, with the only main difference being that the limit cycles of the detailed model are stable in two narrow parameter regimes (see solid circles in Fig. 7). We remark that the physiologically irrelevant unstable fixed point branches in Fig. 7 do not connect in a saddle-node bifurcation, but saturate for very high pump rates. The occurrence of the same type of bistability in a Hodgkin-Huxley-based ion model and this very detailed one, and also the similarity of the SD trajectories in Fig. 3 to those presented in [17], support the physiological relevance of the minimal ion-based ansatz that we developed and follow in this paper. Moreover, we assume that bistability is a universal feature also for other more detailed membrane models, which are yet more elaborate variants of the model described by Kager et al. [17]. We will hence use the model from Sec. Model for further investigations.

Variation of membrane surface and extracellular volume fraction. After the overview of different variants of ion content, ion channels, pumps and current models we finally address the role of the neuron geometry. Therefore we vary the membrane surface and the extracellular volume fraction in the model from Sec. Model.
For the surface variation we introduce the relative surface size parameter x_A and replace A_m with A_m x_A, which implies the replacement c → c x_A (see eq. (20)) wherever c occurs, i.e., in the ion rate eqs. (18) and (19) and the sodium constraint eq. (29). The extracellular volume fraction, typically denoted as f, is defined as

f = v_e / v_tot,

where v_tot = v_i + v_e is the total volume of the system. When f is varied, the above expressions for v_i/e must be inserted in both ion rate eqs. (18) and (19), and in all ion constraint eqs. (24)-(26) and (29). The surface parameter x_A is varied from 0.1 to 10, f is varied from 2% to 50%. The standard values of these parameters are x_A = 1 and f = 25%, and parameters are understood to take these values when they are not varied. We start from the bifurcation diagram of Fig. 2 and perform two-parameter continuations of the detected bifurcations to find out how the membrane surface and the volume ratio change the bistable regime. The (r, x_A)- and (r, f)-continuation curves are shown in the left and right plots of Fig. 8. We see that the x_A variation has hardly any effect on the bifurcation values of r. This can be understood from the structure of the model. The fixed point curve is defined by setting the rate eqs. (2), (18), (19), (23) to zero, together with the constraint eqs. (15), (16), (24)-(26) and (29). When x_A is varied, the only modification to these conditions is in eq. (29) (the sodium constraint). But this modification is of order O(10⁻⁵) and practically does not affect the shape of the fixed point curve, so, like in Fig. 5, the limit point bifurcations LP1 and LP2 are almost not changed.

[Fig. 7 caption: Fixed point continuation of the detailed model [17]. The physiological equilibrium is at r = 13 µA/cm², the minimal physiological pump rate is r = 9.8 µA/cm², and the recovery rate is r = 107 µA/cm². The limit cycle emanating from the HB undergoes four saddle-node bifurcations of limit cycles (indicated by the stability changes, but not explicitly labelled) before it disappears in a homoclinic bifurcation (HOM).]

The Hopf bifurcations could be shifted, but a rescaling of the (initial and dynamical) ion concentrations by x_A transforms the rate and constraint equations such that x_A only appears in the pump currents. Their derivatives are then multiplied by x_A, but for all HBs the pumps are saturated and hence x_A does not contribute to the Jacobian. The variation of f, however, does change the width (with respect to r) and the threshold values of the bistable regime. A small value of f (corresponding to a small extracellular space) reduces the recovery pump rate, and also increases the minimal physiological pump rate. This means that both depolarization and recovery are enhanced. However, the minimal physiological pump rate is much less affected than the recovery pump rate, so basically a big cell volume supports recovery from the depolarized state. It is known that in spreading depression (SD), where metastable depolarized states that resemble the energy-starved fixed point of our model occur, the osmotic imbalance of ICS and ECS ion concentrations leads to a water influx that makes the cells swell. Our analysis shows that such a process helps the neuron to return to its physiological equilibrium. Extracellular volume fractions of down to 4% are reported in SD, but even for such extreme volume fractions the required recovery pump rate is too high for pump-driven recovery of the neuron (see the lowest value for r_HB3 in the right plot of Fig. 8). We remark that the bifurcation curves in Fig.
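The two geometry parameters enter the equations in a very localized way, which the following helper makes explicit:

```python
def geometry(f, v_tot, c0, x_A=1.0):
    """Map the extracellular volume fraction f = v_e/(v_e + v_i) and the
    relative surface size x_A onto the quantities that actually appear in
    the model: the compartment volumes and the rescaled conversion factor
    (replacing A_m by x_A * A_m is equivalent to c -> x_A * c)."""
    v_e = f * v_tot
    v_i = (1.0 - f) * v_tot
    return v_i, v_e, x_A * c0
```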
8 do not saturate for f > 50% but, except for the LP1 curve, which remains very low, bend down, probably due to an approximate symmetry of ICS and ECS ion concentration dynamics. In summary, also the analysis of different cell geometries confirms that ion homeostasis cannot be provided by Na+/K+ pumps alone. For example, in computational models of dynamically changing pump rates due to oxygen consumption, maximal rates of twice the physiological values are considered [22].

Discussion

Computational neuroscience complements experimental and clinical neuroscience. Simulations help to interpret data and guide a principal understanding of the nervous system in both health and disease. The HH formulation of excitability was "so spectacularly successful that, paradoxically, it created an unrealistic expectation for its rapid application elsewhere", as Noble remarked [9]. While his statement refers to modeling of cardiac cells, it certainly holds true also for neurological diseases and brain injury [11,13]. In both fields, the incorporation of the Na+/K+ pump in the original excitability paradigm formulated by Hodgkin and Huxley is of major importance. The fundamental structure of such models has to our knowledge not been exploited in neuroscience beyond merely modulating spiking in epileptiform activity [23,24] or in models that have energy-starved states [17][18][19][20][21][22]32], yet without investigating the fundamental bifurcation structure. As we stressed in the introduction, this extension of the original HH model enforces a physical or rather thermodynamical perspective, which was, of course, the starting point of Hodgkin and Huxley, too. For instance, we also considered the Goldman-Hodgkin-Katz (GHK) current equation, which is derived from the constant field assumption applied to the Nernst-Planck equation of electrodiffusion. Electroneutrality is important to consider, as can be seen by the indirect insertion of impermeable counter anions, reflected only in the observation of a thermodynamic Donnan equilibrium. Furthermore, a thermodynamic description of osmotic pressure (which would require a direct insertion of a concentration A^n− of a counter anion with valence n) and corresponding changes in cell volume can be included. There are further physical mechanisms that may alter the dynamics in biophysical ion-based models. At the same time, we have to avoid "an excruciating abundance of detail in some aspects, whilst other important facets […] can only be guessed" [41], like using various new currents but guessing the correct value of the valence n of an impermeable counter anion. For this reason, we decided to use the original ion currents from the HH model. The comparison of our results to a physiologically more realistic and much more detailed membrane model in Fig. 7 supports the assumption that the basic structure will not be changed by just adding or modifying gating. This question has also been addressed experimentally and in simulations by showing that only the simultaneous blockade of all known major cation inward currents prevented hypoxia-induced depolarization with FES [17,42]. In the model of [17], five different Na⁺ currents were investigated. Of course, to apply our model to a particular pathological condition, like migraine, which is a channelopathy [43,44] (a disease caused by modified gating), these details will become important. This can easily be incorporated in future investigations.
[Fig. 8 caption: Two-parameter continuations of the bifurcations from Fig. 2. In the left plot the dimensionless surface size parameter x_A is varied; in the right plot the extracellular volume fraction f is changed. The insets show the LP1 curves that mark the minimal physiological pump rate. The pump rates for which the system is bistable range from LP1 to HB3; the HB3 pump rate is required to repolarize a neuron that is in the depolarized equilibrium. The parameter x_A (left plot plus inset) almost does not change the stability of the system, but f (right plot) reduces the recovery pump rate significantly, while the inset shows that the minimal physiological pump rate is much less affected. In each plot and inset the standard parameter value is indicated by the light-blue vertical line.]

Moreover, note that changes in cell volume, which are very important in brain injuries, are in this study only treated by varying it as a parameter. Our bifurcation analysis shows that a whole class of minimal ion-based models is bistable for a large range of pump rates (r_LP1 < r < r_HB3). Bistable dynamics was suggested by Hodgkin to explain spreading depression [11,12], and a corresponding model has been investigated mathematically by Huxley but never been published (cf. Ref. [45]). Dahlem and Müller suggested extending this ad hoc approach, i.e., a single so-called activator variable with a bistable cubic rate function, by including an inhibitory mechanism in the form of an inhibitor species with a linear rate function coupled to the activator [45]. This, of course, leads to the well-known FitzHugh-Nagumo paradigm of excitability type II [46,47], that is, excitability caused by a Hopf bifurcation [48], but it should not be mistaken for a modification of conductance-based excitability in the form of 'first generation' HH-type models and their interpretation as an equivalent electrical circuit. FitzHugh used his equation in this way; he investigated a long plateau as seen in cardiac action potentials. Dahlem and Müller suggested using the same mathematical structure of an activator-inhibitor type model [45] to describe a fundamentally new physiological mechanism of ionic excitability that originates from bistable ion dynamics. Our current results provide the missing link between this ad hoc activator-inhibitor approach, which has been widely used in migraine and stroke pathophysiology [45,[49][50][51][52][53][54][55][56][57][58]], and biophysically plausible models. The major result from this link is the new interpretation of the physiological origin of the proposed inhibitory variable [45]. We wrongly interpreted it as being related to the pump rate [49,51,53,57]. As our ion-based model shows bistable dynamics, we see it as essentially capturing the activator dynamics of an excitable system, and we briefly show in Figs. 2 and 3 that it can be transformed into such a system by the introduction of an inhibitory process. Vice versa, excitable systems can be reduced to bistable dynamics by singular perturbation methods. Such a reduction is referred to as a threshold reduction. From this perspective our model can be interpreted as the threshold reduction of an excitable system, and we conclude that without contact to an ion bath, physically realistic ion-based models miss an important inhibitory mechanism. Our analysis shows that, unlike what we thought before [49,51,53,57], ion pumps alone are insufficient.
If the pump rate is temporarily decreased to less than the minimal physiological rate, the neuron depolarizes, and normal pump activity does not suffice to recover the physiological state. Depending on the particular model the required recovery pump rates range from three times up to more than 30 times the original value. These high values suggest that also more detailed pump models that, for example, include the coupling of the maximal pump rate to oxygen or glucose [22] will not resolve this bistability. It can, however, also be seen that a regulation term for the extracellular ion concentrations that mimics glial buffering and coupling to the vasculature will allow only monostability. An additional diffusive coupling to a bath value in the extracellular rate equations forces all such buffered extracellular species to assume the respective bath concentrations. There are no two points on the solution branch that share the same extracellular potassium concentrations (see Fig. 4). Hence one fixed point is selected, the other state becomes unstable. We consequently suspect that coupling to some bath (glia/vasculature) plays a crucial role in maintaining ion homeostasis and our results from Figs. 3 and 4 confirm that an ion-based model including such coupling will recover from superthreshold perturbations by a large excursion in phase space that is characterized by long transient free energy-starvation.
Characterizing the relationship between peak assistance torque and metabolic cost reduction during running with ankle exoskeletons

Background
Reducing the energy cost of running with exoskeletons could improve enjoyment, reduce fatigue, and encourage participation among novice and ageing runners. Previously, tethered ankle exoskeleton emulators with offboard motors were used to greatly reduce the energy cost of running with powered ankle plantarflexion assistance. Through a process known as "human-in-the-loop optimization", the timing and magnitude of assistance torque was optimized to maximally reduce metabolic cost. However, to achieve the maximum net benefit in energy cost outside of the laboratory environment, it is also necessary to consider the tradeoff between the magnitude of device assistance and the metabolic penalty of carrying a heavier, more powerful exoskeleton.

Methods
In this study, tethered ankle exoskeleton emulators were used to characterize the effect of peak assistance torque on metabolic cost during running. Three recreational runners participated in human-in-the-loop optimization at four fixed peak assistance torque levels to obtain their energetically optimal assistance timing parameters at each level.

Results
We found that the relationship between metabolic rate and peak assistance torque was nearly linear but with diminishing returns at higher torque magnitudes, which is well-approximated by an asymptotic exponential function. At the highest assistance torque magnitude of 0.8 Nm/kg, participants' net metabolic rate was 24.8 ± 2.3% (p = 4e-6) lower than running in the unpowered devices. Optimized timing of peak assistance torque was as late as allowed during stance (80% of stance) and optimized timing of torque removal was at toe-off (100% of stance); similar assistance timing was preferred across participants and torque magnitudes.

Conclusions
These results allow exoskeleton designers to predict the energy cost savings for candidate devices with different assistance torque capabilities, thus informing the design of portable ankle exoskeletons that maximize net metabolic benefit.

Supplementary Information
The online version contains supplementary material available at 10.1186/s12984-022-01023-5.

Robotic assistance in the form of portable, wearable exoskeletons has the potential to increase accessibility to and interest in running by reducing the energy required to participate in the sport. Reducing energy cost would likely also reduce perceived exertion and could improve confidence and perceived competence, both of which are associated with increased enjoyment and frequency of exercise [5,6]. Allowing runners to match the pace of fitter friends could also leverage the benefits of exercising with others [7][8][9]. Recently, various wearable technologies have been developed that successfully reduce the energy cost of running. While passive systems can lower energy cost by moderate amounts, powered exoskeleton devices have the potential to provide larger reductions in the metabolic cost of running by injecting energy into the human-exoskeleton system. Passive supports, which leverage the compliance and resilience of materials to provide a metabolic benefit, have included the Nike Vaporfly marathon shoe (4% reduction) [10]; a passive elastic hip exosuit (5% reduction) [11]; a rubber band connected between the feet (6% reduction) [12]; a passive exo-tendon hip exoskeleton (7% reduction) [13]; and a passive torsional spring hip exoskeleton (8% reduction) [14].
Powered untethered exoskeletons, however, could be especially helpful to individuals like novice or ageing runners who need larger metabolic reductions to perceive or benefit from assistance. Most recently, Witte et al. (2020) used tethered, powered ankle exoskeletons to reduce the energy cost of running by 24.7% over running in the unpowered devices and 14.6% over normal running shoes [15]. In the same study, the tethered ankle exoskeletons were used to emulate passive, spring-like ankle plantarflexion assistance, which was found to only reduce energy cost by 2.1% over running in the unpowered devices [15]. These results indicated that powered, portable ankle exoskeletons could greatly reduce the metabolic cost of running compared to passive ankle exoskeletons and have the potential to reduce energy cost more than devices that assist the hip. The large metabolic cost reductions during running obtained in a previous study by Witte et al. (2020) were achieved by optimizing the timing and magnitude of plantarflexion assistance torque with tethered exoskeleton emulators [15]. These devices have powerful offboard motors that allow for rapid control strategy testing, thus enabling experimenters to identify energetically optimal torque assistance patterns for users in real time through a process known as human-in-the-loop optimization [16]. Torque assistance varied as a percentage of stance time and was parameterized by magnitude of peak torque and three timing nodes: timing of torque onset ("onset time"), timing of peak torque ("peak time"), and timing of torque removal ("off time"). In the previous study by Witte et al. (2020), the optimized running assistance patterns among participants were consistently characterized by large peak assistance torque applied in late stance, but there was greater variation in onset time of assistance between participants. It is unclear how metabolic cost related to each of the optimization parameters, and how much participant customization mattered. Translating the metabolic benefits of powered ankle exoskeletons outside of the laboratory environment introduces a tradeoff between the metabolic penalty of device mass and the amount of assistance provided to the user. If ankle exoskeletons are to become a viable product, the motors, power supply, and control system must be worn on the user. In a portable system, the mass of these components, which was unconstrained in tethered experiments, will strongly impact the user's net metabolic cost, especially mass that is located more distally from the user's center of mass. Furthermore, the mass of the exoskeleton end effector (here, the portion worn on the feet and shank of the user) was fixed in the tethered system but could be scaled as a function of the peak device assistance magnitude in a portable system. Kim et al. (2019) successfully demonstrated that portable, powered devices can provide a net metabolic benefit to the user by reducing the metabolic cost of running by 4% with a lightweight, soft exosuit that assists in hip extension [17]. Hip exoskeletons are an attractive approach to reducing the metabolic cost of running because the penalty associated with mass worn at the hip is low relative to mass worn at more distal locations such as the ankle. However, it is possible that portable, powered ankle exoskeletons can be carefully designed to achieve even greater reductions in the energy cost of running.
To optimally balance the tradeoff between increasing device assistance and increasing the metabolic penalty of added device mass, it is necessary to gain a better understanding of the relationship between the magnitude of ankle exoskeleton assistance and metabolic cost reduction. Characterizing this relationship could allow device designers to estimate and optimize the net metabolic benefit of candidate ankle exoskeletons. The peak torque magnitude provided by ankle exoskeleton assistance may be strongly related to metabolic cost reductions in running. In prior studies to reduce the metabolic cost of walking and running with tethered ankle exoskeleton emulators, optimal peak assistance torques were on the higher end of the allowable range [15,16], suggesting that peak torque has an influence on metabolic cost. Quinlivan et al. (2017) found that net metabolic rate continually decreased with increasing peak ankle exosuit assistance torque during walking, and that this relationship fit a linear model well [18]. Characterizing the relationship between ankle exoskeleton assistance and metabolic cost in a similar manner for running, while optimizing other assistance characteristics at each peak torque level, would allow us to predict the metabolic benefit of a portable exoskeleton design. Peak torque also directly influences the mass requirements of a portable exoskeleton through the device architecture and transmission characteristics, which in turn affect metabolic cost. The metabolic cost of running increases with device mass and its distance from the user's center of mass [19]. Peak torque, unlike the timing of assistance, dictates the strength requirements of the exoskeleton architecture, which will strongly predict the mass of the device. A lightweight, soft exosuit that relies on shear forces to transmit assistance to the user would incur a smaller metabolic cost penalty than a heavier, framed device. However, a rigid framed device would be able to withstand higher peak torques and might transfer those to the user more comfortably, with only nominal shear forces. Peak torque also plays an important role in motor and transmission selection for portable device design, as motor size is strongly associated with stall torque limits [20]. Assuming similar ankle kinematics across torque magnitudes, peak torque also strongly correlates with peak mechanical power, which affects electrical energy consumption and battery size. While mechanical power is also a strong predictor of device performance, it is more difficult to systematically vary due to its dependence on human joint moment and resulting joint velocity throughout the gait cycle. The net metabolic benefit of a portable ankle exoskeleton could be estimated and optimized using knowledge of the relationships between peak assistance torque and metabolic cost reduction, peak assistance torque and exoskeleton mass, and exoskeleton mass and metabolic cost increase. It may be that the benefits of assistance are minimal until a substantial peak assistance torque is reached. Alternatively, it may be that metabolic cost decreases linearly as peak assistance torque increases, as Quinlivan et al. (2017) found in walking [18]. It is also possible that metabolic cost does not decrease (or even increases) once a threshold of peak assistance torque is reached. For example, Kang et al.
(2019) found a quadratic relationship between peak hip assistance torque and metabolic cost during slow walking, where the greatest reduction in metabolic cost was not achieved at the highest assistance magnitude [21]. Knowledge of this relationship, along with a model of device mass, would be useful to predict the net metabolic benefit of a portable ankle exoskeleton. Known relationships between a user's body weight, added mass, the location of mass placement on the user, and metabolic cost during running [19,22] can be used to predict the mass penalty of wearing an exoskeleton. For an existing design with known peak torque capability, the metabolic benefit of assistance torque could be subtracted from the metabolic penalty of carrying the mass to predict the net metabolic cost reduction. Performing this simple analysis before building the device could save substantial time and resources by avoiding costly prototyping of inadequate devices. In the early stages of device design, researchers could also construct an optimization routine to select a motor and transmission that would maximally reduce metabolic cost. The purpose of this study was to characterize the relationship between peak assistance torque and metabolic cost reduction during steady-state running with ankle exoskeletons. Participants underwent human-in-the-loop optimization with a bilateral ankle exoskeleton emulator at four fixed peak assistance torques. The timing of assistance torque (parameterized by onset time, peak time, and off time) was optimized at each peak torque level to minimize metabolic cost. After finding the participant's optimized parameters, we recorded the steady-state metabolic rate of participants while running in the devices with optimized assistance ("assistance"), in the unpowered devices ("zero torque"), and in the device footwear without the exoskeleton attached ("normal shoes"). We then fit a curve to these data to estimate the relationship between peak assistance torque and metabolic cost reduction. A secondary goal of this study was to investigate how optimal timing of plantarflexion assistance varies with peak torque magnitude and participant, as well as to understand the relative effect of each timing parameter on the optimization of metabolic cost. We expect these results to inform the design tradeoffs for portable ankle exoskeletons.

Exoskeleton hardware and control parameterization

Participants wore tethered bilateral torque-controlled ankle exoskeletons with a mass of 1.1 kg each. Exoskeletons were actuated by off-board motors connected via a series elastic Bowden cable transmission (Fig. 1) [23]. Insoles with 4 force-sensing resistors (Nike, Inc.) were used to detect foot strike and toe-off. Strain gauges assembled in a full Wheatstone bridge were calibrated and mounted to the end effectors to measure applied torque about the ankle joint for feedback control. An identical set of the exoskeleton footwear (Nike, Inc.) was used to evaluate the metabolic cost of running without the exoskeleton ("normal shoes"). The parameterization of desired torque patterns followed a similar approach to the powered assistance controller in [15], but with only three parameters that defined the timing of assistance: timing of peak torque (peak time), onset of torque (onset time), and return to zero torque (off time), all as percentages of average stance time. The peak torque magnitude, which was defined as the fourth parameter in [23], was fixed.
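A minimal sketch of this three-node parameterization is shown below (as the text notes next, the nodes are joined by a cubic spline; the clamped end slopes and the zero-torque guard outside the assistance window are assumptions of this sketch, not details given in the paper).

```python
import numpy as np
from scipy.interpolate import CubicSpline

def assistance_torque(percent_stance, onset, peak_time, off, tau_peak):
    """Desired torque (Nm/kg) over stance: zero until onset, rising to
    tau_peak at peak_time, returning to zero at off (all node locations
    in % of average stance time), joined by a cubic spline."""
    cs = CubicSpline([onset, peak_time, off], [0.0, tau_peak, 0.0],
                     bc_type='clamped')
    t = np.asarray(percent_stance, dtype=float)
    return np.where((t >= onset) & (t <= off), cs(t), 0.0)

# Example: generic assistance at 0.6 Nm/kg with the protocol's seed timing.
t = np.linspace(0.0, 100.0, 101)
tau = assistance_torque(t, onset=25.0, peak_time=75.0, off=95.0, tau_peak=0.6)
```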
The three timing nodes were connected by a cubic spline to form an assistance torque curve.

Participant demographics

Three recreational runners (n = 3; 1 F, 2 M; age: 22-41 years; body mass: 57.5-84 kg; height: 1.68-1.84 m) participated in this study (Additional file 1: Table A1). To be eligible for this extensive study design, each participant had previously run a half marathon, was running at least 20 miles per week, and could run comfortably for at least 1 h at a speed of 3.35 m/s (approximately 8 min/mile pace). These criteria ensured that the participant would be able to complete the experimental protocol in the aerobic respiration range, as the standard equation to calculate average metabolic rate is only valid for aerobic respiration [24]. All participants were consistent mid-to-rearfoot strikers at the study pace of 2.68 m/s. The study protocol was approved by the Stanford University Institutional Review Board, and all participants provided written informed consent before participating in the study. Participants were compensated 15 dollars per hour. Two additional participants were consented and participated but did not complete the study protocol due to circumstances surrounding the COVID-19 pandemic. These participants' data were discarded. The sample size of this study was informed by an a priori power analysis and resource tradeoffs. In a previous study, powered ankle exoskeleton assistance led to a metabolic cost reduction of 24.7 ± 6.9% relative to the unpowered condition (n = 11) [15]. Using this result, we found that a sample size of three participants gave a statistical power of 0.85 (two-tailed t-test, α = 0.05). Strict study inclusion criteria and lengthy protocol time (10 sessions lasting 4 h per session) favored a smaller sample size of well-trained participants. We performed two data collections per participant and per peak assistance torque level, one following optimization and one final validation session, to improve the accuracy of within-participant results.

Experimental protocol

Participants experienced 9 or 10 total experimental sessions, during which they ran on a treadmill (Woodway USA, Inc.) at a pace of 2.68 m/s (Fig. 2). Participants took at least 1 day of rest between each session. The first session was a short introductory session in which participants were introduced to the exoskeleton controller. Participants ran at least 5 min in the exoskeletons without assistance torque ("zero torque"), followed by 5 min of generic assistance with timing similar to that from Witte et al. (2020) at a peak torque magnitude of 0.6 Nm/kg, normalized to participant body mass in kilograms [15]. Participants were encouraged to run in the devices (zero torque or assisted) until they felt comfortable running in the exoskeletons (Additional file 2).

Optimization sessions

Following the introductory session, participants experienced 7 or 8 sessions of human-in-the-loop optimization to reduce metabolic rate as described in "Human-in-the-loop optimization" below. Participants fasted for 2 h before each session to reduce the thermic effect of food on metabolic rate measurements within each generation of optimization. This fasting requirement ensured that any metabolic rate measurements were taken after the initial steep increase in the thermic effect of food had passed [25].
The slow decline of the thermic effect of food on metabolic rate over the course of the study was assessed to have a negligible effect on the reported metabolic results, especially because all validation trials were repeated in reverse. During each optimization, the peak assistance torque was fixed. In each experimental session, the optimization phase lasted approximately 1 h (including a 2-min warm-up with generic assistance). Following the final session of optimization at a fixed peak torque level, a series of 6-min validation trials were performed to evaluate the effects of assistance torque compared to zero torque and normal shoes. Each experimental session lasted approximately 2.5-4 h, with approximately 1-1.5 h of running. Four peak assistance torque levels were evaluated to provide sufficient resolution across the search space within the constraints of a reasonable protocol length (up to 10 days). The highest level of peak torque that was found comfortable in pilot testing was 0.8 Nm/kg, normalized to participant body mass in kilograms. This value was slightly higher than the average optimized peak torque of 0.75 Nm/kg from a previous study [15]. The other three peak assistance torque levels were evenly spaced between the maximum value and zero torque: 0.6 Nm/kg, 0.4 Nm/kg, and 0.2 Nm/kg. In the first 3 or 4 experimental sessions of optimization, participants experienced one of the higher assistance levels, beginning with 0.8 Nm/kg for Subject 1 and 0.6 Nm/kg for Subjects 2 and 3. The order of the first and second conditions was altered for two of the participants after the first participant experienced muscle soreness from beginning with the highest assistance level. The multi-session optimization ensured that the participants had adapted to the assistance and that the optimization parameters had fully converged. The optimization parameters were considered to have fully converged once the optimizer step size σ (initially 10) dropped below half of the initial step size. The mean parameters from the end of the first and final session at the same assistance level were also compared to assess convergence. After the longer optimization period at the first assistance torque level, participants then experienced 2 sessions with optimization at a different high assistance torque level (0.6 Nm/kg for Subject 1, 0.8 Nm/kg for Subjects 2 and 3). Two sessions were completed at this second assistance torque level to ensure that participant adaptation translated to a different assistance torque condition, and that the optimizer had fully converged by the same step size criterion.

[Fig. 2 caption: Sample experimental protocol. All participants experienced an introductory session in which they ran in the unpowered exoskeleton ("zero torque") and with assistance. Next, participants experienced several sessions with human-in-the-loop optimization at various fixed peak assistance torque levels. (Subjects 2-3 first experienced four sessions of optimization at a peak torque level of 0.6 Nm/kg, then two sessions at 0.8 Nm/kg.) On the final day of optimization at each peak torque level (indicated by an asterisk), a series of validation trials were performed ("Day-by-Day Validation") to compare the optimized assistance at that peak torque level to zero torque and normal shoes. During the final validation session ("Final Validation"), the optimized assistance strategy from each of the peak torque levels was tested, along with the zero torque and normal shoes conditions.]
Participants then experienced 1 session of optimization at each of the lower assistance levels (0.4 Nm/kg, then 0.2 Nm/kg). These shorter durations of optimization were a result of minimal shift in the optimized parameters and a clear downward trend in optimizer step size. The ordering of peak torque conditions was chosen to maximize participant adaptation and facilitate optimization convergence. Poggensee and Collins (2021) found that novice users require over 100 min to learn how to maximally benefit from walking with ankle exoskeleton assistance, and that users adapt most slowly to peak torque magnitude [26]. Although all three participants in the present study had prior experience running with ankle exoskeleton assistance, additional sessions of optimization at the higher peak torque levels mitigated any effects of adaptation on the measured study outcomes. These training effects were expected to translate to lower peak torque levels. In addition, it is possible that beginning with the lowest peak torque magnitude of 0.2 Nm/kg would have resulted in poor optimization convergence. Low magnitude of assistance torque was expected to have less effect on metabolic rate, which could make it more challenging to identify the optimal timing parameters due to the higher noise-to-signal ratio and interaction effects between peak torque and timing of assistance. During the final experimental session of optimization at a fixed peak torque level, a series of validation trials occurred after the optimization phase ("Day-by-Day Validation"), separated by a minimum break of 10 min. Each validation trial was 6 min in duration, with data collected from the last 3 min. In the first validation trial, participants stood quietly for 6 min to obtain their resting metabolic rate. The participants then ran in three separate validation conditions: assistance (the optimized assistance torque pattern at that peak torque level), zero torque, and normal shoes. These three running conditions were randomized and repeated in reverse ("bidirectional validation") for a total of 6 running validation trials to reduce any effects of ordering on metabolic cost. Participants took at least 2 min of rest between running trials. These data were used to characterize the relationship between peak torque and metabolic cost reduction (referred to as "Day-by-Day Validation"). In some interim sessions at the same peak torque level, validation was also performed to track participant adaptation to assistance (Additional file 1: Fig. A2). Due to some participant scheduling constraints, single-direction validation (trials were not repeated in reverse) or no validation was performed in some of these interim testing sessions. Less than 2% variation in metabolic cost reduction across experimental sessions indicated that the participant had become accustomed to running with the prescribed assistance torque level.

Final validation session

In the final experimental session (referred to as "Final Validation"), the effect of assistance across all 4 torque levels was compared against running with zero torque and normal shoes in a series of validation trials. No optimization occurred during this experimental session. At the beginning of the experimental session, participants stood quietly for 6 min to obtain their resting metabolic rate. Participants then ran in each of the 6 running conditions for 6 min, with data collected from the last 3 min. For each torque level, the subject-specific optimized assistance pattern was applied.
The order of the running conditions was then reversed to improve measurement accuracy and reduce potential effects of ordering. The results of Final Validation were compared with the results of Day-by-Day Validation to ensure that the ordering of optimization sessions did not have an effect on participant adaptation.

Human-in-the-loop optimization

Participants underwent human-in-the-loop optimization to determine the assistance timing parameters that maximally reduced their metabolic cost at each assistance torque level. A covariance matrix adaptation evolution strategy (CMA-ES) was used [15,16]. Each generation of 7 candidate assistance strategies, defined by the three timing parameters discussed above, was sampled from a multivariate normal distribution about the current mean parameter set. Each candidate assistance strategy was applied to the participant for 2 min of running, during which an estimate of steady-state metabolic rate was obtained from raw respirometry data [27]. At the end of each generation, the metabolic rate results were used to update the mean parameter set and the optimization state variables that defined the multivariate normal distribution. The candidate assistance strategies in the next generation were sampled from the resulting distribution. During each optimization phase, participants experienced 4 generations of optimization for a total of 28 torque assistance strategies, equating to 56 min of running. The mean parameter set calculated from the final generation was used as the optimal assistance torque pattern at that peak torque magnitude for validation. For each participant, the optimization was initially seeded with the following set of mean timing parameters: onset time of 25% of stance, peak time of 75% of stance, and off time of 95% of stance. The optimization state variables were initialized using the same approach as in Ref. [16]. The step size (σ) was initialized to 10, and the covariance matrix was initialized to the identity matrix. Peak time and off time were scaled by a factor of two to allow for a finer search, as those parameters had smaller comfortable ranges than onset time during pilot testing. During experimental sessions with continued optimization at the same torque level, the optimized mean parameter set and optimization state variables were carried over from the previous experimental session. At each subsequent torque level, the optimization was seeded with the optimized mean parameter set from the previous torque level, but the optimization state variables were reset to the baseline. During optimization, each of the three timing parameters was restricted to a range that was comfortable in pilot testing: onset time ranged from 5 to 60 percent of average stance time, peak time ranged from 40 to 80 percent, and off time ranged from 60 to 100 percent. After a new generation of assistance strategies was sampled from the current distribution, values sampled outside of the search region were projected onto the constraint boundary. Furthermore, the onset time and off time of torque were constrained to occur at least 20 percent before and after the peak time, respectively.
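For concreteness, the generation-by-generation loop described above can be sketched with the open-source pycma package (an assumption on our part; the authors cite Refs. [15,16] for their implementation, and measure_metabolic_rate below is an illustrative placeholder for the 2-min treadmill measurement):

```python
import cma  # pycma package: pip install cma

def measure_metabolic_rate(x):
    # Placeholder for applying the candidate torque pattern for 2 min of
    # running and estimating steady-state metabolic rate [27]; here a
    # synthetic bowl-shaped cost so the sketch runs end to end.
    onset, peak, off = x
    return 0.001 * (onset - 25)**2 + 0.01 * (peak - 80)**2 + 0.05 * (off - 100)**2

x0 = [25.0, 75.0, 95.0]      # seed: onset, peak, off times (% of stance)
opts = {
    'popsize': 7,            # 7 candidate strategies per generation
    'bounds': [[5, 40, 60], [60, 80, 100]],  # comfort ranges from pilot tests
    'maxiter': 4,            # 4 generations = 28 strategies = 56 min of running
}
es = cma.CMAEvolutionStrategy(x0, 10.0, opts)  # initial step size sigma = 10

while not es.stop():
    candidates = es.ask()    # sample from the current multivariate normal
    costs = [measure_metabolic_rate(c) for c in candidates]
    es.tell(candidates, costs)  # update mean, step size, and covariance

print(es.result.xfavorite)   # mean of the final distribution = optimized timing
```

Note that pycma repairs out-of-bounds candidates internally, whereas the paper projects them onto the constraint boundary, and the paper additionally rescales peak time and off time by a factor of two; both details are omitted here.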
Metabolic rate

The reported outcomes for each condition from a single experimental session are taken as the average of the two 6-min validation trials to reduce the effects of noise. Metabolic rate, the primary outcome of this study, was measured using a respirometry system (Quark CPET, Cosmed), which was calibrated according to manufacturer instructions. Rates of carbon dioxide production and oxygen consumption were measured during the last 3 min of a validation trial and substituted into a standard equation [24] to obtain average metabolic rate. The metabolic rate reported for all running validation trials was calculated by subtracting the metabolic rate of the quiet-standing validation trial from the metabolic rate of the running condition. Percent reduction in metabolic rate relative to zero torque was evaluated by dividing the change in metabolic rate from zero torque by the net metabolic rate of the zero-torque condition. Percent reduction in metabolic rate relative to normal shoes was evaluated by dividing the change in metabolic rate from normal shoes by the net metabolic rate of the normal shoes condition. All metabolic rate results were normalized to participant body mass.

Exoskeleton mechanics

Exoskeleton mechanics were evaluated using the last 3 min of data from each validation trial. Average exoskeleton work was calculated by integrating the exoskeleton torque over ankle angle during the stance period for each stride, then averaging across all strides. Average exoskeleton power was calculated by dividing average exoskeleton work by the average stride time, as no work was performed during swing. Peak exoskeleton power was calculated as the maximum, over an average gait cycle, of the product of torque and ankle angular velocity. All exoskeleton mechanics were normalized to participant body mass.

Step frequency and duty factor

Stride and stance time for a single leg were determined from force-sensing resistors at the heel, hallux (big toe), and first metatarsophalangeal (MTP) joint. Data from the lateral MTP joint sensor were not used because that sensor frequently stayed compressed during swing. Data were collected during the last 3 min of each assistance validation trial. At each assistance torque level, stride time and stance time data were averaged across both legs and validation sessions (Day-by-Day and Final). Step frequency (SF) was calculated from single-leg stride time and reported in steps per minute: SF = 2 · (60 / t_stride). Duty factor (DF) was calculated as the ratio of stance time to stride time: DF = t_stance / t_stride.
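As a sketch of the outcome computations just described (assuming the "standard equation [24]" is the Brockway equation commonly used in this literature; the numbers below are illustrative, not study data):

```python
import numpy as np

def metabolic_power_w(vo2_ml_s, vco2_ml_s):
    # Brockway-type equation (our assumption for the cited standard
    # equation [24]): P [W] = 16.58 * VO2 [ml/s] + 4.51 * VCO2 [ml/s].
    return 16.58 * vo2_ml_s + 4.51 * vco2_ml_s

def net_metabolic_rate(running_p_w, standing_p_w, mass_kg):
    # Subtract quiet standing and normalize to body mass (W/kg).
    return (running_p_w - standing_p_w) / mass_kg

def percent_reduction(assisted, baseline):
    # Negative values indicate a metabolic benefit relative to baseline.
    return 100.0 * (assisted - baseline) / baseline

def step_frequency(t_stride_s):
    return 2 * (60.0 / t_stride_s)      # steps/min, counting both legs

def duty_factor(t_stance_s, t_stride_s):
    return t_stance_s / t_stride_s

# Illustrative numbers only:
standing = metabolic_power_w(5.0, 4.0)
zero_torque = net_metabolic_rate(metabolic_power_w(50.0, 45.0), standing, 70.0)
assisted = net_metabolic_rate(metabolic_power_w(42.0, 38.0), standing, 70.0)
print(percent_reduction(assisted, zero_torque))
print(step_frequency(0.75), duty_factor(0.28, 0.75))
```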
Statistical analyses

We pooled the data from the Day-by-Day Validation and Final Validation sessions for all participants to obtain the mean and standard deviation (SD) of the measured outcomes. To evaluate whether the level of assistance torque had an effect on the measured outcomes, we performed a mixed-effects ANOVA (fixed effect: peak torque; random effect: participant) to account for repeated measures. On measures that showed significant trends, we performed paired, two-sided t-tests comparing each assistance torque condition to the zero-torque condition from the same validation session for that participant, with a Šidák-Holm step-down correction for multiple comparisons (α = 0.05). In addition, an exponential model y = a · [1 − exp(b · x)] was fit to the data relating peak assistance torque to percent change in metabolic rate using iterative non-linear least squares. Ninety-five percent confidence intervals (CI) were calculated for the fit parameters. Significance of the model fit was determined by an ANOVA model comparison with the constant model (y = a). An asymptotic exponential model was selected over a linear model (y = a · x) because it gives a theoretical maximum metabolic reduction that can be achieved as peak torque continues to increase. We would not expect percent change in metabolic cost to continue to decrease linearly, as a linear model would eventually predict an extreme at which there is no metabolic cost to run. Rather, we would expect the benefits of increasing assistance to level off as assistance replaces the contributions of the biological ankle. If the better model were indeed linear, the least-squares exponential fit would be nearly linear in the region of interest and have a very large asymptote. Relative likelihood, which provides the likelihood that one model is a better fit to the data than the other, was used to compare the asymptotic exponential fit to a linear fit [28]. Relative likelihood is based on the Akaike information criterion (AIC), which estimates the relative amount of information lost by a given model using the maximum likelihood of the model. AIC also accounts for the number of parameters fit by the model, thus reducing the risk of overfitting. The relative likelihood formula for model comparison is given by RL = exp[(AIC_H1 − AIC_H2)/2]. The resulting value is the likelihood that Model 2 (H2) results in less information loss than Model 1 (H1). The residual sum of squares for each model was also calculated to compare the asymptotic exponential model to the linear model. The significance level for all model comparisons was α = 0.05. Data processing was performed using Matlab (Mathworks, Inc.) and statistical analysis was performed in R (R Core Team).
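The exponential-versus-linear comparison can be reproduced as follows (a sketch on synthetic data; AIC is computed from residual sums of squares under a Gaussian error assumption):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic (torque, % change in metabolic rate) pairs for illustration.
tau = np.array([0.2, 0.4, 0.6, 0.8] * 3)
y = -46.0 * (1 - np.exp(-2.0 * tau)) + np.random.default_rng(0).normal(0, 2, tau.size)

expo = lambda t, a, b: a * (1 - np.exp(b * t))
lin = lambda t, c: c * t

(p_e, _), (p_l, _) = curve_fit(expo, tau, y, p0=[-40, -2]), curve_fit(lin, tau, y)
rss_e = np.sum((y - expo(tau, *p_e)) ** 2)
rss_l = np.sum((y - lin(tau, *p_l)) ** 2)

def aic(rss, n, k):
    # Gaussian-likelihood AIC, up to an additive constant shared by both models.
    return n * np.log(rss / n) + 2 * k

n = tau.size
aic_e, aic_l = aic(rss_e, n, 2), aic(rss_l, n, 1)
# Relative likelihood that the exponential model (H2) loses less
# information than the linear model (H1): exp((AIC_H1 - AIC_H2) / 2).
rel_likelihood = np.exp((aic_l - aic_e) / 2)
print(rss_e, rss_l, rel_likelihood)
```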
Effect of timing parameters

To evaluate the relative importance of the three timing parameters on metabolic cost, metabolic rate estimates from human-in-the-loop optimization were recorded for each assistance strategy. Across all three participants, a total of 637 assistance strategies were tested, each of which was associated with a peak torque level, onset time, peak time, off time, and an estimate of steady-state metabolic rate. We performed a mixed-effects ANOVA (M = t_onset + t_onset² + t_peak + t_off; fixed effects: onset time (t_onset), peak time (t_peak), off time (t_off); random effects: participant, experimental session) to evaluate the effect of timing parameters on metabolic rate. Here, M is the steady-state metabolic rate, normalized to participant body mass (W/kg). In the optimization data, unlike the validation data, quiet-standing metabolic rate was not subtracted because a measurement was not taken for every experimental session. Experimental session was treated as a random effect to capture offsets in metabolic rate between experimental days; this included metabolic rate offsets due to torque level and changes in quiet-standing metabolic rate. A second-order polynomial was fit to onset time, as we would expect the cost landscape to be bowl-like. Because optimal peak time and off time were near the limit of the allowable range, the relationships between these timing parameters and metabolic rate were assumed to be linear in the region of interest. We assumed no interaction effects between timing parameters. A single-subject pilot study (Subject 3) was conducted to further examine the effect of onset time on metabolic cost. Peak torque was fixed at 0.8 Nm/kg, and peak time and off time were fixed at the participant's optimized values (79.7% and 100% of stance, respectively). Six minutes of quiet-standing metabolic data were recorded at the beginning of the experimental session to obtain an estimate of resting metabolic rate. The participant then ran in a series of 6-min validation trials under six conditions: four conditions to sweep torque onset time (5%, 15%, 25%, and 35% of stance), zero torque, and normal shoes. The order of these running trials was randomized. The running trials were repeated in reverse order for a total of 12 trials. The measured outcome of this pilot study was the percent reduction in metabolic rate over the zero-torque condition. A second-order polynomial least-squares model was fit to the single-subject pilot study data relating onset time to percent reduction in metabolic rate. The adjusted R-squared value is reported in addition to the model significance.

Exoskeleton mechanics

Human-in-the-loop optimization converged rapidly to late peak timing and off timing of assistance torque for all participants and across all torque levels (Fig. 3). At assistance torque levels with multiple sessions of optimization, mean parameters were similar after the first and final sessions of optimization. Optimized peak time shifted by an average of 0.7% of stance (1.5% maximum) and optimized off time shifted by an average of 0.4% of stance (0.8% maximum). Onset time shifted by an average of 3.3% of stance (9% maximum), although further analysis suggests that onset time had little effect on metabolic cost. These results indicate that convergence was achieved within a single session of optimization. The step size convergence criterion (half of the initial step size) was met for all multi-session optimizations, and the step size showed a strong decreasing trend for single-session optimizations. Optimized timing of peak torque converged to a value near the latest allowable (80% of stance) at 79.3 ± 0.86% (mean ± SD) of stance. Optimized timing of torque removal converged to a value near the latest allowable (toe-off, or 100% of stance) at 99.8 ± 0.26% of stance. The timing of torque onset saw more variation, occurring at 22.5 ± 7.75% of stance. There was greater variation in torque onset time across torque levels than between participants, and later torque onset was associated with lower peak torque magnitudes (Table 1). Average exoskeleton mechanical power increased with peak torque and was well approximated by a linear relationship (linear mixed-effects model; P_avg = 0.754 · τ_peak − 0.007; P_avg (W/kg), τ_peak (Nm/kg); marginal R² = 0.98, p < 2e-16). Peak exoskeleton mechanical power also increased linearly with peak torque (linear mixed-effects model; P_peak = 10.04 · τ_peak − 0.24; P_peak (W/kg), τ_peak (Nm/kg); marginal R² = 0.97, p < 2e-16) (Table 2).

Metabolic rate

Net metabolic rate decreased as the peak torque magnitude of exoskeleton assistance increased (mixed-effects ANOVA; p = 2e-12; Fig. 4; Table 3). In the Final Validation session, metabolic rate results were similar to the results from the Day-by-Day Validation sessions.

Fig. 4 Metabolic results. Metabolic cost decreased with increasing peak torque assistance for all subjects. In the Subject-Specific plots, the final-day validation results are shown in a darker-colored line than the day-by-day results following optimization. In the Line of Best Fit plot, the relationship between peak torque and change in metabolic cost is well approximated by an exponential fit in the tested range of peak torque values. The decaying exponential model fit the data better than a constant model (ANOVA; p = 6e-14).
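The linear mixed-effects fits reported here (fixed effect of peak torque, random effect of participant) follow a standard pattern; the paper's analysis was done in R, but a minimal Python sketch on synthetic stand-in data might look like:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data standing in for the study's measurements.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    'tau_peak': np.tile([0.2, 0.4, 0.6, 0.8], 6),
    'subject': np.repeat(['S1', 'S2', 'S3'], 8),
})
df['p_avg'] = 0.754 * df.tau_peak - 0.007 + rng.normal(0, 0.01, len(df))

# Random intercept per participant, fixed effect of peak torque.
model = smf.mixedlm('p_avg ~ tau_peak', df, groups=df['subject']).fit()
print(model.summary())
```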
By the end of the first experimental session, all participants had adapted quickly to assistance, showing less than 2% variation in metabolic rate reduction between the first and final session of optimization at the same torque level (Additional file 1: Fig. A2). Participants did not exceed a steady-state respiratory exchange ratio of 1.0, which indicates that they completed the entire study in the aerobic range. The relationship between percent reduction in net metabolic rate and peak assistance torque magnitude is well approximated by a decaying exponential curve of the form % reduction = a · [1 − exp(b · τ_peak)], with a = −46.24 (Fig. 4). The exponential model fit the data better than a constant model (ANOVA, p = 6e-14). Furthermore, the relative likelihood of the decaying exponential model with respect to a linear model (% reduction = c · τ_peak, where c = −33.19) was 108.6. Thus, the decaying exponential model is 108.6 times as probable as the linear model to minimize the loss of information [28]. The residual sum of squares of the decaying exponential fit was 77.33, which was lower than the residual sum of squares of the linear fit, 124.2.

Step frequency and duty factor

Step frequency (SF) decreased as peak assistance torque increased (linear mixed-effects model; SF = 162 − 10.6 · τ_peak; SF (steps/min), τ_peak (Nm/kg); conditional R² = 0.98, p = 2e-5), resulting in a gait with more "push" characteristics as described by Oeveren et al. (2021) [29]. To obtain an estimate of step frequency for a new participant, step frequency in the provided equation should be scaled to the new participant's leg length by multiplying by a factor of 0.914/√L_0, where L_0 is the new participant's leg length from the greater trochanter to the lateral malleolus in meters [30]. Duty factor also decreased with peak assistance torque (linear mixed-effects model; DF = 0.38 − 0.036 · τ_peak; p = 0.001, conditional R² = 0.97), which is indicative of a running style with more "bounce" characteristics [29].

Effect of timing parameters

A mixed-effects model (fixed effects: onset time, peak time, off time; random effects: participant, experimental session) was fit to data from human-in-the-loop optimization to evaluate the effect of timing parameters on the steady-state metabolic rate of running (M = 17.04 − 0.047 · t_onset + 0.31 · t_onset² − 0.028 · t_peak − 0.052 · t_off). We found that peak time (ANOVA, p = 2e-6) and off time (ANOVA, p = 2e-7) had strong effects on metabolic rate, but onset time did not (ANOVA, p = 0.64). In the single-subject pilot study sweeping onset time across 5, 15, 25, and 35% of stance, the net metabolic rate was reduced by 23.9 to 26.2% compared to running in the zero-torque condition (Additional file 1: Fig. A1). The participant was unable to accurately rank the ordering of the onset time trials but described the 35% onset as "a little less comfortable" than earlier onset times. Onset time did not have an effect on percent reduction in metabolic cost, as a quadratic model did not fit the data better than a constant value (second-order polynomial least-squares fit; % reduction = −24.95 − 1.16 · t_onset + 0.90 · t_onset²; ANOVA, p = 0.48; adjusted R² = 0.31).

Discussion

Increasing peak exoskeleton assistance torque up to 0.8 Nm/kg led to a slightly nonlinear decrease in percent change in the metabolic cost of running for all participants, with diminishing returns at higher torques.
The effectiveness of these large torques is consistent with the results of a previous human-in-the-loop optimization study in which peak torque could vary as an optimization parameter, which found that high peak torque magnitudes were preferred by the optimizer for most participants [15]. In the present study, the averaged data across participants show that the effects of assistance began to level out slightly at the highest tested torque level, with a relationship that is well approximated by a decaying exponential (Fig. 4). The asymptote of this curve gives a theoretical maximum metabolic reduction (46%; CI: 26-67%) that could be achieved with ankle exoskeleton assistance if the peak torque were to exceed the maximum level applied in this study. However, peak torque was limited to 0.8 Nm/kg due to user stability preferences, and there may be little value in designing devices that exceed this limit if users are not able to adapt to higher torque magnitudes. These results suggest that the net metabolic benefit of running with an untethered device will likely be optimized at a value of peak torque near the higher end of the tested range (0.6-0.8 Nm/kg). In the absence of a user-preferred peak torque limit, continuing to increase peak assistance torque would eventually result in a decrease in net improvement, as device mass is expected to increase continually with peak torque, whereas the benefits of assistance level off. Additional training may allow participants to tolerate higher torques and achieve even larger reductions in energy cost, up to some user-preferred limit. Previously, Witte et al. (2020) found that some participants did not optimize to the maximum allowable peak torque magnitude [15]. This suggests that, for users at some stages of training, large assistance torques can increase metabolic cost, perhaps by requiring additional stabilizing muscle activity through, e.g., co-contraction [31]. More recently, Poggensee and Collins (2021) found that naïve exoskeleton users required much longer training periods to become expert: more than 100 min of exoskeleton exposure [26]. In the present study, experiencing optimization sessions fixed at a peak torque magnitude of 0.8 Nm/kg may have allowed for more complete motor learning, enabling participants to discover less metabolically costly solutions that afforded similar levels of stability [32]. While our results predict that further increasing peak assistance torque would lead to greater reductions in metabolic cost, the peak torque magnitude was limited to 0.8 Nm/kg because participants expressed that they felt unstable at a high peak torque magnitude. One participant was involved in pilot testing at 1.0 Nm/kg and had difficulty running on the treadmill without mis-stepping. They experienced muscle soreness that they attributed to "bracing to stay in the same place on the treadmill". Another participant stated that running with a peak assistance torque of 0.8 Nm/kg made it "harder to control the device" and felt that they "had to be very consistent about landing and launching." This feedback suggests that the cost function that the exoskeleton user is trying to optimize contains additional terms besides metabolic cost, such as perceived instability and level of discomfort. It is possible that participants would feel comfortable with, and might even prefer, higher levels of assistance at faster speeds or in less-constrained environments such as overground running.
The limit of peak torque magnitude might also vary on an individual basis, and scaling peak torque with the user's body mass might not capture other relevant participant characteristics. Future research could investigate how user-preferred torque limits change with speed, environment, and duration of optimization. Late peak time and off time of torque were consistently preferred by all participants across all torque levels. During the first session of human-in-the-loop optimization, peak timing and off timing of torque quickly shifted within a single generation from the initial seed (75% and 95% of stance, respectively) toward the latest allowable values (80% and 100% of stance). This preference for the latest allowable torque application was maintained across optimization sessions for all peak torque magnitudes. The final optimized peak time and off time parameters were consistent across participants and peak torque magnitudes, within a standard deviation of less than 1% of stance (Table 1). These results support the observation that biomimetic assistance with a peak around 50% of stance is not metabolically optimal [15]. A similar trend is observed in walking; greater metabolic benefit is achieved by providing peak torque assistance later in stance than the peak biological ankle moment [16,33]. Rather, it is more effective to assist in late stance, when the force-generating capacity of the plantarflexor muscles is limited due to high shortening velocity and sub-optimal muscle fascicle length [34,35]. Onset time within the tested range appeared to have little effect on metabolic cost, especially compared to the timing of peak torque and torque offset. Onset time varied more across torque levels than between participants, with later torque onset favored at lower torque levels. While this variation might suggest a need for customization according to torque level and participant, onset time was also slow to converge to its optimized values compared to peak time and off time. This suggests that there might not be a clear optimal onset time; small differences in metabolic rate between onset time conditions might have been dominated by noise in the metabolic rate estimates, causing the optimizer to converge more slowly. Furthermore, the data from the various assistance strategies tested during human-in-the-loop optimization indicate strong effects of peak time (p = 2e-6) and off time (p = 2e-7) on metabolic cost, but no clear effect of onset time (p = 0.64). Similarly, the single-subject pilot study exploring the metabolic effects of onset time found little variation in metabolic cost, and there was no clear optimal value for onset time. This supports the conclusion that onset time, across the search range used in this study, has little effect on metabolic cost compared with the peak time and off time of assistance torque. The optimized onset timing from human-in-the-loop optimization was likely part of a range of values that would have resulted in a similar metabolic cost. Future research could be conducted to generalize this claim across more participants and assistance torque magnitudes for all three timing parameters. It is likely that there are ranges for peak time and off time that are similarly effective in reducing metabolic cost, although these ranges are expected to be much smaller (e.g., 1% of stance) than the range for onset time.
In a future study, each timing parameter could be swept individually to determine its effect on metabolic rate, or a grid search could be conducted to also assess interaction effects between the timing parameters (and with peak torque as well), which were not considered in this analysis. The consistency in timing and metabolic results across participants suggests that users could obtain near-optimal metabolic benefits from an exoskeleton that provides generic assistance timing. Customization of assistance to individual users is less suited to the development of a portable exoskeleton that can be made commercially available, especially if customization has only a very weak effect on metabolic cost. The results from the present study indicate that late timing of peak torque (80% of stance) and torque offset (100% of stance) would be well suited to a range of participants. The weak effect of torque onset time on metabolic cost in the analysis of the human-in-the-loop optimization data indicates that a generic, comfortable onset time (e.g., 20% of stance) could be similarly beneficial across users. Longer training periods, especially for novice users, may be required for users to maximize the metabolic benefit of a generic assistance profile. The results of this study suggest that a portable ankle exoskeleton capable of high peak torque in the range of 0.6-0.8 Nm/kg might deliver the maximum net metabolic benefit relative to running in normal shoes, and that low levels of ankle assistance might not provide a worthwhile benefit. At the lowest peak torque level of 0.2 Nm/kg, the 7.7% metabolic benefit of assistance did not exceed the 8% net benefit achieved by the most effective portable, passive hip exoskeleton to date [14]. Thus, it would not make sense to design portable ankle exoskeletons for this low assistance torque level, as the device would need to be nearly massless to have an effect comparable to existing portable technologies. However, running with the highest peak assistance torque level of 0.8 Nm/kg in tethered exoskeletons (1.1 kg each), which were not mass-optimized, led to a 14.1% decrease in metabolic cost relative to running in normal shoes. A portable exoskeleton design would need to incorporate the added mass of an onboard actuation system, thus increasing metabolic cost. However, it is likely that the frame mass could be significantly reduced to avoid a large increase in metabolic penalty, because the tethered devices used in this study were designed to withstand higher torque magnitudes. It is possible that a portable ankle exoskeleton could be constructed with a similar mass on the lower leg and an additional 2-3 kilograms at the waist for a power supply and control unit, which would incur an additional 2-3% metabolic penalty [19]. These simple considerations suggest that a metabolic cost reduction of more than 10% could be achieved by a powerful portable exoskeleton.
A structure theorem for generalized-noncontextual ontological models. It is useful to have a criterion for when the predictions of an operational theory should be considered classically explainable. Here we take the criterion to be that the theory admits of a generalized-noncontextual ontological model. Existing works on generalized noncontextuality have focused on experimental scenarios having a simple structure: typically, prepare-measure scenarios. Here, we formally extend the framework of ontological models as well as the principle of generalized noncontextuality to arbitrary compositional scenarios. We leverage a process-theoretic framework to prove that, under some reasonable assumptions, every generalized-noncontextual ontological model of a tomographically local operational theory has a surprisingly rigid and simple mathematical structure -- in short, it corresponds to a frame representation which is not overcomplete. One consequence of this theorem is that the largest number of ontic states possible in any such model is given by the dimension of the associated generalized probabilistic theory. This constraint is useful for generating noncontextuality no-go theorems as well as techniques for experimentally certifying contextuality. Along the way, we extend known results concerning the equivalence of different notions of classicality from prepare-measure scenarios to arbitrary compositional scenarios. Specifically, we prove a correspondence between the following three notions of classical explainability of an operational theory: (i) existence of a noncontextual ontological model for it, (ii) existence of a positive quasiprobability representation for the generalized probabilistic theory it defines, and (iii) existence of an ontological model for the generalized probabilistic theory it defines.

Introduction

For a given operational theory, under what circumstances is it appropriate to say that its predictions admit of a classical explanation? This article starts with the presumption that this question is best answered as follows: the operational theory must admit of an ontological model that satisfies the principle of generalized noncontextuality, defined in Ref. [1]. Admitting of a generalized-noncontextual ontological model subsumes several other notions of classical explainability, such as admitting of a positive quasiprobability representation [2,3], being embeddable in a simplicial generalized probabilistic theory (GPT) [4][5][6][7], and admitting of a locally causal model [8,9]. (Note that the first two of these results are first proved for general compositional scenarios in this paper.) Additionally, generalized noncontextuality can be motivated as an instance of a methodological principle for theory construction due to Leibniz, as argued in Ref. [10] and the appendix of Ref. [11]. Finally, operational theories that fail to admit of a generalized-noncontextual ontological model provide advantages for information processing relative to their classically explainable counterparts [12][13][14][15][16][17][18][19]. Because the notion of generalized noncontextuality is the only one we consider in this article, we will often refer to it simply as 'noncontextuality'.
To date, prepare-measure scenarios are the experimental arrangements for which the consequences of generalized noncontextuality have been most explored. A few works have also studied experiments where there is a transformation or an instrument intervening between the preparation and the measurement [18,20-24]. However, generalized noncontextuality has not previously been considered in experimental scenarios wherein the component procedures are connected together in arbitrary ways, that is, in arbitrary compositional scenarios. Indeed, generalized noncontextuality has not even been formally defined at the level of compositional theories prior to this work; rather, it and several related concepts have only been formally defined for particular types of scenarios. In this work, we give a process-theoretic [25][26][27][28] formulation of the various relevant notions of operational theories and of representations thereof, enabling the study of noncontextuality in arbitrary compositional scenarios, and indeed of the noncontextuality of operational theories themselves. We then derive a number of results regarding the structure of noncontextual representations of operational theories, and we ultimately put strong constraints on the nature of these representations.

Like Ref. [4], this work is sensitive to the distinction between operational theories and quotiented operational theories, commonly termed generalized probabilistic theories (or GPTs) [27][28][29][30][31]. In an operational theory, one understands the primitive processes (e.g., preparation, transformation, and measurement procedures) to be lists of laboratory instructions detailing actions that can be taken on some physical system. Such a theory also makes predictions for the statistics of outcomes in any given experimental arrangement (without making any attempt to explain these predictions). As lists of laboratory instructions, the processes in an operational theory contain details which are not relevant to the observed statistics; any such details are termed the context of the given process [1]. In contrast, a quotiented operational theory, or GPT, arises when one removes this context information by identifying any two processes that differ only by context, that is, which lead to all the same statistical predictions, and so are said to be operationally equivalent.

Our formalization of both operational theories and generalized probabilistic theories follows that of quotiented and unquotiented operational probabilistic theories (OPTs), as in Refs. [32-34]. For completeness, we provide a pedagogical introduction to our notation and conventions in Sec. 2. The framework presented here is also a precursor to a more novel framework presented in Ref. [35], which is motivated by the objective of cleanly separating the causal and inferential aspects of a theory.
There are multiple different representations of operational theories and generalized probabilistic theories that one can consider, often motivated by the aim of explaining the predictions of the theory by appealing to some underlying realist model of reality. The quintessential sort of explanation is an ontological model of an operational theory, which presumes that the systems passing between experimental devices have properties, and that the outcomes of measurements reveal information about these properties. A complete specification of these properties for a given system is termed its ontic state. The variability in this ontic state mediates causal influences between the devices. Ontological models may be defined for either operational theories or generalized probabilistic theories. Another type of representation which has been widely considered (particularly by the quantum optics community) is that of quasiprobabilistic representations. Such representations are only defined for generalized probabilistic theories, and can be viewed as ontological models using quasiprobability distributions, that is, analogues of probability distributions in which some of the values can be negative.

Our formalization of ontological models is more general than that which is usually given, since we define them in a compositional manner, and for arbitrary theories rather than for particular scenarios. (Although note that this was already done for the special case of quantum theory in Ref. [36].) Furthermore, our formalization of quasiprobabilistic representations is more general than that which is usually given, since we define them for arbitrary GPTs (not necessarily quantum). In this latter case, our formalization was strongly influenced by the prescription suggested in Ref. [37].

We can now return to the question of when a theory's predictions admit of a classical explanation. As argued above, our guiding principle is that of noncontextuality. The principle of noncontextuality is a constraint on ontological models of operational theories: namely, that the representation of operational processes does not depend on their context. That is, operational processes which lead to identical predictions about operational facts are represented by ontological processes which lead to identical predictions about ontological facts. If such a noncontextual ontological model exists, we take it to be a classical realist explanation of the predictions of the operational theory. Hence, the notion of classical explainability for operational theories is the existence of a generalized-noncontextual ontological model.

If one takes the processes of a generalized probabilistic theory as the domain of one's representation map, then there is no context on which a given representation (be it an ontological model or a quasiprobabilistic representation) could conceivably depend. This point was first made in Ref. [4], and we expand on it in this work, in particular in Appendix A. In such an approach, one cannot directly take noncontextuality (independence of context) as a notion of classicality for generalized probabilistic theories. Still, Ref.
[4] showed that, in prepare-measure scenarios, the notion of noncontextuality for operational theories induces a natural and equivalent notion of classicality for generalized probabilistic theories. In particular, a generalized probabilistic theory admits of a classical explanation if and only if there is a simplex that embeds its state space and, furthermore, the hypercube of effects that is dual to this simplex embeds its effect space. Such an embedding can be viewed as an ontological model of a GPT [4,5]. Hence, the resulting notion of classical explainability for a GPT is the existence of an ontological model for it. Our work extends this result to the case of arbitrary compositional theories and scenarios.

We also extend (from prepare-measure scenarios to arbitrary compositional scenarios) the proof that positive quasiprobabilistic models are in one-to-one correspondence with noncontextual ontological models [2,4]. Note that our proof, like the special case given in Ref. [4], corrects some issues with the original arguments in Ref. [2].¹

Note that simplex-embeddability can also be motivated as a notion of classicality as follows. First, a simplicial GPT, i.e., one in which all of the state spaces are simplices, transformations are arbitrary convex-linear maps between these simplices, effects are elements of the hypercubes which are dual to the simplices, and which is tomographically local, has been argued to capture a notion of classicality among operational theories [29,30]. If a GPT satisfies simplex-embeddability, then it follows that the set of states and the set of effects therein can be conceptualized as a subset of those arising in a simplicial GPT, implying that every experiment describable by the GPT can be simulated within the simplicial GPT. It follows that simplex embeddability captures the possibility of simulatability within a classical operational theory, hence a notion of classicality. Furthermore, the existence of a positive quasiprobabilistic representation is a notion of classicality in the sphere of quantum optics. Hence, our results ultimately show that three independently motivated notions of classicality (namely these two, and the existence of a noncontextual ontological model of an operational theory) all coincide in general compositional situations (such as is relevant, for example, in quantum computation).

¹ Ref. [2] was not careful to distinguish between quotiented and unquotiented operational theories, and as such did not stipulate whether quantum theory was being considered as an operational theory or as a GPT. As a result, it failed to note that the most natural domain for a quasiprobabilistic representation is quantum theory as a GPT, while the domain of a noncontextual ontological representation is necessarily quantum theory as an operational theory. Also as a consequence, it argued that a positive quasiprobability representation is the same thing as a noncontextual ontological model. Our recasting of the relation between quasiprobability representations and noncontextual ontological representations is explicit about such distinctions, and consequently we show that a positive quasiprobability representation is not in and of itself a noncontextual ontological model. Rather, it is just that the sets of these are in one-to-one correspondence.
Most importantly, this equivalence allows us to prove that every noncontextual ontological model of a tomographically local operational theory which satisfies an assumption of diagram preservation has a rigid and simple mathematical structure. In particular, every such model is given by a diagram-preserving positive quasiprobabilistic model of the GPT associated with the operational theory, and we prove that every such quasiprobabilistic model is in turn a frame representation [3,38] that is not overcomplete. As a corollary, it follows that the number of ontic states in any such model is no larger than the dimension of the GPT space.

This rigid structure theorem and bound on the number of ontic states show that there is much less freedom in constructing noncontextual ontological models than previously recognized. In particular, it means that once the representation of the states is fixed (i.e., by choice of frame), there is no remaining freedom in the representations of the measurements and transformations. Moreover, in many ontological models the number of ontic states is taken to be infinite (e.g., corresponding to points on the surface of the Bloch ball); however, all such models are immediately ruled out by our bound on the number of ontic states. These results also imply new proofs of the fact that operational quantum theory does not admit of a generalized-noncontextual model, and they simplify the problem of witnessing generalized contextuality experimentally.

Categorically, we view the GPT as being a particular monoidal category, and the representations thereof as being particular strong monoidal functors into subcategories of FVect_R (the category of linear maps between finite-dimensional real vector spaces). In the case of tomographically local GPTs, our structure theorem states that any such functor is naturally isomorphic to a standard representation of a tomographically local GPT within FVect_R. In particular, this means that ontological models for such theories, should they exist, are essentially unique. We now summarize our key results and main assumptions in more detail.

Results

We begin by providing informal statements of our main results. The first result in this list extends the results of Ref. [4] from the case of prepare-measure scenarios to arbitrary scenarios. The second entry in the list constitutes the main technical result of this work. The third is a primary consequence of this for the study of noncontextual ontological models.

1. We refine and generalize the notions of quasiprobabilistic models and ontological models of operational theories and of GPTs to arbitrary compositional scenarios and theories, and we show a triple equivalence between: (a) a positive quasiprobabilistic model of the GPT associated to the operational theory, (b) an ontological model of the GPT associated to the operational theory, and (c) a noncontextual ontological model of the operational theory.
2. We then prove a structure theorem for representations of a GPT which implies that: (a) every diagram-preserving quasiprobabilistic model of a GPT is a frame representation that is not overcomplete, i.e., an exact frame representation; (b) every diagram-preserving ontological model of a GPT is a positive exact frame representation; and (c) every diagram-preserving noncontextual ontological model of an operational theory can be used to construct a positive exact frame representation of the associated GPT, and vice versa.

3. A key corollary of these is that the cardinality of the set of ontic states for a given system in any diagram-preserving ontological model is equal to the dimension of the state space of that system in the GPT. For instance, a noncontextual ontological model of a qudit must have exactly d² ontic states. Similarly, the dimension of the sample space of any diagram-preserving quasiprobabilistic model is the GPT dimension.

These results show that, by moving beyond prepare-measure scenarios, the concept of a noncontextual ontological model of an operational theory becomes constrained to a remarkably specific and simple mathematical structure. Moreover, our bound on the number of ontic states yields new proofs of the impossibility of a noncontextual model of quantum theory (e.g., via Hardy's ontological excess baggage theorem [39]) and dramatic simplifications to algorithms for witnessing contextuality in experimental data (e.g., reducing the algorithm introduced in Ref. [4] from a hierarchy of tests to a single test).
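As an illustration of what such an exact frame representation looks like in the simplest quantum case (our own sketch, not drawn from the paper): for a qubit, the GPT dimension is d² = 4, and the four-outcome SIC POVM supplies a frame with exactly four "ontic states". States get genuine probability vectors, but some effects must acquire negative components, which is the footprint of contextuality.

```python
import numpy as np

# Pauli basis, orthonormal under the Hilbert-Schmidt inner product.
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])
basis = [P / np.sqrt(2) for P in (I2, X, Y, Z)]
vec = lambda H: np.array([np.trace(B.conj().T @ H).real for B in basis])

# Qubit SIC POVM: tetrahedron Bloch vectors, M_k = (I + n_k . sigma) / 4.
ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
M = [(I2 + n[0]*X + n[1]*Y + n[2]*Z) / 4 for n in ns]

A = np.array([vec(Mk) for Mk in M])          # 4x4 frame matrix (invertible)
mu = lambda rho: A @ vec(rho)                # state -> probability 4-vector
xi = lambda E: np.linalg.inv(A).T @ vec(E)   # effect -> response 4-vector

rho = (I2 + 0.6 * Z) / 2                     # an example qubit state
E = (I2 + X) / 2                             # projector onto |+>
print(mu(rho), mu(rho).sum())                # nonnegative, sums to 1
print(xi(E))                                 # some components are negative
print(xi(E) @ mu(rho), np.trace(E @ rho).real)  # both equal Tr(E rho)
```

The representation is exact (the frame matrix is square and invertible, so no ontic state is redundant), and the unavoidable negativity in the effect vectors is one way of seeing why no noncontextual ontological model of the qubit exists.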
Assumptions

The assumptions that are needed to prove our results will be formally introduced as they become relevant. For the sake of having a complete list in one place, however, we provide an informal account of them here. These assumptions can be divided into two categories.

First, we have assumptions limiting the sorts of operational theories that we are considering.

1. Unique deterministic effect: We consider only operational theories in which all deterministic effects (corresponding to implementing a measurement on the system and marginalizing over its outcome) are operationally equivalent [32].

2. Arbitrary mixtures: We assume that every mixture of procedures within an operational theory is also an effective procedure within that operational theory. That is, for any pair of procedures in the theory, there exists a third procedure defined by flipping a weighted coin and choosing to implement either the first or the second, depending on the outcome of the coin flip.

3. Finite dimensionality: We assume that the dimension of the GPT associated to the operational theory is finite.

4. Tomographic locality: For some of our results, we moreover limit our analysis to operational theories whose corresponding GPT is tomographically local (namely, where all GPT processes on composite systems can be fully characterized by probing the component systems locally [29]).

Second, we have assumptions that concern the ontological model (or quasiprobabilistic model).

1. Deterministic effect preservation: Any deterministic effect in the operational theory is represented by marginalization over the sample space of the system in the ontological (or quasiprobabilistic) model.

2. Convex-linearity: The representation of a mixture of procedures is given by the mixture of their representations, and the representation of a coarse-graining of effects is given by the coarse-graining of their representations.

3. Empirical adequacy: The ontological (or quasiprobabilistic) representation must make the same predictions as the operational theory.

4. Diagram preservation: The compositional structure of the ontological (or quasiprobabilistic) representation must be the same as the compositional structure of the operational theory. (Formally, this means that we take these representations to be strong monoidal functors.)

The most significant assumption regarding the scope of operational theories to which our results apply is that of tomographic locality. Among the assumptions concerning the nature of the ontological (or quasiprobabilistic) model, the only one that is not completely standard is that of diagram preservation. As we will explain, however, the assumption of diagram preservation does not restrict the scope of applicability of our results; rather, it is a prescription for how one is to apply our formalism to a given scenario. Furthermore, our main results do not require the full power of diagram preservation, but rather can be derived from the application of this assumption to a few simple scenarios: the identity operation, the prepare-measure scenario, and the measure-and-reprepare operation. However, full diagram preservation is a natural generalization of these assumptions, as well as of a number of other standard assumptions that have been made throughout the literature on ontological models, and so we will build it into our definitions rather than endorsing only those particular instances that we need for the results in this paper. We discuss these points in more detail in Section 5, and provide a defense of full diagram preservation in Ref. [35].

Preliminaries

In this section we provide a pedagogical introduction to the diagrammatic notation that we will employ and its application to operational theories, tomographically local GPTs, and their ontological and quasiprobabilistic representations. This section should be treated largely as a review of the relevant literature, which we include to have a self-contained presentation of the necessary formalism for our main results.

Process theories

In this paper we will represent various types of theories as process theories [26], which highlights the compositional structures within these theories. We will express certain relationships that hold between these process theories in terms of diagram-preserving maps². We give a brief introduction to this formalism here. Readers who would like a deeper understanding of this approach can read, for example, Refs. [25-28,35].

A process theory P is specified by a collection of systems A, B, C, ... and a collection of processes on these systems. We will represent the processes diagrammatically, as boxes with input and output wires, with the convention that input systems are at the bottom and output systems are at the top. We will sometimes drop system labels when it is clear from context. Processes with no input are known as states, those with no outputs as effects, and those with neither inputs nor outputs as scalars. The key feature of a process theory is that this collection of processes P is closed under wiring processes together to form diagrams. Wirings of processes must connect outputs to inputs such that systems match and no cycles are created.
We will commonly draw 'clamp'-shaped higher-order processes [41,42], which we call testers. A tester can be thought of as something which maps a process from A to B to a scalar, by plugging the process into the 'clamp'. Testers are not primitive notions within the framework of process theories and instead are always thought of as being built out of a particular state, effect, and auxiliary system³. In other words, a tester τ is really just shorthand notation for a triple (s_τ, e_τ, W_τ) consisting of a state s_τ, an effect e_τ, and an auxiliary system W_τ.

³ The recent work of Ref. [43] has shown how these, and other, higher-order processes can actually be incorporated as primitive notions within a process-theoretic framework.

A diagram-preserving map M : P → P′ from one process theory P to another, P′, is defined as a map taking systems in P to systems in P′, denoted as S → M(S) (6), and processes in P to processes in P′. Taking inspiration from Refs. [40,44], this will be depicted diagrammatically as a box drawn around the process being mapped (7), such that wiring together processes before or after applying the map is equivalent. In particular, this means that M maps the identity processes in P to identity processes in P′.

Remark 2.1. If we interpret these process theories as symmetric monoidal categories, then any strict monoidal functor F defines a diagram-preserving map M, simply by taking M(A) = F(A) and M(f) = F(f). Note that this latter equation is not obviously well-typed: according to Eq. (7), M takes a process with composite input A ⊗ B to one with input M(A) ⊗ M(B), whereas F(f) has input F(A ⊗ B). However, in the case of strict monoidal functors, we have that F(A ⊗ B) = F(A) ⊗ F(B), and so this is not actually a problem. If, on the other hand, one instead has a strong monoidal functor, in which this equality is relaxed to a natural isomorphism µ : F(A) ⊗ F(B) → F(A ⊗ B) [45], one can still use it to define a diagram-preserving map. The difference is that now (following Ref. [40]) we need to use the natural isomorphisms µ in order to define M(f); that is, we define M(f) by composing F(f) with µ on its inputs and with the inverse of µ on its outputs. In this paper we will always be considering strong monoidal functors where F(I) = I, but if one merely had a natural isomorphism ϵ : F(I) ≅ I, then one must also incorporate this natural isomorphism when defining the action of M on states, effects, and scalars.

We will also use the concept of a sub-process theory, where the intuitive idea is that P′ is a sub-process theory of P, denoted P′ ⊆ P, if the processes in P′ are a subset of the processes in P that are themselves closed under forming diagrams. Formally, we do not require that a sub-process theory literally is such a subset, but only that it is equivalent to one; that is, we say that P′ ⊆ P if and only if there exists a faithful strong monoidal functor from P′ into P.

The key process theory which underpins this work is FVect_R, defined as follows.

Example 2.2 (FVect_R). Systems are labeled by finite-dimensional real vector spaces V, where the composition of systems V and W is given by the tensor product V ⊗ W. Processes are defined as linear maps from the input vector space to the output vector space. Composing two processes in sequence corresponds to composing the linear maps, while composing them in parallel corresponds to the tensor product of the maps. If a process lacks an input and/or an output, then we view it as a linear map from or to the one-dimensional vector space R.
Hence, processes with no input correspond to vectors in V and processes with no output to covectors, i.e., elements of V*. This implies that scalars (processes with neither inputs nor outputs) correspond to real numbers. FVect_R is equivalent to the process theory of real-valued matrices. However, representing the former in terms of the latter requires artificially choosing a preferred basis for the vector spaces.

Two subtheories of FVect_R will play a central role in what follows. The first is the process theory of (sub)stochastic processes, SubStoch. Here, systems are labeled by finite sets Λ which compose via the Cartesian product. Processes with input Λ and output Λ′ correspond to (sub)stochastic maps, and can be thought of as functions f(λ′|λ) ∈ [0,1] of λ ∈ Λ and λ′ ∈ Λ′, where for all λ ∈ Λ we have Σ_{λ′∈Λ′} f(λ′|λ) ≤ 1. When this inequality is an equality, they are said to be stochastic (rather than substochastic). For any pair of functions f : Λ × Λ′ → [0,1] and g : Λ′ × Λ′′ → [0,1] (where the output type of f matches the input type of g), sequential composition is given by g • f : Λ × Λ′′ → [0,1] via the rule (g • f)(λ′′|λ) := Σ_{λ′∈Λ′} g(λ′′|λ′) f(λ′|λ). For any pair of functions f and g on disjoint systems, parallel composition is given componentwise by (g ⊗ f)(λ′₁λ′₂|λ₁λ₂) := g(λ′₁|λ₁) f(λ′₂|λ₂).

It is sometimes more convenient or natural to take an alternative (but equivalent) point of view on this process theory (e.g., this view makes it more clear that this is a sub-process theory of FVect_R). In this alternative view, the systems are not simply given by finite sets Λ, but rather are taken to be the vector space of functions from Λ to R, denoted R^Λ. Then, rather than taking the processes to be functions f : Λ × Λ′ → [0,1], one takes them to be linear maps f̂ from R^Λ to R^Λ′, where for all λ′ ∈ Λ′ we define f̂(v)(λ′) := Σ_{λ∈Λ} f(λ′|λ) v(λ). It is then straightforward to show that sequential composition of the stochastic processes corresponds to composition of the associated linear maps and that parallel composition of the stochastic processes corresponds to the tensor product of the associated linear maps. For example, for sequential composition we have that for all v ∈ R^Λ, ĝ(f̂(v))(λ′′) = Σ_{λ′∈Λ′} g(λ′′|λ′) Σ_{λ∈Λ} f(λ′|λ) v(λ) = Σ_{λ∈Λ} (g • f)(λ′′|λ) v(λ), so that ĝ ∘ f̂ is precisely the linear map associated to g • f. Moreover, consider processes with no input, that is, linear maps p : R → R^Λ. By using the trivial isomorphism R ≅ R^⋆, where ⋆ is the singleton set ⋆ := {∗}, one can see that these correspond to functions p(λ|∗), i.e., to (sub)normalized probability distributions over Λ.

The second subtheory of FVect_R is QuasiSubStoch, which is the same as the process theory of (sub)stochastic processes, but where the constraint of positivity is dropped. The systems can be taken to be finite sets Λ, and the processes with input Λ and output Λ′ can be taken to be real-valued functions f(λ′|λ), now possibly with negative values. These are said to be quasistochastic (as opposed to quasisubstochastic) if they moreover satisfy Σ_{λ′∈Λ′} f(λ′|λ) = 1 for all λ ∈ Λ. The way that these compose and are represented in FVect_R is exactly the same as in the case of substochastic maps.

Summarizing, we have Example 2.4 (QuasiSubStoch). We define QuasiSubStoch as the subtheory of FVect_R where systems are restricted to vector spaces of the form R^Λ and processes are those that correspond to quasi(sub)stochastic maps. By construction, SubStoch ⊂ QuasiSubStoch ⊂ FVect_R; however, in contrast to FVect_R, SubStoch and QuasiSubStoch do come equipped with a preferred basis for each system. It is known that quantum theory as a GPT (QT) can be represented as a subtheory of QuasiSubStoch (see, for example, [37]).
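A minimal numerical sketch of these composition rules, with (quasi)stochastic maps encoded as matrices acting on R^Λ (column-stochastic convention; the particular matrices are illustrative):

```python
import numpy as np

# A (quasi)stochastic map f : R^Lambda -> R^Lambda' is a matrix whose
# (lambda', lambda) entry is f(lambda'|lambda); stochasticity means each
# column sums to 1 (<= 1 for substochastic), with entries allowed to be
# negative in the quasistochastic case.
f = np.array([[0.9, 0.2],
              [0.1, 0.8]])          # stochastic: columns sum to 1
g = np.array([[1.2, -0.1],
              [-0.2, 1.1]])         # quasistochastic: columns sum to 1

seq = g @ f                          # sequential composition (g . f)
par = np.kron(g, f)                  # parallel composition (g x f)

v = np.array([0.5, 0.5])             # a state: a distribution over Lambda
print(seq @ v)                       # applying the composite to a state
print(seq.sum(axis=0))               # columns of seq still sum to 1
print(par.sum(axis=0))               # the parallel composite is quasistochastic too
```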
Operational theories

We now introduce a process-theoretic presentation of the framework of operational theories as defined in Ref. [1], resulting in a framework that is essentially that of (unquotiented) operational probabilistic theories [34]. An operational theory Op is given by a process theory specifying a set of physical systems and the processes which act on them (where processes are viewed as lists of lab instructions), together with a rule for assigning probabilities to any closed process. A generic laboratory procedure has an associated set of inputs and outputs, and is denoted diagrammatically as a box with input and output wires. Of special interest are processes with no inputs and processes with no outputs; the former are viewed as preparation procedures and the latter as effects, corresponding to some outcome of some measurement. We depict the probability rule by a map p; the application of p to any closed diagram yields a real number between 0 and 1. Note that p is not a diagram-preserving map, as it can only be applied to processes with no input and no output. (Nonetheless, we will see shortly how it has a diagram-preserving extension to arbitrary processes, namely the quotienting map.) This probability rule must be compatible with certain relations that hold between procedures [31,41]. First, it must factorise over separated diagrams. Moreover, if T1 is a procedure that is a mixture of T2 and T3 with weights ω and 1 − ω respectively⁴, then it must hold that for any tester τ, p(τ(T1)) = ω p(τ(T2)) + (1 − ω) p(τ(T3)). Additionally, if one operational effect E1 is the coarse-graining of two others, E2 and E3, then Pr(E1, P) must be the sum of Pr(E2, P) and Pr(E3, P) for all P.

Our main result holds only for operational theories satisfying the following property: any two processes T and T′ that give the same statistics for all local preparations on their inputs and all local measurements on their outputs also give the same statistics in arbitrary circuits. Such operational theories are alternatively characterized by the fact that the GPT defined by quotienting them satisfies tomographic locality, as we show below.

Two processes with the same input systems and output systems are said to be operationally equivalent [1] if they give rise to the same probabilities no matter what closed diagram they are embedded in. The testers from Eq. (5) facilitate a convenient diagrammatic representation of this condition. That is, two processes T and T′ are operationally equivalent, denoted T ≃ T′, if they assign equal probabilities to every tester⁵, so that p(τ(T)) = p(τ(T′)) for every tester τ. It is easy to see that operational equivalence defines an equivalence relation. Hence, we can divide the space of processes into equivalence classes, and each process T in the operational theory can be specified by its equivalence class T̃, together with a label c_T of the context of T, specifying which element of the equivalence class it is. For a given T, the context c_T provides all the information defining that process which is not relevant to its equivalence class. Hence, each procedure is specified by a tuple, T := (T̃, c_T), and we will denote it as such when convenient. In the case of closed diagrams, the equivalence class can be uniquely specified by the probability given by the map p, and so any information beyond this forms the context of the closed diagram. Next, we define a quotienting map ∼ which maps procedures into their equivalence class (exactly as is done to construct quotiented operational probabilistic theories in Ref.
[34]). Given a characterization of each procedure as a tuple of equivalence class and context, the quotienting map picks out the first element of this tuple, taking ($\tilde{T}$, c_T) → $\tilde{T}$. We prove that it is diagram-preserving in Appendix E.1. For processes which are closed diagrams, one can always choose the representative of the equivalence class to be the real number specified by the probability rule.

Hence, the map ∼ : Op → $\widetilde{Op}$ can be viewed as a diagram-preserving extension of the probability rule p. This implies that the quotiented operational theory reproduces the predictions of the operational theory. It is worth noting that in the quotiented operational theory, a closed diagram is equal to a real number (the probability associated to it), while in the operational theory these are not equal until the map p is applied to the closed diagram.

We will assume that all deterministic effects for a given system A in the operational theory (corresponding to implementing a measurement on the system and marginalizing over its outcome) are operationally equivalent to one another. We denote these deterministic effects by a discarding symbol decorated with a label c for the context.

The GPT associated to an operational theory

It is well known [31-33] that a quotiented operational theory, $\widetilde{Op}$, is nothing but a generalized probabilistic theory [29,30], and in fact for this paper we view this as the definition of a GPT. We will now demonstrate this by showing that $\widetilde{Op}$ is tomographic (a notion that will be defined momentarily), is representable in real vector spaces, is convex, and has a unique deterministic effect. This is analogous to how quotiented OPTs arise from unquotiented OPTs in [32,33].

Firstly, note that the quotiented operational theory is tomographic. For a generic process theory, P, being tomographic means that processes are characterized by scalars. That is, given any two distinct processes f, g : A → B, there must exist a tester h ∈ P that turns each of these processes into a closed diagram, i.e., a scalar, such that the scalars are distinct. That Eq. (32) implies Eq. (31) for processes in a quotiented operational theory is trivial; we now give the proof that Eq. (31) implies Eq. (32). Consider two distinct processes, $\tilde{T}$ and $\tilde{T}'$, in the quotiented operational theory (the images under the quotienting map of processes T and T′ in the operational theory). By definition, we know that $\tilde{T} \neq \tilde{T}'$ implies that T ̸≃ T′, and hence there exists some tester τ that assigns them different probabilities. Since the action of p is identical to that of ∼ on closed diagrams, and since the quotienting map is diagram-preserving, this yields a tester that distinguishes $\tilde{T}$ and $\tilde{T}'$ by their scalars. This establishes that Eq. (31) implies Eq. (32), and so the quotiented operational theory is tomographic.

This means that we can identify an operational equivalence class of processes, $\tilde{T}$, with a real vector, K_T, living in R^{T_{A→B}}, where T_{A→B} denotes the set of testers for processes with input A and output B. Concretely, we define these vectors component-wise via $(K_T)_\tau := \Pr(\tau[\tilde{T}])$ for each tester τ ∈ T_{A→B}.
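The quotienting construction just described can be mimicked with a toy data structure (entirely hypothetical data; the real objects are diagrams, not probability tuples): procedures are (statistics, context) pairs, and ∼ is projection onto the first component.

```python
from collections import namedtuple

# Toy sketch (all names and numbers hypothetical): a laboratory procedure is a
# pair of an underlying equivalence-class datum (here, a tuple of outcome
# probabilities over a fixed finite set of testers) and a context label
# recording how the procedure is physically realized.
Procedure = namedtuple("Procedure", ["stats", "context"])

def quotient(proc):
    """The quotienting map ~ : keep only the equivalence-class datum."""
    return proc.stats

# Two distinct contexts realizing operationally equivalent procedures:
T1 = Procedure(stats=(0.5, 0.25, 0.25), context="ensemble A")
T2 = Procedure(stats=(0.5, 0.25, 0.25), context="ensemble B")

assert quotient(T1) == quotient(T2)   # same equivalence class...
assert T1 != T2                       # ...but distinct laboratory procedures
```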
Clearly, following on from the discussion around Eq. (36), we have that K_T = K_{T′} if and only if $\tilde{T} = \tilde{T}'$. This vector space representation, however, is generally infinite dimensional, and gives a highly inefficient characterisation of processes. We can instead focus on some minimal subset of fiducial testers F_{A→B} ⊂ T_{A→B}, which, for notational convenience, we index by α = 1, 2, ..., m_{A→B}. The term fiducial means that this subset of testers satisfies two key properties. The first is that they must also suffice for tomography. We can therefore use these fiducial testers to define a finite dimensional vector representation R_T of a process $\tilde{T}$, defined componentwise via $(R_T)_\alpha := \Pr(\tau_\alpha[\tilde{T}])$ for α = 1, 2, ..., m_{A→B}. This new representation, R_T, has a straightforward relation to the original vector representation, K_T; all one must do to go from the original to the new representation is to restrict the K_T vectors to the relevant subset of their components. We can think of this as a linear restriction map F_{A→B} : R^{T_{A→B}} → R^{F_{A→B}} :: K ↦ K|_{F_{A→B}}. This allows us to relate the two representations via $R_T = F_{A\to B}(K_T)$. The second key property of fiducial sets of testers is that they define a linear compression of the K_T vectors. Formally, what we mean by this is that there is a linear map E_{A→B} : R^{F_{A→B}} → R^{T_{A→B}} which is the inverse of the restriction map F_{A→B} on the vectors K_T, that is, $E_{A\to B}(K_T|_{F_{A\to B}}) = K_T$ for all T. We reiterate that F_{A→B} is taken to be a minimal fiducial set of testers, which means that it is a minimal cardinality set of testers satisfying these two properties. Note that minimal fiducial sets are typically not unique.

Consider now how the sequential composition of processes is represented. Given representations of a pair of processes (R_T, R_{T′}), we know (as R is injective) that we can determine $\tilde{T}$ and $\tilde{T}'$, compute their composition $\tilde{T}' \circ \tilde{T}$, and via Eq. (39) obtain $R_{T' \circ T}$. We denote the sequential composition map on the vector representation as $R_{T'} \,\square\, R_T := R_{T' \circ T}$. Similarly, for parallel composition we can define $R_{T'} \,\boxtimes\, R_T := R_{T' \otimes T}$. As we demonstrate in Appendix E.2, both □ and ⊠ can be uniquely extended to bilinear maps on the relevant vector spaces. Specifically, we have:

Lemma 2.5. The operation □ can be uniquely extended to a bilinear map, and the operation ⊠ can be uniquely extended to a bilinear map.

This implies that transformations act linearly on the state space, and also that the summation operation distributes over diagrams.

It is generally easier to work with this vector representation of processes rather than directly with the abstract process theory of operational equivalence classes of procedures. We will do so when convenient, abusing notation by dropping the explicit symbol R, and simply denoting the vector representation of the equivalence classes in the same way as the equivalence classes themselves. That is, we will denote R_T by $\tilde{T}$; for example, Eq. (42) will be rewritten in this shorthand.

Note that generic linear combinations such as $\sum_i r_i \tilde{T}_i$ need not correspond to any process in the operational theory. However, some linear combinations correspond to mixtures and coarse-grainings, and these will correspond to other processes in the operational theory. Namely, if T_1 is a procedure that is a mixture of T_2 and T_3 with weights ω and 1−ω, then by Eq. (25) it follows that, for all τ, the assigned probabilities mix accordingly, which in turn implies $K_{T_1} = \omega K_{T_2} + (1-\omega) K_{T_3}$, and so, by the fact that quotiented operational theories are tomographic, $\tilde{T}_1 = \omega \tilde{T}_2 + (1-\omega) \tilde{T}_3$.
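The restriction/compression structure of fiducial testers can be illustrated numerically. In this hypothetical sketch, processes secretly live in a 3-dimensional space probed by 6 testers, so 3 testers already suffice for tomography, and the remaining components of K_T are recovered by a linear decompression map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 6 testers probing processes that secretly have only 3
# degrees of freedom, so each K_T is a 6-component vector with 3 independent
# components.
hidden = rng.random((6, 3))              # each tester's response to the 3 dofs
K_vectors = hidden @ rng.random((3, 5))  # K_T columns for 5 sample processes

def minimal_fiducial_rows(M):
    """Greedily pick a minimal set of row (tester) indices spanning the rows."""
    chosen = []
    for i in range(M.shape[0]):
        if np.linalg.matrix_rank(M[chosen + [i]]) == len(chosen) + 1:
            chosen.append(i)
    return chosen

fid = minimal_fiducial_rows(hidden)      # indices of a minimal fiducial set
R_vectors = K_vectors[fid]               # restriction map F: keep those rows

# Decompression map E: a linear map with E(R_T) = K_T on all the K vectors,
# recovered here by least squares.
sol, *_ = np.linalg.lstsq(R_vectors.T, K_vectors.T, rcond=None)
E = sol.T
assert np.allclose(E @ R_vectors, K_vectors)
print(f"{len(fid)} fiducial testers suffice out of {hidden.shape[0]}")
```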
Hence, the mixing relations between preparation procedures in the operational theory are captured by a convex structure in this representation. More generally, we find that analogous linear relations, with arbitrary coefficients r_i, are also preserved. Hence, for example, the coarse-graining relations that hold among operational effects are captured by the linear structure in this representation of the quotiented operational theory.

If one makes the standard assumption that every possible mixture of processes in the operational theory is another process in the operational theory, then it follows that the quotiented operational theory is convex. Finally, the fact that we assumed that each system A has a unique equivalence class of deterministic effects means that the quotiented operational theory will have a unique deterministic effect [32] for each system.

In summary, a quotiented operational theory satisfies the key properties of a GPT: being tomographic, representability of each system in R^d (for some d), convexity, and uniqueness of the deterministic effect for each system. Henceforth, we will refer to the quotiented operational theory as the GPT associated to the operational theory, and we presume that every GPT can be achieved in this way.

For example, quantum theory qua operational theory is the process theory whose processes are laboratory procedures (including contexts), while quantum theory qua GPT is the process theory whose processes are completely positive [47,48] trace-nonincreasing maps, whose states are density operators, and so on. When one quotients quantum theory qua operational theory, one obtains quantum theory qua GPT.

It is worth noting that a quotiented operational theory should not be viewed as an instance of an operational theory. In an operational theory, it is not merely that contexts are permitted; rather, they are required in the definition of the procedures. The primitives in an operational theory are laboratory procedures, and these necessarily involve a complete description of the context of a procedure. For example, "prepare the maximally mixed state" specifies a preparation when viewing quantum theory as a quotiented operational theory, but not when viewing it as an operational theory. In the latter case, one must additionally specify how this mixture is achieved, e.g., which ensemble of pure states was prepared, or which entangled state was prepared before tracing out a subsystem.

Tomographic locality of a GPT

Tomographic locality is a common assumption in the GPT literature; indeed, in some early work on GPTs it was considered such a basic principle that it was taken as part of the framework itself [30]. Intuitively, it states that processes can be characterized by local state preparations and local measurements. In this section, we will show that all tomographically local GPTs can be represented as subtheories $\widetilde{Op}$ ⊂ FVect_R, using arguments similar to those in, e.g., the duotensor formalism of Ref. [31].

A GPT is said to satisfy tomographic locality if one can determine the identity of any process by local operations on each of its inputs and outputs. One can immediately verify that an operational theory satisfies Eq. (26) if and only if the GPT obtained by quotienting relative to operational equivalences is tomographically local.
There are many equivalent characterizations of tomographic locality for a GPT. The most useful for us is the following condition, first introduced in Sections 6.8 and 9.3 of [31], which allows us to show that tomographically local GPTs can be represented as subtheories of FVect_R.

Consider an arbitrary set of linearly independent and spanning states $\{P^A_j\}_{j=1,\dots,m_A}$ on system A and an arbitrary set of linearly independent and spanning effects $\{E^B_i\}_{i=1,\dots,m_B}$ on system B. (If the systems are composite, these should moreover be chosen as product states and product effects, respectively.) Define the 'transition matrix' of a process T from system A to system B in these bases as the matrix of probabilities $(N_T)_{ij} := E^B_i \circ T \circ P^A_j$.

Lemma 2.6. A GPT is tomographically local if and only if one can decompose the identity process, denoted 1_A, for every system A as
$$1_A = \sum_{i,j} P^A_j \,(M_{1_A})_{ji}\, E^A_i, \qquad (51)$$
where M_{1_A} is the matrix inverse of the transition matrix N_{1_A} of the identity process, that is, $M_{1_A} N_{1_A} = N_{1_A} M_{1_A} = \mathbb{1}$.

This was originally shown in [31, Sec. 6.8], and we reprove it in our notation in Appendix E.3.

The vector space spanned by the set $\{P^A_j\}_{j=1,\dots,m_A}$ of states is R^{m_A}, and the vector space spanned by the set $\{E^A_i\}_{i=1,\dots,m_A}$ of effects is its dual. These bases are generically not orthonormal, which implies that M_{1_A} is generically not the identity matrix (nor is it equal to N_{1_A}), counter to intuitions one might have from working with orthonormal bases. The following corollary then makes explicit some extra structure which was implicit in the vector representation R_T of the previous section. In particular, it shows that the vector space R^{m_{A→B}} of transformations from A to B is isomorphic to the vector space of linear maps from R^{m_A} to R^{m_B}, where a process T is represented as a vector R_T in the former and a matrix M_T in the latter.

Corollary 2.7. A GPT is tomographically local if and only if one can decompose any process T as
$$T = \sum_{i,j} P^B_i \,(M_T)_{ij}\, E^A_j, \quad \text{where} \quad M_T := M_{1_B}\, N_T\, M_{1_A}. \qquad (53)$$

Proof. To prove that a GPT is tomographically local if one can decompose any process as in Eq. (53), simply note that for the special case T = 1_A, Eq. (53) implies Eq. (51), and hence, by Lemma 2.6, implies tomographic locality. To prove the converse, we assume tomographic locality and apply Lemma 2.6 to decompose the input and output systems of an arbitrary process T, at which point we can simply identify M_T = M_{1_B} N_T M_{1_A}.

Given the vector representation of two equivalence classes, R_T and R_{T′}, we showed how to compute the representation of either the sequential or parallel composition of these (via □ and ⊠, respectively). However, if we instead represent equivalence classes by matrices M_T, then how must we represent the parallel and sequential composition of processes? It turns out that parallel composition is represented by
$$M_{T' \otimes T} = M_{T'} \otimes M_T, \qquad (57)$$
where the ⊗ on the left represents the parallel composition of equivalence classes, while on the right it represents the tensor product of the two matrices. Meanwhile, sequential composition in this matrix representation is given by
$$M_{T' \circ T} = M_{T'}\, N_{1_B}\, M_T, \qquad (58)$$
where on the left-hand side ∘ represents the sequential composition of the equivalence classes, while on the right-hand side we have a product of matrices. These two facts are proven in Appendix E.4.
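Lemma 2.6, Corollary 2.7, and the composition rules (57)-(58) can all be verified numerically in a toy classical GPT. Everything below is our own illustrative choice: the system is a two-outcome probability simplex, and the spanning states and effects are deliberately non-orthogonal.

```python
import numpy as np

# States are column vectors (probability distributions), effects are row
# vectors, and a process T is a stochastic matrix acting on states.
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])              # columns: spanning states P_j
E = np.array([[1.0, 0.0],
              [0.4, 0.6]])              # rows: spanning effects E_i

def N(T):
    """Transition matrix (N_T)_{ij} = E_i(T(P_j)) in the chosen bases."""
    return E @ T @ P

N_id = N(np.eye(2))
M_id = np.linalg.inv(N_id)              # M_{1} is the inverse of N_{1}

# Lemma 2.6: the identity decomposes over the chosen states and effects.
assert np.allclose(P @ M_id @ E, np.eye(2))

def M(T):
    """Matrix representation M_T = M_{1_B} N_T M_{1_A} of Corollary 2.7."""
    return M_id @ N(T) @ M_id

T1 = np.array([[0.7, 0.5], [0.3, 0.5]])   # two arbitrary stochastic processes
T2 = np.array([[0.6, 0.1], [0.4, 0.9]])
assert np.allclose(P @ M(T1) @ E, T1)      # Corollary 2.7 reconstruction

# Eq. (58): sequential composition in the M representation inserts N_{1},
# so T |-> M_T alone does not compose by matrix multiplication...
assert np.allclose(M(T2 @ T1), M(T2) @ N_id @ M(T1))
assert not np.allclose(M(T2 @ T1), M(T2) @ M(T1))

# ...whereas the modified representation T |-> M_T N_{1} (Theorem 2.8,
# discussed next) composes by ordinary matrix product and sends the identity
# process to the identity matrix.
rep = lambda T: M(T) @ N_id
assert np.allclose(rep(T2 @ T1), rep(T2) @ rep(T1))
assert np.allclose(rep(np.eye(2)), np.eye(2))
```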
The fact that Eq. (58) is not simply the sequential composition rule of FVect_R, namely the matrix product of M_{T′} and M_T, implies that this matrix representation is not a subtheory of FVect_R, nor even some other diagram-preserving representation of the GPT. This form of composition has, however, appeared numerous times in the literature, for example in Refs. [3,31,37,49]. There is, moreover, a well known trick to turn this representation into a diagram-preserving representation in FVect_R: one simply defines a new matrix representation by replacing M_T with M_T N_{1_A}. It is then easy to verify that these do indeed compose using the standard composition rules (tensor products for parallel composition and matrix multiplication for sequential composition), and that the identity process is represented by the identity matrix. Putting all of this together, we arrive at the following.

Theorem 2.8. Any tomographically local GPT has a diagram-preserving representation in FVect_R given by the map A ↦ R^{m_A} on systems and the map T ↦ M_T N_{1_A} on processes, for some basis of states {P^A_j} and effects {E^B_k}.

This result is implicit in the work of Refs. [31,34] and more explicit in the quantum case of [37]. Effectively this means that we can view any tomographically local GPT simply as a suitably defined subtheory $\widetilde{Op}$ ⊂ FVect_R. For the remainder of this paper we restrict our attention to tomographically local GPTs, and we will moreover abuse notation and simply denote the linear maps in this representation by $\tilde{T}$ rather than by M_T N_{1_A}, and similarly, the vector spaces by A rather than by R^{m_A}. That is, we will neglect to make the distinction between the quotiented operational theory and its representation as a subtheory of FVect_R, as preserving the distinction is unwieldy and typically unhelpful.

Quantum theory is an example of an operational theory, and it is well known that the GPT representation of quantum theory is tomographically local. The latter is a subtheory of FVect_R, as the Hermitian operators on a Hilbert space form a real vector space, and completely positive trace-nonincreasing maps are just a particular class of linear maps between these vector spaces. Classical theory, the Spekkens toy model [50], and the stabilizer subtheory [51] for arbitrary dimensions are also tomographically local. Examples of GPTs which are not tomographically local are real quantum theory [52] and the real stabilizer subtheory.

Representations of operational theories and GPTs

One often wishes to find alternative representations of an operational theory or a GPT, e.g., as an ontological model or a quasiprobabilistic model (to be defined shortly). A key motivation for studying ontological models is the attempt to find an explanation of the statistics in terms of some underlying properties of the relevant systems, especially if this explanation can be said to be classical in some well-motivated sense. In this section, we introduce the definitions of ontological models and quasiprobabilistic models, and in the next section we discuss under what conditions one can say that such representations provide a classical explanation of the operational theory or GPT which they describe.

Ontological models

An ontological model is a map associating to every system S a set Λ_S of ontic states, and associating to every process a stochastic map from the ontic state space associated to the input systems to the ontic state space associated with the output systems.
It is important to distinguish between ontological models of operational theories and ontological models of GPTs, as was shown in Ref. [4]. In particular, the former allows for context-dependence while the latter does not. See App. A for a detailed discussion of this point.

Definition 2.9 (Ontological models of operational theories). An ontological model [53] of an operational theory Op is a diagram-preserving map ξ : Op → SubStoch from the operational theory to the process theory SubStoch, where the map satisfies three properties:

1. It represents all deterministic effects in the operational theory appropriately (mapping each to the marginalization covector).

2. It reproduces the operational predictions of the operational theory (i.e., is empirically adequate). That is, for all closed diagrams, the real number assigned by the model equals Pr(E, P).

3. It preserves the convex and coarse-graining relations between operational procedures. E.g., if T_1 is a procedure that is a mixture of T_2 and T_3 with weights ω and 1−ω, respectively, then it must hold that $\xi(T_1) = \omega\, \xi(T_2) + (1-\omega)\, \xi(T_3)$.

This diagrammatic definition of an ontological model reproduces the usual notions [1] of ontological representations of preparation procedures and of operational effects. In particular, an operational preparation procedure is an operational process with a trivial input, and by diagram preservation of ξ, it is mapped to a process in SubStoch with a trivial input, that is, to a probability distribution over the ontic states. Similarly, an operational effect is an operational process with a trivial output, and by diagram preservation of ξ it is mapped to a substochastic process with a trivial output, that is, to a response function over the ontic states.

Definition 2.10 (Ontological models of GPTs). An ontological model ξ of a GPT $\widetilde{Op}$ is a diagram-preserving map ξ : $\widetilde{Op}$ → SubStoch from the GPT to the process theory SubStoch, where the map satisfies three properties:

1. It represents the deterministic effect for each system appropriately.

2. It reproduces the operational predictions of the GPT (i.e., is empirically adequate).

3. It preserves the convex and coarse-graining relations between operational procedures. E.g., if $\tilde{T}_1 = \omega \tilde{T}_2 + (1-\omega) \tilde{T}_3$, then it must hold that $\xi(\tilde{T}_1) = \omega\, \xi(\tilde{T}_2) + (1-\omega)\, \xi(\tilde{T}_3)$.

In analogy with the discussion above, one has that normalized GPT states on some system are represented in an ontological model by probability distributions over the ontic state space associated with that system, while GPT effects are represented by response functions. The state spaces in SubStoch form simplices, and so we will sometimes refer to an ontological model of a GPT as a simplex embedding. This terminology is a natural extension of the definition of simplex embedding in [4].

Quasiprobabilistic models

We now introduce quasiprobabilistic models of a GPT. One could analogously define quasiprobabilistic models of an operational theory (as diagram-preserving maps from Op to QuasiSubStoch). However, given that the expressive freedom offered by the possibility of context-dependence is sufficient to ensure that every operational theory admits of an ontological model, and hence a positive quasiprobabilistic model, there is no need to make use of the additional expressive freedom offered by allowing negative quasiprobabilities, and hence no motivation to introduce such models. On the other hand, in the case of GPTs, there does not always exist an ontological model; hence quasiprobabilistic models are a useful conceptual and mathematical tool for assessing the classicality of a GPT.

Definition 2.11 (Quasiprobabilistic models of GPTs).
A quasiprobabilistic model of a GPT $\widetilde{Op}$ is a diagram-preserving map ξ : $\widetilde{Op}$ → QuasiSubStoch, where the map satisfies three properties:

1. It represents the deterministic effect for each system appropriately.

2. It reproduces the operational predictions of the GPT (i.e., is empirically adequate), so that for all closed diagrams the assigned real number equals Pr($\tilde{E}$, $\tilde{P}$).

3. It preserves the convex and coarse-graining relations between operational procedures. E.g., if $\tilde{T}_1 = \omega \tilde{T}_2 + (1-\omega) \tilde{T}_3$, then it must hold that $\xi(\tilde{T}_1) = \omega\, \xi(\tilde{T}_2) + (1-\omega)\, \xi(\tilde{T}_3)$.

One can see that the only technical distinction between an ontological model of a GPT and a quasiprobabilistic model of a GPT is that in the latter, the probabilities are replaced by quasiprobabilities, which are allowed to go negative. In analogy with the discussion at the end of Section 2.3.1, one has that GPT states on some system are represented in a quasiprobabilistic model by quasidistributions over the sample space associated with that system, that is, functions on Λ normalised to 1 but whose values can be negative, while GPT effects are represented by arbitrary real-valued functions over the sample space.

Three equivalent notions of classicality

The only ontological models that constitute good classical explanations are those that satisfy additional assumptions. One such principle is that of (generalized) noncontextuality [1]. It was argued in Refs. [1,2,4,54] that an ontological model of an operational theory should be deemed a good classical explanation only if it is noncontextual. We now provide the definition of a noncontextual ontological model in the framework we have introduced here.

Definition 3.1 (A noncontextual ontological model of an operational theory). An ontological model of an operational theory ξ_nc : Op → SubStoch satisfies the principle of generalized noncontextuality if and only if every two operationally equivalent procedures in the operational theory are mapped to the same substochastic map in the ontological model. That is, if T ≃ T′ then ξ_nc(T) = ξ_nc(T′). Another way of stating this condition is that the map ξ_nc does not depend functionally on the context of any process in the operational theory, so that for all T := ($\tilde{T}$, c_T) one has ξ_nc(T) = ξ_nc($\tilde{T}$).

Ontological models of GPTs (as we have defined them) cannot be said to be either generalized-contextual or generalized-noncontextual (in contrast to ontological models of operational theories, which can). This is because the domain of our notion of an ontological model of a GPT has no notion of a context on which the ontological representation could conceivably depend. (This was first pointed out in Ref. [4], and we explain it further in Appendix A.) However, Ref. [4] showed (in the context of prepare-and-measure scenarios) that the principle of noncontextuality nonetheless induces a notion of classicality within the framework of GPTs: namely, the GPT is said to have a classical explanation if and only if it admits of an ontological model. (Not all GPTs admit of an ontological model, even if the operational theories from which they are obtained as quotiented theories do. This is a consequence of the representational inflexibility resulting from the lack of contexts on which the representation might depend. For instance, the Beltrametti-Bugajski model [55] can be viewed as an ontological model of the single qubit subtheory qua operational theory, but not as an ontological model of the single qubit subtheory qua GPT. This can be seen by noting that this model is explicitly contextual, while the single qubit subtheory qua GPT has no contexts; equivalently, it can be seen by noting that the single qubit subtheory qua GPT does not admit of any ontological model. The same can be said of the 8-state model of Ref. [56] relative to the qubit stabilizer subtheory: it is an ontological model of the qubit stabilizer subtheory qua operational theory (a contextual ontological model), but not of the qubit stabilizer subtheory qua GPT; the latter has no contexts and does not admit of any ontological model.)

We now extend this result (Theorem 1 of Ref. [4]) from prepare-and-measure scenarios to arbitrary scenarios.

Proposition 3.2. There is a one-to-one correspondence between noncontextual ontological models of an operational theory, ξ_nc : Op → SubStoch, and ontological models of the associated GPT, ξ : $\widetilde{Op}$ → SubStoch.
Proof sketch. The idea of the proof is captured by the following construction, where C is defined as a map, which is not diagram-preserving, taking any process $\tilde{T}$ in the GPT $\widetilde{Op}$ to some process T = ($\tilde{T}$, c_T) in the operational theory. There always exists at least one such map C (in general, there exist many), and all of these satisfy ∼ ∘ C = Id (in general, no choice of C will satisfy C ∘ ∼ = Id). Now, consider an operational theory Op and the GPT $\widetilde{Op}$ it defines.

Given an ontological model ξ of $\widetilde{Op}$, one can define a noncontextual model ξ_nc of Op via ξ_nc := ξ ∘ ∼. The map constructed in this manner cannot depend on the contexts of processes in the operational theory, since these are removed by the quotienting map ∼. As such, the map ξ_nc necessarily satisfies Eq. (68), and hence is indeed noncontextual.

Given a noncontextual ontological model ξ_nc of Op, one can define an ontological model ξ of $\widetilde{Op}$ via ξ := ξ_nc ∘ C. Because the map ξ_nc does not depend on the context, the map constructed in this manner does not depend on the choice of C, and is unique.

For completeness, we prove in Appendix C that ξ_nc := ξ ∘ ∼ indeed satisfies the relevant constraints to be an ontological model of an operational theory, and similarly, that ξ := ξ_nc ∘ C satisfies the relevant constraints to be a valid ontological model of a GPT.

Finally, we note that this notion of classicality of a GPT is closely linked to the positivity of quasiprobabilistic models. This result can be seen as an extension of the equivalence in Ref. [2] from the prepare-and-measure scenario to arbitrary compositional scenarios.

Definition 3.3 (Positive quasiprobabilistic model of a tomographically local GPT). A positive quasiprobabilistic model of a tomographically local GPT $\widetilde{Op}$ is a quasiprobabilistic model ξ₊ : $\widetilde{Op}$ → QuasiSubStoch in which all of the matrix elements of the quasisubstochastic maps in the image of ξ₊ are positive; equivalently, one in which all of the quasisubstochastic maps in the image of ξ₊ are substochastic.

Simply by examining the definitions, it is clear that a positive quasiprobabilistic model ξ₊ : $\widetilde{Op}$ → QuasiSubStoch of a GPT is equivalent to an ontological model ξ : $\widetilde{Op}$ → SubStoch of that GPT. It follows that:

Proposition 3.4. There exists a positive quasiprobabilistic model of a GPT $\widetilde{Op}$ if and only if there exists an ontological model of $\widetilde{Op}$.
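The fact that positivity is the only technical difference between an ontological model and a quasiprobabilistic model of a GPT can be expressed as a small checker. This is a hypothetical sketch restricted to states and effects only (ignoring transformations and composition), with states as columns and effects as rows.

```python
import numpy as np

# Hypothetical checker: given the vector representations assigned by a model
# xi to some GPT states and effects, decide whether the model is ontological
# (positive) or merely quasiprobabilistic, after confirming it is a model at
# all (empirically adequate and normalized).
def classify_model(xi_states, xi_effects, gpt_states, gpt_effects, tol=1e-9):
    # Empirical adequacy: represented pairs reproduce all the probabilities.
    for s, xs in zip(gpt_states, xi_states):
        for e, xe in zip(gpt_effects, xi_effects):
            if not np.isclose(e @ s, xe @ xs, atol=tol):
                return "not a model"
    # Normalization: (quasi)distributions representing states sum to 1.
    if not all(np.isclose(xs.sum(), 1, atol=tol) for xs in xi_states):
        return "not a model"
    # Positivity separates ontological from merely quasiprobabilistic models.
    positive = all((xs > -tol).all() for xs in xi_states) and \
               all(((xe > -tol) & (xe < 1 + tol)).all() for xe in xi_effects)
    return "ontological" if positive else "quasiprobabilistic"

# Trivial usage: a classical bit represented by itself is ontological.
states = [np.array([0.7, 0.3]), np.array([0.2, 0.8])]
effects = [np.array([1.0, 1.0]), np.array([1.0, 0.0])]
print(classify_model(states, effects, states, effects))  # -> "ontological"
```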
Although Proposition 3.4 follows immediately from the relevant definitions, we have nonetheless highlighted it here. This is because a generic quasiprobabilistic model of a GPT has no meaningful conceptual relationship to an ontological model of a GPT, and so it is conceptually important to understand in what special cases the two notions coincide. Furthermore, we hope that highlighting this fact will encourage more dialogue between those researchers studying quasiprobabilistic models and those studying ontological models.

Structure theorem

With this framework in place, we can prove our main results. We start with a general theorem, leveraging the fact that $\widetilde{Op}$ ⊂ FVect_R, as stated in Theorem 2.8. We then specialize to the various physically relevant cases.

Theorem 4.1. Any convex-linear, empirically adequate, and diagram-preserving map M : $\widetilde{Op}$ → FVect_R can be written, for every process T from A to B, as
$$M(T) = \chi_B \circ T \circ \chi_A^{-1}, \qquad (69)$$
where for each system A, χ_A : A → V_A is an invertible linear map within FVect_R. Moreover, the χ_A are uniquely determined by Eq. (69).

Note that we have colored the linear maps χ_A to make it immediately apparent that they came from the associated diagram-preserving map. The proof consists of three main arguments, provided explicitly in Appendix B and sketched here.

First, we leverage tomographic locality of the GPT, as well as convex-linearity and diagram preservation of the map, to prove that one can represent the action of the map on a generic process in terms of its action on states and effects.

Second, using convex-linearity of the map, we prove that one can represent the action of M on states and effects simply as some linear maps within FVect_R; that is, $M(P) = \chi_B \circ P$ on states and $M(E) = E \circ \phi_A$ on effects, which relies on the isomorphism between vectors (or covectors) in V and linear maps from R to V (resp. from V to R). Note that χ_B and ϕ_A are uniquely fixed by Eq. (71), which means that there can be no other choice made for the χ_A appearing in Eq. (69).

Next, we leverage empirical adequacy, i.e., that M preserves the scalar E ∘ P for all P and E, together with the fact that the states and effects span the vector space and its dual, to show that $\phi_A \circ \chi_A = 1_A$, that is, that ϕ_A is the left inverse of χ_A.

Finally, we consider the representation of the identity, which, as a consequence of diagram preservation, yields $\chi_A \circ \phi_A = 1_{V_A}$; this means that ϕ_A is also the right inverse of χ_A, and hence that it is unique, so that we can write ϕ_A = χ_A^{-1}. This shows that the only freedom in the representation is in the representation of the states of the theory, via the choice of linear maps χ_S; after specifying these, one can uniquely extend to the representation of arbitrary processes. It also shows that the representation M is necessarily invertible, as we can always define the inverse of M by using the inverses of the χ's.

One key consequence of this result is the following corollary, whose significance we investigate in Section 4.3.

Corollary 4.2. The dimension of the codomain V_A of the map χ_A is given by the dimension of the GPT vector space A.

Proof. The linear map χ_A is invertible, so the dimensions of its domain and of its codomain must be equal, and its domain is the GPT vector space.

Note that the proof of the structure theorem and this subsequent corollary do not require the full generality of diagram preservation, only the (mathematically) much weaker conditions of diagram preservation for prepare-measure scenarios, for measure-and-reprepare processes, and for the identity process. We will give justifications of these (for the case of ontological models and quasiprobabilistic models) in Sec. 5, and will discuss further consequences of general diagram preservation in Sec. 4.4.
Diagram-preserving quasiprobabilistic models are exact frame representations

As mentioned in the introduction, SubStoch and QuasiSubStoch are subprocess theories of FVect_R: SubStoch ⊂ QuasiSubStoch ⊂ FVect_R. This implies that our main theorem applies to these special cases. The fact that the codomain is restricted can then equivalently be expressed as a constraint on the linear maps χ_A. In the case of quasiprobabilistic representations we obtain:

Proposition 4.3. Any diagram-preserving quasiprobabilistic model of a tomographically local GPT can be written as
$$\xi(T) = \chi_B \circ T \circ \chi_A^{-1} \qquad (77)$$
for invertible linear maps {χ_S : S → R^{Λ_S}} within FVect_R for each system, where these satisfy
$$\mathbb{1}^\top \circ \chi_S = u_S, \qquad (78)$$
with $\mathbb{1}^\top$ the all-ones covector on R^{Λ_S} and u_S the deterministic effect on S.

Proof. Since ξ satisfies the requirements of Theorem 4.1, we immediately obtain Eq. (77). For the particular case of the deterministic effect, Eq. (77) gives its representation as $u_S \circ \chi_S^{-1}$. Recall that, by definition, a quasiprobabilistic model represents the deterministic effect by the all-ones covector, Eq. (65). Combining these gives $\mathbb{1}^\top = u_S \circ \chi_S^{-1}$, and composing both sides with χ_S gives Eq. (78).

The extra constraint of Eq. (78) is not part of the general structure theorem because an abstract vector space does not have a natural notion of discarding. Such a privileged notion is found within, for example, SubStoch, as the all-ones covector, which represents marginalization. Since the χ_S are just invertible linear maps, this map can be seen as merely transforming from one representation of the GPT to another. Critically, however, one must note that the vector spaces in QuasiSubStoch are all of the form R^Λ, and so they come equipped with extra structure, namely a preferred basis and dual basis. Hence, these representations are effectively singling out this preferred basis for the GPT.

To see this more explicitly, denote the preferred basis and cobasis for R^Λ by $\{e_\lambda\}_{\lambda\in\Lambda}$ and $\{e^\lambda\}_{\lambda\in\Lambda}$. This means that, for any system A in the GPT, we can write χ_A as
$$\chi_A = \sum_{\lambda\in\Lambda_A} e_\lambda \circ D^A_\lambda \qquad (84)$$
for some covectors $D^A_\lambda$ on A, whereby condition (78) becomes
$$\sum_{\lambda} D^A_\lambda = u_A. \qquad (86)$$
Similarly, we can run the same argument using χ_A^{-1}, writing $\chi_A^{-1} = \sum_\lambda F^A_\lambda \circ e^\lambda$ for some vectors $F^A_\lambda$ in A, where (by Eq. (81)) for all λ,
$$u_A(F^A_\lambda) = 1. \qquad (88)$$
Moreover, this decomposition, together with invertibility, implies the duality condition
$$D^A_\lambda(F^A_{\lambda'}) = \delta_{\lambda\lambda'}. \qquad (90)$$
We can then represent the action of ξ on a transformation T as a quasistochastic map defined by the conditional quasiprobability distribution $\xi(\lambda'|\lambda) = D^B_{\lambda'}\big(T(F^A_\lambda)\big)$.

Finally, we note that Proposition 4.3 also implies that any quasiprobability representation constructed using an overcomplete frame necessarily fails to be diagram-preserving.

Diagram-preserving quasiprobabilistic models of quantum theory

We now consider the case of quantum theory as a GPT. The basis {F_λ}_{λ∈Λ} is a basis for the real vector space of Hermitian operators for the system, while the cobasis {D_λ}_{λ∈Λ} is a basis for the space of linear functionals on the vector space of Hermitian operators. The Riesz representation theorem [57] guarantees that every element D_λ of the cobasis can be represented via the Hilbert-Schmidt inner product with some Hermitian operator, which we will denote by D*_λ, such that for all ρ: $D_\lambda(\rho) = \mathrm{tr}(D^*_\lambda\, \rho)$. The condition in Eq. (86) becomes $\sum_\lambda D^*_\lambda = \mathbb{1}$, the condition in Eq. (88) becomes $\mathrm{tr}(F_\lambda) = 1$, and the condition of Eq.
(90) becomes $\mathrm{tr}(D^*_\lambda F_{\lambda'}) = \delta_{\lambda\lambda'}$. It is clear, therefore, that {F_λ}_λ and {D*_λ}_λ constitute a minimal frame and its dual (in the language of, for example, Ref. [3]). Hence, this representation is nothing but an exact frame representation, that is, one which is not overcomplete. That is, a transformation T, represented by a completely positive trace-preserving map $\mathcal{E}_T$, will be represented as a quasistochastic map defined by the conditional quasiprobability distribution
$$\xi(\lambda'|\lambda) = \mathrm{tr}\big(D^*_{\lambda'}\, \mathcal{E}_T(F_\lambda)\big).$$
It is easy to see that any set of spanning and linearly independent vectors summing to the identity will define a suitable dual frame {D*_λ}, and then the frame {F_λ} itself is uniquely defined by Eq. (94). (Note in particular that the elements of the frame need not be pairwise orthogonal, nor must those of the dual frame.)

It has previously been shown that all quasiprobabilistic models of quantum theory are frame representations [3]. What we learn here is that diagram-preserving quasiprobabilistic models are necessarily the simplest possible frame representations, namely those that are not overcomplete.

Structure theorems for ontological models

In the case of ontological models of a GPT, we obtain:

Proposition 4.4. Any diagram-preserving ontological model of a tomographically local GPT can be written as in Eqs. (77) and (78), where moreover every pair (χ_A^{-1}, χ_B) defines a positive map from the cone of transformations from A to B in the GPT $\widetilde{Op}$ to the cone of substochastic maps from Λ_A to Λ_B in SubStoch.

The proof is given in Appendix D. Apart from positivity, the proof follows immediately from Proposition 4.3. One can interpret this map from the GPT to the ontological model as an explicit embedding into a simplicial GPT, as discussed in [4], but generalized to the case in which both the GPT under consideration and the simplicial GPT have arbitrary processes, not just states and effects. This follows from the positivity conditions, empirical adequacy, and the preservation of the deterministic effect.

As in the case of quasiprobabilistic models, we can write out χ_A as $\chi_A = \sum_\lambda e_\lambda \circ D^A_\lambda$, where {D^A_λ}_λ forms a basis for the vector space of the GPT defined by the operational theory. The positivity condition for χ_A discussed above immediately implies that each D^A_λ is a linear functional which is positive on the state cone, and the normalization condition immediately implies that their sum over λ is the deterministic effect. By a similar argument, each F^A_λ is a vector in the vector space of states which is positive on all GPT effects.

In the case of a GPT which satisfies the no-restriction hypothesis [32] (e.g., quantum theory and the classical probabilistic theory), this means that the D^A_λ are effects (forming a measurement) and that the F^A_λ are states. In the quantum case, the notion of positivity that we have expressed here reduces to positivity of the eigenvalues of the Hermitian operators. This provides another immediate proof that quantum theory, as a GPT, does not admit an ontological model: it would require an exact frame and dual frame for the space of Hermitian operators which are all positive, but it is known that such a basis and dual do not exist [3].
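For concreteness, here is a minimal exact frame for a single qubit built from the tetrahedral SIC construction (our own standard choice; the text does not fix a particular frame). The dual frame elements are positive (they form a POVM), so states receive genuine probability vectors, while the frame elements are non-positive, and transformations such as the Hadamard gate pick up negativity, as they must for a qubit.

```python
import numpy as np

# Tetrahedral SIC frame for a qubit: dual frame D*_l = (I + a_l.sigma)/4 sums
# to the identity; frame F_l = (I + 3 a_l.sigma)/2 has unit trace but a
# negative eigenvalue, so it is not a valid state.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

axes = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
bloch = lambda a: a[0] * sx + a[1] * sy + a[2] * sz
D = [(I2 + bloch(a)) / 4 for a in axes]
F = [(I2 + 3 * bloch(a)) / 2 for a in axes]

# Duality tr(D*_l F_m) = delta_lm; this is also the statement that the
# identity channel is represented by the identity (quasi)stochastic matrix.
for l in range(4):
    for m in range(4):
        assert np.isclose(np.trace(D[l] @ F[m]).real, float(l == m))

# States map to genuine probability vectors and reconstruct exactly:
rho = np.array([[0.85, 0.3], [0.3, 0.15]], dtype=complex)
mu = np.array([np.trace(D[l] @ rho).real for l in range(4)])
assert np.isclose(mu.sum(), 1)
assert np.allclose(sum(m * f for m, f in zip(mu, F)), rho)

# A unitary channel becomes a quasistochastic matrix: columns sum to 1, but
# entries may be negative (here, the Hadamard gate).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Gamma = np.array([[np.trace(D[lp] @ H @ F[l] @ H.conj().T).real
                   for l in range(4)] for lp in range(4)])
assert np.allclose(Gamma.sum(axis=0), 1)
print("negative entries:", bool((Gamma < -1e-9).any()))   # -> True
```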
We have shown (Proposition 3.2) that every noncontextual ontological model of an operational theory is equivalent to an ontological model of the GPT defined by the operational theory. Combining this with Proposition 4.4, it immediately follows that:

Corollary 4.5. For operational theories whose corresponding GPT satisfies tomographic locality, any noncontextual ontological model can be written as $\xi_{nc}(T) = \chi_B \circ \tilde{T} \circ \chi_A^{-1}$ with $\chi_A = \sum_\lambda e_\lambda \circ D^A_\lambda$, where {D^A_λ}_λ forms a basis for the vector space of the GPT defined by the operational theory.

Previously, the notion of a noncontextual ontological representation seemed to be a highly flexible concept, but this corollary demonstrates that it in fact has a very rigid structure. Every noncontextual ontological model can be constructed in two steps: (i) quotient to the associated GPT, and (ii) pick a basis for the GPT such that it is manifestly an ontological model. Furthermore, the only freedom in the representation is in the representation of the states in the theory (via this choice of basis); after specifying this, one can uniquely extend to the representation of arbitrary processes.

Consequences of the dimension bound

We can specialize Corollary 4.2 to the case of quasiprobabilistic representations or ontological representations of GPTs. In this case, it states that the dimension of the GPT vector space for a given system A, dim(A), is equal to the dimension of the codomain of the map χ_A defining the quasiprobabilistic or ontological representation of A, that is, the dimension of R^{Λ_A}. In the case of an ontological model, this dimension is simply |Λ_A|, the cardinality of the set Λ_A of ontic states of A, so the number of ontic states is equal to the dimension of the GPT space. Moreover, by considering Proposition 3.2, this immediately implies that for any operational theory whose corresponding GPT satisfies tomographic locality, if there exists a noncontextual ontological model thereof, then it must also have a number of ontic states equal to the dimension of the GPT state space. In the language of Hardy [39], this exactly means, for each system A, that the "ontological excess baggage factor" |Λ_A|/dim(A) must be exactly 1. In other words, demanding noncontextuality rules out ontological excess baggage. Since Hardy showed that all ontological models of a qubit must in fact have unbounded excess baggage, his result can immediately be combined with ours to give a new proof that the full statistics of processes on a qubit do not admit a noncontextual model.

In particular, our result implies that a diagram-preserving noncontextual ontological model of a qubit must have exactly 4 ontic states. This result extends to any subtheory of a qubit whose corresponding GPT is tomographically local, e.g., the stabilizer subtheory. Hence, it constitutes a stringent constraint on ontological models of the qubit stabilizer subtheory qua operational theory. For instance, it immediately guarantees that the 8-state model of Ref. [56] (which, as the name suggests, has 8 ontic states) must be contextual. Indeed, the 8-state model was previously shown to be contextual by a different argument, which focused on the representation of transformation procedures in prepare-transform-measure scenarios [20]. Furthermore, our bound improves an algorithm first proposed in Ref. [4]. In particular, Ref.
[4] gave an algorithm for determining whether a GPT admits of an ontological model by testing whether or not the GPT embeds in a simplicial GPT of arbitrary dimension. The lack of a bound on this dimension means that there is no guarantee that the algorithm will ever terminate. Ref. [58] solves this problem by providing such a bound, namely the square of the given GPT's dimension. Our result strengthens this bound, reducing it to the given GPT's dimension. In fact, our bound is tight, as there can never be an embedding of the GPT into a lower dimensional space. These results simplify the algorithm dramatically: rather than testing for embedding in a sequence of simplicial GPTs of increasing dimension, one can simply perform a single test for embedding in a simplicial GPT of the same dimension as the given GPT.

Yet another application of the dimension bound follows from the results of Ref. [59]. Ref. [59] demonstrates that the number of classical bits required to specify the ontic state in any (necessarily contextual) ontological model of the qubit stabilizer subtheory is quadratic in the number of qubits. This is contrasted with the case of the qutrit stabilizer subtheory, wherein there exists a (noncontextual) ontological model with linear scaling in the number of qutrits. The quadratic scaling result for the qubit stabilizer subtheory implies that, for a collection of qubits, the number of ontic states is necessarily greater than the dimension of the space of quantum density operators. Together with our dimension bound, this fact is sufficient to deduce the contextuality of the qubit stabilizer subtheory. Moreover, the fact that there exists a noncontextual ontological model for the qutrit stabilizer subtheory [60], together with our dimension bound, is sufficient to deduce the linear scaling in this case.

Diagram preservation implies ontic separability and more

Returning to representations of a GPT by some M : $\widetilde{Op}$ → FVect_R satisfying the conditions of Theorem 4.1: if we make use of additional instances of diagram preservation beyond the three instances which we used in proving Theorem 4.1, then we can derive additional constraints on the representation.

One important and immediate consequence of diagram preservation for composite systems is that a composite system AB is represented by the tensor product of the representations of the components: $V_{AB} = V_A \otimes V_B$. That is, the sample space of a composite system is the Cartesian product of the sample spaces of the components. This constraint has particular significance if we consider the case of ontological models, ξ : $\widetilde{Op}$ → SubStoch, as it means that the ontic state space of a composite system is the Cartesian product of the ontic state spaces of the components. We term the latter condition ontic separability (see Refs. [61,62]). It is a species of reductionism, asserting, in effect, that composite systems have no holistic properties. More precisely, the property ascriptions to composite systems are all and only the property ascriptions to their components. Yet another way of expressing the condition is that the properties of the whole supervene on the properties of the parts.

The assumption of ontic separability for ontological models has been discussed in many prior works [61,62], and has been a substantive assumption in certain arguments. For instance, in Ref. [63], ontic separability was used to demonstrate that in a noncontextual ontological model, all and only projective measurements are represented outcome-deterministically.
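The counting that drives the two applications of the dimension bound just described can be spelled out in a few lines. The constant in the qubit exponent below is an illustrative assumption (Ref. [59] fixes only the quadratic scaling of the number of bits); the qutrit column reflects that Gross's model saturates the bound exactly.

```python
# Counting sketch behind the dimension-bound applications above.
for n in range(1, 7):
    qubit_gpt_dim = 4 ** n            # dim of the n-qubit GPT space
    qubit_ontic_lb = 2 ** (n * n)     # quadratic-exponent lower bound, c = 1
    qutrit_gpt_dim = 9 ** n           # dim of the n-qutrit GPT space
    qutrit_ontic = 9 ** n             # Gross's model: #ontic states = dim
    print(n,
          qubit_ontic_lb > qubit_gpt_dim,    # True from n = 3 with c = 1:
          qutrit_ontic == qutrit_gpt_dim)    # exceeds dim -> contextuality
```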
It is also worth noting that the assumption of preparation independence in the PBR theorem [64] follows from diagram preservation (e.g., Eq. (105)). This connection between PBR and preservation of compositional structure has been previously explored in Sec. 4 of [36], in which the authors use this connection to derive a categorical version of the PBR theorem.

Moreover, by considering parallel composition we can also obtain the following:

Proposition 4.6. Via diagram preservation, parallel composition implies an additional constraint on the linear maps χ_A, namely
$$\chi_{AB} = \chi_A \otimes \chi_B. \qquad (100)$$

Proof. Applying M to a product state and using the fact that M is diagram-preserving, the representation of the product state factorizes into the representations of its components. Recalling the definition of χ_S, we conclude that χ_{AB} and χ_A ⊗ χ_B agree on all product states. In a tomographically local GPT, the product states span the entire state space, and so this implies Eq. (100).

Now, if we consider the case of quasiprobabilistic representations, ξ : $\widetilde{Op}$ → QuasiSubStoch, then, given Eq. (84), Eq. (100) is equivalent to the statement that the covectors defining the frame representation factorize, $D^{AB}_{(\lambda,\lambda')} = D^A_\lambda \otimes D^B_{\lambda'}$. That is, diagram preservation implies that the frame representation must factorize across subsystems. In other words, the vector basis defining the frame representation must be a product basis.

Converses to structure theorems

In the above section we showed that all diagram-preserving quasiprobabilistic and ontological representations must have a particularly simple form, given by a collection of invertible linear maps {χ_A} satisfying certain constraints. We now prove what is essentially the converse of each of these results. Consider defining a map from $\widetilde{Op}$ to FVect_R by
$$M(T) := \chi_B \circ T \circ \chi_A^{-1}. \qquad (108)$$
Under what conditions on the set {χ_A} is this map a quasiprobabilistic or ontological representation? To ensure that Eq. (108) defines a diagram-preserving map, one must simply impose that the χ_A factorize over composite systems, χ_{AB} = χ_A ⊗ χ_B. This condition, together with invertibility and linearity of the χ_A, easily implies that diagram preservation and indeed all the assumptions of Theorem 4.1 are satisfied. To ensure that Eq. (108) defines a linear representation that is moreover a quasiprobabilistic representation, as in Def. 2.11, one must impose that V_A = R^{Λ_A} and that the χ_A preserve the deterministic effect as in Eq. (78), which implies that the conditions in Def. 2.11 are satisfied. Finally, to ensure that Eq. (108) defines a quasiprobabilistic representation that is moreover an ontological representation, as in Def. 2.10, one must introduce a positivity constraint for this map. Specifically, one must have that every pair (χ_A^{-1}, χ_B) defines a positive map from the cone of transformations from A to B in $\widetilde{Op}$ to the cone of substochastic maps from Λ_A to Λ_B.

This provides a simple recipe for constructing linear representations: one simply needs to choose a family of invertible linear maps for each fundamental system (i.e., one that cannot be further decomposed into subsystems), and then define the others as the tensor products of these (so that general χ_A factorise over subsystems). Similarly, it provides a simple recipe for constructing quasiprobabilistic representations, using the same construction but where the χ_A preserve the deterministic effect. In the case of noncontextual ontological models, however, the recipe is less simple: one must not only choose invertible linear maps which factorise over subsystems and preserve the deterministic effect; one must also check the positivity condition (which is nontrivial, since for any particular map χ_A, one must check the condition for every χ_B).
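Eq. (100) is the mixed-product property of the tensor product in disguise; here is a two-line numerical check (arbitrary invertible maps and product states, chosen for this sketch).

```python
import numpy as np

# Choosing chi_AB := chi_A (x) chi_B makes the representation of a product
# state equal to the product of the local representations.
chi_A = np.array([[1.0, 0.0], [0.4, 0.6]])   # any invertible linear maps
chi_B = np.array([[0.7, 0.3], [0.2, 0.8]])
chi_AB = np.kron(chi_A, chi_B)

p, q = np.array([0.6, 0.4]), np.array([0.1, 0.9])
assert np.allclose(chi_AB @ np.kron(p, q), np.kron(chi_A @ p, chi_B @ q))
```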
Categorical reformulation

There is an elegant categorical reframing of our structure theorem, as suggested to us by one of the referees: any representation M satisfying the conditions of Theorem 4.1 is related to the canonical representation R of the GPT in FVect_R by a unique monoidal natural isomorphism.

Proof. The components of the natural isomorphism are given by the χ_A, as these go from A → V_A = M(A), where we are abusing notation by denoting R(A) simply as A. Eq. (69) ensures that these do define a natural transformation. To see this, first recall that we are abusing notation by suppressing explicit notation for the canonical representation R, so Eq. (69) really tells us that $M(T) \circ \chi_A = \chi_B \circ R(T)$, which is what we need for this to be a natural transformation. Clearly it is moreover a natural isomorphism, as every χ_A is invertible (Eqs. (73) and (75)). Eq. (100) then ensures that this is a monoidal natural isomorphism. To see this, note that the LHS of this equation is not quite the component of the natural isomorphism χ_{A⊗B}; instead we have χ_{AB} := µ^{-1} ∘ χ_{A⊗B} ∘ µ, and so Eq. (100) is equivalent to the condition that χ_{A⊗B} ∘ µ = µ ∘ (χ_A ⊗ χ_B), which is exactly what we need for this to be a monoidal natural isomorphism. Uniqueness of this natural isomorphism follows from the fact that the components χ_A are uniquely determined, as noted in Theorem 4.1.

This means that ontological models, should they exist, are essentially unique, in that they are unique up to a unique natural isomorphism. There is, however, an important subtlety going on here. This natural isomorphism is given by viewing ontological models as living in FVect_R, and so the components of the natural isomorphism are just invertible real linear maps. Alternatively, however, one could demand a stricter notion of isomorphism between ontological models by viewing them as living in SubStoch, in which case the components of a natural isomorphism would be invertible stochastic maps, i.e., permutations of the ontic states. This is a much stricter notion of isomorphism, and it is likely that in this sense there are many different ontological models for a given GPT. In certain situations, however, even this stricter notion can be proven; see, for example, Ref. [19].

Revisiting our assumptions

We have derived surprisingly strong constraints on the form of noncontextual ontological models of operational theories, and so it is important to examine the assumptions that went into deriving these constraints. These concerned both the types of operational theories under consideration and the types of ontological representations of these, and were summarized in Section 1.2. The majority of these are ubiquitous and well-motivated. The only notable restriction on the scope of operational theories we consider is the one induced by our assumption of tomographic locality (as discussed further in Section 5.2). Similarly, the only notable restriction on ontological (and quasiprobabilistic) models warranting further discussion is that they are diagram-preserving.
Revisiting diagram preservation

In the case of ontological models, we will provide below a motivation for the instances of diagram preservation that we required for our proofs. Since we have defined quasiprobabilistic models as representations of operational theories wherein the only difference from an ontological model is that the probabilities are allowed to become quasiprobabilities (i.e., drawn from the reals rather than the interval [0,1]), it follows that these same motivations are also applicable to them. It is worth noting that, among the quasiprobabilistic representations for continuous-variable quantum systems that are most studied in the literature, the Wigner representation satisfies our definition, while the Q [65] and P representations [66,67] do not (as they are defined by overcomplete frames). There are also examples of both types among quasiprobabilistic representations of finite-dimensional systems in quantum theory. In particular, Ref. [68] defines a family of discrete Wigner representations, some of which satisfy the assumption of diagram preservation and some of which do not. Of special note among those that satisfy the assumption is Gross' discrete Wigner representation [69], which is the unique representation in this family that satisfies a natural covariance property. Ref. [19] further shows that this is the unique noncontextual ontological model for stabilizer subtheories in odd dimensions.

Although we endorse diagram preservation in its most general form, it is worth noting that our main results (given in Section 4) require only the following very specific instances of that assumption: (i) diagram preservation for prepare-measure scenarios, (ii) diagram preservation for measure-and-reprepare processes, and (iii) diagram preservation for the identity process.

These are easily justified. Eq. (112) captures the idea that the ontic state is the complete causal mediary between the preparation and the effect. This assumption is built into the very definition of the standard ontological models framework (implicitly in early work [1,62] and explicitly in later work [70,71]), and is assumed in virtually every past work on ontological models.

Eq. (113) is a similarly natural assumption. The natural view of the measure-and-reprepare process of Eq. (115) is that one has observed the effect E and then independently implemented the preparation P. There need not be any system acting as a causal mediary between E and P. The natural ontological representation, therefore, is one wherein there is no ontic state mediating the two processes, as depicted in Eq. (113). Although we are not aware of this assumption having been made in previous works, it is directly analogous to the preparation-independence assumption made in Ref. [64] (which involved two independent states, rather than an independent effect and state).

Eq. (114) can be justified by noting that, within the equivalence class of procedures associated to the identity operation in the GPT, there is the one which corresponds to waiting for a vanishing amount of time. In any reasonable physical theory, no evolution is possible in vanishing time, and hence the only valid ontological representation of such an equivalence class of procedures is the identity map on the ontic state space.

Because we consider the full assumption of diagram preservation to be a natural generalization of all of these specific assumptions, we have endorsed it in our definitions. See Appendix B of Ref. [35] for a more thorough defense of this full assumption.
Necessity of tomographic locality

The assumption of tomographic locality is common in the GPT literature, so we will not attempt a defense of it here. Nevertheless, it is natural to ask whether the assumption is actually necessary to obtain our structure theorems. Here we provide an example which shows that it is. The operational theory we consider in our example is the real-amplitude version of the qutrit stabilizer subtheory of quantum theory. In this subtheory, two systems are described by 45 parameters, whereas only 6² = 36 parameters are available from local measurements, which immediately implies that the theory is not tomographically local (just as the real-amplitude version of the full quantum theory fails to be tomographically local [52]).

To begin with, consider the standard (complex-amplitude) qutrit stabilizer subtheory. Gross's discrete Wigner function [69] provides a (diagram-preserving) quasiprobability representation for qutrits in which the stabilizer subtheory is positively represented. By Corollary 3.5, this corresponds to a noncontextual ontological model of the subtheory. Indeed, this ontological model has been examined in Ref. [60], where it is shown that it can be reconstructed from an "epistemic restriction". Since the standard qutrit stabilizer subtheory is tomographically local, these models obey our structure theorems. In particular, the representation of n qutrits uses 9^n ontic states, matching the dimension of the relevant space of density matrices.

Now consider the subtheory consisting of only those qutrit stabilizer procedures that can be represented using real amplitudes. This does not introduce any new operational equivalences, and so the model discussed above is still noncontextual when restricted to this subtheory. But now our structure theorem does not hold, because this model still uses 9^n ontic states even though the density matrices now live in a $\frac{1}{2}3^n(3^n+1)$-dimensional space.

Moreover, we can show that this sort of example is rather generic; that is, there is no hope of obtaining a structure theorem with our dimension bound for any theory that is not tomographically local. Suppose that we have any ontological representation ξ of some GPT wherein each GPT system of dimension d is represented by an ontic state space of cardinality d. Then the GPT is necessarily tomographically local. To see this, consider the representation of GPT transformations from this state space to itself. The map ξ is a linear map from the space of transformations to d×d substochastic matrices, which form a d²-dimensional space. By empirical adequacy, ξ is injective, and so the space of GPT transformations is at most d²-dimensional. But the effect-prepare channels already span d² dimensions, so there cannot be any channels outside this span. Hence, by Corollary 2.7, the theory is tomographically local.

Outlook

These results can be directly applied to the study of contextuality in specific scenarios and theories. For instance, we have already seen that our dimension bound is a useful tool for obtaining novel proofs of contextuality (e.g., via Hardy's ontological excess baggage theorem [39] or for the 8-state model of Ref. [56]), and for providing novel algorithms for deriving noise-robust noncontextuality inequalities (namely, the algorithm in Ref. [4], but informed by our dimension bound). It remains to be seen whether other algorithms for witnessing nonclassicality, such as those in Refs. [71] or [70], could be extended within our framework to more general compositional scenarios.
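Returning to the tomographic-locality counting from the real-amplitude qutrit example above, the quoted parameter counts are reproduced by the following sketch (real-amplitude density matrices are real symmetric; d = 3 for a qutrit).

```python
# Parameter counting behind the tomographic-locality failure (2 systems):
d = 3                                   # a real-amplitude qutrit
global_params = d**2 * (d**2 + 1) // 2  # real symmetric 9x9 matrices: 45
local_params = (d * (d + 1) // 2) ** 2  # products of local parameters: 36
print(global_params, local_params, global_params > local_params)
```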
Our formalism is also ideally suited to understanding the information-theoretic advantages afforded by contextual operational theories, such as for computational speedup, since it has the compositional flexibility to describe arbitrary scenarios, such as the families of circuits which arise in the gate-based model of computation. In fact, our structure theorem is a major first step in simplifying the proof that contextuality is a necessary resource for the state-injection model of quantum computing [19,72]. Ref. [19] shows that such a proof can proceed by applying our structure theorem to show that the only positive quasiprobabilistic models of the (classically simulable) stabilizer subtheory for odd dimensions are given by Gross's discrete Wigner function [69]; then, the known fact that the injected resource states necessarily have negative representation in this particular model establishes the result in a direct and elegant fashion.

The key limitation of our results is the assumption that the GPT associated to the operational theory under consideration is tomographically local. There are two potential approaches to dealing with this limitation. On the one hand, one could provide an argument that theories which are not tomographically local are undesirable in some principled sense. For example, it seems likely that one can rule them out on the grounds that they violate Leibniz's methodological principle [10]. From a practical perspective, wherein the goal is to experimentally verify nonclassicality in a theory-independent manner, one would instead be motivated to seek experimental evidence that nature truly satisfies tomographic locality, independent of the validity of quantum theory. One possible approach to this end would be to extend the techniques introduced in [73] to composite systems.

A Context-dependence in representations of GPTs

In the main text, we stated that the notion of an ontological model of a GPT that we have defined cannot be said to be either generalized-contextual or generalized-noncontextual (unlike our notion of an ontological model of an operational theory). We will now elaborate on this point.

Consider the contexts that one may wish to associate to a GPT state. One of the examples which appears in the literature corresponds to different decompositions of the GPT state into mixtures of other GPT states, for example:
$$s = \sum_i p_i s_i = \sum_j q_j s'_j. \qquad (116)$$
Now consider any ontological representation map which has the GPT as its domain. In the GPT, all three terms in Eq. (116) are strictly equal, and hence all three map to the same probability distribution over Λ. As such, there is no possibility for the map to represent an s arising from the LHS mixture differently from how it represents an s arising from the RHS mixture.
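A concrete instance of Eq. (116), using qubit density matrices (our own minimal example): two different ensembles for the maximally mixed state are strictly equal as GPT states, so any map whose domain is GPT states cannot distinguish them.

```python
import numpy as np

# Two decompositions of the maximally mixed qubit state: as an equal mixture
# of Z eigenstates, and as an equal mixture of X eigenstates.
I2 = np.eye(2, dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

rho_z = 0.5 * ((I2 + sz) / 2 + (I2 - sz) / 2)   # LHS-style mixture
rho_x = 0.5 * ((I2 + sx) / 2 + (I2 - sx) / 2)   # RHS-style mixture
assert np.allclose(rho_z, rho_x)                 # strictly equal in the GPT
```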
A natural question one might consider in light of this is how one should represent ensembles of states ontologically. The ensembles of relevance in the example just given are {(p_i, s_i)}, {(q_j, s'_j)}, and {(1, s)}; all of these are operationally equivalent, but not strictly equal. If one defines a new kind of ontological representation map which acts on such objects, then it could take these distinct ensembles to distinct probability distributions over Λ. One could then meaningfully talk about whether such a representation depended on context or not. However, the notion of ontological representation for a GPT that we have defined herein has as its domain processes within the GPT (such as states), not ensembles of such processes. This is also true for the more general quasistochastic representations of GPTs. As such, applying the notion of generalized contextuality to them is a category mistake, just as it would be a category mistake to ask whether a variable X depends on another variable Y if Y cannot possibly vary [4]. Because standard quasiprobability representations (such as Wigner's or Gross's) are instances of our definition (and in particular, because they take the domain of the representation to be states and effects rather than ensembles of states or ensembles of effects), it is equally meaningless to ask whether they are noncontextual or contextual.

Of course, one could define a map which has as its domain the set of ensembles of GPT processes. For such a map, it would be appropriate to ask whether or not it is noncontextual. This is similar to what is done in the causal-inferential framework of Ref. [35], where the central objects of study are ensembles of processes corresponding to an agent's knowledge of what process occurred (although with the difference that in this case we consider ensembles of unquotiented processes). In that context, we formalize the resulting notion of an ontological representation, as well as the natural generalization of the notion of 'noncontextuality' that arises for it.

A similar story holds for the notion of context that is relevant for the study of Kochen-Specker contextuality. Consider two measurements, M_1 and M_2, which we conceptualize as processes with a GPT input and a classical output. Suppose that these each have a particular outcome, labeled a and b respectively, which correspond to the same GPT effect. The fact that the effect associated to getting outcome a in measurement M_1 is strictly equal to the effect associated to getting outcome b in measurement M_2 implies that any map which has the GPT effect space as its domain must represent the two cases identically. Again, one finds that there is no possibility for a representation map with this specific choice of domain to depend on whether the effect was realized using measurement M_1 or M_2.
But, also as above, one could choose to consider a different kind of ontological representation map in which the domain is no longer the set of GPT processes per se, but something else which includes, for instance, measurement-outcome pairs. In this particular case, we are interested in pairs (M_1, a) and (M_2, b) which are operationally equivalent, but not strictly equal. If one defines a new kind of ontological representation map which acts on such objects, then it could take distinct such objects to distinct response functions. One could then meaningfully talk about whether such a representation depended on context or not. This is what is typically done (if only implicitly) in the study of Kochen-Specker noncontextuality.

B Proof of the structure theorem (Theorem 4.1)

We now complete the proof of Theorem 4.1, as sketched in the main text.

Proof. Since we are assuming tomographic locality of the GPT, Corollary 2.7 immediately gives a decomposition of the identity. Since M is convex-linear and preserves the zero processes (this follows from the fact that one can construct the zero process by composing a state and an effect with the zero scalar, as 0 = P ∘ 0 ∘ E; then, by empirical adequacy of M, one has M(0) = 0, and diagram preservation of M gives the corresponding statement for zero processes of every type), and since the effect-state channels span the vector space, M can be uniquely extended to a linear map M̄. Using the linearity of M̄, and noting that M̄ is only applied here to objects in the domain of M, on which the two maps act identically (by the fact that the former is the linear extension of the latter), one obtains the claimed decomposition, where the last step follows from the fact that M is diagram-preserving. In summary, we have shown the claim of Eq. (70).

Next, we analyse M in the specific case of a state P_i. Since the diagram-preserving map M has a unique linear extension which takes the vector space of GPT states B to the vector space V_B, and since both of these are in FVect_R, one can uniquely re-interpret the action of M as a process χ within FVect_R. In particular, we are using the fact that a linear map L : L(R,V) → L(R,V') can always be uniquely represented by a linear map l : V → V' by exploiting the fact that L(R,V) ≅ V. The fact that χ_B is the unique linear map satisfying Eq. (126) means that there is no possibility for making other choices for the χ_A appearing in Eq. (69).

Similarly, M on effects E_j has a unique linear extension and takes functionals on GPT states to functionals on V_A; in other words, M is the adjoint of a process ϕ within FVect_R. In particular, we are using the fact that a linear map L : L(V,R) → L(V',R) can always be uniquely represented by a linear map l : V' → V by exploiting the fact that L(V,R) ≅ V* and that L(V*,V'*) ≅ L(V',V). Combining this with Eq. (124), we obtain Eq. (128).

All that remains is to show that χ_A and ϕ_A are inverses. Consider the special case in which T is the identity; then Eq. (128) becomes a statement about the representation of the identity. Since M is diagram-preserving, it maps identity to identity, and so this becomes Eq. (130). Now consider a state P followed by an effect E. This gives a probability, and since M is empirically adequate it must preserve this probability; and since M is diagram-preserving, the represented state and effect compose to the same probability. Combining this with Eqs. (126) and (127) gives an equality which, since it holds for all E and P, and since tomographic locality implies that the E span A* and the P span A, must hold at the level of the underlying linear maps. Combining this with Eq.
(130) gives that χ and ϕ are inverses of each other. Hence, we can write ϕ_A = χ_A^{-1} and so rewrite Eq. (128) accordingly, which completes the proof.

C Completing the proof of Proposition 3.2

The key argument required to establish Proposition 3.2 was given just after the proposition itself, but we now complete the proof. We first prove that ξ_nc := ξ ∘ ~ is indeed a valid ontological model of an operational theory if ξ is a valid ontological model of a GPT. To do so, we show that each of the three properties (enumerated in Definition 2.9) that ξ_nc should satisfy is implied by the corresponding property (enumerated in Definition 2.10) that ξ is assumed to satisfy by virtue of being an ontological model of a GPT.

First, recall that we assumed that all deterministic effects in the operational theory are operationally equivalent. Hence, the map ~ will take any such deterministic effect to the unique deterministic effect in the GPT, which (by property 1 of Definition 2.10) must be represented by the unit vector 1. Hence, ξ_nc represents all deterministic effects in the operational theory appropriately, namely as the unit vector 1.

Second, recall that ~ preserves the operational predictions of the operational theory; hence, the fact that (by property 2 of Definition 2.10) ξ preserves the operational predictions of the GPT implies that ξ_nc := ξ ∘ ~ preserves the operational predictions of the operational theory.

Third, recall that if, in the operational theory, P_1 is a procedure that is a mixture of P_2 and P_3 with weights ω and 1−ω, then the corresponding mixture relation holds under ~. Hence, the fact that (by property 3 of Definition 2.10) the representations of these three processes under ξ satisfy the analogous relation implies that the representations of P_1, P_2, and P_3 satisfy it as well. Hence ξ_nc satisfies all the properties of an ontological model of an operational theory.

Conversely, we prove that ξ := ξ_nc ∘ C is a valid ontological model of a GPT if ξ_nc is a valid noncontextual ontological model of an operational theory. To do so, we show that each of the three properties (enumerated in Definition 2.10) that ξ should satisfy is implied by the corresponding property (enumerated in Definition 2.9) that ξ_nc is assumed to satisfy by virtue of being a noncontextual ontological model of an operational theory.

First, consider the unique deterministic effect in the GPT. Applying C to this process yields one of the many deterministic effects in the operational theory. Because (by property 1 of Definition 2.9) ξ_nc maps every one of these to the unit vector 1, it follows that ξ := ξ_nc ∘ C maps the unique deterministic effect to the unit vector 1.

Second, recall that the context of a process is irrelevant for the operational predictions it makes, and that consequently the map C preserves the operational predictions. Given that (by property 2 of Definition 2.9) ξ_nc preserves the operational predictions, ξ := ξ_nc ∘ C also preserves the operational predictions.

Third, consider three processes P_1, P_2, and P_3 such that P_1 = ω P_2 + (1−ω) P_3 in the GPT. Under C, one has procedures C(P_1), C(P_2), and C(P_3) in the operational theory, each of the form (P_i, c_i), where the c_i are arbitrary contexts specified by the map C. The fact that P_1 = ω P_2 + (1−ω) P_3 implies that C(P_1) is operationally equivalent to the effective procedure P_mix defined as the mixture of C(P_2) and C(P_3) with weights ω and 1−ω, respectively. (C(P_1) may not actually be this mixture, depending on its context c_1, which depends on one's choice of C.)
By property 3 of Definition 2.9, ξ_nc must satisfy the corresponding mixture relation for P_mix. But since ξ_nc is a noncontextual model and since C(P_1) is operationally equivalent to P_mix, it follows that ξ_nc(C(P_1)) = ξ_nc(P_mix) = ω ξ_nc(C(P_2)) + (1−ω) ξ_nc(C(P_3)). Hence we see that ξ := ξ_nc ∘ C satisfies property 3 of Definition 2.10, as required.

If, rather than considering transformations from one GPT system to another, we consider just the states of a single system A, then everything simplifies considerably. The vector space we consider in the domain is simply the vector space spanned by the GPT state space, and the positive cone is then just the standard cone of GPT states. The vector space we consider in the codomain is simply the vector space R^{Λ_A}, with positive cone given by the cone of unnormalised probability distributions. Moreover, the linearly extended action of ξ is nothing but the linear map χ_A, so we find that χ_A must be a positive map in the sense defined above.

Similarly, if we consider the contravariant action of χ_A^{-1} on the space of GPT effects (that is, by composing the effect onto the outgoing wire of χ_A^{-1}), then we arrive at a similar result. Here we find that the contravariant action of χ_A^{-1} is a positive linear map from the dual of the GPT vector space ordered by the effect cone to the dual of R^{Λ_A} ordered by the cone of response functions.

E Proofs for preliminaries

E.1 Proof that quotienting is diagram-preserving

In order to see that the quotienting map is diagram-preserving, we must first define what it means for processes in the quotiented theory to be composed. That is, given a suitable pair of equivalence classes T and R, we must define R ∘ T (assuming that the relevant type-matching constraint is satisfied) and R ⊗ T. We define these via composition of some choice of representative elements, r ∈ R and t ∈ T, for each equivalence class. We now prove that the first of the four resulting well-definedness conditions (namely, the first equivalence in Eq. (153)) holds, where in the second step of the derivation we note that composing any τ' with r yields an example of a tester τ. The argument for the other three conditions is analogous.

This establishes that the notion of composition that we have defined is independent of the choice of representative elements, and so we can write the composites unambiguously.

We now prove Lemma 2.5, restated here:

Lemma E.1. The operation □ can be uniquely extended to a bilinear map, and the operation ⊠ can be uniquely extended to a bilinear map.

Proof. Here we show the proof for □; the proof for ⊠ follows similarly.

To begin, note that the vectors R_T with T : A → B span the vector space R^{m_{A→B}}, as we have taken F_{A→B} to be a minimal fiducial set of testers. Consequently, we can always (nonuniquely) write an arbitrary U ∈ R^{m_{B→C}} as Σ_i u_i R_{T_i} for some transformations T_i : B → C and u_i ∈ R, and can write an arbitrary V ∈ R^{m_{A→B}} as Σ_j v_j R_{T'_j} for some transformations T'_j : A → B and v_j ∈ R. Hence, we propose that the linear extension be defined by U □ V := Σ_{ij} u_i v_j (R_{T_i} □ R_{T'_j}) = Σ_{ij} u_i v_j R_{T_i ∘ T'_j}. For this to be a valid definition, however, it must be independent of the chosen decompositions of U and V. We now show that this is indeed the case.

To begin, let us consider two distinct decompositions in the second argument of □. That is, given two such decompositions, we want to show that the two resulting expressions agree for all T.
To begin, we use the linearity of E_{A→B} (as defined in Section 2.2.1), which gives a corresponding equality of vectors. Unpacking the definition of K gives us an equality holding for all testers τ. Now, define a tester τ' from some arbitrary τ and transformation T. Substituting the tester τ' into Eq. (169), we find the analogous equality with T composed in. As this holds for all τ, and so in particular for our fiducial testers, we therefore have the equality at the level of the vectors R. Finally, using the fact that R_{T∘T_i} = R_T □ R_{T_i} and similarly that R_{T∘T'_j} = R_T □ R_{T'_j}, we find our desired result. One can similarly show linearity in the first argument. Putting these together, we obtain full bilinearity of □, as required.

E.3 Proof of Lemma 2.6

We now prove Lemma 2.6, restated here:

Lemma E.2. A GPT is tomographically local if and only if one can decompose the identity process for every system A, denoted 1_A, as in Eq. (176), where M_{1_A} is the matrix inverse of the transition matrix N_{1_A} of the identity process.

Proof. First, we prove that if a GPT satisfies tomographic locality, then the identity has a decomposition of the form in Eq. (176). We do this by defining a particular process f as a linear expansion into states and effects with the carefully chosen set of coefficients [M_{1_A}]^j_i, and then we prove that f = 1_A.

Take any minimal spanning set {P^A_i}_i of GPT states and spanning set {E^A_j}_j of GPT effects, and consider the transition matrix N_{1_A} with entries given by [N_{1_A}]^j_i := E^A_j ∘ P^A_i. Next, define M_{1_A} := N_{1_A}^{-1}. The matrix inverse of N_{1_A} exists, since the rows of N_{1_A} are linearly independent. That is, we show that Σ_i a_i [N_{1_A}]^j_i = 0 for all j if and only if a_i = 0 for all i. First, note that Σ_i a_i [N_{1_A}]^j_i = E^A_j ∘ (Σ_i a_i P^A_i); but as the E^A_j span the space of effects and composition is bilinear (see Lemma 2.5), the vanishing of these quantities means that Σ_i a_i P^A_i = 0; then, since the P^A_i are linearly independent, we have a_i = 0 for all i, as desired. Next we use this inverse to define a process f := Σ_{ij} [M_{1_A}]^j_i P^A_i ∘ E^A_j (an effect followed by a preparation, summed with the coefficients of M_{1_A}). A priori, there is no reason why this process must be a physical GPT process; however, it turns out to be the identity process, as we will now show. Consider the expression E^A_l ∘ f ∘ P^A_k for some P^A_k and E^A_l from the minimal spanning sets above. Substituting the expansion of f and then applying the definitions of M_{1_A} and N_{1_A}, it follows from Eq. (179) that E^A_l ∘ f ∘ P^A_k = E^A_l ∘ P^A_k. Hence, this equality holds for all P^A_k and E^A_l in the minimal spanning sets above. By the fact that these sets span the state and effect spaces respectively, it follows that E ∘ f ∘ P = E ∘ P for all P and E. Now, in any GPT which satisfies tomographic locality, namely Eq. (49), two channels which give the same statistics on all local inputs and outputs are equal, and hence f is in fact the identity transformation. Hence, the identity transformation has a linear expansion of the form given by Eq. (176).

Next, we prove the converse: if the identity has a linear expansion as in Eq. (176) in a given GPT, then that GPT satisfies tomographic locality. To see this, consider two bipartite processes T and T' which give rise to the same statistics on all local inputs and outputs, as in Eq. (187). For any tester τ of the appropriate type, one can write the probability generated by composing τ with T by inserting the linear expansion of the identity on each system, and one can write the probability for T' similarly. Noting that the RHS of Eq. (188) splits into two disconnected diagrams, and that the same holds for the RHS of Eq. (189), it follows from Eq. (187) that the two probabilities coincide. Since this is true for any two processes satisfying Eq. (187), the principle of tomographic locality (Eq. (49)) is satisfied.

E.4 Proof of Eqs. (57) and (58)

To prove Eq.
(57), one can decompose the four identities in the diagram and perform some simple manipulations of the resulting expression. To prove Eq. (58), one can insert four decompositions of the identity into the relevant diagram.

Corollary 3.5 (Three equivalent notions of classicality). Let Op be an operational theory and Õp the GPT obtained from Op by quotienting. Then, the following are equivalent: (i) there exists a noncontextual ontological model of Op, ξ_nc : Op → SubStoch; (ii) there exists an ontological model (a.k.a. simplex embedding) of Õp, ξ : Õp → SubStoch; (iii) there exists a positive quasiprobabilistic model of Õp, ξ_+ : Õp → QuasiSubStoch. This generalizes the results of Refs. [2,4,5] from prepare-measure scenarios to arbitrary compositional scenarios.

For the composites of equivalence classes defined via representatives to be well defined, they must be independent of the choices of representatives; that is, for any t_1, t_2 ∈ T and r_1, r_2 ∈ R, the composites r_1 ∘ t_1 and r_2 ∘ t_2 (and likewise r_1 ⊗ t_1 and r_2 ⊗ t_2) must fall in the same equivalence class. If this is the case, then the quotienting map is a structure-preserving equivalence relation, or congruence relation, for the process theory. It is straightforward to show that the first equality in Eq. (152) is equivalent to a pair of single-representative conditions; for the nontrivial direction of this equivalence, consider the special cases where r = r_1 (in the first) and where t = t_2 (in the second). The second equality in Eq. (152) is equivalent to analogous conditions.
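The transition-matrix construction in the proof of Lemma 2.6 can also be checked numerically. The sketch below is an added illustration on an invented two-dimensional "GPT" in which states and effects are plain vectors and E ∘ P is the dot product; it builds N_{1_A}, inverts it, and verifies that the resulting process f reproduces E ∘ P for random states and effects, exactly as the proof requires.

```python
import numpy as np

# Toy GPT system: states and effects are vectors in R^2; E(P) = E . P.
states  = np.array([[1.0, 0.0], [0.2, 0.8]])   # minimal spanning set {P_i}
effects = np.array([[1.0, 1.0], [0.0, 1.0]])   # spanning set {E_j}

# Transition matrix of the identity: N[j, i] = E_j(P_i).
N = effects @ states.T
M = np.linalg.inv(N)                            # M_{1_A} := N_{1_A}^{-1}

def f(P: np.ndarray) -> np.ndarray:
    """f := sum_ij [M]^j_i P_i E_j, acting on a state P."""
    return sum(M[i, j] * states[i] * (effects[j] @ P)
               for i in range(2) for j in range(2))

# Check that f is the identity on statistics: E(f(P)) = E(P) for random E, P.
rng = np.random.default_rng(0)
for _ in range(5):
    P, E = rng.random(2), rng.random(2)
    assert np.isclose(E @ f(P), E @ P)
```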
25,130
2020-05-14T00:00:00.000
[ "Philosophy", "Computer Science" ]
Animal Models of Depression and Drug Delivery with Food as an Effective Dosing Method: Evidences from Studies with Celecoxib and Dicholine Succinate. Multiple models of human neuropsychiatric pathologies have been generated during the last decades, and these frequently use chronic dosing. Unfortunately, some drug administration methods may result in undesirable effects, creating analysis confounds that hamper model validity and preclinical assay outcomes. Here, automated analysis of floating behaviour, a sign of a depressive-like state, revealed that mice subjected to a three-week intraperitoneal injection regimen had increased floating. In order to probe an alternative dosing design that would preclude this effect, we studied the efficacy of a low dose of the antidepressant imipramine (7 mg/kg/day) delivered via food pellets. An antidepressant action of this treatment was found, while no other behavioural effects were observed. We further investigated the potential efficacy of chronic dosing via food pellets by testing the antidepressant activity of the new drug candidates celecoxib (30 mg/kg/day) and dicholine succinate (50 mg/kg/day) against the standard antidepressants imipramine (7 mg/kg/day) and citalopram (15 mg/kg/day), utilizing the forced swim and tail suspension tests. Antidepressant effects of these compounds were found in both assays. Thus, chronic dosing via food pellets is efficacious in small rodents, even with a low-dose design, and can help avoid potential confounds that arise in translational research with depression models from adverse chronic invasive pharmacotherapies.

Introduction

The challenge of proposing new, powerful therapeutics for neuropsychiatric disorders, including antidepressants, has raised important questions regarding the efficiency of the preclinical approaches currently being used [1][2][3]. Numerous limitations of the models of human neuropsychiatric pathologies have been intensively discussed during the last years [4][5][6]. Apart from the general problem of translational research, basic practical issues with animal models of neuropsychiatric conditions, however seemingly trivial, can essentially affect the validity of preclinical models; yet these can be addressed and resolved. As with translational models in small rodents, these issues concern laboratory and procedural settings in animal studies. A number of experimental conditions have been shown to result in potential confounds for the practical application of animal models. The principal such factors are commonly considered to include the circadian phase of manipulations [7,8], cage enrichment [9][10][11], lighting conditions [12,13], handling [14][15][16], vibration [17], the adverse taste of food or water [18,19], and the presence of and manipulations by an experimenter [20,21]. They are sometimes believed to result in the remarkable variability of results that is extensively reported in the literature [18,[22][23][24]. The method and duration of dosing of experimental animals are among the important sources of such confounds [25][26][27]. Various types of invasive treatments in rodents were shown to induce pain, inflammation, and distress, despite the proper use of standardized methods of application, especially when prolonged dosing is employed [28][29][30]. Obviously, this raises issues that concern not only the quality of studies in which such dosing methods are used, but also animal welfare and ethical aspects.
Nonetheless, in many cases, long and invasive drug administration to small rodents is problematic to avoid. This often applies, for instance, when non-water-soluble compounds have to be chronically administered, or under experimental conditions in which the induction of a desirable syndrome in an animal and/or the occurrence of the therapeutic drug effect requires a long time. The latter situations are particularly typical for testing drugs in rodent models of depression where, for example, the induction of some key elements of the depressive syndrome may take 2-12 weeks [24,31,32] and the effect of most classical antidepressants develops after 3-4 weeks of treatment [31,33,34]. In order to avoid the negative effects of chronic invasive dosing on overall animal welfare and on experimental outcomes from standard models of depression, we evaluated the efficacy of drug delivery via food pellets in mice. First, we studied the effects of three weeks of daily intraperitoneal vehicle injections in the mouse forced swim test, a common scheme of testing for the antidepressant-like effects of various treatments [32,34,35]. As this manipulation resulted in an increase of floating scores, a measure of "behavioural despair," indicating a "prodepressant" effect of daily intraperitoneal injections on the experimental animals, we probed an alternative way of dosing using food pellets. Though drug delivery with voluntarily consumed food is one of the common methods of dosing, its use in laboratory research is quite limited. Meanwhile, in many cases, this mode of pharmacological treatment is seen as advantageous because it enables the maintenance of a steady blood concentration of the drug, in contrast to bolus drug administration. However, it is sometimes viewed as insufficiently reliable due to its reliance on food intake and the variable bioavailability of some compounds depending on their delivery route [36,37]. Yet, given that in a reasonable proportion of experimental situations the consummatory behaviour of laboratory animals is not altered, and that standard pharmaca are used whose bioavailability and metabolism are well known not to be sensitive to the treatment method, dosing via the voluntary intake of food pellets can probably be exploited much more frequently. Apart from the obvious benefits for animal wellbeing, the delivery of investigational drugs with food pellets can increase the validity of translational models, as it simulates the equivalent human therapeutic dosing route. In this study, we first used, via food pellets, a low dose of a classical antidepressant reference drug, imipramine (7 mg/kg/day), for which chronic administration via drinking water for 3 weeks was recently reported to evoke an antidepressant effect in a model of stress-induced anhedonia [38]. A low dose of antidepressant was selected because we sought to evaluate the usefulness of this dosing method at the lowest dosage limit used with other means of drug administration, and because imipramine may exert side effects when applied at higher concentrations [6]. The effects of imipramine delivered with self-made food pellets were tested in the forced swim test as well as, in order to exclude potential nonspecific effects of treatment, in a battery of behavioural tests including the dark/light box, O-maze, novel cage, open field, and two-bottle sucrose test.
Finally, to verify the applicability of this defined dosing method with food pellets, we tested the effects of new potential antidepressants in the forced swim and tail suspension tests: celecoxib, a non-water-soluble compound, at a dose of 30 mg/kg/day, which was selected based on previously published data [39,40], and dicholine succinate, applied at 50 mg/kg/day based on previous results [42]. Imipramine, applied at 7 mg/kg/day [6,36,41,42], and citalopram, at 15 mg/kg/day [6,33,38], were used as pharmacological references.

Animals and Housing. Three-month-old C57BL/6N male mice were supplied by Instituto Gulbenkian de Ciência, Oeiras, Portugal, and housed individually in standard laboratory conditions under a reverse 12:12 h cycle (lights on at 21:00). Behavioural tests took place from the onset of the dark phase of the light cycle (9:00 h). The testing was carried out in a dark, quiet room in the morning hours. All procedures were in accordance with the European Union's Directive 2010/63/EU, Portuguese Law-Decrees DL129/92 (July 6th), DL197/96 (October 16th), and Ordinance Port. 131/97 (November 7th). This project was approved by the Ethical Committee of the New University of Lisbon.

Study Flow with Chronic Intraperitoneal Injections. This study used a treatment broadly applied in small rodents: chronic intraperitoneal injections [30]. We chose to expose mice to three weeks of daily intraperitoneal injections of NaCl at a volume of 0.01 mL/g body weight (for the scheme of the study flow, see Figure 1(a)). Control mice were not treated but handled daily. Starting from the day after this period, mice were tested in the two-day forced swim test as previously described [38,41]. Behavioural data were scored using Noldus EthoVision XT 8.5 (Noldus Information Technology, Wageningen, Netherlands). The number of mice per group is indicated in the figure legend.

Study Flow with Chronic Imipramine Delivery via Food Pellets. As a next step, we exposed mice to self-made food pellets that contained imipramine for four weeks. Prior to starting treatment, animals were balanced by body weight. The calculation of the concentration of imipramine in the food pellets was based on the daily food intake of the experimental mice, which constituted 2.89 ± 0.26 g, and a desired dosage of 7 mg/kg/day. The selection of this dose was based on previously obtained data that showed the efficacy of this dose [38] and a lack of efficacy of chronic imipramine delivery via drinking water at a dose of 2.5 mg/kg in mice. Control mice received a regular diet. Before the start and after four weeks of dosing, all mice were tested in the sucrose preference test, the O-maze test, and the dark/light box, as described elsewhere [43,44]. After two and four weeks of dosing, the locomotor activity of all mice was studied in the novel cage and open field tests, as described elsewhere [41,42,44]. At the end of behavioural testing, a two-day forced swim test with 6 min sessions was performed as previously described ([38,41]; for the scheme of the study flow, see Figure 1(b)). The number of mice per group is indicated in the figure legend.

Study Flow with Chronic Delivery via Food Pellets of New Candidate Antidepressants. Next, we subjected mice to food pellets that contained imipramine, citalopram, celecoxib, or dicholine succinate for four weeks. Prior to starting treatment, animals were balanced by body weight. The latter two drugs are regarded as compounds with potential antidepressant activity [39,40,42].
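The dose-to-pellet conversion underlying these designs can be sketched as follows (an illustration added here; the ~27 g body weight is an assumed typical value and is not stated in the text, which specifies only the measured intake of 2.89 g/day and the target doses).

```python
# Hedged sketch of the pellet drug-content calculation (illustrative only;
# the assumed 27 g body weight is not stated in the study).

def drug_per_g_chow(dose_mg_per_kg_day: float,
                    body_weight_g: float = 27.0,
                    intake_g_day: float = 2.89) -> float:
    """mg of drug to mix into each gram of chow for a target daily dose."""
    daily_dose_mg = dose_mg_per_kg_day * body_weight_g / 1000.0
    return daily_dose_mg / intake_g_day

for name, dose in [("imipramine", 7), ("citalopram", 15),
                   ("celecoxib", 30), ("dicholine succinate", 50)]:
    print(f"{name}: {drug_per_g_chow(dose):.3f} mg per g of chow")
```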
The calculation of drug concentrations was based on the daily food intake of the experimental mice; the desired doses were 7 mg/kg/day, 15 mg/kg/day, 30 mg/kg/day, and 50 mg/kg/day, respectively. Control mice received a regular diet. A two-day tail suspension test and a two-day forced swim test were carried out during four consecutive days after the termination of the dosing period, as described elsewhere [41,43] (for the scheme of the study flow, see Figure 1(c)). The number of mice per group is indicated in the figure legend.

Preparation of Pellets. Imipramine hydrochloride (Sigma-Aldrich, Munich, Germany), citalopram (Lundbeck, Copenhagen, Denmark), or celecoxib (Pfizer, Berlin, Germany) was added to commercial chow (Mucedola SRL, Milan, Italy) that had been turned to powder by a blender. Small amounts of distilled water were added, and food pellets of a similar size to commercial pellets were formed and dried overnight (16 h) at 60 °C. New pellets were prepared twice a week in order to refresh the food supply of the experimental groups. The drug content was adjusted to the doses indicated above and was based on the consumption of the normal diet averaged over 3 days. Food pellets containing dicholine succinate (Buddha Biopharma Oy Ltd., Helsinki, Finland) were prepared in a similar way, using a 7% solution of the compound; the drug content was adjusted to the abovementioned daily dose of this drug.

Percentage preference for sucrose was calculated using the following formula: sucrose preference (%) = sucrose solution intake / (sucrose solution intake + water intake) × 100.

Tail Suspension Test. The protocol used in this study was adapted from a previously proposed procedure [41,43]. Mice were subjected to tail suspension by being hung by their tails with adhesive tape to a rod 50 cm above the floor for 6 min. Animals were tested in a dark room where only the area of the modified tail suspension construction was illuminated by a spotlight from the ceiling; the lighting intensity at the height of the mouse position was 25 lux. The trials were recorded by a video camera positioned directly in front of the mice while the experimenter observed the session from a distance in a dark area of the experimental room. This procedure was carried out twice with a 24 h interval between tests. The latency of the first episode of immobility, the total duration of this behaviour, and the mean velocity were scored using Noldus EthoVision XT 8.5 (Noldus Information Technology, Wageningen, Netherlands) according to a previously validated protocol [41]. In accordance with the commonly accepted criteria, immobility was defined as the absence of any movements of the animal's head and body. The latency of immobility was determined as the time between the onset of the test and the first bout of immobility.

Statistical Analysis. Data were analysed with GraphPad Prism version 5.00 for Windows (San Diego, CA, USA). Two-tailed unpaired t-tests were applied for two-group comparisons of independent data sets, as the distributions were normal. One-way ANOVA followed by a post hoc Dunnett test was used for comparisons of more than two groups with a control; repeated measures ANOVA was used for the analysis of repeated measures. The level of confidence was set at 95% (p < 0.05), and data are shown as mean ± SEM.

Similar results were obtained in our previous experiments, which demonstrated that three- and four-week daily injections in chronically stressed mice increased the number of individuals exhibiting signs of anhedonia, a reduced sensitivity to reward, in a sucrose preference test [18,24].
Other studies showed that chronic intraperitoneal injections in rats evoke ultrasonic vocalizations in the 22 kHz range, indicative of a negative emotional state, which were reduced by preexposure of the experimental animals to handling [16]. The "prodepressive"-like changes found in this study could potentially be induced by well-recognized pathogenetic elements of depression, such as the stress of manipulation [46], pain experience [47,48], repeated situations of inescapable stress and helplessness [49], as well as inflammation [50].

Effects of Chronic Imipramine Delivery via Food Pellets on Floating Behaviours and Other Variables. In order to assess the efficacy of an alternative chronic dosing design that could preclude the adverse changes in behaviour described above, we evaluated the effects of four-week dosing of imipramine via food pellets in the forced swim test and supplementary behavioural paradigms. Animals subjected to imipramine treatment showed a significant increase in the latency to float and decreased immobility time when compared to control animals (Day 1: p = 0.0002, t = 5.19 and p = 0.0008, t = 4.42; Day 2: p = 0.18, t = 1.43 and p = 0.0011, t = 4.28, resp.; Figure 3(a), unpaired two-tailed t-test). Thus, the applied low dose of antidepressant delivered with food pellets induced an antidepressant-like effect in the present study. This result is in line with our previous findings, which showed that a 3-week low-dose administration of imipramine to C57BL6J mice via drinking water reduced such depressive symptoms as stress-induced decrease in sucrose intake and preference, hyperlocomotion, and elevated aggressive behaviour [38]. Similar behavioural results were obtained in the chronic stress depression model with CD1 mice [42] and in a model of elderly depression in 18-month-old C57BL6N mice [41]. The low-dose imipramine antidepressant effects were accompanied by preservation of the normal activity of brain peroxidation enzymes, which were suppressed by chronic stress [38]. These effects are typical manifestations of the antidepressant action of tricyclics in rodents [35,51]. Further, in order to rule out potential effects of imipramine administration on anxiety, locomotion, and liquid intake that were previously reported in C57BL6N mice treated with this drug at a dose of 15 mg/kg, we performed supplementary tests in all mice. In both anxiety paradigms, the dark/light box and the O-maze, animals treated with imipramine showed no significant differences in their behaviour from the control group: in the latency of exit to the anxiety-related areas, the lit box and open arms (p = 0.94, t = 0.08 and p = 0.59, t = 0.55, resp.; unpaired two-tailed t-test), time spent in the lit box and open arms (p = 0.80, t = 0.26 and p = 0.28, t = 1.14, resp.; unpaired two-tailed t-test), and numbers of exits to these zones (p = 0.87, t = 0.17 and p = 0.13, t = 1.63, resp.; unpaired two-tailed t-test, Figures 3(b) and 3(c)). In locomotor tests, in comparison with control mice, animals treated with imipramine exhibited normal vertical activity, as shown by the number of rearings in the novel cage (Week 2: p = 0.33, t = 1.01; Week 4: p = 0.54, t = 0.63, unpaired two-tailed t-test), as well as unchanged horizontal locomotion in the open field. In the latter test, no difference between groups was found in distance travelled (Figure 3(d), unpaired two-tailed t-test).
In a two-bottle sucrose preference test, there were no significant differences in water intake, sucrose solution intake, or sucrose preference between the groups (p = 0.47, t = 0.75; p = 0.32, t = 1.04; p = 0.20, t = 1.35, resp.; unpaired two-tailed t-test, Figure 3(e)). Finally, body weight was not different between the control and imipramine-treated groups (p = 0.20, t = 1.37, data not shown, unpaired two-tailed t-test). Repeated measures ANOVA likewise revealed no statistically significant differences (data not shown). Thus, the employed dosing with imipramine did not affect basic physiological variables, such as locomotion, liquid consumption, and body weight. Also, it did not affect the parameters of anxiety and sucrose ingestion reported in some studies that employ higher amounts of tricyclics [6,31,52,53]. These results suggest that low-dose imipramine treatment via voluntary food pellet intake can serve as an optimal pharmacological reference in animal models of depression that require prolonged antidepressant treatment of small rodents.

Effects of Chronic Delivery via Food Pellets of New Candidate Antidepressants in the Forced Swim and Tail Suspension Tests. Next, we sought to investigate whether the defined method of antidepressant dosing with food pellets is applicable to the testing of new drug candidates, one of which, celecoxib, is not soluble in water and, therefore, is problematic to deliver to animals chronically. As such, we exposed a cohort of animals to food pellets containing the new drug candidates dicholine succinate or celecoxib. In addition, we used imipramine or citalopram as the antidepressant references. In the forced swim test, one-way ANOVA revealed significant differences between the groups in the latency to float, total time spent floating, and velocity (Day 1: p = 0.0054, F = 4.24; p = 0.049, F = 2.60; and p = 0.022, F = 3.18, resp., Figure 4(a)). Post hoc Dunnett tests showed that, on Day 1, in comparison with the control group, the latency to swim was increased in animals treated with imipramine or dicholine succinate (p < 0.05, q = 3.17 and p < 0.01, q = 3.20), the duration of immobility was decreased in the imipramine-treated animals (p < 0.05, q = 2.67), and velocity was elevated in the dicholine succinate-treated group (p < 0.05, q = 2.62).

[Figure legend; panel images and axis labels omitted.] (c) On Day 1 of the tail suspension test, there was a significant increase of the latency of immobility and velocity in the imipramine- and dicholine succinate-treated groups, as compared to controls. All treated groups showed a significant reduction of total time spent immobile, as compared to control animals. (d) On Day 2 of the tail suspension test, in comparison to the control group, an increase of the latency of immobility was found in the imipramine-treated group and an increase of velocity was observed in both imipramine- and dicholine succinate-treated mice. All animals that received a treatment demonstrated a significant reduction of total time spent immobile, in comparison to the control group. *p < 0.05, **p < 0.01, and ***p < 0.001 versus control (one-way ANOVA with Dunnett post hoc tests). All groups were n = 10.
Con: control group; Imi-food: imipramine-treated group; Cit-food: citalopram-treated group; DS-food: dicholine succinate-treated group; Cel-food: celecoxib-treated group. All data are means ± SEM.

On Day 2 of the forced swim test, one-way ANOVA showed a trend toward a statistically significant difference in the latency of floating and no differences in the duration of floating or velocity (p = 0.059, F = 2.46; p = 0.48, F = 0.89; and p = 0.40, F = 1.04, resp., Figure 4(b)). A Dunnett post hoc test revealed a significant increase in latency to float in the imipramine-treated group (p < 0.05; q = 2.96). As a reduction of the parameters of floating behaviour in the forced swim test is a well-established measure of the antidepressant activity of various compounds [32,35], these data suggest that the applied treatment with imipramine or dicholine succinate induces an antidepressant effect and that the employed dosing was effective.

Conclusions

Thus, as a desirable alternative to invasive dosing, such as chronic intraperitoneal injections, the administration of various drugs via food pellets can be very efficient. The results from our study are in line with other successful attempts to avoid adverse drug delivery methodologies in translational research, which showed, for example, the efficacy of analgesic therapy delivered via food in rats subjected to surgery [55]. The use of such methods could be particularly needed when repeated drug administration to stressed, operated, or immunodeficient laboratory animals is necessary, and could therefore greatly improve not only animal welfare but also the validity of animal models.
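For readers reproducing this kind of analysis outside GraphPad Prism, the following is a hedged sketch of the statistical pipeline described in the Methods: unpaired two-tailed t-tests for two-group comparisons, and one-way ANOVA followed by Dunnett's post hoc test against the control for multi-group data. The data arrays are placeholders, and scipy.stats.dunnett requires SciPy ≥ 1.11.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(120, 20, size=10)       # placeholder floating times (s)
imipramine = rng.normal(95, 20, size=10)
citalopram = rng.normal(100, 20, size=10)

# Two-group comparison: unpaired two-tailed t-test.
t, p = stats.ttest_ind(control, imipramine)
print(f"t = {t:.2f}, p = {p:.4f}")

# Multi-group comparison: one-way ANOVA, then Dunnett's test vs. control.
F, p_anova = stats.f_oneway(control, imipramine, citalopram)
print(f"F = {F:.2f}, p = {p_anova:.4f}")

dunnett = stats.dunnett(imipramine, citalopram, control=control)
print(dunnett.pvalue)   # one adjusted p-value per treated group
```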
4,821.6
2015-05-03T00:00:00.000
[ "Biology", "Psychology", "Medicine" ]
Comparison of the Technical Performance of Leather, Artificial Leather, and Trendy Alternatives. The market for biogenic and synthetic alternatives to leather is increasing, aiming to replace animal-based materials with vegan alternatives. In parallel, bio-based raw materials should be used instead of fossil-based synthetic raw materials. In this study, a shoe upper leather, an artificial leather, and nine alternative materials (Desserto ®, Kombucha, Pinatex ®, Noani ®, Appleskin ®, Vegea ®, SnapPap ®, Teak Leaf ®, and Muskin ®) were investigated. We aimed to compare the structure and technical performance of the materials, which allows an estimation of possible application areas. Structure and composition were characterized by microscopy and FTIR spectroscopy, and the surface properties, mechanical performance, water vapor permeability, and water absorption by standardized physical tests. None of the leather alternatives showed the universal performance of leather. Nevertheless, some materials achieved high values in selected properties. It is speculated that the grown multilayer structure of leather, with a very tight surface and a gradient of the structural density over the cross-section, causes this universal performance. To date, this structure could be achieved with neither synthetic nor bio-based materials.

Introduction

A circular economy aims at reusing consumed materials, and ideally, product cycles become closed according to the cradle-to-cradle principle [1,2]. "Bio-based" means the use of biogenic raw materials to manufacture a variety of products instead of fossil gas, coal, or petroleum, as part of the bioeconomy. Lastly, "biodegradable" means that a material can be degraded in the environment by microorganisms and physicochemical impact. Recently, the societies of the countries of the Global North have experienced a strong change in their mindset due to the discussion about climate change, the finiteness of resources, the overutilization of ecosystems, and the pollution of the environment by non-degradable or harmful substances. This especially affects the consumer goods industry, and the designers of new materials aim to replace fossil-based polymers with biogenic and fully biodegradable materials while being animal-free and avoiding any harmful substances. Ideally, the new materials are made from domestic waste, sawdust, or organic garbage [3][4][5]. Leather is a bio-based and biodegradable material with a tradition nearly as old as mankind. For centuries, it was used as a strong and long-lasting material with a broad spectrum of material properties. Leather was used as protective and decorative clothing, for sports goods, and as a technical material, e.g., for transmission belts, buckets, or as a wineskin. Until the middle of the 19th century, leather occupied the materials-property gap of a flexible material, besides stone, metal, and wood as hard materials and various textiles, which were not waterproof. Processing allowed adjusting the leather properties from a hard, board-like appearance, e.g., as sole leather, to very soft, textile-like glove leathers. To manufacture shoes, leather is made hydrophobic, and as wash leather, it absorbs much water. Leather shows a number of unique properties which are highly valued, such as strength and elasticity, water vapor permeability, abrasion resistance, durability, and longevity.
In the past, synthetic materials competing with leather triumphed due to lower prices; they are often easier to process and can be manufactured as a continuous material according to industrial needs in roll-to-roll production lines. However, leather is still popular due to its beneficial properties, natural appearance, and a touch of noble material. Synthetic alternatives usually consist of a textile support covered by two or more synthetic polymer layers (Figure 1B). Nowadays, polyester textiles coated with PVC or polyurethane films are often used, making them a completely fossil-based material. The surface optic can be designed leather-like by embossing a grain structure. Many different terms are used to describe these materials in the market, e.g., artificial leather, synthetic leather, leatherette, imitation leather, faux leather, man-made leather, bonded leather, pleather, textile leather, or polyurethane (PU) leather. Meanwhile, the usage of these terms is restricted in the European standard EN 15987. Here, we will use the term "artificial leather" to describe synthetic materials imitating the optical appearance of leather. In recent years, concerns over sustainability in any field of industrial production have led to a pressing rationale to enhance the use of natural materials and replace non-renewable fossil-based raw materials. Although leather is bio-based and renewable, these considerations did not lead to a renaissance of leather. Instead, leather came even more under pressure due to ongoing discussions over the greenhouse gas emissions of cattle breeding, the sustainability of leather production, and animal welfare.
At the same time, an increasing number of people want to eat consciously meat-free or to do without any products of animal origin entirely. All these needs pose new challenges in culture and material development [3]. One strategy pursues the development of alternative nature-based, animal-free fibrous materials. One such material is trama, the bulk material of some mushroom fruitbodies (e.g., Fomes fomentarius, Phellinus ellipsoideus). The extraordinarily soft feel of the dry mycelium makes it a precious material for cups and handcraft accessories, and already the Iceman used it as a material in combination with leather [8,9]. Muskin ® is an example of this material. Due to the complicated harvest, the restricted availability of the mentioned fungi, and the limited areas that can be obtained, it seems to be far from being able to replace leather. Further new ways are paved by using biotechnological processes to produce fiber-based materials. Namely, fungi and symbioses of bacteria and yeast are used to produce fibrous networks aiming to imitate a fibrous structure similar to an animal skin, either as single materials or as support for a coating layer. Micro-cellulosic fiber networks are produced by bacteria (e.g., Acetobacter xylinum); the mycelium fiber networks of fungal hyphae consist of chitin, cellulose, and proteoglycans [5,10,11]. These mycelia grow on organic waste [11,12]. A second strategy attempts to reduce the non-renewable content of artificial leather by replacing parts of the synthetic coating component, polyvinylchloride (PVC) or polyurethane (PUR), with agricultural waste-derived products as filling material, such as grain, apple pomace (Vegea ®, Appleskin ®), or milled cactus leaves (Desserto ®). A third way, replacing all fossil-based raw materials in a coated textile, has been explored in Pinatex ®: renewable fibers of pineapple leaves are processed into a non-woven support coated with polylactic acid (PLA) produced from corn starch [13]. Regardless of the type of material, be it leather, artificial leather, or a trendy alternative, a couple of physical and mechanical limits are usually defined and have to be achieved. These limits must be evaluated with regard to the stresses associated with the production, processing, and use of the materials. In general, examinations to qualify materials and to quantify their properties need to be performed according to standardized testing procedures. Here, we present a comparative study of leather and alternative materials, which are used for similar final applications, focused on material structures and physical and mechanical performance. Additionally, the materials were screened for hazardous substances by established standardized test methods with respect to shoe, glove, and apparel applications. Materials intended for automotive and upholstery applications are explicitly not included in this study, because they have to meet many superior specifications. We focused on material performance. Other important aspects, such as the origin of the raw material (renewable or oil-based), the carbon footprint, the environmental footprint, traceability, and biodegradability, are not dealt with in this study. Alternative materials from different sources were tested in comparison to a common shoe-upper leather as a reference and a conventional PUR-coated textile (artificial leather), as used for footwear, as a second reference. All materials are commercially available and have already been applied in various types of final products.
The materials were tested according to the appropriate internationally harmonized and accepted specifications for shoe, glove, and apparel goods [14][15][16].

Materials and Methods

Nine materials, which are offered as alternatives to leather and represent different principal structures, were investigated by light microscopy; their physical properties were measured and their chemical compositions were analyzed (Table 1). The registered trademarks are specified in the table. Additionally, a shoe upper leather and a polyurethane-coated textile as artificial leather were tested as references. The Noani ® sample was obtained as a belt consisting of several materials, which had been combined by sewing. The physical characterization comprised standardized measurements of thickness, tensile strength, tear strength, flex resistance, water vapor absorption, and water vapor permeability [17][18][19][20][21][22]. The cross sections, surfaces, and reverse sides of the materials were imaged by light microscopy at different magnifications. Chemical constitution and additives were investigated by FTIR (Fourier transform infrared spectroscopy) and thermal desorption GC/MS (gas chromatography/mass spectrometry). FTIR spectra were measured using a diamond ATR (attenuated total reflection) technique with 16 scans in a range of 4000-650 cm⁻¹. Evaluation of the spectra was based on our own and various commercial databases. Volatile and harmful substances were measured according to VDA 278 [23] (VDA-Verband der Automobilindustrie, Germany). A total of 5 mg to 10 mg of material was heated to 120 °C for 60 min, and all volatile compounds were collected from the evaporate by cooling with liquid nitrogen to −100 °C. The substances were vaporized from the trap for 5 min at 280 °C, then separated and characterized by GC/MS (GC: 50 °C for 2 min, 25 K/min to 160 °C, 10 K/min to 280 °C, 10 min at 280 °C; column Ultra 2 (5% phenyl-methylsiloxane), 50 m × 320 µm × 0.52 µm; flow 1.3 mL/min at constant pressure; MS: 29-450 m/z). For the sensory evaluation, 10 well-trained panelists assessed the touch and feel of the materials. No material-related information was provided to them in advance of the evaluation. The materials were presented to the panelists always in the same order, either by laying them directly on a table board or placing them on a soft PUR foam of 4 mm thickness. The surfaces of the materials were blindly touched without any stretching or folding. The touch and feel properties of all materials were referenced to leather: temperature sensation (warmer or colder), deformation in the z-direction (softer or harder), roughness, slipperiness/blocking behavior, pleasant or unpleasant, natural or artificial touch, and high- or low-quality feeling.

Material Composition and Structure

All materials are composed of fibers. However, chemical constitution, arrangement, fiber size, and coating follow different principal concepts. The results of the light microscopic investigations, thermal desorption analysis, and FTIR spectra allow us to identify the compositions of the materials (Table 1). A selection of microscopic pictures of the cross sections representing the different principal structures is shown in Figure 2, and the different surface designs are presented in Figure 3. All microscopic pictures are added in high resolution in the supplements, as well as an example of an FTIR spectrum of Desserto ®, a PUR-coated material (see Supplementary Materials).
The structures of Desserto ® (Figure 2A), Vegea ®, and Appleskin ® reflect the typical composition of PUR-coated artificial leather used for, e.g., shoe or upholstery applications. The support of the investigated samples consisted of knitted or woven polyester textiles, except that of Vegea ®, which was made of cellulose. The textiles were coated with foamed polyurethane-based middle layers containing organic fillers based on cellulose. The materials are finished with polymer-based topcoats. The surfaces of the materials were partly embossed to achieve a leather-like optic and to adjust the haptic (Figure 3E-H). Even Teak Leaf ® falls into the structural category of a coated textile. Here, the textile support on the reverse side is built by two non-woven layers. The middle layer is cellulose-based, the fibers are stuck together with an acrylic acid-based polymeric binder, and the basic support on the reverse side is polyester-based. On top, the Teak Leaf ® material is coated with a transparent, waxy polyolefin film. The leaf of teak mainly fulfills the desired optical needs (Figure 3D).

Pinatex ® (Figure 2D) and SnapPap ® consist of a cellulose-based fibrous non-woven. The fibers of SnapPap ® are bonded by an acrylic acid-based polymeric binder. In contrast, the investigated sample of Pinatex ® is coated with a thin polymeric layer. The fibrous structures of the non-woven fabric of both materials are visible at the surface due to the thin coating layer (Figure 3I,K).
Muskin® (Figure 2B) and Kombucha are single-layer materials without any textile support and without a topcoat. Both consist of polysaccharides. Muskin® appears very porous, composed of fine brown fibers oriented perpendicular to the surface. These fibers appear at the surface as a fine lawn without a distinct structure (Figure 3B). In contrast, Kombucha consists of one compact layer, which contains some inclusions and talcum and shows a glossy brown surface (Figure 3C). It was found that Noani® was not a single material but crafted of three distinct layers. The top is formed by an embossed microfiber material, the middle layer consists of leather board material, and the backside is a PUR/PVC-coated textile that consists of three layers typical of conventional artificial leather. The use of a leather board material is surprising because Noani® is referred to as a vegan product and the label "PETA approved" is embossed in the material. Because of this, and since the Noani® sample does not represent a single material, it will not be further discussed with respect to structural categories. Regardless of the actual composition, all engineered materials try to mimic a natural appearance (Figure 3). Touch and Feel Properties The surface of Muskin® and the upper material of Noani® feel pleasant. Due to the presence of very fine fibers, both materials create a velvety feeling similar to suede leather. The materials with synthetic surfaces as in artificial leathers (Desserto®, Appleskin®, Vegea®, PUR-coated textile) show a soft feeling and can be deformed in the z-direction.
However, the touch of these materials appears artificial, with a sticky tendency. The surfaces of Pinatex® and Teak Leaf® appear synthetic, too. In addition, Desserto®, Pinatex®, and SnapPap® were evaluated as feeling rough. The surface of Kombucha appeared sticky. Thickness The thickness of the materials was determined by surveying the prepared microscopic cross sections. The overall thickness of all materials was found to range from 0.29 mm to 6.22 mm, which also shows the variety of the material types (Table 2). The thicknesses of leather, the PUR-coated textile, and the trendy alternatives Desserto®, Appleskin®, Vegea®, and Pinatex® are in a typical range for materials used for shoes, gloves, and apparel goods. The materials Kombucha, SnapPap®, and Teak Leaf® appear very thin; in contrast, Muskin® is very thick for the mentioned applications. Consequently, already from the results of the thickness measurements, significant differences in the material properties could be expected, e.g., in deformation properties. Tensile Strength and Tear Strength The most important mechanical properties for materials used for shoes, gloves, and apparel goods are tensile strength and tear strength [17,18]. The results for both parameters vary over a very wide range, with the category of naturally grown tissues showing the broadest range. Leather as a grown skin tissue shows very high mechanical stability, representing the highest values for tensile strength and tear strength within this survey. Its tensile strength exceeds the specification of >15 N/mm² for chrome-tanned upper leather for shoes (ISO 20942) [14]. In contrast, the values of the Muskin® sample are extremely low. The coated textiles show tensile strengths of 9 up to 20 N/mm². The tensile strength of coated textiles depends mainly on the properties of the supporting fabric. The results show a reasonable choice of fabric for the intended use of these engineered materials, regardless of the natural or artificial origin of the fabric fibers. The tensile strengths of the non-woven materials made of natural plant fibers range from 4 up to 25 N/mm². Strength depends on fiber properties and fiber bonding. Despite a satisfying tensile strength, the tear resistance of SnapPap® is low due to the short fiber length, which cannot be compensated by the polymeric binder. Except for the non-woven materials, tear strength follows the same tendency as tensile strength. Flex Resistance Materials for shoes must resist intensive bending and convex and concave deformation during usage. The flexometer test is used to assess the long-term resistance against bending. Grade 0 is the best rating, indicating that the material itself and the coating layers show no cracks after flexing. A grade of ≤2 (only very small cracks in the top layer of the coating) is usually accepted to pass the test. When a grade of >2 is observed, flexing is stopped and the number of flex cycles is noted. Leather, Pinatex®, and the PUR-coated textile fulfilled the specification of >80,000 flex cycles according to ISO 20942. The flex resistance of the Teak Leaf®, SnapPap®, and Muskin® materials was found to be insufficient for the aimed applications. Water Vapor Permeability and Water Vapor Absorption A pleasant wearing comfort of shoes, gloves, or apparel is related to the water vapor permeability (WVP) of the material, which allows the humidity of the body to be transported through the clothing material to its surface.
The comfort is also enhanced by the ability of the materials to absorb water vapor. Comfortable water vapor permeability limits are specified in ISO 20942 to be >0.8 mg/(cm² × h). Leather, Muskin®, and SnapPap® exceed this water vapor permeability limit by far, and Pinatex® and the PUR-coated textile still fulfill the ISO 20942 requirement. The WVP of all other materials is insufficient. Water vapor absorption (WVA) of Kombucha, leather, and Muskin® is high, presumably due to the polar nature of their natural polymers. The other materials, especially those that contain a significant amount of synthetic polymer with less polarity, show much lower water vapor absorption. Harmful Substances The materials were examined for potentially hazardous substances by means of thermal desorption analysis. In several samples (Appleskin®, Pinatex®, Desserto®, Vegea®, SnapPap®, Teak Leaf®), synthetic and biogenic raw materials had been combined. However, the processing of fossil-based raw materials often requires the application of solvents, crosslinking agents, or plasticizers to achieve suitable material properties. All tested materials emitted volatile organic compounds when subjected to the thermal desorption screening procedure. Restricted substances were identified in the samples of the PUR-coated textile (reference), the similarly constructed materials Desserto®, Appleskin®, and Vegea®, but also in Pinatex®. The PUR-coated textile contained considerable amounts of dimethylformamide (DMFa) and toluene and traces of N,N-dimethylacetamide. In Appleskin®, butanone oxime and traces of DMFa were detected. Desserto® contained the five restricted substances butanone oxime, toluene, free isocyanate, folpet (an organic pesticide), and traces of the plasticizer diisobutyl phthalate (DIBP). Toluene was detected in the sample of Vegea® and DIBP in that of Pinatex®. Discussion The materials that have been tested in this study are used to manufacture fashionable consumer goods such as shoes, bags, clothes, and accessories. In this regard, aspects of (1) functional properties and (2) appearance have to be discussed. While the construction of the material mainly influences the functional properties, the appearance is largely a result of the surface properties. Both groups of properties varied on a very broad scale, though the materials are offered for similar final applications. Analyzing the composition, the construction, the surfaces, and the feel allowed us to compare the materials with regard to their possible performance. For this purpose, the materials were investigated by standardized testing procedures for leather, since the materials are offered as leather alternatives. Structure Based on the structural design and with respect to functional properties, the investigated samples can be grouped into three completely different material concepts, which are (a) grown animal-free materials, (b) multi-layered coated fabrics as in artificial leather, combined with plant-based additives, and (c) non-woven fabrics with or without a surface finish. As derived from the results of the FTIR analysis, the functional layers of the coated fabrics are mainly PUR based. In contrast, the fully bio-based, naturally grown materials investigated in this study fell far short of the mechanical requirements expected, e.g., for shoe upper materials.
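Several of the pass/fail criteria quoted in this and the preceding sections (tensile strength >15 N/mm², flex resistance >80,000 cycles, WVP >0.8 mg/(cm² × h), all per ISO 20942 as cited in the text) lend themselves to a simple screening routine. The Python sketch below illustrates such a check; the numeric sample values are hypothetical placeholders, not the measured results of this study (those are reported in Table 2 and Figure 5).

```python
# Screening sketch against the ISO 20942 thresholds quoted in the text.
SPECS = {
    "tensile_N_per_mm2": 15.0,   # > 15 N/mm2 (chrome-tanned shoe upper leather)
    "flex_cycles": 80_000,       # > 80,000 flex cycles
    "wvp_mg_per_cm2_h": 0.8,     # > 0.8 mg/(cm2 x h)
}

# Hypothetical measurement values for illustration only
samples = {
    "leather_ref": {"tensile_N_per_mm2": 25.0, "flex_cycles": 100_000, "wvp_mg_per_cm2_h": 2.0},
    "nonwoven_example": {"tensile_N_per_mm2": 6.0, "flex_cycles": 5_000, "wvp_mg_per_cm2_h": 3.0},
}

for name, values in samples.items():
    failed = [k for k, limit in SPECS.items() if values[k] <= limit]
    print(name, "passes all quoted specs" if not failed else f"fails: {failed}")
```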
The question arises why animal-free materials (a), which are directly manufactured by preserving a grown structure combined with more or less intensive processing, show only poor mechanical resistance compared to leather. The resulting structures of grown natural materials are intended to be an alternative to the fibrous structure of animal skin [11,24,25]. A typical representative is the fungus-based material Muskin®, taken from Phellinus ellipsoideus. The microscopic pictures show that the mycelium is composed of hyphae, which represent the structure-forming component. Our measurements showed mainly polysaccharides as chemical structures. Therefore, it can be assumed that chitin forms the hyphae. The FTIR spectrum exactly fits that of other investigations [26]. The latter interpreted the spectra in more detail and also assigned proteins, lipids, and nucleic acids. However, the mechanical stability of the mycelium is limited. This may be caused by the limited stability of the hyphae themselves and of the processed mycelium, as shown elsewhere [11,27]. The pictures of the cross section show that the fibers are oriented perpendicular to the surface. This also leads to poor mechanical stability. Presumably, the mechanical performance could be improved if the fibers were aligned along the direction of the mechanical load. In contrast, Kombucha is a traditional Japanese beverage, which is prepared by a symbiosis of bacteria and yeasts metabolizing sugars into organic acids, ethanol, and carbon dioxide. In parallel, bacteria (e.g., Acetobacter xylinum) secrete high-molecular-weight polysaccharides, which lead to a gel-like consistency. The intensively growing microorganisms can be harvested, and the secreted polymers are used as a biogenic structure-forming material after a drying step [10]. The mechanical stability of the Kombucha material sample was much higher than that of Muskin® but missed the requirements for shoe upper materials as well. While Muskin® shows a loose and open structure, which allows water vapor to diffuse through the material, Kombucha is very tight, but it absorbs higher amounts of water. Both microorganism-based materials appear homogeneous in their cross sections. The raw material of leather, the animal skin, is composed of collagen, a structure-forming protein. Leather is built from intertwined fibrils and fibers, which are additionally cross-linked by leather tanning. The fibrous structure of leather shows a gradient in its material density from the grain to the reverse side. The final layer on top is composed of very thin and tight collagen fibers. The grown skin tissue shows a very high mechanical stability (tensile strength, tear strength), which is 100 to 1000 times higher than that of the microorganism-based materials [5,10,11,25,26,28]. The strength of leather can be related to the stability of the collagen fibers themselves and to the weaving and crosslinks between the fibers. From the biological point of view, the hyphae of microorganisms and the animal skin fulfill very different tasks in the respective organisms. The performance of natural materials has been summarized in the past in Ashby plots. Figure 4 provides an impressive overview of a number of different natural materials and their mechanical limits [29]. Natural flexible load-transferring materials appear with a density of ~1 Mg/m³ and range from 0.1 MPa to more than 1000 MPa (marked red). The tensile strength of parenchymatous plant tissues is low (marked blue).
Collagen-based materials such as skin and tendon appear to be very stable (red ellipse). High stability is also observed in structures of plants that have to transfer load (wood, cellulose fibers). It can be deduced from these considerations that grown natural materials will only exhibit flexible and mechanical resilience when load-transferring structures are used. The animal skin covers a broad set of aimed functions. It has to protect the body for a long time against mechanical impact, it is flexible to allow mobility, and often it has to regulate the temperature and water balance. Plant fibers have to carry the weight of the plant against gravitation and in the load direction. In contrast, the tissue-like structures of microorganisms appear as parenchymatous materials, based on fiber networks, which primarily offer active cells a matrix for metabolism (bacteria, fungi) and transportation of nutrients (hyphae). Therefore, they appear at the bottom of Figure 4. To overcome these mechanical deficiencies, it was proposed to stabilize the fiber network of Muskin® by crosslinking agents [25,26], or to adjust the softness of Kombucha materials by the addition of plasticizing agents [10]. However, this contradicts the multi-scale idea; varying the density and orienting the fibers along the load direction would presumably be more effective in overcoming the observed limitations. The second concept to achieve highly functional leather-like materials with a high content of bio-based components takes up the principle of artificial leather. These materials simulate the structure of leather as a multilayer material. The functions can be separated between the textile support, which has to fulfill the mechanical function (tensile strength, tear strength), the middle layer, by which feel and softness are adjusted, and the topcoat, which takes over the final optical appearance. A grain structure similar to that of leather is mostly achieved by embossing. Therefore, to increase the biogenic part of the materials, it would be effective to replace the polyester support with natural fibers. As shown in Figure 4, natural fibers may show excellent mechanical stabilities.
However, only Vegea® and Teak Leaf® use a cellulose-based fabric as support. The renewable content in these multilayer materials could also be increased by replacing synthetic polymers in the coating layers. Desserto® and Appleskin® adopt this principle. A part of the PUR is replaced by agricultural by-products, which are used as fillers. A detailed analysis of the origin of the natural component and its content in relation to the bulk of the material was not possible, however. Nevertheless, the bulk of the materials still consists of polyurethane. In the case of Teak Leaf®, a natural-appearing surface is created by imparting the natural leaf, which is covered by synthetic waxes. However, because of the missing elastic foam layer as used in artificial leather and its replacement by the plant leaf, the flex resistance is hampered and appears not adequately adjusted from a functional point of view. A third strategy to replace leather with animal-free materials uses plant-based non-woven fabrics. The fossil-based polyester textile support is exchanged for natural fiber alternatives such as cotton, linen, etc. Ideally, the fabric would be finished with a bio-based polymer. Pinatex®, for example, is manufactured from pineapple leaf fibers, which are laboriously processed before they are coated with a thin polymer film, which can be either fossil-based or from renewable resources, to improve usability. Pinatex® promotes its polymer finish as polylactide, which can be produced fully bio-based [13,30-32]. However, our analysis showed at least a remarkable content of PUR/acrylate in the finish. The very thin surface coating does not completely cover the fibrous non-woven, which leads to a hard surface with a fibrous appearance that nevertheless withstands the flex test. The material appears more similar to a textile non-woven, and the low mechanical resistance can be correlated directly to the low binder content of the fibers of the non-woven support. SnapPap® is based on cellulosic fibers as well, but in contrast to Pinatex®, the matrix is bound with acrylic-acid-based polymers. Both materials neither simulate a leather structure nor do they appear as a leather surface. Therefore, they should rather be regarded as coated or impregnated textiles than as artificial leather or a leather alternative. Figure 5 impressively shows the performance of the different materials in comparison to the references. Alternative materials have specific advantages, but none of the materials combines high mechanical strength and flex resistance with high water vapor permeability as in the case of leather. Surfaces and Appearance The surfaces of the materials are their "face."
This is impressively shown with the applied leaf in Teak Leaf® as an eye-catcher. The resulting structures appear optically interesting but do not fulfill the same function as the foam layer in engineered coated textiles. Other plant materials that are used in similar constructions to achieve interesting optical properties are, e.g., cork-based. They have not been investigated in this study but have to be supported as well by textiles to achieve suitable physical properties [33-35]. Therefore, the Teak Leaf® solution appears more design-driven than guided by functional aspects. To achieve sufficient functional properties of the surface of flexible materials, a final topcoat is often applied, which then determines the optic and partially the haptic and other useful properties of the materials, e.g., flex resistance, abrasion behavior, and soiling behavior. The water vapor permeability also depends on the tightness of the topcoat or of watertight layers in between. The thicker this layer, the less the material allows vapor and gas permeation [36]. The leather investigated here shows only a very thin topcoat to improve the soiling behavior. Its flex resistance is very high, and its water vapor permeability is in an appropriate range. Muskin® and SnapPap® show very open structures, which allow water vapor to diffuse freely. However, their performance characteristics, particularly the poor flex resistance, limit long-time use. All other investigated materials are very tight against water vapor permeation. Used as a shoe material, this would lead to sweating of the feet and would reduce the comfort [37,38], but the materials would be tight against rain if used, e.g., as a bag or rain jacket. The microscopic surface of SnapPap® and Pinatex® shows the fibrous non-woven fabric, which takes over the mechanical properties of the material. SnapPap® and Pinatex® do not appear leather-like. They show hard surfaces and an exposed fibrous non-woven structure. Applying a thicker polymer coating and embossing a grain structure would presumably lead to a leather-like optic. In this case, considerations regarding the water vapor permeability of coated textiles have to be taken into account. Conclusions None of the alternative materials achieved the properties of leather according to the applied reference values, although many of them are offered as leather alternatives (Figure 5). The question of why it is difficult to achieve these properties with alternative natural materials is answered by the different biological functions of the source materials. Leather is a multi-scale material, which is designed by nature to fulfill load-transferring and metabolic functions. It shows a gradient in the tightness of the structure, composed of hydrophilic protein fibers of varying fineness. Each part of the structure takes over a specific function. The reticular layer, which consists of coarse fiber bundles, is responsible for the high mechanical resistance (tensile and tear strength). The destruction of the fiber network of the grown tissue decreases the mechanical stability by a factor of 10 [28].
This can be only slightly improved when the fibers are re-bound by binding agents (e.g., the leather board middle layer of Noani®). The more compact and finer fibers of the papillary layer of leather and the grain membrane cause the leather-like appearance and the tight structure on top. Nevertheless, the water vapor permeability is high if no tight synthetic topcoat is applied. The hydrophilic fibers of leather can absorb much water, which leads to high comfort compared with the synthetic alternatives. The biogenic non-woven textiles Pinatex®, SnapPap®, and Kombucha show water absorption values similar to leather but lack mechanical and flexural strength. Therefore, it remains a challenge and an aim to reproduce the function of the bionic structure of the skin with alternative biological techniques, as was already noted many years ago [29]. When agricultural by-products are added to the polymer layers of artificial leathers, the bio-based content of the materials is raised, but no physical advantage over the reference material can be measured. Only a proper life cycle analysis would allow an assessment of the associated benefit.
9,273.6
2021-02-13T00:00:00.000
[ "Materials Science", "Environmental Science" ]
Performance evaluation of IB-DFE-based strategies for SC-FDMA systems The aim of this paper is to propose and evaluate multi-user iterative block decision feedback equalization (IB-DFE) schemes for the uplink of single-carrier frequency-division multiple access (SC-FDMA)-based systems. It is assumed that a set of single-antenna users share the same physical channel to transmit their own information to the base station, which is equipped with an antenna array. Two space-frequency multi-user IB-DFE-based processing schemes are considered: iterative successive interference cancellation and parallel interference cancellation. In the first approach, the equalizer vectors are computed by minimizing the mean square error (MSE) of each individual user at each subcarrier. In the second one, the equalizer matrices are obtained by minimizing the overall MSE of all users at each subcarrier. For both cases, we propose a simple yet accurate analytical approach for obtaining the performance of the discussed receivers. The proposed schemes allow an efficient user separation, with a performance close to the one given by the matched filter bound for severely time-dispersive channels, with only a few iterations. Introduction Single-carrier frequency-division multiple access (SC-FDMA), a modified form of orthogonal frequency-division multiple access (OFDMA), is a promising technique for high data rate uplink communications in future cellular systems. When compared with OFDMA, SC-FDMA has similar throughput and essentially the same overall complexity. A principal advantage of SC-FDMA is its peak-to-average power ratio (PAPR), which is lower than that of OFDMA [1,2]. SC-FDMA was adopted as the uplink multiple access scheme of the current long-term evolution (LTE) cellular system [3].
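The PAPR advantage mentioned above is easy to reproduce numerically. The following minimal Python sketch compares the PAPR of plain OFDMA subcarrier mapping with DFT-spread (SC-FDMA) mapping for QPSK, using an interleaved mapping as assumed later in the paper; the block sizes and trial count are illustrative choices, not a claim about the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, n_blocks = 1024, 128, 2000  # FFT size, block length, Monte Carlo trials

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

papr_ofdma, papr_scfdma = [], []
for _ in range(n_blocks):
    # normalized QPSK block
    s = ((2 * rng.integers(0, 2, L) - 1) + 1j * (2 * rng.integers(0, 2, L) - 1)) / np.sqrt(2)
    X = np.zeros(N, complex)
    X[:: N // L] = s                           # OFDMA: symbols mapped directly (interleaved)
    papr_ofdma.append(papr_db(np.fft.ifft(X)))
    X[:: N // L] = np.fft.fft(s) / np.sqrt(L)  # SC-FDMA: DFT-spread before mapping
    papr_scfdma.append(papr_db(np.fft.ifft(X)))

print(f"mean PAPR -- OFDMA: {np.mean(papr_ofdma):.1f} dB, SC-FDMA: {np.mean(papr_scfdma):.1f} dB")
```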
Single-carrier frequency domain equalization (SC-FDE) is widely recognized as an excellent alternative to OFDM, especially for the uplink of broadband wireless systems [4,5]. As with other block transmission techniques, SC-FDE is suitable for high data rate transmission over severely time-dispersive channels due to the frequency domain implementation of the receivers. Conventional SC-FDE schemes employ a linear FDE optimized under the minimum mean square error (MMSE) criterion. However, the residual interference levels might still be too high, leading to performance that is still several decibels from the matched filter bound (MFB). Nonlinear time domain equalizers are known to outperform linear equalizers, and DFEs are known to have good performance-complexity tradeoffs [6]. For this reason, there has been significant interest in the design of nonlinear FDEs in general and decision feedback FDEs in particular, with the IB-DFE being the most promising nonlinear FDE [7,8]. The IB-DFE was originally proposed in [9] and has been extended to a wide range of scenarios over the last 10 years, including diversity scenarios [10,11], MIMO systems [12], CDMA systems [13,14], and multi-access scenarios [15,16], among many others. Essentially, the IB-DFE can be regarded as a low-complexity turbo equalizer [17-20] implemented in the frequency domain that does not require the channel decoder output in the feedback loop, although true turbo equalizers based on the IB-DFE concept can also be designed [21-23]. An IB-DFE-based scheme specially designed for offset constellations (e.g., OQPSK and OQAM) was also proposed in [24]. In the context of cooperative systems, an IB-DFE approach was derived to separate the quantized received signals from the different base stations (BSs) [25]. Works related to IB-DFE specifically designed for SC-FDMA-based systems are scarce in the literature. In [26], the authors proposed an IB-DFE structure consisting of a frequency domain feedforward filter and a time domain feedback filter for single-user SC-FDMA systems. An iterative frequency domain multiuser detection for spectrally efficient relaying protocols was proposed in [27], and a frequency domain soft-decision feedback equalization scheme for single-user SISO SC-FDMA systems with insufficient cyclic prefix was proposed in [28]. In this paper, we consider broadband wireless transmission over severely time-dispersive channels, and we design and evaluate multi-user receiver structures, based on the IB-DFE principle, for the uplink of single-input multiple-output (SIMO) SC-FDMA systems. It is assumed that a set of single-antenna user equipment (UE) share the same physical channel to transmit their own information to the base station, which is equipped with an antenna array. Two multi-user IB-DFE-based processing schemes are considered, both with the feedforward and feedback filters designed in the space-frequency domain: iterative successive interference cancellation (SIC) and parallel interference cancellation (PIC). In the first approach, the equalizer vectors are computed by minimizing the mean square error (MSE) of each individual user at each subcarrier. In the second one, the equalizer matrices are obtained by minimizing the overall MSE of all users at each subcarrier. For both cases, we propose a quite accurate analytical approach for obtaining the performance of the proposed receivers.
The remainder of the paper is organized as follows: Section 2 presents the multi-user SIMO SC-FDMA system model. Section 3 presents in detail the considered multi-user IB-DFE-based receiver structures. The feedforward and feedback filters are derived for both cases, and an analytical approach for obtaining the performance is discussed. Section 4 presents the main performance results, both numerical and analytical. The conclusions are drawn in Section 5. Notation: Throughout this paper, we use the following notations. Lowercase and uppercase letters are used for scalars in time and frequency, respectively. Boldface uppercase letters are used for both vectors and matrices in the frequency domain. The index (n) is used in time while the index (l) is used in frequency. (·)^H, (·)^T, and (·)^* represent the complex conjugate transpose, transpose, and complex conjugate operators, respectively, E[·] represents the expectation operator, I_N is the identity matrix of size N × N, CN(·,·) denotes a circular symmetric complex Gaussian vector, tr(A) is the trace of matrix A, and e_k is a column vector with 0 in all positions except the kth position, which is 1. System model Figure 1 shows the considered uplink SC-FDMA-based transmitter of the kth user equipment. We consider a BS equipped with M antennas, and K single-antenna UEs share the same physical channel, i.e., the information from all UEs is transmitted in the same frequency band. An SC-FDMA scheme is employed by each UE, and the data block associated with the kth UE, {s_{k,n}; n = 0, …, L − 1}, is selected from the data according to a given mapping rule. Then, the L-length data block symbols are moved to the frequency domain, obtaining {S_{k,l}; l = 0, …, L − 1} = DFT{s_{k,n}; n = 0, …, L − 1}. After that, the frequency domain signals are interleaved so that they are widely separated in the OFDM symbol, thereby increasing the frequency diversity order. Finally, an OFDM modulation is performed, and a cyclic prefix is inserted to avoid inter-symbol interference (ISI). Without loss of generality, we concentrate on a single L-length data block, although in practical systems several data blocks are mapped into the OFDM symbol. The received signal in the frequency domain (i.e., after cyclic prefix removal, N-FFT, and chip demapping operations), at the mth BS antenna and on subcarrier l, can be expressed as Y_{m,l} = Σ_{k=1}^{K} α_k H^{(m)}_{k,l} S_{k,l} + N_{m,l} (1), assuming that the cyclic prefix is long enough to accommodate the channel impulse responses between the UEs and the BS. In (1), H^{(m)}_{k,l} represents the channel between user k and the mth antenna of the BS on subcarrier l, α_k is an amplitude factor accounting for the average received power of UE k, and N_{m,l} is the noise. In matrix format, (1) can be re-written as Y_l = H_l S_l + N_l (2), with Y_l = [Y_{1,l} … Y_{M,l}]^T, S_l = [S_{1,l} … S_{K,l}]^T, N_l = [N_{1,l} … N_{M,l}]^T, and H_l = [H_{1,l} … H_{K,l}] the M × K channel matrix. The channel vector of the kth user is defined as H_{k,l} = α_k [H^{(1)}_{k,l} … H^{(M)}_{k,l}]^T. Multi-user IB-DFE receiver strategies In this section, we present in detail the multi-user iterative frequency domain receiver design strategies based on the IB-DFE concept [6]. Two iterative approaches are considered: SIC and PIC. IB-DFE SIC approach Figure 2 shows the main blocks of the IB-DFE SIC-based process. For each iteration, we detect all K UEs on the lth subcarrier in a successive way, using the most updated estimates of the transmitted data symbols associated with each UE to cancel the corresponding interference. Thus, this receiver can be regarded as an iterative SIC scheme. However, as with conventional single-user IB-DFE-based receivers, we take into account the reliability of the block data estimates associated with the UEs for each detection and interference cancellation procedure.
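The per-UE transmitter chain just described (L-point DFT spreading, interleaved subcarrier mapping, N-point IFFT, cyclic prefix insertion) can be summarized in a few lines of Python. This is a minimal sketch assuming the block sizes introduced above; the function and parameter names are illustrative.

```python
import numpy as np

def scfdma_modulate(s, N=1024, cp_len=80):
    """SC-FDMA modulation of one L-length symbol block s:
    DFT spreading, interleaved subcarrier mapping, N-point IFFT, CP insertion."""
    L = len(s)
    S = np.fft.fft(s)                        # move the block to the frequency domain
    X = np.zeros(N, complex)
    X[:: N // L] = S                         # interleave for frequency diversity
    x = np.fft.ifft(X)                       # OFDM modulation
    return np.concatenate([x[-cp_len:], x])  # prepend the cyclic prefix

rng = np.random.default_rng(1)
s = (2 * rng.integers(0, 2, 128) - 1) + 1j * (2 * rng.integers(0, 2, 128) - 1)  # QPSK
tx = scfdma_modulate(s)
print(tx.shape)  # (1104,) = cp_len + N samples per transmitted block
```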
From Figure 2, we can see that at the ith iteration, the signal received on the lth subcarrier associated with the kth UE, before the L-IDFT operation, is given by S̃_{k,l} = F_{k,l}^H Y_l − B_{k,l}^H S̄_l, with F_{k,l} and B_{k,l} denoting the feedforward and feedback vector coefficients of the kth UE applied on the lth subcarrier, respectively. The vector S̄_l = [S̄_{1,l} … S̄_{K,l}]^T contains the DFT of the blocks of time domain average values conditioned on the detector output, as available when user k is detected at iteration i. Clearly, the elements S̄_{k',l} are associated with the current iteration for the UEs already detected (k' < k) and with the previous iteration for the UE being detected, as well as for the UEs not yet detected in this iteration. For normalized QPSK constellations (i.e., s_{k,n} = ±1 ± j), the average values are given by [13] s̄_{k,n} = tanh(L^I_{k,n}/2) + j·tanh(L^Q_{k,n}/2), where L^I_{k,n} = (2/σ_k²)·Re{s̃_{k,n}} and L^Q_{k,n} = (2/σ_k²)·Im{s̃_{k,n}}, with σ_k² denoting the variance of the overall error at the FDE output. We should emphasize that although we only consider QPSK constellations, IB-DFE-based schemes in general, and our techniques in particular, can easily be extended to other constellations. For this purpose, we just need to employ the generalized IB-DFE design of references [29,30]. The hard decision associated with the symbol satisfies Ŝ_{k,l} ≈ ρ_k S_{k,l} + Δ_{k,l}, which means that S̄_{k,l} ≈ ρ_k² S_{k,l} + ρ_k Δ_{k,l}, and in matrix form we have S̄_l ≈ P² S_l + P Δ_l. It can be shown that the error Δ_l = [Δ_{1,l} … Δ_{K,l}]^T has zero mean and P = diag(ρ_1, …, ρ_K), with the correlation coefficients, defined as ρ_k = E[ŝ_{k,n} s*_{k,n}] / E[|s_{k,n}|²], being a measure of the reliability of the estimates associated with the ith iteration, approximately given by ρ_k ≈ (1/(2L)) Σ_{n=0}^{L−1} (|tanh(L^I_{k,n}/2)| + |tanh(L^Q_{k,n}/2)|). For larger constellations, an estimate of the correlation coefficient can be computed as in [29,30]. For a given iteration and the detection of the kth UE, the iterative receiver equalizer is composed of the coefficients {F_{k,l}; l = 0, …, L − 1} and {B_{k,l}; l = 0, …, L − 1}. These coefficients are computed to maximize the overall signal-to-interference plus noise ratio (SINR) at the FDE output and, therefore, minimize the bit error rate (BER). If we consider a normalized FDE (i.e., one whose overall desired-signal gain is one), this is formally equivalent to minimizing the MSE. For a QPSK constellation with Gray mapping, the BER can be approximately given by (11), where Q(x) denotes the well-known Gaussian function and MSE_{k,l} is the mean square error on the frequency domain samples, given by MSE_{k,l} = E[|S̃_{k,l} − S_{k,l}|²] (12). For the sake of simplicity, the dependence on the iteration index is dropped in (12) and in the following equations. After some mathematical manipulations, it can be shown that (12) reduces to an expression (13) in the feedforward and feedback coefficients and a set of correlation matrices (14), with R_S = σ_S² I_K and R_N = σ_N² I_M being the correlation matrices of the data symbols and of the noise on each subcarrier.
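To make the reliability-weighted SIC idea concrete, the following Python sketch performs one SIC pass over all K users at a single subcarrier. It uses a simplified, unconstrained MMSE weight (soft cancellation of interferers, with residual interference power σ_S²·(1 − ρ_j²) per user) rather than the paper's constrained KKT solution, and the in-loop reliability update is a crude placeholder; all names are illustrative.

```python
import numpy as np

def sic_pass(Y, H, s_bar, rho, sigma2_n, sigma2_s=2.0):
    """One IB-DFE SIC pass over all K users at one subcarrier.

    Y: (M,) received vector; H: (M, K) channel matrix;
    s_bar: (K,) conditional-mean symbol estimates from the previous iteration;
    rho: (K,) reliability coefficients in [0, 1]. Returns updated estimates.
    """
    M, K = H.shape
    s_bar, rho = s_bar.astype(complex), rho.astype(float)
    s_tilde = np.zeros(K, complex)
    for k in range(K):  # successive: users j < k already use this iteration's estimates
        others = [j for j in range(K) if j != k]
        Yc = Y - H[:, others] @ s_bar[others]          # soft interference cancellation
        # covariance of Yc: desired user at full power + residual interference + noise
        R = sigma2_s * np.outer(H[:, k], H[:, k].conj()) + sigma2_n * np.eye(M)
        for j in others:
            R += sigma2_s * (1.0 - rho[j] ** 2) * np.outer(H[:, j], H[:, j].conj())
        f = sigma2_s * np.linalg.solve(R, H[:, k])     # MMSE feedforward vector
        s_tilde[k] = f.conj() @ Yc
        hard = np.sign(s_tilde[k].real) + 1j * np.sign(s_tilde[k].imag)
        rho[k] = 0.9           # placeholder; the paper estimates this blockwise
        s_bar[k] = rho[k] * hard
    return s_tilde
```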
From (11), we can see that to minimize the BER of each UE, we need to minimize the MSE of each UE on each subcarrier. However, considering only the MSE minimization may lead to biased estimates; thus, to avoid this, we force the received amplitude of each user to one, i.e., (1/L) Σ_{l=0}^{L−1} F_{k,l}^H H_{k,l} = 1. The constrained optimization problem can be formulated as the minimization of MSE_{k,l} subject to this constraint. We use the Karush-Kuhn-Tucker (KKT) conditions [31] to solve the optimization at each step with all but one variable fixed. The Lagrangian associated with this problem is written with μ_k as the Lagrangian multiplier [32]. Applying the KKT conditions and after straightforward but lengthy mathematical manipulation, we obtain the feedforward and feedback vector coefficients, with the iterative index dependence. The Lagrangian multiplier is selected, at each iteration i, to ensure the constraint above. It should be emphasized that for the first iteration (i = 1), and for the first UE to be detected, P^(0) is a null matrix and S̄^(0)_l is a null vector. IB-DFE PIC approach Figure 3 shows the main blocks of the IB-DFE PIC-based process. For each iteration, we detect all K UEs on the lth subcarrier in a parallel way, using the most updated estimates of the transmitted data symbols to cancel the residual interference that could not be cancelled in the first equalizer block. Thus, this receiver can be regarded as an iterative PIC scheme [20]. However, as with conventional IB-DFE-based receivers and the above SIC approach, we take into account the reliability of the block data estimates for each detection procedure. From Figure 3, the received signal on the lth subcarrier for all UEs, before the L-IDFT operation, is given by S̃_l = F_l^H Y_l − B_l^H S̄_l (21), where F_l = [F_{1,l} … F_{K,l}] is a matrix of size M × K with all UEs' feedforward vector coefficients and B_l = [B_{1,l} … B_{K,l}]^T is a matrix of size K × K with all UEs' feedback vector coefficients. For this approach, the matrices F_l and B_l are computed to minimize the average bit error rate (BER) of all UEs, and for a QPSK constellation, the average BER can be approximately given by (22), in terms of the overall mean square error MSE_l on the frequency domain samples, given by MSE_l = E[‖S̃_l − S_l‖²] (23). Replacing (21) in (23) and after some mathematical manipulations, it can be shown that (23) reduces to (24), with the correlation matrices R_{Y,S̄,l} and R_{S̄,S̄} defined analogously. Note that the correlation matrices R_{Y,l}, R_{S,S̄}, and R_{S̄,Y,l} were already defined in (14). Contrarily to the SIC approach, to minimize the average BER we need to minimize the overall MSE at each subcarrier. Here, to avoid the bias, we force the overall received amplitude to K, i.e., (1/L) Σ_{l=0}^{L−1} tr(F_l^H H_l) = K, and the constrained optimization problem can be formulated as the minimization of MSE_l subject to this constraint. We also use the KKT conditions to solve the optimization problem. The Lagrangian associated with this problem is now written with μ as the Lagrangian multiplier. Applying the KKT conditions and after lengthy mathematical manipulation, we finally obtain the feedforward and feedback matrices, with the iterative index dependence. In this approach, the Lagrangian multiplier is selected, at each iteration i, to ensure the constraint above. Since all users are detected in parallel, for the first iteration (i = 1), P^(0) is a null matrix and S̄^(0)_l is a null vector. The complexity of the SIC approach is slightly higher than that of the PIC one. For the SIC, we need to invert a matrix of size M × M for each user on each iteration, while for the PIC, we need to invert a matrix of size M × M for all users on each iteration, i.e., the SIC approach requires K − 1 more matrix inversions per iteration. Since in the SIC receiver structure each user is detected individually and sequentially, the detection delay is also higher.
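For comparison with the SIC sketch above, a corresponding single-subcarrier PIC step can be written as follows. Again, this is a simplified, unconstrained MMSE version (the paper's derivation enforces the gain constraint through a Lagrange multiplier), with all K users updated in parallel from the previous iteration's estimates; the names are illustrative.

```python
import numpy as np

def pic_step(Y, H, s_bar, rho, sigma2_n, sigma2_s=2.0):
    """One IB-DFE PIC iteration at one subcarrier: all K users in parallel.
    s_bar/rho are the previous iteration's estimates and reliabilities."""
    M, K = H.shape
    # residual interference power per user after soft cancellation
    res = sigma2_s * (1.0 - rho.astype(float) ** 2)
    R_base = H @ np.diag(res) @ H.conj().T + sigma2_n * np.eye(M)
    s_tilde = np.zeros(K, complex)
    for k in range(K):
        # restore the desired user's full power in its own covariance term
        Rk = R_base + sigma2_s * rho[k] ** 2 * np.outer(H[:, k], H[:, k].conj())
        f = sigma2_s * np.linalg.solve(Rk, H[:, k])   # MMSE feedforward vector
        Yc = Y - H @ s_bar + H[:, k] * s_bar[k]       # parallel soft cancellation
        s_tilde[k] = f.conj() @ Yc
    return s_tilde
```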
Performance results In this section, we present a set of performance results, analytical and numerical, for the proposed IB-DFE-based PIC and SIC receiver schemes. Two different scenarios are considered: in Scenario 1, we assume two UEs (K = 2) and a BS equipped with two antennas (M = 2); in Scenario 2, we assume four UEs (K = 4) and a BS equipped with four antennas (M = 4). For both scenarios, the main parameters used in the simulations are: N-FFT size of 1,024; L-DFT size set to 128 (this represents the data symbol block associated with each UE); sampling frequency set to 15.36 MHz; useful symbol duration of 66.6 μs; cyclic prefix duration of 5.21 μs; overall OFDM symbol duration of 71.86 μs; subcarrier separation of 15 kHz; and a QPSK constellation under a Gray mapping rule, unless otherwise stated. Most of the parameters are based on the LTE system [33]. The channel between each UE and the BS is uncorrelated and severely time-dispersive, each one with rich multipath propagation and uncorrelated Rayleigh fading for the different multipath components. Specifically, we assume an L_p = 32-path frequency-selective block Rayleigh fading channel with a uniform power delay profile (i.e., each path with average power 1/L_p). The same conclusions could be drawn for other multipath fading channels, provided that the number of separable multipath components is high. Also, we assume perfect channel state information and synchronization, and |α_k|² = 1, ∀k. The results are presented in terms of the average bit error rate (BER) as a function of E_b/N_0, with E_b denoting the average bit energy and N_0 denoting the one-sided noise power spectral density. In all scenarios, we present the theoretical and simulated average BER performances for both proposed receiver structures: IB-DFE PIC and SIC. For the sake of comparison, we also include the matched filter bound (MFB) performance. Figures 4 and 5 show the performance results for the first scenario, considering the IB-DFE PIC and IB-DFE SIC, respectively. Starting with the results presented in Figure 4, it is clear that the proposed analytical approach is very precise, especially regarding the first iteration. Note that for this iteration, the IB-DFE PIC reduces to the conventional MMSE frequency domain multi-user equalizer, since P^(0) is a null matrix and S̄^(0)_l is a null vector. Although there is a small difference between theoretical and simulated results over the iterations, mainly due to errors in the estimation of the variance of the overall error at the FDE output (see (7)) and the non-Gaussian nature of the overall error, our analytical approach is still very accurate, with differences of just a few tenths of a decibel. As expected, the BER performance improves with the iterations, and it can be observed that after a few iterations the performance is close to the one obtained by the MFB, mainly in the high SNR regime. Therefore, the proposed IB-DFE PIC scheme is quite efficient in separating the users and achieving the maximum system diversity order, with only a few iterations.
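The multipath channel model used in these simulations is straightforward to generate numerically. Below is a minimal sketch of one channel realization, assuming the stated parameters (N = 1024 subcarriers, L_p = 32 equal-power Rayleigh taps); the names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, Lp = 1024, 32   # FFT size and number of multipath components

# Uniform power delay profile: uncorrelated Rayleigh taps, average power 1/Lp each
h = (rng.standard_normal(Lp) + 1j * rng.standard_normal(Lp)) * np.sqrt(1.0 / (2 * Lp))
H = np.fft.fft(h, N)   # frequency response seen by the N subcarriers

print(f"mean |H|^2 = {np.mean(np.abs(H) ** 2):.3f} (should be ~1)")
print(f"deepest fade = {10 * np.log10(np.abs(H).min() ** 2):.1f} dB")
```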
From Figure 5, we can also see that the analytical approach proposed for the IB-DFE SIC structure is very accurate. With a number of iterations as low as 4, the BER performance approaches very closely the limit given by the MFB. This means that this receiver structure is also able to efficiently separate the UEs, while taking advantage of the space-frequency diversity inherent to MIMO SC-FDMA-based systems. Comparing the SIC and the PIC approaches, it is clear that for the first iteration the SIC approach outperforms the PIC one. A penalty of approximately 1 dB of the PIC against the SIC can be observed for a BER of 10⁻³. This is because the SIC-based structure, when detecting a given user, takes into account the previously detected ones, with the exception of the first user. However, when the number of iterations increases, the performance of the PIC approach tends to the one given by the SIC approach. We can observe that the BER performance of both approaches is basically the same after four iterations. Figures 6 and 7 show the performance results for the second scenario, considering the IB-DFE PIC and SIC, respectively. From these figures, we can basically draw the same conclusions as for the previous results. We can see that, similarly to the first scenario, the proposed analytical approaches for both the IB-DFE SIC and PIC structures are very accurate. However, comparing the results obtained for this scenario with the ones obtained for scenario 1, we can see that the overall performance is much better. This is because our receiver structures can benefit from the higher space-diversity order available in this scenario, since they are efficient in removing both multi-user and inter-carrier interference. The previous results indicate that IB-DFE receivers can have excellent performance, close to the MFB, for MIMO systems with QPSK constellations. One question that arises naturally is whether this is still valid for larger constellations such as QAM constellations. In fact, the performance of a DFE for larger constellations can be seriously affected by error propagation effects. As an example, we present in Figure 8 the performance results for 16-QAM constellations in the second scenario, considering the IB-DFE SIC approach. Clearly, we are still able to approach the MFB, although we need more iterations, the convergence is less smooth, and we only approach the MFB for lower BER (and, naturally, larger SNR). Although these good results might be somewhat surprising, we should keep in mind that an IB-DFE is not a conventional DFE, due to the non-causal nature of the feedback. Moreover, the error propagation effects are much lower in IB-DFE receivers due to the following issues. First, symbol errors (which are in the time domain) are spread over all frequencies; due to the frequency-domain nature of the feedback loop input, a symbol error has only a minor effect on each frequency. Second, the FDE is designed to take into account the reliability of the estimates employed in the feedback loop; when we have a large number of symbol errors, the reliability decreases and the weight of the feedback part decreases. Third, when we have a decision error, we usually move to one of the closest constellation symbols, i.e., the magnitude of the error is usually the minimum Euclidean distance of the constellation, regardless of the constellation size. This is especially important for larger constellations.
As we pointed out, an IB-DFE can be regarded as a low-complexity turbo equalizer implemented in the frequency domain which does not employ a channel decoder in the feedback loop. For this reason, it has a turbo-like behavior, with good performance provided that the BER is low enough. That is why we can only approach the MFB for larger SNR. Conclusions In this paper, we designed and evaluated multi-user receiver structures based on the IB-DFE principle for the uplink of SIMO SC-FDMA systems. Two multi-user IB-DFE PIC- and SIC-based processing schemes were considered. In the first approach, the equalizer vectors were computed by minimizing the mean square error (MSE) of each individual user at each subcarrier. In the second one, the equalizer matrices were obtained by minimizing the overall MSE of all users at each subcarrier. For both cases, we proposed a quite accurate analytical approach for obtaining the performance of the proposed receivers. The results have shown that the proposed receiver structures are quite efficient at separating the users, while allowing a close-to-optimum space-diversity gain, with performance close to the MFB for severely time-dispersive channels with only a few iterations. The performance of both the PIC and SIC receiver structures is basically the same after three or four iterations. However, the main drawback of the SIC approach is the delay in the detection procedure, which is larger than for the PIC, since it detects one user at a time. Thus, for practical systems where the delay is a critical issue, the PIC approach can be the best choice. To conclude, we can clearly state that these techniques are an excellent choice for uplink SC-FDMA-based systems, as already adopted by the LTE standard. Figure 2. Iterative receiver structure for UE k based on the IB-DFE SIC approach. Figure 3. Iterative receiver structure based on the IB-DFE PIC approach. Figure 4. Performance of the IB-DFE PIC structure for scenario 1. Figure 5. Performance of the IB-DFE SIC structure for scenario 1. Figure 6. Performance of the IB-DFE PIC structure for scenario 2. Figure 7. Performance of the IB-DFE SIC structure for scenario 2. Figure 8. Performance of the IB-DFE SIC structure for scenario 2 and 16-QAM.
5,279.4
2013-12-01T00:00:00.000
[ "Computer Science", "Business" ]
Experience in applying digital modeling to improve component manufacturing A review of the application of digital modeling in the design of technological processes for aircraft part manufacturing is presented. Examples of parts manufactured by casting and by bulk and sheet forming, including forming in the superplastic state, are given. Technical solutions obtained on the basis of simulating the manufacturing processes are applied in part manufacturing. Digital models are compared with real parts. The ability to detect undesirable effects (corrugation, springback, porosity) on digital models is demonstrated, which allows the parameters of the technological process to be corrected before the stage of real production. Introduction The present time is characterized by the rapid development of software and hardware systems for engineering analysis of various purposes. Hardware is being rapidly improved, and it has become possible to use complex and time-consuming calculation algorithms to solve nonlinear problems in a dynamic formulation. Such tasks include modeling the processes occurring in workpieces during various methods of metal and alloy processing. To date, worldwide experience has been accumulated in the use of digital technologies for the virtual modeling of part manufacturing processes during production preparation. Different software is used for this purpose, both universal (ANSYS, Marc) and specialized (PamStamp, QForm, ProCast, and others). Employees of the Irkutsk National Research Technical University have been working for more than ten years on applying virtual technological modeling to improve the preparation of machine-building production of parts for various purposes. All work was carried out to order and in close contact with the industrial partners of the university, the engineering enterprises of our country: the Irkutsk and Ulan-Ude aviation plants and the Irkutsk Heavy Engineering Plant. The main directions of work are the modeling of casting processes and of bulk and sheet forming, both in cold and hot conditions, including the use of superplasticity and diffusion welding. Below are some examples of the application of virtual simulation of metal and alloy processing [1]. Modeling of casting processes With the help of the ProCast program, a technological analysis of the process of casting a steel wheel of mining equipment with a diameter of two meters was performed (Fig. 1). Figure 1. Electronic geometric model of the wheel. The simulation results showed that, with typical sand casting schemes, numerous shrinkage cavities can form in the zones where the rim and the spokes intersect (Fig. 2). Figure 2. Distribution of casting defects (pores) in parts. After analyzing more than a dozen variants of the casting process, a solution was found that ensured the wheel was cast without defects. Another example is the development of the sand casting process for the "front knot" part made of aluminum alloy (Fig. 3) [2,3,4]. Based on the simulation results, a variant of the casting pattern and gating system design was found which provided defect-free casting of the part (Fig. 4). The modeling of a more complex casting process, casting into shell molds produced with melted-out patterns (investment casting), can be illustrated by the example of casting a "lever" type part (Figs. 5, 6) [5,6].
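The porosity at the rim-spoke intersections is the classic hot-spot problem: regions with a large volume-to-surface ratio solidify last and, if not fed, collect shrinkage porosity. The back-of-envelope Python sketch below uses Chvorinov's rule (t = B·(V/A)^n) to illustrate the effect; it is in no way the thermal computation performed by ProCast, and all geometry values and the mold constant are hypothetical.

```python
# Chvorinov's rule: solidification time t = B * (V/A)^n, with n ~ 2.
# Illustrates why rim/spoke intersections (large volume, little cooling
# surface) freeze last and therefore tend to collect shrinkage porosity.
B, n = 2.0, 2.0   # mold constant (min/cm^2) and exponent, illustrative values

regions = {
    # name: (volume cm^3, cooling surface cm^2) -- hypothetical geometry
    "spoke_midspan":          (400.0, 900.0),
    "rim_section":            (900.0, 1500.0),
    "rim_spoke_intersection": (1200.0, 1100.0),
}

for name, (V, A) in regions.items():
    t = B * (V / A) ** n
    print(f"{name:25s} modulus V/A = {V / A:5.2f} cm, t_solidify ~ {t:5.2f} min")
```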
By changing the initial design of the stearin melted-out pattern (rotating the part model by 180 degrees in the vertical plane relative to the gating system), it was possible to exclude the formation of porosity in the knee of the lever. Examination of the finished part confirmed the correctness of this technical solution: casting defects remained only in the gating system (Fig. 6). Modeling of three-dimensional forging processes For the virtual modeling of part manufacturing by bulk forging methods, the QForm program from a Russian developer was used. The drawbacks of the existing production process for steel "angle" type parts were modeled and revealed [5]. Based on the simulation results (Fig. 7), it was proposed to introduce an additional forging operation into the technological process in order to produce a workpiece close to spherical in shape. This made it possible to reduce material consumption by 32% and to raise the quality of the stamping (Fig. 8). In addition, QForm simulated the state of the deformation tools (Fig. 9), which showed that the stresses in the lower tool at the end of the second transition reach limit values, so the risk of die failure is high. A further search for the tool configuration led to the decision to change the shape of the die impression to a smoother one in the area of the forging flange, by increasing the radii of the transitions to and within the flange, increasing the thickness of the flange, and increasing the slope at the ends of the flange. The stress picture in the changed design is shown in Fig. 10. Modeling of sheet-forming processes In the production of many mechanical engineering products, various types of sheet-metal forming are used. The production of titanium alloy parts is particularly difficult. Examples of such parts are fairings (Figs. 11, 12) made of 0.5 mm thick sheet [6,7]. Such a part was made by forming in rigid dies on a sheet-forming hammer from sheet blanks heated to 550-700 °C, in 5-7 blows, with manual flattening of the formed corrugations after each blow. This forming process was simulated in the specialized PamStamp program. The results showed the following. For single part #1, 6 strokes are required, and residual corrugations are observed (Fig. 13) [8,9,10]. Virtual geometric trimming of the allowances showed that the corrugations remain on the parts, and that there is significant springback (residual deformation) and distortion of the part shape, requiring additional finishing work (Fig. 14). Thus, virtual modeling of the forming process showed that fairings can be made by forming in rigid dies, but it is a rather labor-intensive process, complicated by a number of negative factors: the formation of hard-to-remove corrugations on the parts, a large number of transitions with intermediate annealing, a high level of springback, and the need for additional heat treatment in a constrained state (thermal fixation). Forming with an elastic medium is widespread in the manufacture of sheet parts. This is due to lower tooling costs: one of the tools is replaced by rubber, liquid, or gas. The production of complex-shaped sheet parts by this method is also a difficult task, due to corrugation and springback. Developing a rational process for manufacturing such parts in the traditional way would require a large amount of experimental work.
PamStamp, a specialized program for modeling sheet-stamping processes, allows the contour of a rational blank to be defined, the nature and location of corrugations to be determined, the level of springback to be evaluated, and the working surface of the tool to be compensated by the amount of this springback. Thus, for the "Diaphragm" part (Figs. 17, 18), forming was simulated with subsequent piercing of a window in the zone where folds were located [9-11]. Based on the simulation results, a real part was made, and good agreement between the virtual forming process and the real forming was confirmed. Good results were also obtained in the virtual and full-scale development of the elastic-medium forming process for the "Bottom" part (Figs. 19, 20). The manufactured part was accepted as suitable. Particularly challenging are hot stamping processes in special material states, such as superplasticity. Thus, as an alternative option for the manufacture of fairings (Figs. 21, 22), it was proposed to use group pneumatic-thermal forming in the superplastic mode [12-15]. Four parts are formed at once in one forming transition: two of each type in one tooling. The number of parts manufactured simultaneously was in this case determined by the size of the working area of the equipment at the disposal of the university staff. The modeling of group pneumatic-thermal forming in the superplastic mode was carried out in the PAM-STAMP program, by means of which the optimal law of forming pressure variation with process time was calculated. Virtual pneumatic thermoforming showed that, in the superplastic mode, the parts can be manufactured without corrugations and without residual springback. In addition, it was established that the deformation capacity of the metal, despite the small thickness of the workpiece, is sufficient for forming without fracture. Corrugation appeared neither on the models nor on the sectioned parts (Figs. 23, 24). The experimental work established that no corrugation formed during superplastic forming, which eliminates the finishing work needed to remove it; moreover, no springback effect was observed. Group forming produced four parts at once in one operation, which raises the productivity of the technological process. With a larger machine working area and the use of an additional die for upward forming, a much larger number of parts could be produced at the same time. The expected reduction in the labor intensity of the new technology can exceed 30%. Thus, the proposed variant of the technological process provides for the manufacture of fairings, low-manufacturability parts, from hard-to-machine alloys. Good results were also achieved in the modeling and real development of the pneumatic thermoforming process for complex pipeline elements (half-pipes) made of titanium alloy (Figs. 25, 26, 27, 28). The capabilities of IRNITU made it possible to test the production technology of a wedge-shaped titanium alloy panel with different orientations of the inner set. The thickness of the external sheets is 2 mm; the thickness of the inner sheet is 1 mm. Using the universal engineering analysis program MSC.Marc, virtual modeling of the pneumothermal forming of a three-layer panel with a transverse (Figs. 29, 31) and a longitudinal (Figs. 30, 32) inner set was performed [8,9], and control programs for the pneumothermal forming equipment were calculated. The panels were manufactured without defects.
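The "optimal law of forming pressure variation" computed in PAM-STAMP is not reproduced here, but the underlying idea can be illustrated with the classical thin-shell model of free bulging of a circular sheet into a spherical cap at constant effective strain rate, where the superplastic flow stress obeys sigma = K * edot^m. The Python sketch below is only that simplified textbook model; the material constants, die radius, and sheet thickness are illustrative, not the values used for the fairings.

```python
import numpy as np

# Constant-strain-rate pressure schedule for free bulging of a circular sheet
# into a spherical cap (thin-shell approximation, uniform thinning assumed).
K, m = 600e6, 0.5       # flow law sigma = K * edot^m (Pa), illustrative Ti-alloy values
edot = 2e-4             # target effective strain rate, 1/s
a, t0 = 0.05, 0.5e-3    # die radius (m) and initial sheet thickness (m)

eps_final = np.log(2.0)               # thinning strain when the cap reaches a hemisphere
times = np.linspace(1e-3, eps_final / edot, 6)
sigma = K * edot ** m                 # flow stress is constant at constant strain rate

for t_s in times:
    eps = edot * t_s                              # accumulated thinning strain
    cos_theta = 2 * np.exp(-eps) - 1              # cap angle follows from the strain
    theta = np.arccos(np.clip(cos_theta, -1, 1))
    rho = a / np.sin(theta)                       # cap radius of curvature
    thick = t0 * (1 + cos_theta) / 2              # uniform-thinning thickness
    p = 2 * thick * sigma / rho                   # membrane equilibrium: p = 2*t*sigma/rho
    print(f"t = {t_s:7.1f} s  theta = {np.degrees(theta):5.1f} deg  p = {p / 1e5:5.2f} bar")
```

The pressure must rise as the dome grows (curvature increases and the sheet thins), which is why a programmed pressure-time law, rather than constant pressure, is needed to hold the strain rate inside the superplastic window.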
Conclusion Thus, with the help of virtual modeling of various metal-working processes, it was possible to find rational schemes of technological processes and designs of workpieces and tooling quickly enough and at minimal cost in comparison with traditional empirical approaches. In addition, thanks to the virtual modeling of these processes, it was possible to reduce the labor input of part manufacturing by no less than 30-70%, depending on the processing method and the complexity of the part design.
2,413.6
2019-11-08T00:00:00.000
[ "Engineering", "Materials Science", "Computer Science" ]